Use multi-language support when your agent serves callers who speak different languages. A multilingual agent detects the caller's language, switches mid-conversation if needed, and uses language-appropriate voices and content – all inside a single project.
The new multilingual UI replaces the previous approach of defining languages in the start_function or maintaining one project per language. You can now add and manage languages directly in Agent Studio.

Setting up multi-language support

[Image: Multi-language settings showing the Response language and Additional languages fields]
1. Add languages

Go to Configure > General and find the Additional languages field. Select up to 10 additional languages from the dropdown. The Response language set during project creation becomes the main language.
2. Configure voices per language

Each language has its own voice configuration. Go to Channels > Voice > Agent Voice, where you'll see a voice card for each configured language. The main language card is tagged "Main language".
  • Select a voice for each language from the Voice Library
  • You can configure separate voices for Agent voice and Disclaimer per language
  • Multi-voice is supported per language, so you can assign multiple voices to a single language
3. Add translations (optional)

If you need manual translation overrides for specific content, use the Translations page under Channels > Response Control.
4. Test in Agent Chat

Use the language dropdown in Agent Chat to select a language and test your agent's behavior in each configured language.

Supported languages

PolyAI supports over 40 spoken languages and 140 text languages. Spoken languages include English, Spanish, French, German, Italian, Portuguese, Dutch, Polish, Russian, Japanese, Korean, Chinese Mandarin, Arabic, Hindi, and many more.

How multilingual agents work

Multilingual agents can:
  • Detect the caller’s language automatically with ASR
  • Switch languages mid-conversation if the caller changes language
  • Automatically switch voices when the language changes, if a voice is configured for that language
  • Maintain language-specific knowledge using language variants on Managed Topics
  • Filter content by language using <language:xx> tags in prompts
  • Handle mixed-language queries (code-switching)
Auto voice switching only works for the main language and configured additional languages. If the conversation changes to an unsupported language, the voice stays the same.
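The voice-switching rule above can be sketched as a simple lookup with a fallback. This is an illustrative model of the behavior, not the Agent Studio API – the `VOICES` mapping and `select_voice` function are hypothetical names:

```python
# Hypothetical sketch of auto voice switching; names are illustrative,
# not part of the Agent Studio API.
VOICES = {
    "en": "en-voice-1",  # main language
    "es": "es-voice-1",  # configured additional language
    "fr": "fr-voice-1",
}

def select_voice(detected_language: str, current_voice: str) -> str:
    """Return the voice configured for the detected language, or keep
    the current voice if that language has no configured voice."""
    return VOICES.get(detected_language, current_voice)

print(select_voice("es", "en-voice-1"))  # configured language: voice switches
print(select_voice("sv", "en-voice-1"))  # unsupported language: voice stays the same
```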

Configuring voices per language

When multilingual support is enabled, the Agent Voice page shows separate voice sections organized by language.
  1. Go to Channels > Voice > Agent Voice
  2. You’ll see an Agent tab and a Disclaimer tab
  3. On the Agent tab, each language has its own voice card
  4. Click into a language card to select or change the voice
  5. To assign multiple voices to a language, add them from the voice card – multi-voice is supported per language
Voice quality tips:
  • Use native voices – don’t use an English voice for Spanish
  • Match regional accents – use Mexican Spanish for Mexico, Castilian for Spain
  • Test pronunciation for language-specific characters
  • Multilingual TTS models are convenient but may have slightly lower quality than language-specific models
You can also configure voices programmatically. See Voice classes for available providers including ElevenLabs, Cartesia, Hume, Rime, Minimax, PlayHT, and Google TTS.

Conditional content filtering

Use <language:xx> tags to serve language-specific content within a single prompt, without needing separate variants:
<language:en>
Please hold while I check your account.
</language:en>
<language:es>
Por favor espere mientras reviso su cuenta.
</language:es>
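To see how tag-based filtering behaves, here is a minimal sketch of selecting the blocks for one language. The regex-based `filter_by_language` helper is an assumption for illustration – the platform's actual filtering is internal:

```python
import re

def filter_by_language(prompt: str, lang: str) -> str:
    """Keep only the <language:xx> blocks matching `lang`,
    stripping the tags and dropping every other language's block.
    Illustrative sketch, not the platform's implementation."""
    pattern = re.compile(r"<language:(\w+)>\s*(.*?)\s*</language:\1>", re.DOTALL)
    parts = [body for code, body in pattern.findall(prompt) if code == lang]
    return "\n".join(parts)

prompt = """<language:en>
Please hold while I check your account.
</language:en>
<language:es>
Por favor espere mientras reviso su cuenta.
</language:es>"""

print(filter_by_language(prompt, "es"))  # prints only the Spanish text
```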

Language variants on Managed Topics

Managed Topics support language variants so you can manage multilingual knowledge base content within a single agent. Each topic can have language-specific versions of its content and sample questions.
  1. Go to Build > Knowledge > Managed Topics
  2. Create or edit a topic
  3. Add language variants for each supported language
  4. Translate sample questions and content for each variant
Sample questions must be in the same language as caller inputs – they are compared with user inputs during the retrieval process.
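A toy comparison shows why the language match matters. This is not the actual retrieval algorithm (which the docs do not specify) – just a token-overlap illustration of how a sample question in the caller's language scores against user input:

```python
# Toy illustration (not the actual retrieval algorithm) of why sample
# questions must be written in the caller's language.
def overlap(a: str, b: str) -> float:
    """Fraction of shared lowercase tokens between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

user_input = "¿cuál es el horario de apertura?"
spanish_sample = "¿cuál es su horario de apertura?"
english_sample = "what are your opening hours?"

# The same-language sample question matches; the English one does not.
print(overlap(user_input, spanish_sample) > overlap(user_input, english_sample))  # True
```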

Language-specific pronunciation rules

Pronunciation rules in Response Control are organized by language. Each language has its own set of rules, displayed as separate collapsible cards. Rules within a language card only apply to responses in that language. Rules with no language specified apply globally.
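The scoping behavior – language-specific rules plus global rules – can be sketched as below. The rule format here is hypothetical, purely to illustrate which rules fire for a given response language:

```python
# Hypothetical sketch of language-scoped pronunciation rules; the rule
# format is illustrative, not Agent Studio's.
RULES = [
    {"lang": None, "find": "PolyAI", "say": "Poly A I"},  # no language: applies globally
    {"lang": "es", "find": "Wi-Fi", "say": "wifi"},       # applies to Spanish responses only
]

def apply_rules(text: str, lang: str) -> str:
    """Apply global rules plus the rules scoped to the response language."""
    for rule in RULES:
        if rule["lang"] in (None, lang):
            text = text.replace(rule["find"], rule["say"])
    return text

print(apply_rules("PolyAI configura el Wi-Fi", "es"))  # both rules apply
print(apply_rules("PolyAI sets up Wi-Fi", "en"))       # only the global rule applies
```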

What to translate

Some project content needs translation, and some does not:
| Area | Element | Translate? | Notes |
| --- | --- | --- | --- |
| Knowledge | Sample questions | Yes | Must match user input language for retrieval |
| Knowledge | Content | Yes | Translate for brand accuracy and better output |
| Knowledge | Topic names and actions | No | Keep in English (used internally, not user-facing) |
| SMS | SMS content | Yes | Translate anything user-facing |
| ASR & Voice | ASR keywords and corrections | Yes | Write in the native language – these may differ significantly from English |
| ASR & Voice | Response control and pronunciations | Yes | Write in the native language – these may differ significantly from English |
| Functions | Python code | No | Leave in English |
| Functions | Function names and descriptions | No | Leave in English |
| Functions | Hard-coded responses and LLM prompts | Partially | Translate only user-facing content (e.g., utterances) |

General rules

  • Keep instructions in English (e.g., “Ask for the user’s phone number”)
  • Translate example utterances or scripted responses
  • If it’s directed at the agent, keep it in English. If it’s going to be spoken aloud directly to the customer, translate it.
Example: Ask the user for their number by saying "¿Me puedes dar tu número de teléfono?"

Function examples

If you’re using a function with a hard-coded response, translate the user-facing string:
return {
  "utterance": "Respuesta fija en español aquí"
}
If you’re re-prompting the LLM, you only need to translate example responses:
return {
  "content": "Inject prompt here"
}

Accessing the current language in functions

You can access the caller’s detected language in functions:
def dynamic_response():
    # conv.language holds the caller's currently detected language code
    current_language = conv.language

    if current_language == "es":
        return {"utterance": "Respuesta en español"}
    else:
        return {"utterance": "Response in English"}

Accessing translations in functions

For hard-coded utterances that need language-specific versions, use the translations object:
conv.translations.tn_name
Or for translation keys with special characters:
getattr(conv.translations, "name with special chars!!!!")
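A minimal sketch of that lookup pattern, using a stand-in class in place of the real `conv.translations` object (which Agent Studio provides at runtime). Passing a default to `getattr` – an optional pattern shown here, not documented platform behavior – avoids an `AttributeError` when a key is missing:

```python
# Stand-in for conv.translations, for illustration only; the real object
# is supplied by Agent Studio at runtime.
class Translations:
    tn_greeting = "Hola, ¿en qué puedo ayudarte?"

translations = Translations()

# Attribute access for simple keys:
print(translations.tn_greeting)

# getattr for keys with special characters, with a safe default:
text = getattr(translations, "name with special chars!!!!", "fallback text")
print(text)  # the key is not defined, so the default is returned
```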

Testing multilingual agents

The Agent Chat panel includes a language dropdown that lets you select a language to test with – similar to how you select variants.
  1. Open Agent Chat
  2. Select a language from the dropdown (defaults to the main language on the first turn)
  3. Interact with your agent and verify it responds correctly
  4. Switch languages mid-conversation to test detection and voice switching

Reviewing multilingual conversations

Language information is visible across Agent Studio:
  • Conversation review list – a language column shows which language was used in each conversation
  • Conversation review detail – per-turn language information appears alongside the transcript
  • Audio management – cached audio files include language metadata so you can identify and manage TTS audio per language

Translations

Manually override auto-translations for specific content in your agent’s responses.

Multi-language updates

Maintain and optimize your multilingual agent over time.

Voice Library

Browse and select voices per language for your agent.

Pronunciations

Configure language-specific pronunciation rules for natural speech.
Last modified on April 20, 2026