Choose the LLM that powers your agent. The model directly affects response quality, latency, and cost — a wrong choice can make your agent slow, expensive, or inaccurate.
Recommended: PolyAI’s Raven model family is purpose-built for conversational AI and is the recommended choice for most deployments. Raven supports 24+ languages, delivers sub-300ms latency, and produces concise, natural responses without extensive prompt engineering. Raven 3.5 powers both voice and chat, with auto-reasoning and out-of-domain detection.

Available models

Configuring the model

Model selection

1. Open model settings: Navigate to Channels > Voice > Voice configuration or Channels > Chat > Chat configuration to select the model for each channel.
2. Select a model: Choose the desired model from the dropdown.
3. Save changes: Click Save to apply your changes.

Choosing the right model

For most use cases, start with Raven — it is optimized for the PolyAI platform and delivers the best out-of-the-box experience for both voice and chat. General-purpose models like GPT and Claude require extensive prompting to work well for customer service conversations. With Raven, the right conversational behavior is built in.
| Scenario | Recommended model |
| --- | --- |
| Voice agent | Raven 3.5 |
| Chat agent | Raven 3.5 |
| Voice + chat (multichannel) | Raven 3.5 for both channels |
| Multilingual agent | Raven 3.5, with 24+ languages and near-perfect language consistency |
| Cost-sensitive, simple tasks | GPT-5 nano or GPT-4.1 nano |
| Specific compliance requirements | Evaluate based on your provider requirements |

Raven

Full details on PolyAI’s proprietary model family — capabilities, versions, and supported languages.

Bring your own model

Connect your own LLM endpoint to PolyAI.
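As a rough illustration only: many platforms accept self-hosted models through an OpenAI-compatible chat-completions interface, and a "bring your own model" endpoint is then just a server that speaks that format. The sketch below assumes that shape; the field names, parameter values, and the helper function are illustrative assumptions, not PolyAI's actual BYO contract, which is documented on the Bring your own model page.

```python
# Hypothetical sketch of an OpenAI-style chat-completions request payload,
# the format many self-hosted LLM endpoints accept. Every field below is an
# assumption for illustration, not PolyAI's documented BYO model API.
import json


def build_chat_request(model: str, user_message: str, system_prompt: str = "") -> dict:
    """Assemble a chat-completions payload for a self-hosted endpoint."""
    messages = []
    if system_prompt:
        # System prompt carries the agent's persona and guardrails.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    return {
        "model": model,       # identifier your endpoint uses for the model
        "messages": messages,
        "temperature": 0.2,   # low temperature for consistent agent behavior
        "stream": True,       # streaming responses help keep voice latency low
    }


if __name__ == "__main__":
    payload = build_chat_request(
        "my-self-hosted-llm",
        "What are your opening hours?",
        system_prompt="You are a concise customer-service agent.",
    )
    print(json.dumps(payload, indent=2))
```

If your endpoint already serves this format, wiring it up is a matter of pointing the platform at its URL and credentials; if not, a thin translation layer in front of the model is usually enough.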

Voice configuration

Select the model for your voice channel.

Chat configuration

Select the model for your chat channel.
Last modified on March 31, 2026