Before you begin, make sure you’ve signed up for an account.
Prerequisites
- A PolyAI account with access to Agent Studio
- Basic information about your use case (e.g., customer support, reservations, FAQ)
Create your agent (~2 min)
From the home page, click + Agent to start the agent creation wizard. You can create a blank agent or import an existing configuration.

Configure the basics:
- Agent name – Internal identifier for your project
- Response language – Primary language for responses (see multilingual support for additional languages)
- Voice – Select from available text-to-speech (TTS) voices
- Welcome greeting – First message users receive (can be customized later in agent settings)

Add knowledge (~5 min)
Navigate to Build > Knowledge > Managed Topics in the sidebar. Click Add topic and provide:
- Topic name – What this topic covers (e.g., “Store hours”)
- Sample questions – Up to 20 ways users might ask (e.g., “When are you open?”)
- Answer – The response your agent should give

Optional: Add more topics to expand your agent’s capabilities. You can also:

- Upload PDFs or URLs to auto-generate topics
- Connect external knowledge sources like Zendesk or Google Sheets using the Connected tab in Knowledge
- Add actions to trigger handoffs, SMS, or other behaviors

Changes are saved as Drafts. Publish to Sandbox to test them. Learn more about environments and versions.
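As a rough mental model, each managed topic bundles a name, up to 20 sample questions, and an answer. The sketch below is illustrative only: the `ManagedTopic` class and its field names are assumptions, not part of the Agent Studio API; only the fields and the 20-question limit come from the guide.

```python
from dataclasses import dataclass, field

# Hypothetical representation of a managed topic; field names are
# assumptions mirroring what the Add topic form asks for.
@dataclass
class ManagedTopic:
    name: str                      # what this topic covers
    answer: str                    # the response the agent should give
    sample_questions: list[str] = field(default_factory=list)

    def __post_init__(self):
        # The guide allows up to 20 sample questions per topic.
        if len(self.sample_questions) > 20:
            raise ValueError("A topic supports at most 20 sample questions")

store_hours = ManagedTopic(
    name="Store hours",
    answer="We are open 9am to 6pm, Monday to Saturday.",
    sample_questions=["When are you open?", "What time do you close?"],
)
```

More sample questions generally help the agent match a wider range of phrasings to the same answer.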
Test your agent (~2 min)
Click the phone icon in the top-right corner to start an in-browser call. Select Sandbox from the environment dropdown and begin speaking.
You can also test via:

- Webchat – Click the chat icon to open a text-based conversation (see webchat setup)
- Phone – Connect a number and call it directly
Deploy to production (~1 min)
Once testing is complete, promote your agent through the deployment pipeline:
- Go to Deployments > Environments in the sidebar
- Click Promote to Pre-release for user acceptance testing (if available in your project)
- Click Promote to Live to make your agent production-ready

Some projects use a simplified pipeline that promotes directly from Sandbox to Live, skipping Pre-release. You can roll back to any previous version if issues arise — see the deployment pipeline guide for details.
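The promotion order can be pictured as a simple linear pipeline. This snippet is a hypothetical sketch of that ordering, not a PolyAI API; only the environment names (Sandbox, Pre-release, Live) come from the guide.

```python
# Environment order described in the deployment section.
PIPELINE = ["Sandbox", "Pre-release", "Live"]

def next_environment(current: str, skip_prerelease: bool = False) -> str:
    """Return the environment a version is promoted to next.

    skip_prerelease models the simplified pipeline that goes
    straight from Sandbox to Live.
    """
    order = [e for e in PIPELINE if not (skip_prerelease and e == "Pre-release")]
    i = order.index(current)
    if i == len(order) - 1:
        raise ValueError(f"{current} is already the final environment")
    return order[i + 1]
```

Under the full pipeline, Sandbox promotes to Pre-release; under the simplified one, it promotes directly to Live.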
Next steps
Add advanced features
Use functions to connect APIs, retrieve data, and add dynamic behavior
Build conversation flows
Create multi-step workflows for bookings, forms, and complex interactions
Configure voice settings
Customize TTS, add multiple voices, and tune audio management
Monitor performance
Track conversations, analyze metrics, and improve your agent over time
How your agent works
When a user interacts with your agent, the following happens:

- User speaks or types – Audio is captured (voice) or text is received (chat)
- ASR transcribes – Speech is converted to text (voice only)
- LLM processes – The model retrieves relevant knowledge and generates a response. Configure which model to use in agent settings.
- TTS synthesizes – Text is converted back to speech (voice only). Customize voices in voice configuration.
- Response delivered – The agent replies to the user
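The voice turn above can be sketched as a three-stage loop. All three helper functions here are stubbed stand-ins (assumptions for illustration), not PolyAI components; only the ASR → LLM → TTS ordering comes from the steps above.

```python
# Minimal sketch of one voice turn. Each helper is a stub standing in
# for the real ASR, LLM, and TTS stages.
def transcribe(audio: bytes) -> str:
    # ASR: speech -> text (stubbed to a fixed transcription)
    return "when are you open"

def generate_response(text: str) -> str:
    # LLM: retrieve relevant knowledge and generate a response
    knowledge = {"when are you open": "We are open 9am to 6pm."}
    return knowledge.get(text, "Sorry, could you rephrase that?")

def synthesize(text: str) -> bytes:
    # TTS: text -> speech (stubbed as raw bytes)
    return text.encode("utf-8")

def handle_voice_turn(audio: bytes) -> bytes:
    text = transcribe(audio)         # 1. ASR transcribes
    reply = generate_response(text)  # 2. LLM processes
    return synthesize(reply)         # 3. TTS synthesizes; response delivered
```

In a chat interaction, the ASR and TTS stages are skipped and text passes straight to and from the LLM stage.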

