In Agent Studio, tools are defined as Python
functions – “tool” is the UI name and “function” is the code identifier. This lesson uses both interchangeably.

Why tools exist
The Agent tab (personality, role, rules) only gets you so far. Without tools, the agent can’t retrieve user data, execute actions, save state, or integrate with external systems. Prompt engineering also doesn’t scale – packing dozens of user journeys into one rules box gives the LLM too much context and makes it harder to reason about any single scenario. Tools solve both problems: external integration and fine-grained control over what the LLM knows and does at each step.

How LLMs use tools
Before looking at Agent Studio specifically, it helps to understand how LLMs use tools in general – because this pattern is the same across all modern AI systems. An LLM can produce two kinds of output:

Text output
The model speaks to the user using natural language.
“The weather in Paris is usually mild in October.”
Tool call
The model communicates with a system – a function, API, or piece of code – to fetch data or trigger an action.
call: get_weather with {city: "Paris"}

Creating a function in Agent Studio
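The distinction can be sketched as two shapes of assistant message. The field names below (`role`, `content`, `tool_calls`) are illustrative assumptions, not Agent Studio’s actual wire format:

```python
# Two kinds of LLM output, sketched as generic message dicts.
# Field names are assumptions for illustration only.

text_output = {
    "role": "assistant",
    "content": "The weather in Paris is usually mild in October.",
    "tool_calls": [],  # plain text: nothing for the system to execute
}

tool_call_output = {
    "role": "assistant",
    "content": "",  # no text for the user yet
    "tool_calls": [
        {"name": "get_weather", "arguments": {"city": "Paris"}},
    ],
}
```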
In Agent Studio, tools are called functions. You write them in Python and they are available for the LLM to call during a conversation. To create a function, go to Build → Functions and click the + button. Every function has:
- Name – how the function is identified (used by the LLM to decide when to call it)
- Description – what the function does (also read by the LLM when deciding whether to call it)
- Parameters – the inputs the function needs, each with a name, description, and type
- Python code – what the function actually does when called
Example: a simple addition function
- first_number – The first number the user wants to add (type: number)
- second_number – The second number the user wants to add (type: number)
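A minimal Python body for this function might look like the sketch below. The name `add_numbers` and the exact return phrasing are assumptions for illustration; only the two parameters come from the lesson:

```python
def add_numbers(first_number: float, second_number: float) -> str:
    """Add the two numbers the user provided and describe the result."""
    # Agent Studio passes the parameter values the LLM extracted
    # from the conversation as arguments.
    result = first_number + second_number
    return f"The sum of {first_number} and {second_number} is {result}."
```

Returning a full sentence rather than a bare number gives the LLM a result it can quote directly in its reply.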
Making a function visible to the LLM
Creating a function is not enough. The LLM will not call a function it does not know exists. To make a function available to the LLM, you must reference it somewhere – in a topic action, a flow step, or directly in the Behavior field using the @function_name syntax.
When you reference a function, Agent Studio highlights it and registers it in the LLM request. The LLM will then see the function’s full definition (name, description, parameters) and can choose to call it.
What the LLM actually sees
When you reference a function, it is added to the prompt sent to the LLM. The prompt is structured roughly like this:

| Section | What it contains |
|---|---|
| Base system prompt – intro | Your Personality and Role fields concatenated |
| Base system prompt – behavior | Everything in the Behavior field |
| Context information | Knowledge content retrieved for this turn (empty if no relevant topic matched) |
| Conversation history | The full transcript so far, alternating assistant / user roles |
| Functions | Definitions of any functions that have been referenced |
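Put together, a single request could be sketched roughly as below. The dict layout, field names, and sample values are assumptions for illustration, not Agent Studio’s real request format:

```python
# Illustrative assembly of one LLM request from the sections above.
personality_and_role = "You are a friendly assistant for a retail store."
behavior = "Answer questions about opening hours using @get_store_hours."
context_information = ""  # empty: no relevant Knowledge topic matched

conversation_history = [
    {"role": "user", "content": "What are your opening hours?"},
]

function_definitions = [
    {
        "name": "get_store_hours",
        "description": "Returns the store's opening hours.",
        "parameters": {},
    },
]

llm_request = {
    "system_prompt": personality_and_role + "\n" + behavior + "\n" + context_information,
    "messages": conversation_history,
    "functions": function_definitions,
}
```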
The two-request pattern
When the LLM calls a function, it takes two LLM requests to produce the final response to the user.

Request 1: the LLM decides to call a function
The LLM receives the user input, sees the available function definitions, and outputs a tool call rather than text. The response content is empty. The tool call object contains the function name and the parameter values the LLM extracted from the conversation.
The function runs
Agent Studio executes the Python function with the provided parameters and gets back a result.

Request 2: the LLM produces the final response
The function result is appended to the conversation history and sent back to the LLM in a second request. This time the LLM responds with text that incorporates the result.
What the conversation history looks like after a tool call
The function role is the third role alongside user and assistant. This is how function results are fed back into the LLM’s context.
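As a sketch, the transcript after a completed tool call might contain four entries. The message shape and the hypothetical `add_numbers` function are assumptions for illustration:

```python
history_after_tool_call = [
    {"role": "user", "content": "What is 2 plus 3?"},
    # Request 1: the assistant emits a tool call instead of text.
    {"role": "assistant", "content": "",
     "tool_call": {"name": "add_numbers",
                   "arguments": {"first_number": 2, "second_number": 3}}},
    # The function result is recorded under the third role.
    {"role": "function", "name": "add_numbers", "content": "5"},
    # Request 2: the assistant answers in text, using the result.
    {"role": "assistant", "content": "2 plus 3 is 5."},
]

roles = [message["role"] for message in history_after_tool_call]
```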
Try it yourself
Create a function
In Build → Functions, create a function called get_store_hours with no parameters. Return a string like "The store is open Monday–Friday, 9am to 6pm.".

Reference it
In Build → Agent → Behavior, add a reference to @get_store_hours. Confirm it highlights.

Test it
Open Chat and ask “What are your opening hours?” Enable the Tool calls layer in Conversation Diagnosis to confirm:
- The get_store_hours tool call appears with its parameters
- The returned value is shown in the turn timeline
- The agent’s spoken response incorporates the returned value
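For step 1, the function body can be a single return statement using the string from the exercise:

```python
def get_store_hours() -> str:
    # No parameters: the hours are hard-coded for this exercise.
    return "The store is open Monday–Friday, 9am to 6pm."
```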

