Level 2 – Lesson 2 of 8 – This lesson explains what tools are, why they exist, and how the LLM uses them. By the end you will understand the full request-response loop and be able to create and reference a tool correctly.
In Agent Studio, tools are defined as Python functions – “tool” is the UI name and “function” is the code identifier. This lesson uses both interchangeably.

Why tools exist

The Agent tab (personality, role, rules) only gets you so far. Without tools, the agent can’t retrieve user data, execute actions, save state, or integrate with external systems. Prompt engineering also doesn’t scale – putting dozens of user journeys into one rules box gives the LLM too much context and makes it harder to reason about any single scenario. Tools solve both problems: external integration and fine-grained control over what the LLM knows and does at each step.

How LLMs use tools

Before looking at Agent Studio specifically, it helps to understand how LLMs use tools in general – because this pattern is the same across all modern AI systems. An LLM can produce two kinds of output:

Text output

The model speaks to the user using natural language.
Example: “The weather in Paris is usually mild in October.”

Tool call

The model communicates with a system – a function, API, or piece of code – to fetch data or trigger an action.
Example: call: get_weather with {city: "Paris"}
The LLM mediates between the user and the system. It takes a user request, decides whether it needs to call a tool to answer it, calls that tool, receives a result, and then reports the result back to the user as text.
User → LLM → tool call → system

User ← LLM ← result ← system
This is the core loop that powers almost everything beyond basic FAQ responses.

Creating a function in Agent Studio

In Agent Studio, tools are called functions. You write them in Python and they are available for the LLM to call during a conversation. To create a function, go to Build → Functions and click the + button. Every function has:
  • Name – how the function is identified (used by the LLM to decide when to call it)
  • Description – what the function does (also read by the LLM when deciding whether to call it)
  • Parameters – the inputs the function needs, each with a name, description, and type
  • Python code – what the function actually does when called
The LLM reads the function name, description, and parameter descriptions when deciding whether to call the function and what to pass as arguments. Name and describe everything clearly — this directly affects whether the LLM uses your function correctly.

Example: a simple addition function

def add_two_numbers(conv, first_number: float, second_number: float) -> str:
    total = first_number + second_number
    return f"The total is {total}"
Parameters:
  • first_number – The first number the user wants to add (type: number)
  • second_number – The second number the user wants to add (type: number)
Functions called by the LLM must return either a string or a dictionary with specific keys. Returning an integer, list, or any other type causes an error; the LLM sees the error and retries the call, up to three times, before giving up.
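The rule can be expressed as a small check. The wrapper below is purely illustrative – Agent Studio enforces this internally, and the "specific keys" an allowed dictionary may carry are not covered here:

```python
def validate_return(value):
    # Functions called by the LLM must return a str or a dict;
    # anything else is treated as an error.
    if isinstance(value, (str, dict)):
        return value
    raise TypeError(
        f"Function returned {type(value).__name__}; expected str or dict"
    )

validate_return("The total is 5")       # OK
validate_return({"response": "done"})   # OK (allowed keys are platform-defined)
# validate_return(5)                    # would raise TypeError
```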

Making a function visible to the LLM

Creating a function is not enough. The LLM will not call a function it does not know exists. To make a function available to the LLM, you must reference it somewhere – in a topic action, a flow step, or directly in the Behavior field using the @function_name syntax. When you reference a function, Agent Studio highlights it and registers it in the LLM request. The LLM will then see the function’s full definition (name, description, parameters) and can choose to call it.
A common mistake is creating a function and testing the agent, only to find the LLM never calls it. If the function was not triggered, confirm you have referenced it with @function_name in a topic action, a flow step, or the Behavior field. Use the Tool calls layer in Conversation Diagnosis to verify what the agent actually did.

What the LLM actually sees

When you reference a function, it is added to the prompt sent to the LLM. The prompt is structured roughly like this:
Section – What it contains:
  • Base system prompt – intro: your Personality and Role fields concatenated
  • Base system prompt – behavior: everything in the Behavior field
  • Context information: Knowledge content retrieved for this turn (empty if no relevant topic matched)
  • Conversation history: the full transcript so far, alternating assistant / user roles
  • Functions: definitions of any functions that have been referenced
The LLM does not see the Python code inside the function. It only sees the function’s name, description, and parameters. This means your function name, description, and parameter descriptions are all part of the prompt – write them with the same care as any other prompt text. The Greeting is different: it is hard-coded text played at the start of every call and is not generated by the LLM. It does appear in conversation history so the LLM knows how the call opened, but no LLM request is made to produce it.
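For the addition function above, what reaches the LLM is roughly a definition like the one below. The exact wire format is an internal detail; this sketch follows the common JSON-Schema convention used by most LLM APIs, and the description text is invented for illustration:

```python
# What the LLM sees for add_two_numbers: metadata only, never the body.
add_two_numbers_definition = {
    "name": "add_two_numbers",
    "description": "Adds two numbers and returns the total.",  # illustrative
    "parameters": {
        "type": "object",
        "properties": {
            "first_number": {
                "type": "number",
                "description": "The first number the user wants to add",
            },
            "second_number": {
                "type": "number",
                "description": "The second number the user wants to add",
            },
        },
        "required": ["first_number", "second_number"],
    },
    # Note: no Python code here – the function body is never sent.
}
```

Every string in this structure is prompt text, which is why clear names and descriptions matter so much.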

The two-request pattern

When the LLM calls a function, it takes two LLM requests to produce the final response to the user.
1. Request 1: the LLM decides to call a function

The LLM receives the user input, sees the available function definitions, and outputs a tool call rather than text. The response content is empty; the tool call object contains the function name and the parameter values the LLM extracted from the conversation.
2. The function runs

Agent Studio executes the Python function with the provided parameters and gets back a result.
3. Request 2: the LLM reports the result

The function result is inserted into the conversation history under a function role. The LLM sees this result and produces a text response to communicate the result to the user.
Enable the Tool calls toggle in Conversation Diagnosis to see what parameters the agent passed to the function and what was returned. The function call and its result both appear in the turn timeline.
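Sketched as data, the two requests look roughly like this. The message shapes are illustrative, and the `conv` parameter from the earlier example is omitted for brevity:

```python
def add_two_numbers(first_number: float, second_number: float) -> str:
    # Same logic as the Agent Studio example (conv parameter omitted here).
    total = first_number + second_number
    return f"The total is {total}"

history = [
    {"role": "assistant", "content": "Hi, thanks for calling. How can I help?"},
    {"role": "user", "content": "What's 2 plus 3?"},
]

# Request 1: the model returns a tool call; its text content is empty.
tool_call = {"name": "add_two_numbers",
             "args": {"first_number": 2, "second_number": 3}}
history.append({"role": "assistant", "content": "", "tool_call": tool_call})

# The function runs with the parameters the model extracted.
result = add_two_numbers(**tool_call["args"])

# The result enters the history under the function role.
history.append({"role": "function", "content": result})

# Request 2 would now see the full history, including the function
# result, and produce the final text response for the user.
```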

What the conversation history looks like after a tool call

assistant: "Hi, thanks for calling. How can I help?"
user: "What's 2 plus 3?"
assistant: [calls add_two_numbers with first_number=2, second_number=3]
function: "The total is 5"
assistant: "2 plus 3 equals 5."
The function role is the third role alongside user and assistant. This is how function results are fed back into the LLM’s context.

Try it yourself

1. Create a function

In Build → Functions, create a function called get_store_hours with no parameters. Return a string like "The store is open Monday–Friday, 9am to 6pm.".
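A minimal version of that function might look like this, following the conv-first signature from the addition example (the exact hours string is whatever you choose):

```python
def get_store_hours(conv) -> str:
    # No user-supplied parameters; returns a plain string, which is
    # one of the two allowed return types.
    return "The store is open Monday–Friday, 9am to 6pm."
```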
2. Reference it

In Build → Agent → Behavior, add a reference to @get_store_hours. Confirm it highlights.
3. Test it

Open Chat and ask “What are your opening hours?” Enable the Tool calls layer in Conversation Diagnosis to confirm:
  • The get_store_hours tool call appears with its parameters
  • The returned value is shown in the turn timeline
  • The agent’s spoken response incorporates the returned value

Last modified on April 22, 2026