Tool calling

Tool calling, also known as function calling, is a structured way to give an LLM the ability to make requests back to the application that invoked it. You define the tools you want to make available to the model, and the model makes tool requests to your app as necessary to fulfill the prompts you give it.

The use cases of tool calling generally fall into a few themes:

Giving an LLM access to information it wasn’t trained with

  • Frequently changing information, such as a stock price or the current weather.
  • Information specific to your app domain, such as product information or user profiles.

Note the overlap with retrieval augmented generation (RAG), which is also a way to let an LLM integrate factual information into its generations. RAG is a heavier solution that is most suited when you have a large amount of information or the information that’s most relevant to a prompt is ambiguous. On the other hand, if retrieving the information the LLM needs is a simple function call or database lookup, tool calling is more appropriate.

Introducing a degree of determinism into an LLM workflow

  • Performing calculations that the LLM cannot reliably complete itself.
  • Forcing an LLM to generate verbatim text under certain circumstances, such as when responding to a question about an app’s terms of service.

Performing an action when initiated by an LLM

  • Turning lights on and off in an LLM-powered home assistant.
  • Making table reservations in an LLM-powered restaurant agent.

If you want to run the code examples on this page, first complete the steps in the Get started guide. All of the examples assume that you have already set up a project with Genkit dependencies installed.

Genkit is designed to be flexible enough to use potentially any generative AI model service. Its core libraries define the common interface for working with models, and model plugins define the implementation details for working with a specific model and its API.

Use the ai.defineTool() method to create tool definitions:

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

part 'main.g.dart';

@Schema()
abstract class $WeatherInput {
  String get location;
}

void main() async {
  final ai = Genkit(plugins: [googleAI()]);

  final getWeather = ai.defineTool(
    name: 'getWeather',
    description: 'Gets the current weather in a given location',
    inputSchema: WeatherInput.$schema,
    fn: (input, _) async {
      // Simulate an API call.
      return 'The current weather in ${input.location} is 63°F and sunny.';
    },
  );
}

When writing a tool definition, take special care with the wording and descriptiveness of the name and description parameters. They are vital for the LLM to make effective use of the available tools.

Include defined tools in your prompts to generate content.

Using ai.generate():

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'What is the weather in Baltimore?',
  tools: ['getWeather'], // Reference the tool by name.
);
print(response.text);

Genkit automatically handles the tool-calling loop:

  1. Model requests getWeather with location="Baltimore".
  2. Genkit executes getWeather.
  3. Genkit sends result back to model.
  4. Model generates final response.

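The four steps above can be illustrated with a plain-Dart sketch of the loop Genkit runs for you. The "model" here is a hypothetical stand-in that always requests `getWeather` for Baltimore; no Genkit APIs are involved:

```dart
typedef Tool = String Function(String input);

// The tool implementation, standing in for one registered with defineTool.
String getWeather(String location) =>
    'The current weather in $location is 63°F and sunny.';

final tools = <String, Tool>{'getWeather': getWeather};

String runToolLoop(String prompt) {
  // 1. The model inspects the prompt and emits a tool request
  //    instead of a text answer.
  const requestName = 'getWeather';
  const requestInput = 'Baltimore';

  // 2. The framework looks up and executes the matching tool.
  final toolResult = tools[requestName]!(requestInput);

  // 3./4. The result is sent back to the model, which folds it into
  // its final response (simulated here by echoing it).
  return 'Final model response: $toolResult';
}

void main() {
  print(runToolLoop('What is the weather in Baltimore?'));
}
```

In real use, step 1 is a genuine model decision: the model may answer directly, or request any of the tools you listed, possibly several times in sequence.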
If you want full control over this tool-calling loop, for example to apply more complicated logic, set the returnToolRequests parameter to true.

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'What is the weather in Baltimore?',
  tools: ['getWeather'],
  returnToolRequests: true,
);

for (final request in response.toolRequests) {
  if (request.toolRequest.name == 'getWeather') {
    // Handle explicitly...
  }
}