Implementing agentic patterns
Building powerful AI systems involves more than just calling a model; it requires structuring interactions in a way that balances reliability with flexibility. This is the core idea behind the agentic scale.
At one end of the scale, you have Workflows: structured, predictable sequences of tasks. They are highly reliable but less flexible. At the other end, you have Agents: autonomous systems that can reason, plan, and use tools to handle complex, unpredictable tasks. They are highly flexible but can be less reliable.
The key to building effective AI is to find the right point on this scale for your use case, often creating a hybrid that combines the best of both worlds. This guide explores key patterns along the agentic scale and shows you how to implement them using Genkit’s core primitives like flows, tools, and interrupts.
All of the code samples in this guide can be found in the agentic-patterns sample on GitHub.
Patterns on the agentic scale
We will cover the following patterns, moving from more structured workflows to more autonomous agents:
- Sequential Processing: The simplest workflow, decomposing a task into a fixed sequence of LLM calls.
- Conditional Routing: Adding branching logic to a workflow based on an LLM’s output.
- Parallel Execution: Running multiple LLM calls concurrently for speed or to gather diverse perspectives.
- Tool Calling: Introducing flexibility by allowing an LLM to call external functions to retrieve information or perform actions.
- Iterative Refinement: Creating a feedback loop where an LLM critiques and improves its own work.
- Autonomous Operation: Building agents that can independently plan and execute tasks to achieve a goal.
- Stateful Interactions: Turning any workflow into a stateful, conversational experience by managing history.
Workflow: Sequential processing
This is the simplest workflow pattern, where a task is broken down into a fixed sequence of steps. Each step processes the output of the previous one. Genkit flows are the ideal tool for orchestrating these sequences.
A key advantage of this pattern is the ability to use different models for different steps. For example, you could use a faster, cheaper model to generate an initial idea, and then a more powerful model to elaborate on it. You can also create multi-modal scenarios, like using one model to generate a text prompt for an image generation model.
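As a minimal sketch of that model split (the pairing and prompts here are illustrative; the full example below uses gemini-2.5-flash for both steps):

```dart
// Hypothetical model pairing: a lighter model drafts, a stronger one elaborates.
final idea = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash-lite'), // fast, cheap ideation
  prompt: 'Suggest one unusual premise for a short story about robots.',
);

final opening = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'), // stronger model elaborates
  prompt: 'Write an opening line for this premise: ${idea.text}',
);
```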
In this example, the flow first generates a story idea and then uses that idea to write the beginning of the story.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $StoryInput {
  @Field(defaultValue: 'dinosaurs')
  String get topic;
}

@Schema()
abstract class $StoryIdea {
  /// A short, compelling story concept
  String get idea;
}

ai.defineFlow(
  name: 'storyWriterFlow',
  inputSchema: StoryInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    // Step 1: Generate a creative story idea
    final ideaResponse = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Generate a unique story idea about a ${input.topic}.',
      outputSchema: StoryIdea.$schema,
    );

    final storyIdea = ideaResponse.output?.idea;
    if (storyIdea == null) {
      throw Exception('Failed to generate a story idea.');
    }

    // Step 2: Use the idea to write the beginning of the story
    final storyResponse = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Write the opening paragraph for a story based on this idea: $storyIdea',
    );
    return storyResponse.text;
  },
);
```

The next flow uses a text model to generate a detailed prompt for an image generation model, creating a piece of art based on a simple concept.
```dart
@Schema()
abstract class $ImageGeneratorInput {
  @Field(defaultValue: 'a futuristic city')
  String get concept;
}

ai.defineFlow(
  name: 'imageGeneratorFlow',
  inputSchema: ImageGeneratorInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    // Step 1: Use a text model to generate a rich image prompt
    final promptResponse = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Create a detailed, artistic prompt for an image generation model. The concept is: "${input.concept}".',
    );

    final imagePrompt = promptResponse.text;

    // Step 2: Use the generated prompt to create an image
    final imageResponse = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash-image'),
      prompt: imagePrompt,
      config: {
        'responseModalities': ['image'],
      },
    );

    final imageUrl = imageResponse.media?.url;
    if (imageUrl == null) {
      throw Exception('Failed to generate an image.');
    }
    return imageUrl;
  },
);
```

Workflow: Conditional routing
This pattern adds branching logic to a workflow. An initial LLM call classifies the input, and the flow then routes the task to a specialized downstream path.
This is a great place to optimize for cost and latency. The initial classification step can often be handled by a smaller, faster model (like gemini-2.5-flash or even gemini-2.5-flash-lite), while the more complex downstream tasks can be routed to more powerful models.
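As a hedged sketch, the classification step from the flow below could be pointed at the lighter model with a one-line change (the rest of the call is unchanged):

```dart
// Hypothetical variant: route classification to the cheaper, faster model.
final intentResponse = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash-lite'), // cheap classifier
  prompt: 'Classify the user\'s query as either a \'question\' or a \'creative\' request. Query: "${input.query}"',
  outputSchema: IntentClassification.$schema,
);
```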
This flow determines if a user’s request is a simple question or a request for creative writing and handles it accordingly.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $RouterInput {
  @Field(defaultValue: 'How do I bake a cake?')
  String get query;
}

@Schema()
abstract class $IntentClassification {
  String get intent;
}

ai.defineFlow(
  name: 'routerFlow',
  inputSchema: RouterInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    // Step 1: Classify the user's intent
    final intentResponse = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Classify the user\'s query as either a \'question\' or a \'creative\' request. Query: "${input.query}"',
      outputSchema: IntentClassification.$schema,
    );

    final intent = intentResponse.output?.intent;

    // Step 2: Route based on the intent
    if (intent == 'question') {
      final answerResponse = await ai.generate(
        model: googleAI.gemini('gemini-2.5-flash'),
        prompt: 'Answer the following question: ${input.query}',
      );
      return answerResponse.text;
    } else if (intent == 'creative') {
      final creativeResponse = await ai.generate(
        model: googleAI.gemini('gemini-2.5-flash'),
        prompt: 'Write a short poem about: ${input.query}',
      );
      return creativeResponse.text;
    } else {
      return "Sorry, I couldn't determine how to handle your request.";
    }
  },
);
```

Workflow: Parallel execution
This pattern executes multiple LLM calls simultaneously, either to perform independent sub-tasks faster (Sectioning) or to generate multiple diverse outputs for comparison (Voting). Future.wait() within a Genkit flow is perfect for this.
This example uses sectioning to generate a product name and a marketing tagline at the same time.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $ProductInput {
  @Field(defaultValue: 'a solar-powered coffee maker')
  String get product;
}

@Schema()
abstract class $MarketingCopy {
  String get name;
  String get tagline;
}

ai.defineFlow(
  name: 'marketingCopyFlow',
  inputSchema: ProductInput.$schema,
  outputSchema: MarketingCopy.$schema,
  fn: (input, _) async {
    // Task 1: Generate a creative name
    final nameFuture = ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Generate a creative name for a new product: ${input.product}.',
    );

    // Task 2: Generate a catchy tagline
    final taglineFuture = ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Generate a catchy tagline for a new product: ${input.product}.',
    );

    final results = await Future.wait([nameFuture, taglineFuture]);

    return MarketingCopy(
      name: results[0].text,
      tagline: results[1].text,
    );
  },
);
```
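The sample does not include a voting example; a minimal sketch, assuming the same flow shape and `input.product` field as above (the prompts are illustrative), could fan out one prompt and let a final call pick a winner:

```dart
// Hypothetical voting sketch: generate three candidates in parallel,
// then ask the model to pick the strongest one.
final candidates = await Future.wait(List.generate(
  3,
  (_) => ai.generate(
    model: googleAI.gemini('gemini-2.5-flash'),
    prompt: 'Write a one-sentence slogan for ${input.product}.',
  ),
));

final vote = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Pick the best slogan below and return it verbatim:\n'
      '${candidates.map((c) => '- ${c.text}').join('\n')}',
);
return vote.text;
```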
Hybrid: Tool calling

This is where workflows start becoming more agentic. Instead of following a fixed path, the LLM can dynamically decide to call external functions (tools) to retrieve information or perform actions. This allows the workflow to interact with the outside world.
This flow provides an LLM with a getWeather tool. The LLM can then decide whether to call this tool based on the user’s prompt.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $ToolCallingInput {
  @Field(defaultValue: 'What is the weather in New York?')
  String get prompt;
}

@Schema()
abstract class $ToolCallingWeatherInput {
  String get location;
}

// Define a tool that can be called by the LLM
final getWeather = ai.defineTool(
  name: 'getWeather',
  description: 'Get the current weather in a given location.',
  inputSchema: ToolCallingWeatherInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    // In a real app, you would call a weather API here.
    return 'The weather in ${input.location} is 75°F and sunny.';
  },
);

ai.defineFlow(
  name: 'toolCallingFlow',
  inputSchema: ToolCallingInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    final response = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: input.prompt,
      toolNames: [getWeather.name],
    );

    return response.text;
  },
);
```

A more advanced form of tool use is Agentic RAG (Retrieval-Augmented Generation). Here, the agent uses a retrieval tool to fetch relevant documents (in this simplified sample, a naive keyword search stands in for a vector store) and uses them to answer a question.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $AgenticRagRequest {
  @Field(defaultValue: 'What kind of burgers do you have?')
  String get question;
}

@Schema()
abstract class $MenuRagToolRequest {
  String get query;
}

// 1. Retrieval tool (naive substring search)
// Note: menuItems (a list of menu entry strings) is defined in the full
// sample on GitHub.
final menuRagTool = ai.defineTool(
  name: 'menuRagTool',
  description: 'Use to retrieve information from the Genkit Grub Pub menu.',
  inputSchema: MenuRagToolRequest.$schema,
  outputSchema: .list(DocumentData.$schema),
  fn: (input, _) async {
    // Crude stemming: strip plural 's' and '-ing' endings from query words.
    final queryWords = input.query
        .toLowerCase()
        .split(RegExp(r'\s+'))
        .where((w) => w.isNotEmpty)
        .map((w) {
      if (w.endsWith('s') && w.length > 3) return w.substring(0, w.length - 1);
      if (w.endsWith('ing') && w.length > 5) return w.substring(0, w.length - 3);
      return w;
    }).toList();

    if (queryWords.isEmpty) return [];

    final docs = menuItems.where((item) {
      final lowerItem = item.toLowerCase();
      // Return true if any of the query word stems are found in the item.
      return queryWords.any((word) => lowerItem.contains(word));
    }).map((item) => DocumentData(content: [TextPart(text: item)])).toList();

    return docs;
  },
);

// 2. Agentic RAG flow
ai.defineFlow(
  name: 'agenticRagFlow',
  inputSchema: AgenticRagRequest.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    final response = await ai.generate(
      messages: [
        Message(
          role: Role.system,
          content: [
            TextPart(
              text: 'You are a helpful AI assistant that can answer questions about the food available on the menu at Genkit Grub Pub. '
                  'Use the provided tool to answer questions. '
                  'If you don\'t know, do not make up an answer. '
                  'Do not add or change items on the menu.',
            ),
          ],
        ),
      ],
      prompt: input.question,
      toolNames: [menuRagTool.name],
    );
    return response.text;
  },
);
```

Hybrid: Iterative refinement
This pattern creates a feedback loop to improve output quality. An “optimizer” LLM generates content, and an “evaluator” LLM provides critiques. The process repeats until the output meets a desired standard, moving further toward agent-like behavior.
This flow writes a short blog post, then repeatedly evaluates and refines it until the evaluator is satisfied.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $IterativeRefinementInput {
  @Field(defaultValue: 'the benefits of agentic AI')
  String get topic;
}

@Schema()
abstract class $Evaluation {
  String get critique;
  bool get satisfied;
}

ai.defineFlow(
  name: 'iterativeRefinementFlow',
  inputSchema: IterativeRefinementInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    var content = '';
    var attempts = 0;

    // Step 1: Generate the initial draft
    final draftResponse = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Write a short, single-paragraph blog post about: ${input.topic}.',
    );
    content = draftResponse.text;

    // Step 2: Iteratively refine the content
    while (attempts < 3) {
      attempts++;

      // The "Evaluator" provides feedback
      final evaluationResponse = await ai.generate(
        model: googleAI.gemini('gemini-2.5-flash'),
        prompt: 'Critique the following blog post. Is it clear, concise, and engaging? Provide specific feedback for improvement. Post: "$content"',
        outputSchema: Evaluation.$schema,
      );

      final evaluation = evaluationResponse.output;
      if (evaluation == null) {
        throw Exception('Failed to evaluate content.');
      }

      if (evaluation.satisfied) {
        break; // Exit loop if content is good enough
      }

      // The "Optimizer" refines the content based on feedback
      final optimizationResponse = await ai.generate(
        model: googleAI.gemini('gemini-2.5-flash'),
        prompt: 'Revise the following blog post based on the feedback provided.\nPost: "$content"\nFeedback: "${evaluation.critique}"',
      );
      content = optimizationResponse.text;
    }

    return content;
  },
);
```

Agent: Autonomous operation
At the far end of the scale, an autonomous agent can independently plan and execute a series of steps to achieve a goal, using a set of tools. Genkit’s tool-calling mechanism, combined with interrupts for human-in-the-loop scenarios, provides a robust foundation for building these systems.
This example shows a simple research agent that can search the web and ask for clarification. It will continue to execute until it believes the task is complete or it reaches its turn limit.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $AutonomousOperationInput {
  @Field(defaultValue: 'Research the current state of Genkit Dart support.')
  String get goal;
}

@Schema()
abstract class $AgentSearchInput {
  String get query;
}

@Schema()
abstract class $AgentAskUserInput {
  String get question;
}

// A tool for the agent to search the web
final webSearch = ai.defineTool(
  name: 'webSearch',
  description: 'Search the web for information on a given topic.',
  inputSchema: AgentSearchInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    // In a real app, you would implement a web search API call here.
    return 'You found search results for: ${input.query}';
  },
);

// A tool for the agent to ask the user a question
final askUser = ai.defineTool(
  name: 'askUser',
  description: 'Ask the user a clarifying question.',
  inputSchema: AgentAskUserInput.$schema,
  outputSchema: .string(),
  fn: (input, context) async {
    // This tool interrupts the flow to ask the user a question.
    context.interrupt(input.question);
    return ''; // Will not be reached after interrupt
  },
);

ai.defineFlow(
  name: 'researchAgent',
  inputSchema: AutonomousOperationInput.$schema,
  outputSchema: .string(),
  fn: (input, _) async {
    var response = await ai.generate(
      messages: [
        Message(
          role: Role.system,
          content: [
            TextPart(
              text: 'You are a research agent. Your goal is to help the user with their research goal. '
                  'Use the provided tools to search the web and ask the user for more information if needed. '
                  'Plan your steps and execute them autonomously.',
            ),
          ],
        ),
      ],
      prompt: input.goal,
      toolNames: [webSearch.name, askUser.name],
    );

    // Handle potential interrupts (human-in-the-loop)
    while (response.finishReason == FinishReason.interrupted) {
      final interrupts = response.interrupts;
      if (interrupts.isEmpty) {
        break;
      }

      final resumeResponses = <InterruptResponse>[];
      for (final interrupt in interrupts) {
        if (interrupt.toolRequest.name == 'askUser') {
          final question = interrupt.metadata?['interrupt'] as String?;
          // In a real app, you'd prompt the user here. For this sample:
          final simulatedAnswer = 'The user answered: "Sample answer for \'$question\'"';
          resumeResponses.add(InterruptResponse(interrupt.toolRequestPart!, simulatedAnswer));
        }
      }

      response = await ai.generate(
        messages: response.messages,
        toolNames: [webSearch.name, askUser.name],
        interruptRespond: resumeResponses,
      );
    }

    return response.text;
  },
);
```

Bonus: Stateful interactions
Any of the patterns above can be turned into a stateful, conversational interaction by managing conversation history. This allows the agent or workflow to remember previous turns in the conversation and maintain context.
The key is to:
- Load the history for the current session.
- Append the new user message to the history.
- Call ai.generate() with the full message history. This is where you can plug in any of the other patterns (like tool calling or routing) to make your conversational agent more powerful; a sketch follows the example below.
- Save the updated history (including the model’s response) for the next turn.
This example shows a simple chat flow that maintains state.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

@Schema()
abstract class $ChatInput {
  @Field(defaultValue: 'session123')
  String get sessionId;
  @Field(defaultValue: 'Hello!')
  String get message;
}

void defineStatefulInteractionFlows(Genkit ai) {
  // In-memory session store (simulation)
  final Map<String, List<Message>> sessionHistory = {};

  ai.defineFlow(
    name: 'statefulChatFlow',
    inputSchema: ChatInput.$schema,
    outputSchema: .string(),
    fn: (input, _) async {
      final history = sessionHistory[input.sessionId] ?? [];

      final response = await ai.generate(
        model: googleAI.gemini('gemini-2.5-flash'),
        messages: [
          Message(
            role: Role.system,
            content: [TextPart(text: 'You are a helpful and friendly AI assistant.')],
          ),
          ...history,
        ],
        prompt: input.message,
      );

      // Simple update of history (the SDK also handles history in GenerateResponse)
      sessionHistory[input.sessionId] = response.messages;

      return response.text;
    },
  );
}
```
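To plug tool calling into this conversational loop, a hedged sketch of the generate call (assuming the getWeather tool from the tool-calling section is defined in the same scope) would only need to pass toolNames alongside the history:

```dart
// Hypothetical variant of the call inside statefulChatFlow: same history
// handling, but the model may now invoke the getWeather tool mid-conversation.
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  messages: [
    Message(
      role: Role.system,
      content: [TextPart(text: 'You are a helpful and friendly AI assistant.')],
    ),
    ...history,
  ],
  prompt: input.message,
  toolNames: [getWeather.name],
);
```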