Defining AI workflows

AI workflows typically require more than just a model call. They need pre- and post-processing steps like retrieving context, managing session history, reformatting inputs, validating outputs, or combining multiple model responses.

A flow is a special Genkit function that wraps your AI logic to provide:

  • Type-safe inputs and outputs: Define schemas using Schemantic for static and runtime validation
  • Streaming support: Stream partial responses or custom data
  • Developer UI integration: Test and debug flows with visual traces
  • Easy deployment: Deploy as HTTP endpoints to any platform

Flows are lightweight. They’re written like regular functions with minimal abstraction.

In its simplest form, a flow just wraps a function. The following example wraps a function that makes a model generation request:

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

part 'main.g.dart';

@Schema()
abstract class $MenuSuggestionInput {
  String get theme;
}

@Schema()
abstract class $MenuSuggestionOutput {
  String get menuItem;
}

void main() {
  final ai = Genkit(plugins: [googleAI()]);

  final menuSuggestionFlow = ai.defineFlow(
    name: 'menuSuggestionFlow',
    inputSchema: MenuSuggestionInput.$schema,
    outputSchema: MenuSuggestionOutput.$schema,
    fn: (input, _) async {
      final response = await ai.generate(
        model: googleAI.gemini('gemini-2.5-flash'),
        prompt: 'Invent a menu item for a ${input.theme} themed restaurant.',
      );
      return MenuSuggestionOutput(menuItem: response.text);
    },
  );
}

Just by wrapping your generate calls like this, you add functionality: you can run the flow from the Genkit CLI and the developer UI, and wrapping is a prerequisite for several of Genkit's features, including deployment and observability (later sections discuss these topics).

One of the most important advantages Genkit flows have over directly calling a model API is type safety of both inputs and outputs, which you get by defining schemas for your flows.

You can define schemas using the @Schema annotation from the schemantic package. This generates Dart classes with JSON serialization and schema definitions.
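To make that concrete, here is a hand-written approximation of the kind of companion class the code generator produces for $MenuSuggestionInput; the exact names and members are illustrative, not schemantic's actual output:

```dart
import 'dart:convert';

// Hand-written sketch of the generated class for `$MenuSuggestionInput`.
// Real generated code also carries a `$schema` definition; details here
// are approximations.
class MenuSuggestionInput {
  final String theme;

  MenuSuggestionInput({required this.theme});

  // JSON (de)serialization, used for runtime validation and transport.
  factory MenuSuggestionInput.fromJson(Map<String, dynamic> json) =>
      MenuSuggestionInput(theme: json['theme'] as String);

  Map<String, dynamic> toJson() => {'theme': theme};
}

void main() {
  final input = MenuSuggestionInput.fromJson(
    jsonDecode('{"theme": "bistro"}') as Map<String, dynamic>,
  );
  print(jsonEncode(input.toJson())); // {"theme":"bistro"}
}
```

Because the generated class is a plain Dart type, the Dart analyzer catches malformed inputs at compile time, while the JSON schema covers validation at runtime boundaries.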

  • Better developer experience: Schemantic-based schemas provide a better experience in the Developer UI by giving you labeled input fields.
  • Future-proof API design: Schemantic-based schemas allow for easy extensibility in the future.

All examples in this documentation use Schemantic-based schemas to follow these best practices.

Here’s a refinement of the last example, which defines a flow that takes a string as input and outputs an object:

@Schema()
abstract class $MenuSuggestionInput {
  String get theme;
}

@Schema()
abstract class $MenuItem {
  String get dishname;
  String get description;
}

final menuSuggestionFlow = ai.defineFlow(
  name: 'menuSuggestionFlow',
  inputSchema: MenuSuggestionInput.$schema,
  outputSchema: MenuItem.$schema,
  fn: (input, _) async {
    final response = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Invent a menu item for a ${input.theme} themed restaurant.',
      outputSchema: MenuItem.$schema,
    );
    if (response.output == null) {
      throw Exception("Response doesn't satisfy schema.");
    }
    return response.output!;
  },
);

Note that the schema of a flow does not necessarily have to line up with the schema of the model generation calls within the flow (in fact, a flow might not even contain model calls). Here’s a variation of the example that uses the structured output to format a simple string, which the flow returns.

@Schema()
abstract class $MenuSuggestionInput {
  String get theme;
}

@Schema()
abstract class $MenuItem {
  String get dishname;
  String get description;
}

@Schema()
abstract class $FormattedMenuOutput {
  String get formattedMenuItem;
}

final menuSuggestionFlow = ai.defineFlow(
  name: 'menuSuggestionFlow',
  inputSchema: MenuSuggestionInput.$schema,
  outputSchema: FormattedMenuOutput.$schema,
  fn: (input, _) async {
    final response = await ai.generate(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Invent a menu item for a ${input.theme} themed restaurant.',
      outputSchema: MenuItem.$schema,
    );
    if (response.output == null) {
      throw Exception("Response doesn't satisfy schema.");
    }
    final output = response.output!;
    return FormattedMenuOutput(
      formattedMenuItem: '**${output.dishname}**: ${output.description}',
    );
  },
);

Once you’ve defined a flow, you can call it from your code:

final response = await menuSuggestionFlow(
  MenuSuggestionInput(theme: 'bistro'),
);

The argument to the flow must conform to the input schema.

If you defined an output schema, the flow response will conform to it. For example, if you set the output schema to MenuItem.$schema, the flow output will contain its properties:

final output = await menuSuggestionFlow(
  MenuSuggestionInput(theme: 'bistro'),
);
print(output.dishname);
print(output.description);

Flows support streaming using an interface similar to the model generation streaming interface. Streaming is useful when your flow generates a large amount of output, because you can present the output to the user as it’s being generated, which improves the perceived responsiveness of your app. As a familiar example, chat-based LLM interfaces often stream their responses to the user as they are generated.
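Under the hood this is ordinary Dart streaming. As a minimal plain-Dart sketch of the pattern (no Genkit APIs; the chunk contents are invented), a streaming flow surfaces each chunk as it arrives while still assembling the complete response:

```dart
import 'dart:async';

// Stand-in for a model emitting its response in chunks (invented data).
Stream<String> generateChunks() async* {
  for (final chunk in ['Invent ', 'a menu ', 'item.']) {
    yield chunk;
  }
}

// Forwards each partial chunk to the caller, then returns the full text.
Future<String> runStreamingFlow(void Function(String chunk) sendChunk) async {
  final buffer = StringBuffer();
  await for (final chunk in generateChunks()) {
    sendChunk(chunk); // surface partial output immediately
    buffer.write(chunk);
  }
  return buffer.toString(); // complete output at the end
}

void main() async {
  final full = await runStreamingFlow((chunk) => print('chunk: $chunk'));
  print(full); // Invent a menu item.
}
```

Genkit's streaming flows follow the same shape: sendChunk pushes partial results to the client, and the function's return value is the flow's final output.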

Here’s an example of a flow that supports streaming:

@Schema()
abstract class $MenuSuggestionInput {
  String get theme;
}

@Schema()
abstract class $MenuOutput {
  String get theme;
  String get menuItem;
}

final menuSuggestionFlow = ai.defineFlow(
  name: 'menuSuggestionFlow',
  inputSchema: MenuSuggestionInput.$schema,
  outputSchema: MenuOutput.$schema,
  streamSchema: .string(),
  fn: (input, ctx) async {
    final stream = ai.generateStream(
      model: googleAI.gemini('gemini-2.5-flash'),
      prompt: 'Invent a menu item for a ${input.theme} themed restaurant.',
    );
    await for (final chunk in stream) {
      if (ctx.streamingRequested) {
        ctx.sendChunk(chunk.text);
      }
    }
    final response = await stream.onResult;
    return MenuOutput(
      theme: input.theme,
      menuItem: response.text,
    );
  },
);

Streaming flows are also callable: call the flow's stream method, which immediately returns a specialized response object (FlowStreamResponse) rather than a future. This object contains a stream property, which you can iterate over.

final response = menuSuggestionFlow.stream(
  MenuSuggestionInput(theme: 'bistro'),
);

await for (final chunk in response.stream) {
  print('chunk: $chunk');
}

You can also get the complete output of the flow. The final output is available as a future on the response object.

final output = await response.output;

You can run flows from the command line using the Genkit CLI tool:

genkit flow:run menuSuggestionFlow '{"theme": "French"}'

For streaming flows, you can print the streaming output to the console by adding the -s flag:

genkit flow:run menuSuggestionFlow '{"theme": "French"}' -s

Running a flow from the command line is useful for testing a flow, or for running flows that perform tasks needed on an ad hoc basis—for example, to run a flow that ingests a document into your vector database.

One of the advantages of encapsulating AI logic within a flow is that you can test and debug the flow independently from your app using the Genkit developer UI.

To start the developer UI, run the genkit start command from your project directory.

From the Run tab of the developer UI, you can run any of the flows defined in your project.

You can deploy your flows directly as web API endpoints, ready for you to call from your app clients. Deployment is discussed in detail on several other pages, but this section gives a brief overview of your deployment options.

Once your flow is deployed, you can call it with a POST request:

curl -X POST "http://localhost:3400/menuSuggestionFlow" \
-H "Content-Type: application/json" -d '{"data": {"theme": "banana"}}'

For streaming responses, you can add the Accept: text/event-stream header:

curl -X POST "http://localhost:3400/menuSuggestionFlow" \
-H "Content-Type: application/json" \
-H "Accept: text/event-stream" \
-d '{"data": {"theme": "banana"}}'

You can also use the Genkit web client library to call flows from web applications. See Accessing flows from the client for detailed examples of using the runFlow() and streamFlow() functions.

For detailed deployment instructions and platform-specific guides, see: