
Flutter Integration

Genkit provides a seamless experience for integrating AI features into your Flutter applications. The genkit CLI tooling natively supports Flutter, making it easy to run your Genkit server and Flutter app together while managing environment variables and API keys automatically.

The easiest way to start the Genkit Dev UI alongside your Flutter app is the start:flutter command:

genkit start:flutter -- -d chrome

This acts as a convenient shortcut for:

genkit start --write-env-file=genkit.env --experimental-reflection-v2 -- flutter run --dart-define-from-file=genkit.env -d chrome

If you want more control, you can run the Genkit tooling and your Flutter app as two separate processes:

# Start the Genkit server in the background
genkit start --write-env-file=genkit.env --experimental-reflection-v2 &
# Run your Flutter app
flutter run --dart-define-from-file=genkit.env -d chrome

This approach generates a genkit.env file containing the server configuration and passes its values to the Flutter build as compile-time defines via --dart-define-from-file.
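Inside the Flutter app, values supplied through --dart-define-from-file become compile-time constants readable with Dart's standard String.fromEnvironment. A minimal sketch follows; the variable name GENKIT_ENV is a placeholder, so inspect the generated genkit.env file for the keys your Genkit CLI version actually writes.

```dart
// Reads a compile-time define injected via --dart-define-from-file=genkit.env.
// 'GENKIT_ENV' is a hypothetical key for illustration only; check the
// generated genkit.env for the real variable names.
const genkitEnv = String.fromEnvironment(
  'GENKIT_ENV',
  defaultValue: 'unset',
);

void main() {
  // The value is baked in at compile time, so it can be used in const
  // contexts and costs nothing at runtime.
  print('Genkit environment: $genkitEnv');
}
```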

There are three primary ways to leverage Genkit within a Flutter application. You can see examples of these patterns in the genkit-dart/testapps/flutter_genai/ test application.

1. Client-side generation. In this approach, you use Genkit directly within the Flutter application to call model APIs (e.g., Gemini) from the client device.

2. Remote models. This pattern keeps Genkit orchestration on the Flutter client but offloads the actual generation to AI models hosted on a secure server.

By defining models remotely on a server (e.g., using a Genkit Shelf backend), the mobile app handles orchestration while your sensitive API keys stay on the backend.

Key Points:

  • The client app does not need your LLM provider API keys.
  • The model's heavy lifting is executed remotely over HTTP.
  • You configure a connection to the server when initializing the remote model in Flutter.

To consume remote models, configure defineRemoteModel in your Flutter application to point to the backend endpoints instead of using local model plugins.
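As a rough sketch of that client-side setup: the snippet below uses the defineRemoteModel name from this guide, but the import path and parameter names are assumptions and may differ between genkit-dart versions, so treat this as illustrative only and consult the Models guide for the real signature.

```dart
import 'package:genkit/genkit.dart'; // assumed import path

// Sketch only: points the client at a model exposed by your backend,
// so no provider API keys ship inside the app. The 'name' and 'url'
// parameters are assumptions -- verify against the Models guide.
final remoteModel = defineRemoteModel(
  name: 'my-backend/gemini',
  url: 'https://api.example.com/models/gemini',
);
```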

See the Consuming remote models section in the Models guide for details on pointing Genkit at these endpoints, and the Serving Models section in the Shelf guide for how to expose the model securely on your server.

3. Remote flows. This is the most secure and robust approach: the Flutter client does minimal work, and the main AI logic (prompts, orchestration, and model invocation) is unified into a remotely hosted Flow.

Your Flutter app simply calls the server-hosted Flow as an HTTP endpoint. See the Shelf Integration documentation for setting up Genkit flows as HTTP endpoints using shelfHandler or startFlowServer.
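Because the flow is just an HTTP endpoint, you can call it with the standard package:http client. A minimal sketch follows; the endpoint path, port, and the {"data": ...} request envelope with a {"result": ...} response are assumptions based on Genkit's usual flow wire format, so verify them against your server.

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// Calls a server-hosted flow as a plain HTTP endpoint.
// Assumptions: the flow is served at /menuSuggestionFlow on port 3400 and
// expects the input wrapped in a {"data": ...} envelope, returning
// {"result": ...} -- check your server's actual contract.
Future<String> callFlow(String theme) async {
  final response = await http.post(
    Uri.parse('http://localhost:3400/menuSuggestionFlow'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'data': theme}),
  );
  if (response.statusCode != 200) {
    throw Exception('Flow call failed: ${response.statusCode}');
  }
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  return body['result'] as String;
}
```

Keeping this call behind a small service class in your app makes it easy to swap localhost for your production URL at build time, for example via another --dart-define value.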