# Flutter Integration
Genkit provides a seamless experience for integrating AI features into your Flutter applications. The genkit CLI tooling natively supports Flutter, making it easy to run your Genkit server and Flutter app together while managing environment variables and API keys automatically.
## Running your Flutter app with Genkit

The easiest way to start the Genkit Dev UI alongside your Flutter app is the `start:flutter` command:
```sh
genkit start:flutter -- -d chrome
```

This acts as a convenient shortcut for:
```sh
genkit start --write-env-file=genkit.env --experimental-reflection-v2 -- flutter run --dart-define-from-file=genkit.env -d chrome
```

For more control, you can run the Genkit tooling and your Flutter app as two separate processes:
```sh
# Start the Genkit server in the background
genkit start --write-env-file=genkit.env --experimental-reflection-v2 &

# Run your Flutter app
flutter run --dart-define-from-file=genkit.env -d chrome
```

This approach generates a genkit.env file with the server configuration and passes it to the Flutter runtime as compile-time defines via `--dart-define-from-file`.
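Inside your app, values supplied through `--dart-define-from-file` are read with Dart's `String.fromEnvironment`. The variable name below is hypothetical; check the generated genkit.env file for the actual keys Genkit writes:

```dart
// Sketch: reading a compile-time define passed via --dart-define-from-file.
// 'GENKIT_SERVER_URL' is an assumed key; inspect genkit.env for the real ones.
const genkitServerUrl = String.fromEnvironment(
  'GENKIT_SERVER_URL',
  defaultValue: 'http://localhost:3100',
);

void main() {
  // Use the value when configuring your Genkit client or HTTP calls.
  print('Genkit server: $genkitServerUrl');
}
```

Because `String.fromEnvironment` is resolved at compile time, the constant must be declared `const`; a runtime lookup will not see these values.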
## Use cases for Flutter + Genkit

There are three primary ways to leverage Genkit within a Flutter application. You can see examples of these patterns in the genkit-dart/testapps/flutter_genai/ test application.
### 1. Fully client-side

In this approach, you use Genkit directly within the Flutter application to call model APIs (e.g., Gemini) from the client device. Note that this means the model provider credentials must be available on the device itself.
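As a rough illustration of the client-side pattern, the sketch below assumes the genkit Dart package exposes a `generate`-style API similar to Genkit's JavaScript SDK; the import path, `Genkit`, `googleAI`, and parameter names are all assumptions, so consult the genkit-dart package documentation for the real surface:

```dart
// Hypothetical sketch of fully client-side generation.
// All identifiers below (Genkit, googleAI, generate) are assumptions
// modeled on Genkit's JS API, not a confirmed Dart API.
import 'package:genkit/genkit.dart'; // hypothetical import

Future<String> summarize(String text) async {
  final ai = Genkit(plugins: [googleAI()]); // API key resolved on-device
  final response = await ai.generate(
    model: 'googleai/gemini-2.5-flash',
    prompt: 'Summarize in one sentence: $text',
  );
  return response.text;
}
```

This keeps everything on the device, which is convenient for prototyping but ships your provider API key with the app.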
### 2. Remote models (hybrid approach)

In this pattern, the Flutter client orchestrates Genkit locally but offloads the actual generation to AI models hosted on a secure server.

By defining models remotely on a server (e.g., using a Genkit Shelf backend), the mobile app handles orchestration while sensitive API keys remain securely on your backend.
Key points:

- The client app does not need your LLM provider API keys.
- The model's heavy lifting is executed remotely over HTTP.
- You configure the connection to the server when initializing the remote model in Flutter.
To consume remote models, use `defineRemoteModel` in your Flutter application to point at the backend endpoints instead of using local model plugins.
See the Consuming remote models section in the Models guide for details on pointing Genkit at these endpoints, and the Serving Models section in the Shelf guide for how to expose the model securely on your server.
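A minimal sketch of what wiring up a remote model might look like. `defineRemoteModel` comes from the Models guide referenced above, but the parameter names, the surrounding `ai` object, and the URL are assumptions for illustration:

```dart
// Hypothetical sketch of pointing Genkit at a server-hosted model.
// Parameter names (name, url) and the ai object are assumptions;
// see the "Consuming remote models" section of the Models guide.
final remoteModel = ai.defineRemoteModel(
  name: 'my-backend/gemini-2.5-flash',           // hypothetical model name
  url: 'https://my-backend.example.com/model',   // your Shelf endpoint
);

// The remote model is then used like any local model: prompts and
// orchestration run on the device, generation happens on the server.
```

The client never sees the provider API key; the server endpoint holds it and performs the actual model call.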
### 3. Move logic to a flow on the server

This is the most secure and robust approach: the Flutter client does minimal work, while the main AI logic (prompts, orchestration, and model invocation) lives in a remotely hosted flow.
Your Flutter app simply calls the server-hosted flow as an HTTP endpoint. See the Shelf Integration documentation for setting up Genkit flows as HTTP endpoints using `shelfHandler` or `startFlowServer`.
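Since the flow is just an HTTP endpoint, the client can be as thin as a single `package:http` call. The URL, the flow name, and the `{"data": ...}` request envelope below are assumptions (check the Shelf guide for the exact wire format your flow server expects):

```dart
// Sketch: calling a server-hosted flow as a plain HTTP endpoint.
// The URL is hypothetical, and the {"data": ...} / {"result": ...}
// envelope is an assumption about the flow server's wire format.
import 'dart:convert';

import 'package:http/http.dart' as http;

Future<String> callSummarizeFlow(String text) async {
  final response = await http.post(
    Uri.parse('https://my-backend.example.com/summarizeFlow'), // hypothetical
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'data': text}),
  );
  if (response.statusCode != 200) {
    throw Exception('Flow call failed: ${response.statusCode}');
  }
  final json = jsonDecode(response.body) as Map<String, dynamic>;
  return json['result'] as String;
}
```

With this design the Flutter app contains no prompts, no model configuration, and no provider keys; changing the AI logic only requires redeploying the server.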