# Google Generative AI plugin
The genkit_google_genai package provides the GoogleAI plugin for accessing Google’s generative AI models via the Google Gemini API.
## Installation

```sh
dart pub add genkit_google_genai
```

## Configuration

To use the Google Gemini API, you need an API key.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() {
  final ai = Genkit(
    plugins: [
      googleAI(apiKey: 'YOUR_API_KEY'), // Optional if GEMINI_API_KEY env var is set
    ],
  );
}
```

## Authentication

The plugin requires a Gemini API key, which you can get from Google AI Studio. You can provide it in three ways:
- **Environment variable:** set `GEMINI_API_KEY`.
- **Plugin configuration:** pass `apiKey` when initializing the plugin (shown above).
- **Per-request:** override the API key for a specific request in the config:

```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Your prompt here',
  config: GeminiOptions(
    apiKey: 'different-api-key', // Use a different API key for this request
  ),
);
```

## Language Models

You can create models that call the Google Generative AI API. The models support all standard Genkit features, including tool calls, streaming, and multimodal input.
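As an illustration of the streaming support mentioned above, here is roughly what a token-by-token call could look like. Note that `generateStream` and the chunk field names below are assumptions modeled on Genkit's JavaScript API, not confirmed Dart signatures; check the `genkit` package documentation for the exact names.

```dart
import 'dart:io';

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() async {
  final ai = Genkit(plugins: [googleAI()]);

  // Hypothetical streaming call, modeled on Genkit's JS `generateStream`.
  final result = ai.generateStream(
    model: googleAI.gemini('gemini-2.5-flash'),
    prompt: 'Tell a short story about a robot learning to paint.',
  );

  // Print each chunk of text as it arrives instead of waiting
  // for the full response.
  await for (final chunk in result.stream) {
    stdout.write(chunk.text);
  }
}
```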
### Available Models

**Gemini 3 Series** - Latest experimental models with state-of-the-art reasoning:

- `gemini-3-pro-preview`
- `gemini-3-flash-preview`
- `gemini-3-pro-image-preview`

**Gemini 2.5 Series** - Latest stable models:

- `gemini-2.5-pro`
- `gemini-2.5-flash`
- `gemini-2.5-flash-lite`

**Gemma 3 Series** - Open models:

- `gemma-3-27b-it`
- `gemma-3-12b-it`
- `gemma-3-4b-it`
- `gemma-3-1b-it`
### Basic Usage

```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() async {
  final ai = Genkit(plugins: [googleAI()]);

  final response = await ai.generate(
    model: googleAI.gemini('gemini-2.5-flash'),
    prompt: 'Explain how neural networks learn in simple terms.',
  );

  print(response.text);
}
```

## Structured Output

Use the schemantic package to define strongly-typed schemas for structured output.
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

// part 'character_profile.g.dart'; // Generated by build_runner

@Schema()
abstract class $CharacterProfile {
  String get name;
  String get bio;
  int get age;
}

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  outputSchema: CharacterProfile.$schema,
  prompt: 'Generate a profile for a fictional character',
);

final profile = CharacterProfile.fromJson(response.output!);
print('${profile.name} (${profile.age}): ${profile.bio}');
```

### Schema Limitations

The Gemini API has specific limitations for JSON schemas:
- **Unions:** `oneOf` allows only object targets. Primitive unions (e.g., `String | int`) are not supported.
- **Validation:** Regex patterns, min/max length, and other validation keywords are often ignored or may cause errors.
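Given these limits, one practical pattern for union-like outputs is to flatten the alternatives into a single object with optional fields plus a discriminator. A sketch in the schemantic style used above; the class and field names are illustrative, and nullable getters are an assumption about what schemantic supports:

```dart
import 'package:schemantic/schemantic.dart';

// Instead of a primitive union such as `String | int`, which the Gemini
// API rejects, expose each alternative as an optional field and record
// which one is populated in a discriminator field.
@Schema()
abstract class $FlexibleId {
  String get kind; // 'text' or 'number': indicates which field below is set
  String? get textValue;
  int? get numberValue;
}
```

Because every target is a plain object with optional scalar fields, this stays inside what the Gemini schema translation accepts.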
## Thinking and Reasoning

Gemini 2.5 and newer models support "Thinking" to improve reasoning on complex tasks.

**Thinking Budget (Gemini 2.5):**

```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-pro'),
  prompt: 'What is heavier, one kilo of steel or one kilo of feathers?',
  config: GeminiOptions(
    thinkingConfig: ThinkingConfig(
      thinkingBudget: 2048,
      includeThoughts: true,
    ),
  ),
);
```

## Multimodal Input

Gemini models can accept various media types as input.
**Video:**

```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'What happens in this video?',
  messages: [
    Message(
      role: Role.user,
      content: [
        MediaPart(
          media: Media(
            url: 'https://download.samplelib.com/mp4/sample-5s.mp4',
            contentType: 'video/mp4',
          ),
        ),
      ],
    ),
  ],
);
```

**Audio:**

```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Transcribe this audio',
  messages: [
    Message(
      role: Role.user,
      content: [
        MediaPart(
          media: Media(
            url: 'https://www2.cs.uic.edu/~i101/SoundFiles/BabyElephantWalk60.wav',
            contentType: 'audio/wav',
          ),
        ),
      ],
    ),
  ],
);
```

## Safety Settings

Configure content filtering for different harm categories:
```dart
import 'package:genkit_google_genai/genkit_google_genai.dart';

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Your prompt here',
  config: GeminiOptions(
    safetySettings: [
      SafetySettings(
        category: 'HARM_CATEGORY_HATE_SPEECH',
        threshold: 'BLOCK_MEDIUM_AND_ABOVE',
      ),
    ],
  ),
);
```

## Google Search Grounding

Enable Google Search to provide answers grounded in current information.
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'What are the top tech news stories this week?',
  config: GeminiOptions(
    googleSearch: GoogleSearch(),
  ),
);
```

## Code Execution

Enable the model to write and execute Python code for calculations.
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-pro'),
  prompt: 'Calculate the 20th Fibonacci number',
  config: GeminiOptions(
    codeExecution: true,
  ),
);
```

## Embedding Models

```dart
final embeddings = await ai.embedMany(
  embedder: googleAI.textEmbedding('text-embedding-004'),
  documents: [
    DocumentData(content: [TextPart(text: 'Hello world')]),
  ],
);

print(embeddings[0].embedding);
```

## Image Models
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash-image'),
  prompt: 'A banana riding a bike',
);

print(response.media);
```

## Speech Models

The Google GenAI plugin supports Gemini text-to-speech models, including multi-speaker support.
```dart
import 'dart:convert';
import 'dart:io';

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash-preview-tts'),
  prompt: 'Say that Genkit is an amazing AI framework',
  config: GeminiTtsOptions(
    responseModalities: ['AUDIO'],
    speechConfig: SpeechConfig(
      voiceConfig: VoiceConfig(
        prebuiltVoiceConfig: PrebuiltVoiceConfig(voiceName: 'Puck'),
      ),
    ),
  ),
);

if (response.media != null) {
  // Save the audio file (returned as a base64 data URL of raw PCM)
  final dataUrl = response.media!.url;
  final base64Data = dataUrl.split(',')[1];
  final bytes = base64Decode(base64Data);
  await File('output.pcm').writeAsBytes(bytes);
}
```

## Unsupported Features

The following features documented for other languages are not yet fully supported in the Dart SDK:
- **Context Caching:** automatic context caching is not yet exposed or documented for Dart.
- **Google Maps Grounding:** not yet exposed in options.
- **Files API:** no helper methods for uploading files (use direct HTTP or the Google Cloud libraries).
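Until helper methods exist, the Files API can be called over plain HTTP. Below is a sketch of the resumable-upload flow using `package:http`; the endpoint, headers, and two-step protocol are based on the Gemini REST documentation, so treat the details as assumptions and verify them against the current API reference.

```dart
import 'dart:io';

import 'package:http/http.dart' as http;

Future<void> uploadFile(String apiKey, String path) async {
  final bytes = await File(path).readAsBytes();

  // Step 1: start a resumable upload session. The response carries the
  // session URL in the `X-Goog-Upload-URL` header.
  final start = await http.post(
    Uri.parse('https://generativelanguage.googleapis.com/upload/v1beta/files'),
    headers: {
      'x-goog-api-key': apiKey,
      'X-Goog-Upload-Protocol': 'resumable',
      'X-Goog-Upload-Command': 'start',
      'X-Goog-Upload-Header-Content-Length': '${bytes.length}',
      'X-Goog-Upload-Header-Content-Type': 'audio/wav',
      'Content-Type': 'application/json',
    },
    body: '{"file": {"display_name": "my-audio"}}',
  );
  // package:http lowercases response header names.
  final uploadUrl = start.headers['x-goog-upload-url']!;

  // Step 2: send the bytes and finalize the upload in one request.
  final finish = await http.post(
    Uri.parse(uploadUrl),
    headers: {
      'X-Goog-Upload-Offset': '0',
      'X-Goog-Upload-Command': 'upload, finalize',
    },
    body: bytes,
  );
  // The response body contains the file resource, including the `uri`
  // you can then reference from a generate call.
  print(finish.body);
}
```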