Google Generative AI plugin

The genkit_google_genai package provides the GoogleAI plugin for accessing Google’s generative AI models via the Google Gemini API.

dart pub add genkit_google_genai

To use the Google Gemini API, you need an API key.
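For example, on macOS or Linux you can export the key for the current shell session before running your app (replace the placeholder with your real key):

```shell
# Make the key available to the Genkit process via the environment
export GEMINI_API_KEY="your-api-key"
```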

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() {
  final ai = Genkit(
    plugins: [
      googleAI(apiKey: 'YOUR_API_KEY'), // Optional if GEMINI_API_KEY env var is set
    ],
  );
}

Requires a Gemini API key, which you can get from Google AI Studio. There are three ways to provide it:

  1. Environment variables: Set GEMINI_API_KEY
  2. Plugin configuration: Pass apiKey when initializing the plugin (shown above)
  3. Per-request: Override the API key for specific requests in the config:
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Your prompt here',
  config: GeminiOptions(
    apiKey: 'different-api-key', // Use a different API key for this request
  ),
);

You can create models that call the Google Generative AI API. The models support all standard Genkit features including tool calls, streaming, and multimodal input.
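Streaming is mentioned above but not shown; a minimal sketch follows. The exact streaming surface is an assumption here (modeled on Genkit's JS `generateStream`, which yields chunks plus a final response), so check the package API docs for the precise return shape:

```dart
import 'dart:io';
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() async {
  final ai = Genkit(plugins: [googleAI()]);

  // Assumed API: generateStream returns an object exposing a chunk stream
  // and a future for the final aggregated response.
  final result = ai.generateStream(
    model: googleAI.gemini('gemini-2.5-flash'),
    prompt: 'Tell a short story about a friendly robot.',
  );

  await for (final chunk in result.stream) {
    stdout.write(chunk.text); // print text as it arrives
  }
  await result.response; // completes when generation finishes
}
```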

Gemini 3 Series - Latest experimental models with state-of-the-art reasoning:

  • gemini-3-pro-preview
  • gemini-3-flash-preview
  • gemini-3-pro-image-preview

Gemini 2.5 Series - Stable production models:

  • gemini-2.5-pro
  • gemini-2.5-flash
  • gemini-2.5-flash-lite

Gemma 3 Series - Open models:

  • gemma-3-27b-it
  • gemma-3-12b-it
  • gemma-3-4b-it
  • gemma-3-1b-it

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() async {
  final ai = Genkit(plugins: [googleAI()]);

  final response = await ai.generate(
    model: googleAI.gemini('gemini-2.5-flash'),
    prompt: 'Explain how neural networks learn in simple terms.',
  );
  print(response.text);
}

Use the schemantic package to define strongly-typed schemas for structured output.

import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';
import 'package:schemantic/schemantic.dart';

// part 'character_profile.g.dart'; // Generated by build_runner

@Schema()
abstract class $CharacterProfile {
  String get name;
  String get bio;
  int get age;
}

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  outputSchema: CharacterProfile.$schema,
  prompt: 'Generate a profile for a fictional character',
);
final profile = CharacterProfile.fromJson(response.output!);
print('${profile.name} (${profile.age}): ${profile.bio}');

The Gemini API has specific limitations for JSON schemas:

  • Unions: oneOf allows only object targets. Primitive unions (e.g., String | int) are not supported.
  • Validation: Regex patterns, min/max length, and other validation keywords are often ignored or may cause errors.
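For instance, a `oneOf` whose alternatives are all objects is accepted, while a primitive union such as `String | int` (which would serialize to `{"oneOf": [{"type": "string"}, {"type": "integer"}]}`) is rejected. An illustrative accepted fragment:

```json
{
  "oneOf": [
    { "type": "object", "properties": { "text": { "type": "string" } } },
    { "type": "object", "properties": { "count": { "type": "integer" } } }
  ]
}
```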

Gemini 2.5 and newer models support “Thinking” to improve reasoning for complex tasks.

Thinking Budget (Gemini 2.5):

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-pro'),
  prompt: 'what is heavier, one kilo of steel or one kilo of feathers',
  config: GeminiOptions(
    thinkingConfig: ThinkingConfig(
      thinkingBudget: 2048,
      includeThoughts: true,
    ),
  ),
);

Gemini models can accept various media types as input.

Video:

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'What happens in this video?',
  messages: [
    Message(
      role: Role.user,
      content: [
        MediaPart(
          media: Media(
            url: 'https://download.samplelib.com/mp4/sample-5s.mp4',
            contentType: 'video/mp4',
          ),
        ),
      ],
    ),
  ],
);

Audio:

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Transcribe this audio',
  messages: [
    Message(
      role: Role.user,
      content: [
        MediaPart(
          media: Media(
            url: 'https://www2.cs.uic.edu/~i101/SoundFiles/BabyElephantWalk60.wav',
            contentType: 'audio/wav',
          ),
        ),
      ],
    ),
  ],
);

Configure content filtering for different harm categories:

import 'package:genkit_google_genai/genkit_google_genai.dart';

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Your prompt here',
  config: GeminiOptions(
    safetySettings: [
      SafetySettings(
        category: 'HARM_CATEGORY_HATE_SPEECH',
        threshold: 'BLOCK_MEDIUM_AND_ABOVE',
      ),
    ],
  ),
);

Enable Google Search to provide answers with current information.

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'What are the top tech news stories this week?',
  config: GeminiOptions(
    googleSearch: GoogleSearch(),
  ),
);

Enable the model to write and execute Python code for calculations.

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-pro'),
  prompt: 'Calculate the 20th Fibonacci number',
  config: GeminiOptions(
    codeExecution: true,
  ),
);

Generate embeddings for text with Google's embedding models:

final embeddings = await ai.embedMany(
  embedder: googleAI.textEmbedding('text-embedding-004'),
  documents: [
    DocumentData(content: [TextPart(text: 'Hello world')]),
  ],
);
print(embeddings[0].embedding);
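Once you have two embedding vectors, a common way to compare them is cosine similarity. A small pure-Dart helper (independent of the plugin):

```dart
import 'dart:math';

/// Cosine similarity between two equal-length embedding vectors.
/// Returns 1.0 for identical directions, 0.0 for orthogonal vectors.
double cosineSimilarity(List<double> a, List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (sqrt(normA) * sqrt(normB));
}

void main() {
  print(cosineSimilarity([1.0, 0.0], [1.0, 0.0])); // 1.0
  print(cosineSimilarity([1.0, 0.0], [0.0, 1.0])); // 0.0
}
```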

Generate images with Gemini image models:

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash-image'),
  prompt: 'A banana riding a bike',
);
print(response.media);

The Google GenAI plugin supports Gemini text-to-speech models, including multi-speaker support.

import 'dart:convert';
import 'dart:io';
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash-preview-tts'),
  prompt: 'Say that Genkit is an amazing AI framework',
  config: GeminiTtsOptions(
    responseModalities: ['AUDIO'],
    speechConfig: SpeechConfig(
      voiceConfig: VoiceConfig(
        prebuiltVoiceConfig: PrebuiltVoiceConfig(voiceName: 'Puck'),
      ),
    ),
  ),
);

if (response.media != null) {
  // Save the audio file (the model returns a base64 data URL)
  final dataUrl = response.media!.url;
  final base64Data = dataUrl.split(',')[1];
  final bytes = base64Decode(base64Data);
  await File('output.pcm').writeAsBytes(bytes);
}
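The saved output is raw PCM with no container header, so most players cannot open it directly. A minimal sketch that wraps it in a WAV header — the 24 kHz, 16-bit, mono parameters are assumptions (typical for Gemini TTS output; verify against your actual audio):

```dart
import 'dart:io';
import 'dart:typed_data';

/// Prepend a 44-byte WAV header to raw 16-bit mono PCM data.
Uint8List pcmToWav(List<int> pcm, {int sampleRate = 24000}) {
  const channels = 1, bitsPerSample = 16;
  final byteRate = sampleRate * channels * bitsPerSample ~/ 8;
  final builder = BytesBuilder()
    ..add('RIFF'.codeUnits)
    ..add(_u32(36 + pcm.length)) // total file size minus 8 bytes
    ..add('WAVE'.codeUnits)
    ..add('fmt '.codeUnits)
    ..add(_u32(16)) // fmt chunk size for PCM
    ..add(_u16(1)) // audio format: uncompressed PCM
    ..add(_u16(channels))
    ..add(_u32(sampleRate))
    ..add(_u32(byteRate))
    ..add(_u16(channels * bitsPerSample ~/ 8)) // block align
    ..add(_u16(bitsPerSample))
    ..add('data'.codeUnits)
    ..add(_u32(pcm.length))
    ..add(pcm);
  return builder.toBytes();
}

Uint8List _u16(int v) =>
    Uint8List(2)..buffer.asByteData().setUint16(0, v, Endian.little);
Uint8List _u32(int v) =>
    Uint8List(4)..buffer.asByteData().setUint32(0, v, Endian.little);

void main() async {
  final pcm = await File('output.pcm').readAsBytes();
  await File('output.wav').writeAsBytes(pcmToWav(pcm));
}
```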

The following features documented in other languages are not yet fully supported in the Dart SDK:

  • Context Caching: Automatic context caching is not yet exposed or documented for Dart.
  • Google Maps Grounding: Not yet exposed in the configuration options.
  • Files API: No helper methods for uploading files (use direct HTTP calls or Google Cloud libraries).