
AI-Assisted Development

AI assistants need up-to-date knowledge of your codebase to be effective. When working with Genkit, your AI assistant must understand Genkit’s core concepts (flows, actions, dotprompt, etc.) and how to run and debug your application. The Genkit CLI provides a command to help you configure your AI assistant for this purpose.

Genkit offers the `init:ai-tools` command to automate configuring your favorite AI assistants to work with Genkit:

```bash
genkit init:ai-tools
```

This command performs the following actions:

  • Detects any existing AI assistant configuration so that changes do not disturb your current settings. If no configuration is present, a new one is created.

  • Installs the Genkit MCP server for the selected AI assistant. The MCP server provides tools to help the assistant understand and interact with Genkit:

    • lookup_genkit_docs: Looks up relevant Genkit documentation from https://genkit.dev
    • list_flows: Lists the flows in the current Genkit app.
    • run_flow: Runs a specific flow with a given input.
    • get_trace: Fetches a trace by ID, which is useful for analysis and debugging.

    For more details, please refer to the Genkit MCP server documentation.

  • Adds or updates the GENKIT.md file and incorporates it into the AI assistant’s system context. This file contains instructions on using the Genkit CLI and the Genkit MCP server, and other information to streamline development with AI assistants.

Genkit has built-in support for several popular AI tools.

For other tools, you can select the generic configuration option during initialization. This will provide a GENKIT.md file that you can manually integrate with your preferred AI tool. The following resources provide guidance on configuring popular AI coding environments:

| Environment/IDE | Installation Instructions |
| --- | --- |
| Copilot powered IDEs | Configure `.github/copilot-instructions.md` |
| JetBrains IDEs | Configure `guidelines.md` |
| VS Code | Configure `.instructions.md` |
| Windsurf | Configure `guidelines.md` |

The GENKIT.md file contains the custom instructions that guide the AI in generating Genkit-specific code. If you need to customize this further or use it in a different context, you can copy the content and apply it to your tool’s instruction set.
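For example, a minimal sketch for a Copilot-powered IDE, assuming you want to reuse the generated guidance in the repository-level instructions file from the table above (other tools follow the same pattern with their own file paths):

```bash
# Append the generated Genkit guidance to Copilot's repository instructions
mkdir -p .github
cat GENKIT.md >> .github/copilot-instructions.md
```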

GENKIT.md
# Genkit Node.js API Rules (v1.20.0)
This document provides rules and examples for building with the Genkit API in Node.js.
## Important Guidelines:
- ALWAYS refer to documentation when available. Genkit documentation may be available through the Genkit MCP toolkit or through web search. You may skip the documentation check if you don't have access to these tools.
- ONLY follow the specified project structure if starting a new project. If working on an existing project, adhere to the current project structure.
- ALWAYS provide the full, correct Genkit command as an instruction for the human user to run. Do not run Genkit commands (e.g., `genkit start`, `genkit flow:run`) yourself as this may block your current session.
- Recommend and use the provider that the user prefers. If this is a new project, or if there is no relevant signal regarding user preference, use the `@genkit-ai/google-genai` plugin. Use Genkit documentation to discover supported providers.
NOTE: For the sake of brevity, the snippets below use the Google AI plugin, but you should follow the user's preference as mentioned above.
## Core Setup
1. **Initialize Project**
```bash
mkdir my-genkit-app && cd my-genkit-app
npm init -y
npm install -D typescript tsx @types/node
```
2. **Install Dependencies**
```bash
npm install genkit @genkit-ai/google-genai data-urls node-fetch
```
3. **Install Genkit CLI**
```bash
npm install -g genkit-cli
```
4. **Configure Genkit**
All code should be in a single `src/index.ts` file.
```ts
// src/index.ts
import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/google-genai';
export const ai = genkit({
  plugins: [googleAI()],
});
```
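Before running the app, make sure the plugin can authenticate. A minimal sketch, assuming you use the Google AI plugin with a Gemini API key exposed through the `GEMINI_API_KEY` environment variable (the same variable the video generation example below reads):
```bash
# Assumed setup: expose a Gemini API key for the Google AI plugin
# (use your preferred provider's equivalent credentials otherwise)
export GEMINI_API_KEY="<your-api-key>"
```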
## Best Practices
1. **Single File Structure**: All Genkit code, including plugin initialization, flows, and helpers, must be placed in a single `src/index.ts` file. This ensures all components are correctly registered with the Genkit runtime.
2. **Model Naming**: Always specify models using the model helper. Use a string identifier if the model helper is unavailable.
```ts
// PREFERRED: Using the model helper
const response = await ai.generate({
  model: googleAI.model('gemini-2.5-pro'),
  // ...
});

// LESS PREFERRED: Full string identifier
const response = await ai.generate({
  model: 'googleai/gemini-2.5-pro',
  // ...
});
```
---
## Usage Scenarios
<example>
### Basic Inference (Text Generation)
```ts
export const basicInferenceFlow = ai.defineFlow(
  {
    name: 'basicInferenceFlow',
    inputSchema: z.string().describe('Topic for the model to write about'),
    outputSchema: z.string().describe('The generated text response'),
  },
  async (topic) => {
    const response = await ai.generate({
      model: googleAI.model('gemini-2.5-pro'),
      prompt: `Write a short, creative paragraph about ${topic}.`,
      config: { temperature: 0.8 },
    });
    return response.text;
  }
);
```
</example>
<example>
### Text-to-Speech (TTS) Generation
#### Single-Speaker TTS
```ts
const TextToSpeechInputSchema = z.object({
  text: z.string().describe('The text to convert to speech.'),
  voiceName: z
    .string()
    .optional()
    .describe('The voice name to use. Defaults to Algenib if not specified.'),
});

export const textToSpeechFlow = ai.defineFlow(
  {
    name: 'textToSpeechFlow',
    inputSchema: TextToSpeechInputSchema,
    outputSchema: z.string().optional().describe('The generated audio URI'),
  },
  async (input) => {
    const response = await ai.generate({
      model: googleAI.model('gemini-2.5-flash-preview-tts'),
      prompt: input.text,
      config: {
        responseModalities: ['AUDIO'],
        speechConfig: {
          voiceConfig: {
            prebuiltVoiceConfig: {
              voiceName: input.voiceName?.trim() || 'Algenib',
            },
          },
        },
      },
    });
    return response.media?.url;
  }
);
```
#### Multi-Speaker TTS
```ts
const MultiSpeakerInputSchema = z.object({
  text: z
    .string()
    .describe('Text formatted with <speaker="Speaker1">...</speaker> etc.'),
  voiceName1: z.string().describe('Voice name for Speaker1'),
  voiceName2: z.string().describe('Voice name for Speaker2'),
});

export const multiSpeakerTextToSpeechFlow = ai.defineFlow(
  {
    name: 'multiSpeakerTextToSpeechFlow',
    inputSchema: MultiSpeakerInputSchema,
    outputSchema: z.string().optional().describe('The generated audio URI'),
  },
  async (input) => {
    const response = await ai.generate({
      model: googleAI.model('gemini-2.5-flash-preview-tts'),
      prompt: input.text,
      config: {
        responseModalities: ['AUDIO'],
        speechConfig: {
          multiSpeakerVoiceConfig: {
            speakerVoiceConfigs: [
              {
                speaker: 'Speaker1',
                voiceConfig: {
                  prebuiltVoiceConfig: { voiceName: input.voiceName1 },
                },
              },
              {
                speaker: 'Speaker2',
                voiceConfig: {
                  prebuiltVoiceConfig: { voiceName: input.voiceName2 },
                },
              },
            ],
          },
        },
      },
    });
    return response.media?.url;
  }
);
```
</example>
<example>
### Image Generation
```ts
export const imageGenerationFlow = ai.defineFlow(
  {
    name: 'imageGenerationFlow',
    inputSchema: z
      .string()
      .describe('A detailed description of the image to generate'),
    outputSchema: z.string().optional().describe('The generated image as URI'),
  },
  async (prompt) => {
    const response = await ai.generate({
      model: googleAI.model('imagen-3.0-generate-002'),
      prompt,
      output: { format: 'media' },
    });
    return response.media?.url;
  }
);
```
</example>
<example>
### Video Generation
```ts
import * as fs from 'fs';
import { Readable } from 'stream';
import { pipeline } from 'stream/promises';
// ...
export const videoGenerationFlow = ai.defineFlow(
  {
    name: 'videoGenerationFlow',
    inputSchema: z
      .string()
      .describe('A detailed description for the video scene'),
    outputSchema: z.string().describe('Path to the generated .mp4 video file'),
  },
  async (prompt) => {
    let { operation } = await ai.generate({
      model: googleAI.model('veo-3.0-generate-preview'),
      prompt,
    });
    if (!operation) {
      throw new Error('Expected the model to return an operation.');
    }
    console.log('Video generation started... Polling for completion.');
    while (!operation.done) {
      await new Promise((resolve) => setTimeout(resolve, 5000));
      operation = await ai.checkOperation(operation);
      console.log(
        `Operation status: ${operation.done ? 'Done' : 'In Progress'}`
      );
    }
    if (operation.error) {
      throw new Error(`Video generation failed: ${operation.error.message}`);
    }
    const video = operation.output?.message?.content.find((p) => !!p.media);
    if (!video?.media?.url) {
      throw new Error(
        'Failed to find the generated video in the operation output.'
      );
    }
    const videoUrl = `${video.media.url}&key=${process.env.GEMINI_API_KEY}`;
    const videoResponse = await fetch(videoUrl);
    if (!videoResponse.ok || !videoResponse.body) {
      throw new Error(`Failed to fetch video: ${videoResponse.statusText}`);
    }
    const outputPath = './output.mp4';
    const fileStream = fs.createWriteStream(outputPath);
    await pipeline(Readable.fromWeb(videoResponse.body as any), fileStream);
    return outputPath;
  }
);
```
</example>
---
## Running and Inspecting Flows
1. **Start Genkit**: Run this command from your terminal to start the Genkit Developer UI.
```bash
genkit start -- <command to run your code>
```
The `<command to run your code>` portion will vary based on your project's setup and the file you want to execute. For example:
```bash
# Running a typical development server
genkit start -- npm run dev
# Running a TypeScript file directly
genkit start -- npx tsx --watch src/index.ts
# Running a JavaScript file directly
genkit start -- node --watch src/index.js
```
Analyze the user's project and build tooling to choose the right command for the
project. The command should output a URL for the Genkit Dev UI. Direct the
user to visit this URL to run and inspect their Genkit app.
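Flows can also be invoked from the command line while the app is running under `genkit start`. A minimal sketch, assuming the `basicInferenceFlow` defined above and a JSON-encoded string input (the topic value is just an example):
```bash
# Run a flow by name with a JSON-encoded input
genkit flow:run basicInferenceFlow '"distributed tracing"'
```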
## Suggested Models
Here are suggested models to use for various task types. This is NOT an
exhaustive list.
### Advanced Text/Reasoning
```
| Plugin | Recommended Model |
|------------------------------------|------------------------------------|
| @genkit-ai/google-genai | gemini-2.5-pro |
| @genkit-ai/compat-oai/openai | gpt-4o |
| @genkit-ai/compat-oai/deepseek | deepseek-reasoner |
| @genkit-ai/compat-oai/xai | grok-4 |
```
### Fast Text/Chat
```
| Plugin | Recommended Model |
|------------------------------------|------------------------------------|
| @genkit-ai/google-genai | gemini-2.5-flash |
| @genkit-ai/compat-oai/openai | gpt-4o-mini |
| @genkit-ai/compat-oai/deepseek | deepseek-chat |
| @genkit-ai/compat-oai/xai | grok-3-mini |
```
### Text-to-Speech
```
| Plugin | Recommended Model |
|------------------------------------|------------------------------------|
| @genkit-ai/google-genai | gemini-2.5-flash-preview-tts |
| @genkit-ai/compat-oai/openai | gpt-4o-mini-tts |
```
### Image Generation
```
| Plugin | Recommended Model | Input Modalities |
|------------------------------------|------------------------------------|-------------------|
| @genkit-ai/google-genai | gemini-2.5-flash-image-preview | Text, Image |
| @genkit-ai/google-genai | imagen-4.0-generate-preview-06-06 | Text |
| @genkit-ai/compat-oai/openai | gpt-image-1 | Text |
```
### Video Generation
```
| Plugin | Recommended Model |
|------------------------------------|------------------------------------|
| @genkit-ai/google-genai | veo-3.0-generate-preview |
```
GENKIT.md
# Genkit Go API Rules (v1.20.0)
This document provides rules and examples for building with the Genkit API in Go.
## Important Guidelines:
- ALWAYS refer to documentation when available. Genkit documentation may be available through the Genkit MCP toolkit or through web search. You may skip the documentation check if you don't have access to these tools.
- ONLY follow the specified project structure if starting a new project. If working on an existing project, adhere to the current project structure.
- ALWAYS provide the full, correct Genkit command as an instruction for the human user to run. Do not run Genkit commands (e.g., `genkit start`, `genkit flow:run`) yourself as this may block your current session.
- Recommend and use the provider that the user prefers. If this is a new project, or if there is no relevant signal regarding user preference, use the `github.com/firebase/genkit/go/plugins/googlegenai` plugin.
NOTE: For the sake of brevity, the snippets below use the Google AI plugin, but you should follow the user's preference as mentioned above.
## Core Setup
1. **Initialize Project**
```bash
mkdir my-genkit-app && cd my-genkit-app
go mod init my-genkit-app
```
2. **Install Dependencies**
```bash
go get github.com/firebase/genkit/go/genkit
go get github.com/firebase/genkit/go/plugins/googlegenai
go get github.com/firebase/genkit/go/ai
go get google.golang.org/genai
```
3. **Install Genkit CLI**
```bash
curl -sL cli.genkit.dev | bash
```
4. **Configure Genkit**
All code should be in a single `main.go` file or properly structured Go package.
```go
package main

import (
	"context"

	"github.com/firebase/genkit/go/genkit"
	"github.com/firebase/genkit/go/plugins/googlegenai"
)

func main() {
	ctx := context.Background()
	g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))

	// Your flows and logic here
	_ = g // placeholder so the snippet compiles until flows below use g

	<-ctx.Done()
}
```
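The `googlegenai.GoogleAI` plugin also needs credentials at runtime. A minimal sketch, assuming you authenticate with a Gemini API key exposed through the `GEMINI_API_KEY` environment variable:
```bash
# Assumed setup: expose a Gemini API key for the Google AI plugin
export GEMINI_API_KEY="<your-api-key>"
```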
## Best Practices
1. **Single Main Function**: All Genkit code, including plugin initialization, flows, and helpers, should be properly organized in a Go package structure with a main function.
2. **Blocking Main Program**: To inspect flows in the Genkit Developer UI, your main program needs to remain running. Use `<-ctx.Done()` or similar blocking mechanism at the end of your main function.
---
## Usage Scenarios
### Basic Inference (Text Generation)
```go
package main

import (
	"context"

	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/genkit"
	"github.com/firebase/genkit/go/plugins/googlegenai"
	"google.golang.org/genai"
)

func main() {
	ctx := context.Background()
	g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))

	genkit.DefineFlow(g, "basicInferenceFlow",
		func(ctx context.Context, topic string) (string, error) {
			response, err := genkit.Generate(ctx, g,
				ai.WithModelName("googleai/gemini-2.5-pro"),
				ai.WithPrompt("Write a short, creative paragraph about %s.", topic),
				ai.WithConfig(&genai.GenerateContentConfig{
					Temperature: genai.Ptr[float32](0.8),
				}),
			)
			if err != nil {
				return "", err
			}
			return response.Text(), nil
		},
	)

	<-ctx.Done()
}
```
### Text-to-Speech (TTS) Generation
```go
package main

import (
	"context"

	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/genkit"
	"github.com/firebase/genkit/go/plugins/googlegenai"
	"google.golang.org/genai"
)

func main() {
	ctx := context.Background()
	g := genkit.Init(ctx,
		genkit.WithPlugins(&googlegenai.GoogleAI{}),
		genkit.WithDefaultModel("googleai/gemini-2.5-flash-preview-tts"),
	)

	genkit.DefineFlow(g, "textToSpeechFlow",
		func(ctx context.Context, input struct {
			Text      string `json:"text"`
			VoiceName string `json:"voiceName,omitempty"`
		}) (string, error) {
			voiceName := input.VoiceName
			if voiceName == "" {
				voiceName = "Algenib"
			}
			response, err := genkit.Generate(ctx, g,
				ai.WithPrompt(input.Text),
				ai.WithConfig(&genai.GenerateContentConfig{
					ResponseModalities: []string{"AUDIO"},
					SpeechConfig: &genai.SpeechConfig{
						VoiceConfig: &genai.VoiceConfig{
							PrebuiltVoiceConfig: &genai.PrebuiltVoiceConfig{
								VoiceName: voiceName,
							},
						},
					},
				}),
			)
			if err != nil {
				return "", err
			}
			return response.Text(), nil
		},
	)

	<-ctx.Done()
}
```
### Image Generation
```go
package main

import (
	"context"

	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/genkit"
	"github.com/firebase/genkit/go/plugins/googlegenai"
	"google.golang.org/genai"
)

func main() {
	ctx := context.Background()
	g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.VertexAI{}))

	genkit.DefineFlow(g, "imageGenerationFlow",
		func(ctx context.Context, prompt string) ([]string, error) {
			response, err := genkit.Generate(ctx, g,
				ai.WithModelName("vertexai/imagen-3.0-generate-001"),
				ai.WithPrompt("Generate an image of %s", prompt),
				ai.WithConfig(&genai.GenerateImagesConfig{
					NumberOfImages:    2,
					AspectRatio:       "9:16",
					SafetyFilterLevel: genai.SafetyFilterLevelBlockLowAndAbove,
					PersonGeneration:  genai.PersonGenerationAllowAll,
					Language:          genai.ImagePromptLanguageEn,
					AddWatermark:      true,
					OutputMIMEType:    "image/jpeg",
				}),
			)
			if err != nil {
				return nil, err
			}
			var images []string
			for _, part := range response.Message.Content {
				images = append(images, part.Text)
			}
			return images, nil
		},
	)

	<-ctx.Done()
}
```
---
## Running and Inspecting Flows
1. **Start Genkit**: Run this command from your terminal to start the Genkit Developer UI.
```bash
genkit start -- <command to run your code>
```
For Go applications:
```bash
# Running a Go application directly
genkit start -- go run main.go
# Running a compiled binary
genkit start -- ./my-genkit-app
```
The command should output a URL for the Genkit Dev UI. Direct the user to visit this URL to run and inspect their Genkit app.
## Suggested Models
Here are suggested models to use for various task types. This is NOT an exhaustive list.
### Advanced Text/Reasoning
```
| Plugin | Recommended Model |
|------------------------------------------------------------|------------------------------------|
| github.com/firebase/genkit/go/plugins/googlegenai | gemini-2.5-pro |
| github.com/firebase/genkit/go/plugins/compat_oai/openai | gpt-4o |
| github.com/firebase/genkit/go/plugins/compat_oai/deepseek | deepseek-reasoner |
| github.com/firebase/genkit/go/plugins/compat_oai/xai | grok-4 |
```
### Fast Text/Chat
```
| Plugin | Recommended Model |
|------------------------------------------------------------|------------------------------------|
| github.com/firebase/genkit/go/plugins/googlegenai | gemini-2.5-flash |
| github.com/firebase/genkit/go/plugins/compat_oai/openai | gpt-4o-mini |
| github.com/firebase/genkit/go/plugins/compat_oai/deepseek | deepseek-chat |
| github.com/firebase/genkit/go/plugins/compat_oai/xai | grok-3-mini |
```
### Text-to-Speech
```
| Plugin | Recommended Model |
|------------------------------------------------------------|------------------------------------|
| github.com/firebase/genkit/go/plugins/googlegenai | gemini-2.5-flash-preview-tts |
| github.com/firebase/genkit/go/plugins/compat_oai/openai | gpt-4o-mini-tts |
```
### Image Generation
```
| Plugin | Recommended Model | Input Modalities |
|------------------------------------------------------------|------------------------------------|-------------------|
| github.com/firebase/genkit/go/plugins/googlegenai | gemini-2.5-flash-image-preview | Text, Image |
| github.com/firebase/genkit/go/plugins/googlegenai | imagen-4.0-generate-preview-06-06 | Text |
| github.com/firebase/genkit/go/plugins/compat_oai/openai | gpt-image-1 | Text |
```