
Vertex AI plugin

The Vertex AI plugin provides access to Google Cloud’s enterprise-grade AI platform, offering advanced features beyond basic model access. Use this for enterprise applications that need grounding, Vector Search, Model Garden, or evaluation capabilities.

Accessing Google GenAI Models via Vertex AI


All languages support accessing Google’s generative AI models (Gemini, Imagen, etc.) through Vertex AI with enterprise authentication and features.

The unified Google GenAI plugin provides access to models via Vertex AI using the vertexAI initializer:

npm i --save @genkit-ai/google-genai

import { genkit } from 'genkit';
import { vertexAI } from '@genkit-ai/google-genai';

const ai = genkit({
  plugins: [
    vertexAI({ location: 'us-central1' }), // Regional endpoint
    // vertexAI({ location: 'global' }), // Global endpoint
  ],
});

Authentication Methods:

  • Application Default Credentials (ADC): The standard method for most Vertex AI use cases, especially in production. It uses the credentials from the environment (e.g., service account on GCP, user credentials from gcloud auth application-default login locally). This method requires a Google Cloud Project with billing enabled and the Vertex AI API enabled.
  • Vertex AI Express Mode: A streamlined way to try out many Vertex AI features using just an API key, without needing to set up billing or full project configurations. This is ideal for quick experimentation and has generous free tier quotas. Learn More about Express Mode.
  // Using Vertex AI Express Mode (easy to start, some limitations)
  // Get an API key from the Vertex AI Studio Express Mode setup.
  vertexAI({ apiKey: process.env.VERTEX_EXPRESS_API_KEY }),

Note: When using Express Mode, you do not provide projectId and location in the plugin config.
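Because the two authentication methods take mutually exclusive plugin options (an API key for Express Mode, a location for ADC), one pattern is to derive the options from the environment. This helper is a sketch, not part of the plugin API; the `VERTEX_LOCATION` variable name is an assumption:

```typescript
// Express Mode takes only an API key; ADC takes a regional (or 'global') endpoint.
type VertexPluginOptions =
  | { apiKey: string }    // Express Mode: no projectId/location
  | { location: string }; // ADC: billing-enabled project required

// Hypothetical helper: prefer Express Mode when an API key is present,
// otherwise fall back to ADC with a configurable location.
function vertexOptionsFromEnv(env: Record<string, string | undefined>): VertexPluginOptions {
  const apiKey = env.VERTEX_EXPRESS_API_KEY;
  return apiKey ? { apiKey } : { location: env.VERTEX_LOCATION ?? 'us-central1' };
}
```

You would then initialize the plugin with `vertexAI(vertexOptionsFromEnv(process.env))`.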

import { genkit } from 'genkit';
import { vertexAI } from '@genkit-ai/google-genai';

const ai = genkit({
  plugins: [vertexAI({ location: 'us-central1' })],
});

const response = await ai.generate({
  model: vertexAI.model('gemini-2.5-pro'),
  prompt: 'Explain Vertex AI in simple terms.',
});
console.log(response.text); // text is a property, not a method
const embeddings = await ai.embed({
  embedder: vertexAI.embedder('text-embedding-005'),
  content: 'Embed this text.',
});
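`ai.embed` returns numeric vectors, and a common next step is comparing two of them by cosine similarity. This helper is plain TypeScript, not part of the plugin:

```typescript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vectors must have the same length');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

With the result above you would compare, e.g., `cosineSimilarity(embeddings[0].embedding, otherEmbeddings[0].embedding)` (the `embedding` field name reflects current Genkit releases and is an assumption here).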
const imageResponse = await ai.generate({
  model: vertexAI.model('imagen-3.0-generate-002'),
  prompt: 'A beautiful watercolor painting of a castle in the mountains.',
});
const generatedImage = imageResponse.media; // media is a property, not a method
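Generated images usually come back as a base64 data URL in the media part's `url` field (that shape is an assumption here); decoding it yields raw bytes you can write to disk. A sketch:

```typescript
// Decode a `data:<mime>;base64,<payload>` URL into raw bytes plus its MIME type.
function decodeDataUrl(dataUrl: string): { contentType: string; data: Buffer } {
  const match = dataUrl.match(/^data:([^;]+);base64,(.+)$/);
  if (!match) throw new Error('Not a base64 data URL');
  return { contentType: match[1], data: Buffer.from(match[2], 'base64') };
}

// Usage sketch with the media part from the Imagen example:
// const { data } = decodeDataUrl(generatedImage!.url);
// await fs.promises.writeFile('castle.png', data);
```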
const thinkingResponse = await ai.generate({
  model: vertexAI.model('gemini-3.1-pro-preview'),
  prompt: 'What is heavier, one kilo of steel or one kilo of feathers?',
  config: {
    thinkingConfig: {
      thinkingLevel: 'HIGH', // or 'LOW' or 'MEDIUM'
      includeThoughts: true,
    },
  },
});
const { message } = await ai.generate({
  model: vertexAI.model('gemini-2.5-pro'),
  prompt: 'What is heavier, one kilo of steel or one kilo of feathers?',
  config: {
    thinkingConfig: {
      thinkingBudget: 1024, // token budget for internal reasoning
      includeThoughts: true,
    },
  },
});
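With `includeThoughts` enabled, thought summaries come back as separate message parts carrying a `reasoning` field (this part shape is an assumption based on recent Genkit releases); a small helper can split them from the answer text:

```typescript
// A message part is either regular text or a thought summary (`reasoning`).
type Part = { text?: string; reasoning?: string };

// Hypothetical helper: separate thought summaries from the final answer.
function splitThoughts(parts: Part[]): { thoughts: string[]; answer: string } {
  const thoughts = parts
    .filter((p) => p.reasoning !== undefined)
    .map((p) => p.reasoning!);
  const answer = parts
    .filter((p) => p.reasoning === undefined && p.text)
    .map((p) => p.text!)
    .join('');
  return { thoughts, answer };
}
```

For the example above you would call `splitThoughts(message!.content)`.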

Access third-party models through Vertex AI Model Garden:

import { genkit } from 'genkit';
import { vertexModelGarden } from '@genkit-ai/vertexai/modelgarden';

const ai = genkit({
  plugins: [vertexModelGarden({ location: 'us-central1' })],
});

const response = await ai.generate({
  model: vertexModelGarden.model('claude-sonnet-4-6'),
  prompt: 'What should I do when I visit Melbourne?',
});

For the full list of available Claude models see: Available Claude models

const ai = genkit({
  plugins: [vertexModelGarden({ location: 'us-central1' })],
});

const response = await ai.generate({
  model: vertexModelGarden.model('meta/llama-4-maverick-17b-128e-instruct-maas'),
  prompt: 'Write a function that adds two numbers together',
});

For the full list of available Llama models see: Fully-managed Llama models

const ai = genkit({
  plugins: [vertexModelGarden({ location: 'us-central1' })],
});

const response = await ai.generate({
  model: vertexModelGarden.model('mistral-medium-3'),
  prompt: 'Write a function that adds two numbers together',
  config: {
    temperature: 0.7,
    maxOutputTokens: 1024,
    topP: 0.9,
    topK: 40,
  },
});

For the full list of available Mistral AI models see: Mistral AI models

Use Vertex AI Rapid Evaluation API for model evaluation:

import { genkit } from 'genkit';
import {
  vertexAIEvaluation,
  VertexAIEvaluationMetricType,
} from '@genkit-ai/vertexai/evaluation';

const ai = genkit({
  plugins: [
    vertexAIEvaluation({
      location: 'us-central1',
      metrics: [
        VertexAIEvaluationMetricType.SAFETY,
        {
          type: VertexAIEvaluationMetricType.ROUGE,
          metricSpec: {
            rougeType: 'rougeLsum',
          },
        },
      ],
    }),
  ],
});

Available metrics:

  • BLEU: Translation quality
  • ROUGE: Summarization quality
  • Fluency: Text fluency
  • Safety: Content safety
  • Groundedness: Factual accuracy
  • Summarization Quality/Helpfulness/Verbosity: Summary evaluation

Run evaluations:

genkit eval:run
genkit eval:flow -e vertexai/safety

Use Vertex AI Vector Search for enterprise-grade vector operations:

  1. Create a Vector Search index in the Google Cloud Console
  2. Configure dimensions based on your embedding model:
    • gemini-embedding-001: 768 dimensions
    • text-multilingual-embedding-002: 768 dimensions
    • multimodalEmbedding001: 128, 256, 512, or 1408 dimensions
  3. Deploy the index to a standard endpoint
import { genkit } from 'genkit';
import { vertexAI } from '@genkit-ai/google-genai';
import {
  vertexAIVectorSearch,
  getFirestoreDocumentIndexer,
  getFirestoreDocumentRetriever,
} from '@genkit-ai/vertexai/vectorsearch';
import { getFirestore } from 'firebase-admin/firestore';

// Vector Search stores only vectors; back it with Firestore for document content.
const db = getFirestore();
const firestoreDocumentRetriever = getFirestoreDocumentRetriever(db, 'your-collection');
const firestoreDocumentIndexer = getFirestoreDocumentIndexer(db, 'your-collection');

const ai = genkit({
  plugins: [
    vertexAI({ location: 'us-central1' }),
    vertexAIVectorSearch({
      projectId: 'your-project-id',
      location: 'us-central1',
      vectorSearchOptions: [
        {
          indexId: 'your-index-id',
          indexEndpointId: 'your-endpoint-id',
          deployedIndexId: 'your-deployed-index-id',
          publicDomainName: 'your-domain-name',
          documentRetriever: firestoreDocumentRetriever,
          documentIndexer: firestoreDocumentIndexer,
          embedder: vertexAI.embedder('gemini-embedding-001'),
        },
      ],
    }),
  ],
});
import {
  vertexAiIndexerRef,
  vertexAiRetrieverRef,
} from '@genkit-ai/vertexai/vectorsearch';

// Index documents
await ai.index({
  indexer: vertexAiIndexerRef({
    indexId: 'your-index-id',
  }),
  documents,
});

// Retrieve similar documents
const results = await ai.retrieve({
  retriever: vertexAiRetrieverRef({
    indexId: 'your-index-id',
  }),
  query: queryDocument,
});
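Retrieved documents are typically folded into a generation prompt. The formatting helper below is a sketch; the document shape (a `content` array of text parts) reflects Genkit's convention but is an assumption here, and `ai.generate` also accepts the retrieved documents directly via its `docs` option:

```typescript
// Minimal retrieved-document shape: a content array of text parts (an assumption).
type RetrievedDoc = { content: { text?: string }[] };

// Hypothetical helper: inline numbered context passages above the user question.
function buildRagPrompt(question: string, docs: RetrievedDoc[]): string {
  const context = docs
    .map((d, i) => `[${i + 1}] ${d.content.map((p) => p.text ?? '').join('')}`)
    .join('\n');
  return `Use the following context to answer.\n${context}\n\nQuestion: ${question}`;
}
```

You would then pass `buildRagPrompt(userQuestion, results)` as the prompt.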
  • Learn about generating content to understand how to use these models effectively
  • Explore evaluation to leverage Vertex AI’s evaluation metrics
  • See RAG to implement retrieval-augmented generation with Vector Search
  • Check out creating flows to build structured AI workflows
  • For simple API key access, see the Google AI plugin