
AWS Bedrock Plugin

This Genkit plugin allows you to use AWS Bedrock through their official APIs. AWS Bedrock is a fully managed service that provides access to foundation models from leading AI companies through a single API. The plugin enables you to use these models for text generation, embeddings, and image generation. It supports features like tool calling, streaming, multimodal inputs, and cross-region inference for improved performance and resiliency.

Install the plugin in your project with npm or pnpm:

npm install genkitx-aws-bedrock

If you are using a Genkit version earlier than v0.9.0, use plugin version v1.9.0. For Genkit v0.9.0 and later, use plugin version v1.10.0 or later, which targets the new plugins API.

  • Text Generation: Support for multiple foundation models (Amazon Nova, Anthropic Claude, Meta Llama, etc.)
  • Embeddings: Support for text embedding models from Amazon Titan and Cohere
  • Streaming: Full streaming support for real-time responses
  • Tool Calling: Complete function calling capabilities
  • Multimodal Support: Support for text + image inputs (vision models)
  • Cross-Region Inference: Support for inference profiles to improve performance and resiliency
import { genkit } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock({ region: 'us-east-1' })],
  model: amazonNovaProV1,
});

// Basic usage
const response = await ai.generate({
  prompt: 'What are the key benefits of using AWS Bedrock for AI applications?',
});
console.log(response.text);
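
Streaming (listed in the features above) goes through Genkit's standard generateStream API rather than anything Bedrock-specific. A minimal sketch, reusing the same ai instance as above:

// Streaming: print each chunk as it arrives, then read the final response.
const { stream, response } = await ai.generateStream({
  prompt: 'Write a short poem about cloud computing.',
});
for await (const chunk of stream) {
  process.stdout.write(chunk.text);
}
console.log((await response).text);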

The plugin supports multiple authentication methods depending on your environment.

You can configure the plugin by calling the genkit function with your AWS region and model:

import { genkit } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock({ region: '<my-region>' })],
  model: amazonNovaProV1,
});

In production environments, you often need an additional library to supply credentials explicitly. One approach is to use the @aws-sdk/credential-providers package:

import { genkit } from 'genkit';
import { awsBedrock } from 'genkitx-aws-bedrock';
import { fromEnv } from '@aws-sdk/credential-providers';

const ai = genkit({
  plugins: [
    awsBedrock({
      region: 'us-east-1',
      credentials: fromEnv(),
    }),
  ],
});

Ensure you have a .env file with the necessary AWS credentials. Remember that the .env file must be added to your .gitignore to prevent sensitive credentials from being exposed.

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=

For local development, you can directly supply the credentials:

const ai = genkit({
  plugins: [
    awsBedrock({
      region: 'us-east-1',
      credentials: {
        // awsAccessKeyId / awsSecretAccessKey are assumed to be defined elsewhere,
        // for example loaded from your parameter or secret store.
        accessKeyId: awsAccessKeyId.value(),
        secretAccessKey: awsSecretAccessKey.value(),
      },
    }),
  ],
});

Each approach allows you to manage authentication effectively based on your environment needs.

If you want to use a model through cross-region inference endpoints, you can specify the geographic prefix in the model configuration. Cross-region inference uses inference profiles to increase throughput and improve resiliency by routing your requests across multiple AWS Regions during peak utilization bursts:

import { genkit } from 'genkit';
import { awsBedrock, anthropicClaude35SonnetV2 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock()],
  model: anthropicClaude35SonnetV2('us'),
});

You can find more information about the available models in the AWS Bedrock Plugin documentation.
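
The embedding models mentioned in the feature list are used through Genkit's embed API. A minimal sketch; the embedder export name amazonTitanEmbedTextV2 is illustrative, so check the plugin's exports for the exact identifier:

import { genkit } from 'genkit';
import { awsBedrock, amazonTitanEmbedTextV2 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock({ region: 'us-east-1' })],
});

// Embed a piece of text; the result contains the numeric vector(s) for the input.
const embeddings = await ai.embed({
  embedder: amazonTitanEmbedTextV2,
  content: 'AWS Bedrock provides access to foundation models through a single API.',
});
console.log(embeddings);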

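Tool calling likewise follows Genkit's standard API: define a tool with a Zod schema and pass it to generate. A minimal sketch with a stubbed weather lookup (the tool name and implementation are illustrative):

import { genkit, z } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock({ region: 'us-east-1' })],
  model: amazonNovaProV1,
});

// A stubbed weather tool; Genkit passes its schema to Bedrock as a tool definition.
const getWeather = ai.defineTool(
  {
    name: 'getWeather',
    description: 'Gets the current weather for a city',
    inputSchema: z.object({ city: z.string() }),
    outputSchema: z.string(),
  },
  async ({ city }) => `Sunny, 24°C in ${city}`,
);

const response = await ai.generate({
  prompt: 'What is the weather like in Madrid?',
  tools: [getWeather],
});
console.log(response.text);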

If you want to use a model that is not exported by this plugin, you can register it using the customModels option when initializing the plugin:

import { genkit, z } from 'genkit';
import { awsBedrock } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [
    awsBedrock({
      region: 'us-east-1',
      customModels: ['openai.gpt-oss-20b-1:0'], // Register custom models
    }),
  ],
});

// Use the custom model by specifying its name as a string
export const customModelFlow = ai.defineFlow(
  {
    name: 'customModelFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (subject) => {
    const llmResponse = await ai.generate({
      model: 'aws-bedrock/openai.gpt-oss-20b-1:0', // Use any registered custom model
      prompt: `Tell me about ${subject}`,
    });
    return llmResponse.text;
  },
);

Alternatively, you can define a custom model outside of the plugin initialization:

import { defineAwsBedrockModel } from 'genkitx-aws-bedrock';

const customModel = defineAwsBedrockModel('openai.gpt-oss-20b-1:0', {
  region: 'us-east-1',
});

const response = await ai.generate({
  model: customModel,
  prompt: 'Hello!',
});

This plugin supports all currently available chat/completion and embedding models from AWS Bedrock, including image input and multimodal models.
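
For vision-capable models, you can mix text and image parts in a single Genkit prompt. A minimal sketch, assuming amazonNovaProV1 accepts image input and that the image is supplied as a base64 data URL (substitute your own encoded image):

import { genkit } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock({ region: 'us-east-1' })],
  model: amazonNovaProV1,
});

// Mixed text + image prompt; replace the placeholder with real base64 image data.
const response = await ai.generate({
  prompt: [
    { media: { url: 'data:image/jpeg;base64,<BASE64_IMAGE_DATA>', contentType: 'image/jpeg' } },
    { text: 'Describe what you see in this image.' },
  ],
});
console.log(response.text);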

An AWS Bedrock plugin for Genkit Go that provides text generation, image generation, and embedding capabilities using AWS Bedrock foundation models via the Converse API.

go get github.com/xavidop/genkit-aws-bedrock-go
  • Text Generation: Support for multiple foundation models via AWS Bedrock Converse API
  • Image Generation: Support for image generation models like Amazon Titan Image Generator
  • Embeddings: Support for text embedding models from Amazon Titan and Cohere
  • Streaming: Full streaming support for real-time responses
  • Tool Calling: Complete function calling capabilities with schema validation and type conversion
  • Multimodal Support: Support for text + image inputs (vision models)
  • Schema Management: Automatic conversion between Genkit and AWS Bedrock schemas
  • Type Safety: Robust type conversion for tool parameters (handles AWS document.Number types)
package main

import (
    "context"
    "log"

    "github.com/firebase/genkit/go/ai"
    "github.com/firebase/genkit/go/genkit"

    bedrock "github.com/xavidop/genkit-aws-bedrock-go"
)

func main() {
    ctx := context.Background()

    bedrockPlugin := &bedrock.Bedrock{
        Region: "us-east-1",
    }

    // Initialize Genkit
    g := genkit.Init(ctx,
        genkit.WithPlugins(bedrockPlugin),
        genkit.WithDefaultModel("bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0"), // Set default model
    )

    bedrock.DefineCommonModels(bedrockPlugin, g) // Optional: Define common models for easy access

    log.Println("Starting basic Bedrock example...")

    // Example: Generate text (basic usage)
    response, err := genkit.Generate(ctx, g,
        ai.WithPrompt("What are the key benefits of using AWS Bedrock for AI applications?"),
    )
    if err != nil {
        log.Printf("Error generating text: %v", err)
    } else {
        log.Printf("Generated response: %s", response.Text())
    }

    log.Println("Basic Bedrock example completed")
}
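
Streaming uses Genkit's standard streaming callback rather than a plugin-specific API. A minimal sketch that could replace the Generate call in the example above (ai.WithStreaming and chunk.Text() are Genkit Go APIs, not additions from this plugin):

// Stream the response: the callback receives each chunk as it arrives,
// and the returned response contains the full text once generation finishes.
response, err := genkit.Generate(ctx, g,
    ai.WithPrompt("Write a haiku about cloud computing."),
    ai.WithStreaming(func(ctx context.Context, chunk *ai.ModelResponseChunk) error {
        log.Print(chunk.Text())
        return nil
    }),
)
if err != nil {
    log.Fatal(err)
}
log.Println(response.Text())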
package main

import (
    "context"
    "log"

    "github.com/firebase/genkit/go/ai"
    "github.com/firebase/genkit/go/genkit"

    bedrock "github.com/xavidop/genkit-aws-bedrock-go"
)

func main() {
    ctx := context.Background()

    // Initialize Bedrock plugin
    bedrockPlugin := &bedrock.Bedrock{
        Region: "us-east-1", // Optional, defaults to AWS_REGION or us-east-1
    }

    // Initialize Genkit
    g := genkit.Init(ctx,
        genkit.WithPlugins(bedrockPlugin),
    )

    // Define a Claude model
    claudeModel := bedrockPlugin.DefineModel(g, bedrock.ModelDefinition{
        Name: "anthropic.claude-sonnet-4-5-20250929-v1:0",
        Type: "text",
    }, nil)

    // Generate text
    response, err := genkit.Generate(ctx, g,
        ai.WithModel(claudeModel),
        ai.WithMessages(ai.NewUserMessage(
            ai.NewTextPart("Hello! How are you?"),
        )),
    )
    if err != nil {
        log.Fatal(err)
    }

    log.Println(response.Text())
}
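
Tool calling (with the schema conversion mentioned in the feature list) goes through Genkit's DefineTool and WithTools. A minimal sketch with a stubbed weather tool, reusing g and claudeModel from the example above; the tool name and implementation are illustrative:

// Define a simple tool; the input struct is converted into a JSON schema for Bedrock.
weatherTool := genkit.DefineTool(g, "getWeather", "Gets the current weather for a city",
    func(ctx *ai.ToolContext, input struct {
        City string `json:"city"`
    }) (string, error) {
        // Stubbed lookup; replace with a real weather API call.
        return "Sunny, 24 degrees in " + input.City, nil
    })

// Let the model decide when to call the tool.
response, err := genkit.Generate(ctx, g,
    ai.WithModel(claudeModel),
    ai.WithPrompt("What is the weather like in Madrid?"),
    ai.WithTools(weatherTool),
)
if err != nil {
    log.Fatal(err)
}
log.Println(response.Text())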

The plugin supports various configuration options:

bedrockPlugin := &bedrock.Bedrock{
    Region:         "us-west-2",       // AWS region
    MaxRetries:     3,                 // Max retry attempts
    RequestTimeout: 30 * time.Second,  // Request timeout
    AWSConfig:      customAWSConfig,   // Custom AWS config (optional)
}
Option           Type             Default        Description
Region           string           "us-east-1"    AWS region for Bedrock
MaxRetries       int              3              Maximum retry attempts
RequestTimeout   time.Duration    30s            Request timeout
AWSConfig        *aws.Config      nil            Custom AWS configuration

The plugin uses the standard AWS SDK v2 configuration methods:

  1. Environment Variables:

    export AWS_ACCESS_KEY_ID="your-access-key"
    export AWS_SECRET_ACCESS_KEY="your-secret-key"
    export AWS_REGION="us-east-1"
  2. AWS Credentials File (~/.aws/credentials):

    [default]
    aws_access_key_id = your-access-key
    aws_secret_access_key = your-secret-key
    region = us-east-1
  3. IAM Roles (when running on AWS services like EC2, ECS, Lambda)

  4. AWS SSO/CLI (aws configure sso)
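
If you need a non-default profile or any other custom credential chain, you can also load an aws.Config yourself and pass it through the AWSConfig option shown above. A minimal sketch, assuming a shared-config profile named "my-profile" (the profile name is illustrative):

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/firebase/genkit/go/genkit"

    bedrock "github.com/xavidop/genkit-aws-bedrock-go"
)

func main() {
    ctx := context.Background()

    // Load credentials through the standard AWS SDK v2 chain, here from a named profile.
    awsCfg, err := config.LoadDefaultConfig(ctx, config.WithSharedConfigProfile("my-profile"))
    if err != nil {
        log.Fatal(err)
    }

    bedrockPlugin := &bedrock.Bedrock{
        Region:    "us-east-1",
        AWSConfig: &awsCfg,
    }

    g := genkit.Init(ctx, genkit.WithPlugins(bedrockPlugin))
    _ = g // Use g with genkit.Generate as in the earlier examples.
}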

Create an IAM policy with these permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": ["arn:aws:bedrock:*::foundation-model/*"]
    }
  ]
}

// Prompt caching helps to save input token costs and reduce latency for repeated contexts.
// The first cache point must be defined after 1,024 tokens for most models.
// More about prompt caching: https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html
response, err := genkit.Generate(ctx, g,
    ai.WithMessages(
        ai.NewSystemMessage(
            ai.NewTextPart(sysprompt),   // A big system prompt that is reused
            bedrock.NewCachePointPart(), // A cache point after the system prompt
        ),
        ai.NewUserTextMessage(input),
    ),
)