# AWS Bedrock Plugin
This Genkit plugin allows you to use AWS Bedrock through their official APIs. AWS Bedrock is a fully managed service that provides access to foundation models from leading AI companies through a single API. The plugin enables you to use these models for text generation, embeddings, and image generation. It supports features like tool calling, streaming, multimodal inputs, and cross-region inference for improved performance and resiliency.
## Installation

Install the plugin in your project with npm or pnpm:
```bash
npm install genkitx-aws-bedrock
```

## Versions

If you are using Genkit version <0.9.0, use plugin version 1.9.0. If you are using Genkit >=0.9.0, use plugin version >=1.10.0, which targets the new plugins API.
## Features

- Text Generation: Support for multiple foundation models (Amazon Nova, Anthropic Claude, Meta Llama, etc.)
- Embeddings: Support for text embedding models from Amazon Titan and Cohere
- Streaming: Full streaming support for real-time responses
- Tool Calling: Complete function calling capabilities
- Multimodal Support: Support for text + image inputs (vision models)
- Cross-Region Inference: Support for inference profiles to improve performance and resiliency
## Quick Start

```ts
import { genkit } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock({ region: 'us-east-1' })],
  model: amazonNovaProV1,
});

// Basic usage
const response = await ai.generate({
  prompt: 'What are the key benefits of using AWS Bedrock for AI applications?',
});

console.log(response.text); // response.text is a plain string; no await needed
```

## Configuration
The plugin supports multiple authentication methods depending on your environment.
### Standard Initialization

You can configure the plugin by calling the genkit function with your AWS region and model:

```ts
import { genkit } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock({ region: '<my-region>' })],
  model: amazonNovaProV1,
});
```

### Production Environment Authentication
In production environments, you typically need an additional library to handle authentication. One approach is the @aws-sdk/credential-providers package:

```ts
import { fromEnv } from '@aws-sdk/credential-providers';

const ai = genkit({
  plugins: [
    awsBedrock({
      region: 'us-east-1',
      credentials: fromEnv(),
    }),
  ],
});
```

Ensure you have a .env file with the necessary AWS credentials, and add the .env file to your .gitignore to prevent sensitive credentials from being exposed:

```
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
```

### Local Environment Authentication
For local development, you can supply the credentials directly:

```ts
const ai = genkit({
  plugins: [
    awsBedrock({
      region: 'us-east-1',
      credentials: {
        // e.g. values read from your secret manager or config store
        accessKeyId: awsAccessKeyId.value(),
        secretAccessKey: awsSecretAccessKey.value(),
      },
    }),
  ],
});
```

Each approach lets you manage authentication based on your environment's needs.
### Configuration with Inference Endpoints

If you want to use a model through cross-region inference endpoints, specify the region group in the model configuration. Cross-region inference uses inference profiles to increase throughput and improve resiliency by routing your requests across multiple AWS Regions during peak utilization bursts:

```ts
import { genkit } from 'genkit';
import { awsBedrock, anthropicClaude35SonnetV2 } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock()],
  model: anthropicClaude35SonnetV2('us'), // 'us' selects the US cross-region inference profile
});
```

You can find more information about the available models in the AWS Bedrock plugin documentation.
## Using Custom Models

If you want to use a model that is not exported by this plugin, you can register it using the customModels option when initializing the plugin:

```ts
import { genkit, z } from 'genkit';
import { awsBedrock } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [
    awsBedrock({
      region: 'us-east-1',
      customModels: ['openai.gpt-oss-20b-1:0'], // Register custom models
    }),
  ],
});

// Use the custom model by specifying its name as a string
export const customModelFlow = ai.defineFlow(
  {
    name: 'customModelFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (subject) => {
    const llmResponse = await ai.generate({
      model: 'aws-bedrock/openai.gpt-oss-20b-1:0', // Use any registered custom model
      prompt: `Tell me about ${subject}`,
    });
    return llmResponse.text;
  },
);
```

Alternatively, you can define a custom model outside of the plugin initialization:

```ts
import { defineAwsBedrockModel } from 'genkitx-aws-bedrock';

const customModel = defineAwsBedrockModel('openai.gpt-oss-20b-1:0', {
  region: 'us-east-1',
});

const response = await ai.generate({
  model: customModel,
  prompt: 'Hello!',
});
```

## Supported Models

This plugin supports all currently available chat/completion and embeddings models from AWS Bedrock, including image-input and multimodal models.
# AWS Bedrock Plugin for Genkit Go

An AWS Bedrock plugin for Genkit Go that provides text generation, image generation, and embedding capabilities using AWS Bedrock foundation models via the Converse API.

## Installation

```bash
go get github.com/xavidop/genkit-aws-bedrock-go
```

## Features
- Text Generation: Support for multiple foundation models via AWS Bedrock Converse API
- Image Generation: Support for image generation models like Amazon Titan Image Generator
- Embeddings: Support for text embedding models from Amazon Titan and Cohere
- Streaming: Full streaming support for real-time responses
- Tool Calling: Complete function calling capabilities with schema validation and type conversion
- Multimodal Support: Support for text + image inputs (vision models)
- Schema Management: Automatic conversion between Genkit and AWS Bedrock schemas
- Type Safety: Robust type conversion for tool parameters (handles AWS document.Number types)
## Quick Start

```go
package main

import (
	"context"
	"log"

	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/genkit"
	bedrock "github.com/xavidop/genkit-aws-bedrock-go"
)

func main() {
	ctx := context.Background()

	bedrockPlugin := &bedrock.Bedrock{
		Region: "us-east-1",
	}

	// Initialize Genkit
	g := genkit.Init(ctx,
		genkit.WithPlugins(bedrockPlugin),
		genkit.WithDefaultModel("bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0"), // Set default model
	)

	bedrock.DefineCommonModels(bedrockPlugin, g) // Optional: define common models for easy access

	log.Println("Starting basic Bedrock example...")

	// Example: generate text (basic usage)
	response, err := genkit.Generate(ctx, g,
		ai.WithPrompt("What are the key benefits of using AWS Bedrock for AI applications?"),
	)
	if err != nil {
		log.Printf("Error generating text: %v", err)
	} else {
		log.Printf("Generated response: %s", response.Text())
	}

	log.Println("Basic Bedrock example completed")
}
```

## Using Custom Models
```go
package main

import (
	"context"
	"log"

	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/genkit"
	bedrock "github.com/xavidop/genkit-aws-bedrock-go"
)

func main() {
	ctx := context.Background()

	// Initialize the Bedrock plugin
	bedrockPlugin := &bedrock.Bedrock{
		Region: "us-east-1", // Optional, defaults to AWS_REGION or us-east-1
	}

	// Initialize Genkit
	g := genkit.Init(ctx,
		genkit.WithPlugins(bedrockPlugin),
	)

	// Define a Claude model
	claudeModel := bedrockPlugin.DefineModel(g, bedrock.ModelDefinition{
		Name: "anthropic.claude-sonnet-4-5-20250929-v1:0",
		Type: "text",
	}, nil)

	// Generate text
	response, err := genkit.Generate(ctx, g,
		ai.WithModel(claudeModel),
		ai.WithMessages(ai.NewUserMessage(
			ai.NewTextPart("Hello! How are you?"),
		)),
	)
	if err != nil {
		log.Fatal(err)
	}

	log.Println(response.Text())
}
```

## Configuration Options
The plugin supports various configuration options:

```go
bedrockPlugin := &bedrock.Bedrock{
	Region:         "us-west-2",      // AWS region
	MaxRetries:     3,                // Max retry attempts
	RequestTimeout: 30 * time.Second, // Request timeout
	AWSConfig:      customAWSConfig,  // Custom AWS config (optional)
}
```

## Available Configuration
Section titled “Available Configuration”| Option | Type | Default | Description |
|---|---|---|---|
Region | string | "us-east-1" | AWS region for Bedrock |
MaxRetries | int | 3 | Maximum retry attempts |
RequestTimeout | time.Duration | 30s | Request timeout |
AWSConfig | *aws.Config | nil | Custom AWS configuration |
## AWS Setup and Authentication

The plugin uses the standard AWS SDK v2 configuration methods.

### Authentication Methods

1. Environment Variables:

   ```bash
   export AWS_ACCESS_KEY_ID="your-access-key"
   export AWS_SECRET_ACCESS_KEY="your-secret-key"
   export AWS_REGION="us-east-1"
   ```

2. AWS Credentials File (~/.aws/credentials):

   ```ini
   [default]
   aws_access_key_id = your-access-key
   aws_secret_access_key = your-secret-key
   region = us-east-1
   ```

3. IAM Roles (when running on AWS services like EC2, ECS, Lambda)

4. AWS SSO/CLI (aws configure sso)
## Required IAM Permissions

Create an IAM policy with these permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": ["arn:aws:bedrock:*::foundation-model/*"]
    }
  ]
}
```

## Prompt Caching
Prompt caching saves input token costs and reduces latency for repeated contexts. For most models, the first cache point must come after at least 1,024 tokens. See the AWS prompt caching guide: https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html

```go
response, err := genkit.Generate(ctx, g,
	ai.WithMessages(
		ai.NewSystemMessage(
			ai.NewTextPart(sysprompt),   // A big system prompt that is reused
			bedrock.NewCachePointPart(), // A cache point after the system prompt
		),
		ai.NewUserTextMessage(input),
	),
)
```