Pause generation using interrupts
Interrupts are a special kind of tool that can pause the LLM generation-and-tool-calling loop and return control to you. When you’re ready, you can resume generation by sending replies that the LLM processes for further generation.
The most common uses for interrupts fall into a few categories:
- Human-in-the-Loop: Enabling the user of an interactive AI to clarify needed information or confirm the LLM’s action before it is completed, providing a measure of safety and confidence.
- Async Processing: Starting an asynchronous task that can only be completed out-of-band, such as sending an approval notification to a human reviewer or kicking off a long-running background process.
- Exit from an Autonomous Task: Providing the model a way to mark a task as complete, in a workflow that might iterate through a long series of tool calls.
Before you begin
All of the examples documented here assume that you have already set up a project with Genkit dependencies installed. If you want to run the code examples on this page, first complete the steps in the Get started guide.
Before diving too deeply, you should also be familiar with the following concepts:
- Generating content with AI models.
- Genkit’s system for defining input and output schemas.
- General methods of tool-calling.
Overview of interrupts
At a high level, this is what an interrupt looks like when interacting with an LLM:
- The calling application prompts the LLM with a request. The prompt includes a list of tools, including at least one for an interrupt that the LLM can use to generate a response.
- The LLM generates either a complete response or a tool call request in a specific format. To the LLM, an interrupt call looks like any other tool call.
- If the LLM calls an interrupting tool, the Genkit library automatically pauses generation rather than immediately passing responses back to the model for additional processing.
- The developer checks whether an interrupt call was made, and performs whatever task is needed to collect the information for the interrupt response.
- The developer resumes generation by passing an interrupt response to the model. This action triggers a return to Step 2.
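The loop above can be sketched without any Genkit code at all. In the sketch below, fake_generate is a stand-in for the real generate call (its dict-shaped responses are an illustration, not the Genkit API); it shows how an interrupt hands control back to the caller and how a reply resumes generation:

```python
# A minimal, library-free sketch of the interrupt loop described above.
# `fake_generate` is NOT the Genkit API; it only mimics the control flow.

def fake_generate(history, answer=None):
    """Pretend model: asks one clarifying question, then answers."""
    if answer is None:
        # Steps 2-3: the model "calls" an interrupt tool; generation pauses.
        return {'finish_reason': 'interrupted',
                'interrupts': [{'question': 'Which theme?'}],
                'messages': history + ['<interrupt>'],
                'text': ''}
    # Step 5 back to step 2: resumed with a reply, the model finishes normally.
    return {'finish_reason': 'stop',
            'interrupts': [],
            'messages': history + [answer],
            'text': f'A trivia question about {answer}.'}

# Step 1: the initial prompt.
response = fake_generate(['Ask me a trivia question.'])

# Step 4: the caller notices the pause and gathers the reply out-of-band.
while response['interrupts']:
    reply = 'space'  # in a real app, collected from the user
    response = fake_generate(response['messages'], answer=reply)

print(response['text'])  # -> A trivia question about space.
```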
Define manual-response interrupts
The most common kind of interrupt allows the LLM to request clarification from the user, for example by asking a multiple-choice question.
For this use case, use the Genkit instance’s tool() decorator with ctx.interrupt():
```python
from genkit import Genkit, ToolRunContext, tool_response
from genkit.plugins.google_genai import GoogleAI
from pydantic import BaseModel, Field

ai = Genkit(
    plugins=[GoogleAI()],
    model='googleai/gemini-2.5-flash',
)


class QuestionInput(BaseModel):
    """Input schema for the question tool."""

    question: str = Field(description='the question to ask')
    choices: list[str] = Field(description='the choices to display to the user')
    allow_other: bool = Field(default=False, description='when true, allow write-ins')


@ai.tool()
def ask_question(input: QuestionInput, ctx: ToolRunContext) -> str:
    """Use this to ask the user a clarifying question."""
    # Interrupt with metadata that the caller can use.
    ctx.interrupt({
        'question': input.question,
        'choices': input.choices,
        'allow_other': input.allow_other,
    })
```

Note that the output type of an interrupt tool corresponds to the response data you will provide when resuming, as opposed to something that will be automatically populated by the tool function.
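Since a paused generation hands your application only the interrupt’s metadata, it is up to you to decide how to present it. As one sketch, here is a hypothetical render_question helper (not part of Genkit) that formats the metadata produced by ask_question for a console prompt:

```python
def render_question(metadata: dict) -> str:
    """Format interrupt metadata (question/choices/allow_other) for display.

    `render_question` is a hypothetical application-side helper, not a Genkit API.
    """
    lines = [metadata['question']]
    for i, choice in enumerate(metadata.get('choices', []), start=1):
        lines.append(f'  {i}. {choice}')
    if metadata.get('allow_other'):
        lines.append('  (or type your own answer)')
    return '\n'.join(lines)


print(render_question({
    'question': 'Pick a genre',
    'choices': ['sci-fi', 'history'],
    'allow_other': True,
}))
```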
Use interrupts
Interrupts are passed into the tools list when generating content, just like
other types of tools. You can pass both normal tools and interrupts to the
same generate call:
```python
response = await ai.generate(
    prompt='Ask me a movie trivia question.',
    tools=['ask_question'],
)
```

Genkit immediately returns a response on receipt of an interrupt tool call.
Respond to interrupts
If you’ve passed one or more interrupts to your generate call, you need to check the response for interrupts so that you can handle them:
```python
# You can check the 'finish_reason' attribute of the response
if response.finish_reason == 'interrupted':
    print('Generation interrupted.')

# Or you can check if any interrupt requests are on the response
if response.interrupts:
    print(f'Interrupts found: {len(response.interrupts)}')
    for interrupt in response.interrupts:
        # Access the interrupt metadata
        tool_input = interrupt.tool_request.input
        print(f"Question: {tool_input.get('question')}")
        print(f"Choices: {tool_input.get('choices')}")
```

Responding to an interrupt is done using the tool_responses option on a subsequent generate call, making sure to pass in the existing message history. Use the tool_response helper function to construct the response:
```python
from genkit import tool_response

# Get the user's answer (e.g., from user input)
user_answer = 'b'  # User selected option b

# Resume generation with the tool response
response = await ai.generate(
    messages=response.messages,
    tool_responses=[tool_response(response.interrupts[0], user_answer)],
    tools=['ask_question'],
)
```

Handle multiple interrupts in a loop
For interactive applications, you’ll often need to handle multiple interrupts in a loop until the model completes its task:
```python
async def interactive_session():
    response = await ai.generate(
        prompt='Help me plan a backyard BBQ.',
        system='Ask clarifying questions until you have a complete solution.',
        tools=['ask_question'],
    )

    # Continue until no more interrupts
    while response.interrupts:
        answers = []

        # Handle all interrupts (multiple can occur at once)
        for interrupt in response.interrupts:
            tool_input = interrupt.tool_request.input
            question = tool_input.get('question', 'Unknown question')
            choices = tool_input.get('choices', [])

            # Display to user and get their answer
            print(f'\nQuestion: {question}')
            for i, choice in enumerate(choices):
                print(f'  {i + 1}. {choice}')

            user_input = input('Your answer: ')
            answers.append(tool_response(interrupt, user_input))

        # Resume generation with all answers
        response = await ai.generate(
            messages=response.messages,
            tool_responses=answers,
            tools=['ask_question'],
        )

    # No more interrupts - print final response
    print(f'\nFinal response: {response.text}')
```

Tools with confirmation interrupts
Another common pattern is the need to confirm an action that the LLM suggests before actually performing it. For example, a payments app might want the user to confirm certain kinds of transfers.
```python
class TransferInput(BaseModel):
    """Input for money transfer."""

    to_account: str = Field(description='destination account ID')
    amount: float = Field(description='amount to transfer')


class TransferOutput(BaseModel):
    """Output from money transfer."""

    status: str
    message: str


@ai.tool()
def transfer_money(input: TransferInput, ctx: ToolRunContext) -> TransferOutput:
    """Transfers money to another account."""
    # Require confirmation for large transfers
    if input.amount > 100:
        ctx.interrupt({
            'confirm': f'Please confirm transfer of ${input.amount:.2f} to {input.to_account}',
            'amount': input.amount,
            'to_account': input.to_account,
        })  # Not reached on first call

    # Execute the transfer (only reached after confirmation or for small amounts)
    return TransferOutput(
        status='completed',
        message=f'Transferred ${input.amount:.2f} to {input.to_account}',
    )
```

To handle the confirmation:
```python
response = await ai.generate(
    prompt='Transfer $500 to account ABC123',
    tools=['transfer_money'],
)

if response.interrupts:
    interrupt = response.interrupts[0]
    confirm_msg = interrupt.tool_request.input.get('confirm')
    print(confirm_msg)

    if input('Confirm? (y/n): ').lower() == 'y':
        # Provide confirmation response
        response = await ai.generate(
            messages=response.messages,
            tool_responses=[tool_response(interrupt, {'confirmed': True})],
            tools=['transfer_money'],
        )
    else:
        # Provide rejection response
        response = await ai.generate(
            messages=response.messages,
            tool_responses=[tool_response(interrupt, {'status': 'cancelled', 'message': 'User rejected'})],
            tools=['transfer_money'],
        )

print(response.text)
```

Using interrupts with flows
You can also use interrupts within flows for more structured applications:
```python
from genkit import Genkit, ToolRunContext, tool_response
from genkit.plugins.google_genai import GoogleAI
from pydantic import BaseModel, Field

ai = Genkit(plugins=[GoogleAI()])


class TriviaQuestion(BaseModel):
    """A trivia question with multiple choice answers."""

    question: str = Field(description='the trivia question')
    answers: list[str] = Field(description='multiple choice answers')


@ai.tool()
def present_question(input: TriviaQuestion, ctx: ToolRunContext) -> None:
    """Presents a trivia question to the user."""
    ctx.interrupt(input.model_dump())


@ai.flow()
async def play_trivia(theme: str) -> str:
    """Plays a trivia game on the given theme."""
    response = await ai.generate(
        prompt=f'Ask me a trivia question about {theme}.',
        tools=['present_question'],
    )

    if response.interrupts:
        interrupt = response.interrupts[0]
        question_data = interrupt.tool_request.input

        # In a real app, you'd get this from user input
        return f"Question: {question_data.get('question')}\nAnswers: {question_data.get('answers')}"

    return response.text
```