This blog is part of the ADK Masterclass - Hands-On Series. Callbacks allow us to intercept and customize agent behavior at various points in the execution lifecycle—from before a model call to after a tool execution.
1. What are Callbacks?
Callbacks are functions that ADK calls at specific points during agent execution. They let us:
- Log and monitor: Track what the agent is doing
- Modify behavior: Change inputs/outputs dynamically
- Add guardrails: Validate or filter content
- Implement custom logic: Add business rules
2. Why Use Callbacks?
Without callbacks, our agent is a black box—we can't see what's happening inside or modify its behavior without changing the core logic. Callbacks solve this by giving us hooks at critical points:
- Observability: Log every LLM call and tool execution for debugging and monitoring
- Safety: Block harmful content before it reaches users or external systems
- Customization: Inject dynamic context, modify prompts, or transform responses
- Cost Control: Track token usage, implement rate limiting, or cache responses
3. Types of Callbacks
ADK provides six callback hooks. They fire at different stages of the agent's work:
| Callback | Fires When | Good For |
|---|---|---|
| `before_agent_callback` | Start of request handling | Setup, auth checks, logging start |
| `after_agent_callback` | End of request handling | Cleanup, metrics, logging end |
| `before_model_callback` | Before each LLM call | Modify prompts, inject context |
| `after_model_callback` | After each LLM response | Filter content, transform output |
| `before_tool_callback` | Before tool runs | Validate args, add auth headers |
| `after_tool_callback` | After tool completes | Cache results, transform data |
The Return Value Rule
Every callback follows the same pattern:
- Return `None`: let the agent continue normally
- Return a value: skip the normal step and use your value instead
For example, if before_model_callback returns a response object, the LLM is never called—your response is used directly. This is how you implement caching or block certain requests.
4. Tutorial
Prerequisites
- Google AI Studio API Key
- Python 3.9+ installed
Setup Environment
```bash
# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install google-adk python-dotenv

# Set our API key
export GOOGLE_API_KEY=our_api_key_here
```
4.1. Logging Callback
Let's create a callback that logs all model interactions:
```python
from google.adk.agents import Agent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmRequest, LlmResponse
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def logging_before_model(
    callback_context: CallbackContext,
    llm_request: LlmRequest,
) -> LlmResponse | None:
    """Log before each model call."""
    logger.info(f"Agent: {callback_context.agent_name}")
    logger.info(f"Request contents: {len(llm_request.contents)} messages")
    return None  # Return None to continue with the original request

def logging_after_model(
    callback_context: CallbackContext,
    llm_response: LlmResponse,
) -> LlmResponse | None:
    """Log after each model response."""
    if llm_response.content and llm_response.content.parts:
        text = llm_response.content.parts[0].text or ""
        logger.info(f"Response preview: {text[:100]}...")
    return None  # Return None to continue with the original response

root_agent = Agent(
    model="gemini-2.5-flash",
    name="logged_agent",
    instruction="You are a helpful assistant.",
    before_model_callback=logging_before_model,
    after_model_callback=logging_after_model,
)
```
4.2. Guardrail Callback
Let's create a callback that filters sensitive content:
```python
from google.adk.agents import Agent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmResponse
from google.genai import types

BLOCKED_WORDS = ["password", "secret", "api_key"]

def content_filter_callback(
    callback_context: CallbackContext,
    llm_response: LlmResponse,
) -> LlmResponse | None:
    """Filter sensitive content from responses."""
    if not llm_response.content or not llm_response.content.parts:
        return None
    text = llm_response.content.parts[0].text or ""

    # Check for blocked words
    for word in BLOCKED_WORDS:
        if word in text.lower():
            # Return a replacement response instead of the model's
            return LlmResponse(
                content=types.Content(
                    role="model",
                    parts=[types.Part(text="I cannot share sensitive information.")],
                )
            )
    return None  # Continue with the original response

root_agent = Agent(
    model="gemini-2.5-flash",
    name="filtered_agent",
    instruction="You are a helpful assistant.",
    after_model_callback=content_filter_callback,
)
```
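The word check itself is plain Python, so it can be exercised without running an agent. A minimal sketch of the same predicate, extracted for testing:

```python
BLOCKED_WORDS = ["password", "secret", "api_key"]

def contains_blocked(text: str) -> bool:
    """True if any blocked word appears in the text (case-insensitive)."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)
```

Keeping the predicate separate like this makes it easy to unit-test the guardrail logic before wiring it into a callback.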
4.3. Tool Validation Callback
We can validate tool parameters before execution:
```python
from google.adk.agents import Agent
from google.adk.tools import BaseTool
from google.adk.tools.tool_context import ToolContext
from typing import Any

def validate_tool_callback(
    tool: BaseTool,
    args: dict[str, Any],
    tool_context: ToolContext,
) -> dict | None:
    """Validate tool arguments before execution."""
    # Example: Validate URL parameters
    if "url" in args:
        url = args["url"]
        if not url.startswith(("http://", "https://")):
            return {"error": "Invalid URL format. Must start with http:// or https://"}

    # Example: Validate numeric ranges
    if "amount" in args:
        amount = args["amount"]
        if amount < 0 or amount > 10000:
            return {"error": "Amount must be between 0 and 10000"}

    return None  # Continue with tool execution

def fetch_page(url: str) -> dict:
    """Example tool for the callback to guard (stub implementation)."""
    return {"status": "fetched", "url": url}

root_agent = Agent(
    model="gemini-2.5-flash",
    name="validated_agent",
    instruction="You are a helpful assistant.",
    tools=[fetch_page],
    before_tool_callback=validate_tool_callback,
)
```
5. Common Callback Patterns
- Observability: Log all interactions for debugging and monitoring
- Guardrails: Filter harmful or sensitive content
- Rate Limiting: Control how often tools are called
- Caching: Cache tool results to avoid redundant calls
- Authentication: Inject auth tokens before tool calls
- Metrics: Track latency, token usage, and costs
Next Steps
With callbacks covered, the next module explores Artifacts—how agents can store and manage files, images, and other data produced during execution.