Merged
328 changes: 316 additions & 12 deletions examples/tracing/google-adk/google_adk_tracing.ipynb
@@ -10,6 +10,13 @@
"\n",
"This notebook demonstrates how to trace Google Agent Development Kit (ADK) agents with Openlayer.\n",
"\n",
"## Features\n",
"\n",
"- **Full Agent Tracing**: Capture agent execution, LLM calls, and tool usage\n",
"- **Token Usage Tracking**: Automatically captures prompt, completion, and total tokens\n",
"- **All 6 ADK Callbacks**: Trace before_agent, after_agent, before_model, after_model, before_tool, after_tool\n",
"- **Google Cloud Coexistence**: Use both Google Cloud telemetry (Cloud Trace) AND Openlayer simultaneously\n",
"\n",
"## Prerequisites\n",
"\n",
"Install the required packages:\n",
@@ -18,6 +25,15 @@
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install google-adk wrapt"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -36,12 +52,12 @@
"import os\n",
"\n",
"# Openlayer configuration\n",
"os.environ[\"OPENLAYER_API_KEY\"] = \"your-api-key-here\"\n",
"os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"your-pipeline-id-here\"\n",
"os.environ[\"OPENLAYER_API_KEY\"] = \"your-api-key\"\n",
"os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"your-pipeline-id\"\n",
"\n",
"# Google AI API configuration (Option 1: Using Google AI Studio)\n",
"# Get your API key from: https://aistudio.google.com/apikey\n",
"os.environ[\"GOOGLE_API_KEY\"] = \"your-google-ai-api-key-here\"\n",
"os.environ[\"GOOGLE_API_KEY\"] = \"your-google-api-key\"\n",
"\n",
"# Google Cloud Vertex AI configuration (Option 2: Using Google Cloud)\n",
"# Uncomment these if you're using Vertex AI instead of Google AI\n",
@@ -56,7 +72,9 @@
"source": [
"## Enable Google ADK Tracing\n",
"\n",
"Enable tracing before creating any agents. This patches Google ADK globally to send traces to Openlayer:\n"
"Enable tracing before creating any agents. This patches Google ADK globally to send traces to Openlayer.\n",
"\n",
"**Note:** By default, ADK's built-in OpenTelemetry tracing remains active, allowing you to send data to both Google Cloud (Cloud Trace, Cloud Monitoring) AND Openlayer. If you only want Openlayer, use `trace_google_adk(disable_adk_otel=True)`.\n"
]
},
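Under the hood, enabling tracing like this monkey-patches ADK entry points once, so every agent created afterwards is traced automatically. The following stdlib-only sketch illustrates that global-patch pattern; the `Runner` class and `captured` list here are illustrative stand-ins, not ADK's or Openlayer's real API:

```python
import functools

class Runner:
    """Stand-in for a framework class whose calls we want to trace."""
    def run(self, message: str) -> str:
        return f"echo: {message}"

captured = []  # collected trace records

def patch_runner() -> None:
    """Globally wrap Runner.run so every subsequent call is recorded."""
    original = Runner.run

    @functools.wraps(original)
    def traced(self, message: str) -> str:
        result = original(self, message)
        captured.append({"input": message, "output": result})
        return result

    Runner.run = traced

patch_runner()
print(Runner().run("hello"))  # the call now goes through the tracing wrapper
```

Because the patch replaces the method on the class itself, it applies to every instance, which is why tracing must be enabled before any agents are created.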
{
@@ -102,7 +120,7 @@
"\n",
"# Create a basic agent\n",
"agent = LlmAgent(\n",
" model=\"gemini-2.0-flash-exp\",\n",
" model=\"gemini-2.5-flash\",\n",
" name=\"Assistant\",\n",
" instruction=\"You are a helpful assistant. Provide concise and accurate responses.\"\n",
")\n",
@@ -190,7 +208,7 @@
"\n",
"# Create agent with tools (pass functions directly)\n",
"tool_agent = LlmAgent(\n",
" model=\"gemini-2.0-flash-exp\",\n",
" model=\"gemini-2.5-flash\",\n",
" name=\"ToolAgent\",\n",
" instruction=\"You are a helpful assistant with access to weather and calculation tools. Use them when appropriate.\",\n",
" tools=[get_weather, calculate]\n",
@@ -229,6 +247,273 @@
"await run_tool_agent()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 3: Agent with All 6 Callbacks\n",
"\n",
"Google ADK supports 6 types of callbacks that allow you to observe, customize, and control agent behavior. Openlayer automatically traces all of them:\n",
"\n",
"| Callback | Description | When Called |\n",
"|----------|-------------|-------------|\n",
"| `before_agent_callback` | Agent pre-processing | Before the agent starts its main work |\n",
"| `after_agent_callback` | Agent post-processing | After the agent finishes all its steps |\n",
"| `before_model_callback` | LLM pre-call | Before sending a request to the LLM |\n",
"| `after_model_callback` | LLM post-call | After receiving a response from the LLM |\n",
"| `before_tool_callback` | Tool pre-execution | Before executing a tool |\n",
"| `after_tool_callback` | Tool post-execution | After a tool finishes |\n",
"\n",
"Reference: https://google.github.io/adk-docs/callbacks/\n"
]
},
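The chronological nesting these callbacks produce can be sketched with plain Python. The sequence below simulates one agent turn that makes a single LLM call followed by a single tool call; it is a toy model of the firing order, not ADK code:

```python
# Simulated firing order for one agent turn: one LLM call, then one tool call.
order = []

def fire(step: str) -> None:
    """Record a callback or operation as it happens."""
    order.append(step)

fire("before_agent")   # agent pre-processing
fire("before_model")   # LLM pre-call
fire("llm_call")       # the actual model invocation
fire("after_model")    # LLM post-call (token counts available here)
fire("before_tool")    # tool pre-execution
fire("tool_call")      # the actual tool run
fire("after_tool")     # tool post-execution
fire("after_agent")    # agent post-processing

print(" -> ".join(order))
```

Note that `before_model`/`after_model` fire around every LLM call, so a turn that triggers a tool typically repeats the model pair after the tool result comes back.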
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Any, Dict, Optional\n",
"\n",
"from google.adk.tools import ToolContext\n",
"from google.adk.models import LlmRequest, LlmResponse\n",
"from google.adk.tools.base_tool import BaseTool\n",
"from google.adk.agents.callback_context import CallbackContext\n",
"\n",
"# ============================================================================\n",
"# Define all 6 callback functions\n",
"# ============================================================================\n",
"\n",
"# 1. Before Agent Callback\n",
"# Called before the agent starts processing a request\n",
"def before_agent_callback(callback_context: CallbackContext) -> Optional[Any]:\n",
" \"\"\"\n",
" Called before the agent starts its main work.\n",
" \n",
" Use cases:\n",
" - Input validation\n",
" - Session initialization\n",
" - Logging request start\n",
" - Adding default context\n",
" \"\"\"\n",
" print(f\"[before_agent] Agent '{callback_context.agent_name}' starting\") # noqa: T201\n",
" print(f\"[before_agent] Invocation ID: {callback_context.invocation_id}\") # noqa: T201\n",
" # Return None to allow the agent to proceed normally\n",
" # Return a Content object to skip the agent and return that content directly\n",
" return None\n",
"\n",
"\n",
"# 2. After Agent Callback\n",
"# Called after the agent finishes processing\n",
"def after_agent_callback(callback_context: CallbackContext) -> Optional[Any]:\n",
" \"\"\"\n",
" Called after the agent has finished all its steps.\n",
" \n",
" Use cases:\n",
" - Response post-processing\n",
" - Logging request completion\n",
" - Cleanup operations\n",
" - Analytics\n",
" \"\"\"\n",
" print(f\"[after_agent] Agent '{callback_context.agent_name}' finished\") # noqa: T201\n",
" # Return None to use the agent's original response\n",
" # Return a Content object to replace the agent's response\n",
" return None\n",
"\n",
"\n",
"# 3. Before Model Callback\n",
"# Called before each LLM call\n",
"def before_model_callback(\n",
" _callback_context: CallbackContext, \n",
" llm_request: LlmRequest\n",
") -> Optional[LlmResponse]:\n",
" \"\"\"\n",
" Called before sending a request to the LLM.\n",
" \n",
" Use cases:\n",
" - Request modification (add system prompts)\n",
" - Content filtering / guardrails\n",
" - Caching (return cached response)\n",
" - Rate limiting\n",
" \"\"\"\n",
" print(f\"[before_model] Calling model: {llm_request.model}\") # noqa: T201\n",
" print(f\"[before_model] Request has {len(llm_request.contents)} content items\") # noqa: T201\n",
" # Return None to proceed with the LLM call\n",
" # Return an LlmResponse to skip the LLM and use that response instead\n",
" return None\n",
"\n",
"\n",
"# 4. After Model Callback\n",
"# Called after receiving LLM response\n",
"def after_model_callback(\n",
" _callback_context: CallbackContext, \n",
" llm_response: LlmResponse\n",
") -> Optional[LlmResponse]:\n",
" \"\"\"\n",
" Called after receiving a response from the LLM.\n",
" \n",
" Use cases:\n",
" - Response validation\n",
" - Content filtering\n",
" - Response transformation\n",
" - Logging/analytics\n",
" \"\"\"\n",
" print(\"[after_model] Received response from LLM\") # noqa: T201\n",
" if hasattr(llm_response, 'usage_metadata') and llm_response.usage_metadata:\n",
" print(f\"[after_model] Tokens used: {llm_response.usage_metadata.total_token_count}\") # noqa: T201\n",
" # Return None to use the original response\n",
" # Return a modified LlmResponse to replace it\n",
" return None\n",
"\n",
"\n",
"# 5. Before Tool Callback\n",
"# Called before tool execution\n",
"def before_tool_callback(\n",
" tool: BaseTool, \n",
" args: Dict[str, Any], \n",
" _tool_context: ToolContext\n",
") -> Optional[Dict]:\n",
" \"\"\"\n",
" Called before executing a tool.\n",
" \n",
" Use cases:\n",
" - Argument validation\n",
" - Authorization checks\n",
" - Tool call logging\n",
" - Mocking tool responses for testing\n",
" \"\"\"\n",
" print(f\"[before_tool] Executing tool: {tool.name}\") # noqa: T201\n",
" print(f\"[before_tool] Arguments: {args}\") # noqa: T201\n",
" # Return None to proceed with the tool execution\n",
" # Return a dict to skip the tool and use that as the response\n",
" return None\n",
"\n",
"\n",
"# 6. After Tool Callback\n",
"# Called after tool execution\n",
"def after_tool_callback(\n",
" tool: BaseTool, \n",
" _args: Dict[str, Any], \n",
" _tool_context: ToolContext, \n",
" tool_response: Dict\n",
") -> Optional[Dict]:\n",
" \"\"\"\n",
" Called after a tool finishes execution.\n",
" \n",
" Use cases:\n",
" - Response transformation\n",
" - Error handling\n",
" - Logging tool results\n",
" - Caching responses\n",
" \"\"\"\n",
" print(f\"[after_tool] Tool '{tool.name}' completed\") # noqa: T201\n",
" print(f\"[after_tool] Response: {tool_response}\") # noqa: T201\n",
" # Return None to use the original tool response\n",
" # Return a modified dict to replace the response\n",
" return None\n"
]
},
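The None-vs-value contract repeated in the docstrings above can be modeled framework-side: if a `before_*` callback returns a non-None value, the wrapped operation is skipped and that value is used instead; if an `after_*` callback returns a non-None value, it replaces the original result. This is a simplified sketch of that dispatch contract, not ADK's actual implementation:

```python
from typing import Any, Callable, Optional

def run_with_callbacks(
    operation: Callable[[], Any],
    before: Optional[Callable[[], Optional[Any]]] = None,
    after: Optional[Callable[[Any], Optional[Any]]] = None,
) -> Any:
    """Simplified model of the ADK callback contract:
    a non-None return from `before` skips the operation entirely;
    a non-None return from `after` replaces the result."""
    if before is not None:
        short_circuit = before()
        if short_circuit is not None:
            return short_circuit  # e.g. a cached LLM response or mocked tool output
    result = operation()
    if after is not None:
        replacement = after(result)
        if replacement is not None:
            return replacement  # e.g. a filtered or transformed response
    return result

# A cached value short-circuits the "LLM call":
print(run_with_callbacks(lambda: "fresh", before=lambda: "cached"))   # cached
# No short-circuit, but post-processing replaces the result:
print(run_with_callbacks(lambda: "raw", after=lambda r: r.upper()))   # RAW
```

This is why every callback above ends with `return None`: returning anything else would override the agent's normal behavior.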
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# ============================================================================\n",
"# Create agent with all callbacks\n",
"# ============================================================================\n",
"\n",
"# Define a tool for the callback agent to use\n",
"def get_current_time() -> str:\n",
" \"\"\"Returns the current time.\n",
" \n",
" Returns:\n",
" str: The current time as a formatted string.\n",
" \"\"\"\n",
" from datetime import datetime\n",
" return f\"The current time is {datetime.now().strftime('%H:%M:%S')}\"\n",
"\n",
"\n",
"# Use different session IDs for callback agent\n",
"CALLBACK_USER_ID = \"user_789\"\n",
"CALLBACK_SESSION_ID = \"session_789\"\n",
"\n",
"# Create agent with ALL 6 callbacks\n",
"callback_agent = LlmAgent(\n",
" model=\"gemini-2.5-flash\",\n",
" name=\"CallbackDemoAgent\",\n",
" instruction=\"You are a helpful assistant. Use the get_current_time tool when asked about time.\",\n",
" tools=[get_current_time],\n",
" # Register all 6 callbacks\n",
" before_agent_callback=before_agent_callback,\n",
" after_agent_callback=after_agent_callback,\n",
" before_model_callback=before_model_callback,\n",
" after_model_callback=after_model_callback,\n",
" before_tool_callback=before_tool_callback,\n",
" after_tool_callback=after_tool_callback,\n",
")\n",
"\n",
"# Create runner for callback agent\n",
"callback_runner = Runner(\n",
" agent=callback_agent,\n",
" app_name=APP_NAME,\n",
" session_service=session_service\n",
")\n",
"\n",
"# Define async function to run the callback agent\n",
"async def run_callback_agent():\n",
" # Create session\n",
" await session_service.create_session(\n",
" app_name=APP_NAME,\n",
" user_id=CALLBACK_USER_ID,\n",
" session_id=CALLBACK_SESSION_ID\n",
" )\n",
" \n",
" # Run the agent with a query that will trigger tool use\n",
" query = \"What time is it right now?\"\n",
" content = types.Content(role='user', parts=[types.Part(text=query)])\n",
" \n",
" # Process events and get response\n",
" async for event in callback_runner.run_async(\n",
" user_id=CALLBACK_USER_ID,\n",
" session_id=CALLBACK_SESSION_ID,\n",
" new_message=content\n",
" ):\n",
" if event.is_final_response() and event.content:\n",
" print(f\"Final Response: {event.content.parts[0].text.strip()}\") # noqa: T201\n",
"\n",
"# Run the async function\n",
"await run_callback_agent()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### What You'll See in Openlayer\n",
"\n",
"After running the callback agent, you'll see in your Openlayer dashboard:\n",
"\n",
"1. **Agent Step** (`CallbackDemoAgent`):\n",
" - Shows which callbacks are registered\n",
" - Contains all nested steps in chronological order\n",
"\n",
"2. **All Callbacks as Siblings** (direct children of Agent):\n",
" - `Callback: before_agent` - First, before any processing\n",
" - `Callback: before_model` - Before each LLM call\n",
"  - `LLM Call: gemini-2.5-flash` - The actual LLM invocation\n",
" - `Callback: after_model` - After each LLM call (includes token counts)\n",
" - `Callback: before_tool` - Before tool execution\n",
" - `Tool: get_current_time` - The actual tool execution\n",
" - `Callback: after_tool` - After tool completion\n",
"  - `Callback: after_agent` - Last, after all processing is complete\n",
"\n",
"3. **Token Usage** (captured on LLM Call steps):\n",
" - Prompt tokens\n",
" - Completion tokens \n",
" - Total tokens\n"
]
},
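The per-call token counts roll up naturally into trace-level totals. A small sketch of that accounting, using the field names from Gemini's `usage_metadata` (the counts themselves are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Usage:
    """Token usage for one LLM call, mirroring Gemini usage_metadata fields."""
    prompt_token_count: int       # input tokens
    candidates_token_count: int   # completion tokens

    @property
    def total_token_count(self) -> int:
        return self.prompt_token_count + self.candidates_token_count

# Two LLM calls in one trace (illustrative numbers):
calls = [Usage(120, 35), Usage(310, 42)]
trace_total = sum(u.total_token_count for u in calls)
print(trace_total)  # total tokens across all LLM Call steps in the trace
```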
{
"cell_type": "markdown",
"metadata": {},
@@ -240,11 +525,30 @@
"1. Go to https://app.openlayer.com\n",
"2. Navigate to your inference pipeline\n",
"3. View the traces tab to see:\n",
" - Agent execution steps\n",
" - LLM calls with token counts\n",
" - Tool executions with inputs and outputs\n",
" - Latency for each operation\n",
" - Complete execution hierarchy\n"
" - **Agent execution steps** with nested hierarchy\n",
" - **LLM calls** with token counts (prompt, completion, total)\n",
" - **Tool executions** with inputs and outputs\n",
" - **All 6 callback types** traced as separate steps\n",
" - **Latency** for each operation\n",
" - **Complete execution hierarchy** showing the flow\n",
"\n",
"The traces will show the hierarchy of operations in chronological order:\n",
"```\n",
"Agent: CallbackDemoAgent\n",
"├── Callback: before_agent (CallbackDemoAgent)\n",
"├── Callback: before_model (CallbackDemoAgent)\n",
"├── LLM Call: gemini-2.5-flash\n",
"├── Callback: after_model (CallbackDemoAgent)\n",
"├── Callback: before_tool (CallbackDemoAgent)\n",
"├── Tool: get_current_time\n",
"├── Callback: after_tool (CallbackDemoAgent)\n",
"├── Callback: before_model (CallbackDemoAgent)\n",
"├── LLM Call: gemini-2.5-flash\n",
"├── Callback: after_model (CallbackDemoAgent)\n",
"└── Callback: after_agent (CallbackDemoAgent)\n",
"```\n",
"\n",
"**Note:** All callbacks are direct children of the Agent step, appearing in chronological order alongside LLM calls and tool executions. This provides a clear timeline view of the agent execution flow.\n"
]
},
{
@@ -285,7 +589,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.12.8"
}
},
"nbformat": 4,