diff --git a/.gitignore b/.gitignore index aa58fe9e6..00891f9bc 100644 --- a/.gitignore +++ b/.gitignore @@ -259,3 +259,9 @@ Dockerfile 01-tutorials/03-AgentCore-identity/07-Outbound_Auth_3LO_ECS_Fargate/backend/runtime/requirements.txt 01-tutorials/03-AgentCore-identity/07-Outbound_Auth_3LO_ECS_Fargate/backend/session_binding/requirements.txt 01-tutorials/03-AgentCore-identity/07-Outbound_Auth_3LO_ECS_Fargate/cdk.context.json + +### ASH ### +.ash/ + +### PowerPoint ### +01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/images/planner-executor-architecture.pptx diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/.env.example b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/.env.example new file mode 100644 index 000000000..43b641eec --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/.env.example @@ -0,0 +1,9 @@ +# AWS Configuration +AWS_REGION=us-west-2 + +# Auth0 OAuth Configuration (optional — leave empty to use IAM auth) +# See the Auth0 Setup section in the notebook for instructions. +AUTH0_DOMAIN= +AUTH0_CLIENT_ID= +AUTH0_CLIENT_SECRET= +AUTH0_AUDIENCE=https://bedrock-agentcore.us-west-2.amazonaws.com diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/00-deploy-sample-a2a-agents.ipynb b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/00-deploy-sample-a2a-agents.ipynb new file mode 100644 index 000000000..351287408 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/00-deploy-sample-a2a-agents.ipynb @@ -0,0 +1,803 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "md-title", + "metadata": {}, + "source": [ + "# Deploy Sample A2A Agents\n", + "\n", + "## Overview\n", + "\n", + "This notebook deploys a set of **sample e-commerce A2A agents** that handle stateful, multi-step operations requiring reasoning across tool calls. 
Each agent runs as an Amazon Bedrock AgentCore Runtime and exposes its capability via the A2A JSON-RPC protocol.\n", + "\n", + "## What you will build\n", + "\n", + "Deploy **3 A2A agents** using Amazon Bedrock AgentCore Runtime.\n", + "\n", + "By the end you will have:\n", + "- **3 A2A agents** running on AgentCore Runtime, each backed by a Strands agent\n", + "- A **shared sample database** (DynamoDB) seeded with orders, payments, and inventory\n", + "- End-to-end tests that exercise every agent via `invoke_agent_runtime`\n", + "\n", + "For MCP tools, see `00_deploy_sample_mcp_tools.ipynb`.\n", + "\n", + "## Architecture\n", + "\n", + "```\n", + "A2A Agents on Runtime (3)\n", + "──────────────────────────────────────────────────────────\n", + "AgentCore Runtime (A2A / HTTP JSON-RPC)\n", + " │\n", + " ├─ Agent: PaymentRefundAgent\n", + " │ Tools: get_order_payment_info → issue_refund → get_refund_status\n", + " │ Tables: orders, payments, refunds\n", + " │\n", + " ├─ Agent: InventoryReserveAgent\n", + " │ Tools: reserve_inventory → get_reservation_status / release_reservation\n", + " │ Tables: inventory, reservations\n", + " │\n", + " └─ Agent: ShippingUpdateAgent\n", + " Tools: assign_carrier → create_shipment → update_shipment_status\n", + " Tables: orders, shipments\n", + "\n", + "Shared: DynamoDB (6 tables) ─── seeded with orders, payments, inventory\n", + "Invocation: invoke_agent_runtime (A2A JSON-RPC message/send)\n", + "```\n", + "\n", + "## Why A2A?\n", + "\n", + "Each agent handles operations that require **multi-step reasoning** across dependent tool calls — the result of one step determines what to do next:\n", + "\n", + "- **PaymentRefundAgent**: validate order and payment → decide partial vs full → issue → confirm\n", + "- **InventoryReserveAgent**: check available stock → reserve → support rollback if downstream steps fail\n", + "- **ShippingUpdateAgent**: pick carrier based on weight and destination → create shipment → confirm status\n", + "\n", + 
"## Prerequisites\n", + "- AWS account with Bedrock model access enabled (`us-west-2`)\n", + "- IAM permissions: DynamoDB, IAM, Bedrock AgentCore, ECR, CodeBuild\n", + "- Python 3.10+\n", + "\n", + "---\n", + "## Agents Overview\n", + "\n", + "| Agent | A2A Tool Name | Tools | DynamoDB Tables |\n", + "|---|---|---|---|\n", + "| `PaymentRefundAgent` | `payment_refund_tool` | get_order_payment_info, issue_refund, get_refund_status | orders, payments, refunds |\n", + "| `InventoryReserveAgent` | `inventory_reserve_tool` | reserve_inventory, get_reservation_status, release_reservation | inventory, reservations |\n", + "| `ShippingUpdateAgent` | `shipping_update_tool` | assign_carrier, create_shipment, update_shipment_status | orders, shipments |\n", + "\n", + "---\n", + "## Sections\n", + "1. **Setup** — clients and config\n", + "2. **DynamoDB** — create tables or point to existing\n", + "3. **Deploy** — package and launch each A2A agent\n", + "4. **Test** — invoke each agent end-to-end\n", + "5. **Cleanup**" + ] + }, + { + "cell_type": "markdown", + "id": "md-setup", + "metadata": {}, + "source": [ + "---\n", + "## 1. 
Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-install", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install strands-agents \"strands-agents[a2a]\" bedrock-agentcore bedrock-agentcore-starter-toolkit fastapi uvicorn boto3 -q" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-setup", + "metadata": {}, + "outputs": [], + "source": [ + "import boto3, json, time, os, sys, pathlib, shutil, uuid\n", + "from datetime import datetime\n", + "\n", + "\n", + "# ── Config ─────────────────────────────────────────────────────────────────\n", + "# Set AWS credentials if not using an Amazon SageMaker notebook\n", + "#AWS_PROFILE = \"configured-aws-profile\"\n", + "\n", + "AWS_REGION = \"us-west-2\"\n", + "MODEL_ID = \"us.anthropic.claude-sonnet-4-20250514-v1:0\"\n", + "\n", + "timestamp = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", + "\n", + "session = boto3.Session(region_name=AWS_REGION)\n", + "iam_client = session.client(\"iam\")\n", + "agentcore_client = session.client(\"bedrock-agentcore-control\")\n", + "ac_data_client = session.client(\"bedrock-agentcore\")\n", + "logs_client = session.client(\"logs\")\n", + "ACCOUNT_ID = session.client(\"sts\").get_caller_identity()[\"Account\"]\n", + "\n", + "print(f\"Account : {ACCOUNT_ID}\")\n", + "print(f\"Region : {AWS_REGION}\")\n", + "print(f\"Timestamp : {timestamp}\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-ddb", + "metadata": {}, + "source": [ + "---\n", + "## 2. DynamoDB\n", + "\n", + "**Option A (Default)** — Reuse tables from a prior deployment of MCP tools: the cell below auto-detects the table prefix from the MCP tools config.\n", + "\n", + "If you have already run `00_deploy_sample_mcp_tools.ipynb` and want the A2A agents to operate on the same DynamoDB tables, use Option A."
+ ] + }, + { + "cell_type": "markdown", + "id": "7c03e3e8", + "metadata": {}, + "source": [ + "**Option A (Default)**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-ddb-config", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Auto-detect table prefix from MCP tools deployment config ────────────\n", + "import pathlib\n", + "mcp_config_path = pathlib.Path(\"utils/mcp_tools_config.json\")\n", + "if mcp_config_path.exists():\n", + " _mcp_cfg = json.loads(mcp_config_path.read_text())\n", + " EXISTING_TABLE_PREFIX = f\"tools-{_mcp_cfg['timestamp']}-\"\n", + " print(f\"Auto-detected from MCP config: {EXISTING_TABLE_PREFIX}\")\n", + "else:\n", + " EXISTING_TABLE_PREFIX = \"\"\n", + " print(\"No MCP config found — run 00_deploy_sample_mcp_tools first, or set EXISTING_TABLE_PREFIX manually.\")\n", + "\n", + "if EXISTING_TABLE_PREFIX:\n", + " TABLE_PREFIX = EXISTING_TABLE_PREFIX\n", + " ddb_client = session.client(\"dynamodb\", region_name=AWS_REGION)\n", + " # Tables used by A2A agents: orders, payments, refunds, inventory, reservations, shipments\n", + " TABLE_NAMES = {\n", + " \"orders\": TABLE_PREFIX + \"orders\",\n", + " \"payments\": TABLE_PREFIX + \"payments\",\n", + " \"refunds\": TABLE_PREFIX + \"refunds\",\n", + " \"inventory\": TABLE_PREFIX + \"inventory\",\n", + " \"reservations\": TABLE_PREFIX + \"reservations\",\n", + " \"shipments\": TABLE_PREFIX + \"shipments\",\n", + " }\n", + " # Create A2A-only tables (refunds, reservations) if they don't exist yet\n", + " A2A_ONLY_TABLES = {\n", + " \"refunds\": \"refund_id\",\n", + " \"reservations\": \"reservation_id\",\n", + " }\n", + " for logical, pk in A2A_ONLY_TABLES.items():\n", + " tname = TABLE_NAMES[logical]\n", + " try:\n", + " ddb_client.create_table(\n", + " TableName=tname,\n", + " BillingMode=\"PAY_PER_REQUEST\",\n", + " KeySchema=[{\"AttributeName\": pk, \"KeyType\": \"HASH\"}],\n", + " AttributeDefinitions=[{\"AttributeName\": pk, \"AttributeType\": \"S\"}],\n", + 
" )\n", + " print(f\" Created: {tname}\")\n", + " except ddb_client.exceptions.ResourceInUseException:\n", + " print(f\" Already exists: {tname}\")\n", + " # Wait for any newly created tables\n", + " for logical in A2A_ONLY_TABLES:\n", + " ddb_client.get_waiter(\"table_exists\").wait(TableName=TABLE_NAMES[logical])\n", + "\n", + " print(\"\\nReusing existing tables:\")\n", + " for k, v in TABLE_NAMES.items():\n", + " print(f\" {k:15s} → {v}\")\n", + "else:\n", + " print(\"EXISTING_TABLE_PREFIX is empty — run the cell below to create fresh tables.\")" + ] + }, + { + "cell_type": "markdown", + "id": "5364d4e8", + "metadata": {}, + "source": [ + "**Option B** - Create fresh DynamoDB tables\n", + "\n", + "**Use this option only if** you don't want to reuse the tables created by `00_deploy_sample_mcp_tools.ipynb` and prefer to run the A2A agents on their own fresh tables." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-ddb-create", + "metadata": {}, + "outputs": [], + "source": [ + "# Do NOT execute this cell if you are reusing tables from a prior deployment of MCP tools (Option A).\n", + "# Skip this cell if you set EXISTING_TABLE_PREFIX above.\n", + "import decimal\n", + "\n", + "ddb_client = session.client(\"dynamodb\", region_name=AWS_REGION)\n", + "ddb_resource = session.resource(\"dynamodb\", region_name=AWS_REGION)\n", + "\n", + "TABLE_PREFIX = f\"tools-{timestamp}-\"\n", + "\n", + "# Tables used by A2A agents: orders, payments, refunds, inventory, reservations, shipments\n", + "TABLE_DEFS = [\n", + " (\"orders\", \"order_id\", None),\n", + " (\"payments\", \"payment_id\", \"order_id\"), # GSI: order_id-index\n", + " (\"refunds\", \"refund_id\", None),\n", + " (\"inventory\", \"sku\", None),\n", + " (\"reservations\", \"reservation_id\", None),\n", + " (\"shipments\", \"shipment_id\", \"order_id\"), # GSI: order_id-index\n", + "]\n", + "\n", + "TABLE_NAMES = {}\n", + "for logical, pk, gsi_field in TABLE_DEFS:\n", + " tname = TABLE_PREFIX + logical\n", + " 
TABLE_NAMES[logical] = tname\n", + " kwargs = dict(\n", + " TableName=tname,\n", + " BillingMode=\"PAY_PER_REQUEST\",\n", + " KeySchema=[{\"AttributeName\": pk, \"KeyType\": \"HASH\"}],\n", + " AttributeDefinitions=[{\"AttributeName\": pk, \"AttributeType\": \"S\"}],\n", + " )\n", + " if gsi_field:\n", + " kwargs[\"AttributeDefinitions\"].append({\"AttributeName\": gsi_field, \"AttributeType\": \"S\"})\n", + " kwargs[\"GlobalSecondaryIndexes\"] = [{\n", + " \"IndexName\": f\"{gsi_field}-index\",\n", + " \"KeySchema\": [{\"AttributeName\": gsi_field, \"KeyType\": \"HASH\"}],\n", + " \"Projection\": {\"ProjectionType\": \"ALL\"},\n", + " }]\n", + " try:\n", + " ddb_client.create_table(**kwargs)\n", + " print(f\" Creating: {tname}\")\n", + " except ddb_client.exceptions.ResourceInUseException:\n", + " print(f\" Already exists: {tname}\")\n", + "\n", + "print(\"Waiting for tables to become ACTIVE...\")\n", + "for tname in TABLE_NAMES.values():\n", + " ddb_client.get_waiter(\"table_exists\").wait(TableName=tname)\n", + "print(\"All tables ACTIVE.\")\n", + "\n", + "# ── Seed from utils/sample_db.py ───────────────────────────────────────────\n", + "# Seed orders, payments, inventory with sample data.\n", + "# refunds and reservations start empty (written at runtime by the agents).\n", + "from utils.sample_db import ORDERS, PAYMENTS, INVENTORY\n", + "\n", + "def _to_ddb(obj):\n", + " return json.loads(json.dumps(obj), parse_float=decimal.Decimal)\n", + "\n", + "seed_map = {\n", + " \"orders\": ORDERS,\n", + " \"payments\": PAYMENTS,\n", + " \"inventory\": INVENTORY,\n", + "}\n", + "for logical, data in seed_map.items():\n", + " t = ddb_resource.Table(TABLE_NAMES[logical])\n", + " with t.batch_writer() as batch:\n", + " for item in data.values():\n", + " batch.put_item(Item=_to_ddb(item))\n", + " print(f\" Seeded {len(data):2d} records → {TABLE_NAMES[logical]}\")\n", + "\n", + "print(\"\\nSample database ready.\")\n", + "print(\"\\nTable map:\")\n", + "for k, v in 
TABLE_NAMES.items():\n", + " print(f\" {k:15s} → {v}\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-deploy", + "metadata": {}, + "source": [ + "---\n", + "## 3. Deploy A2A Agents\n", + "\n", + "Each cell is self-contained — deploy agents independently or run all three in sequence." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-deploy-payment", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Deploy: payment_refund_a2a ─────────────────────────────────────────────\n", + "from bedrock_agentcore_starter_toolkit import Runtime\n", + "\n", + "_template = pathlib.Path(\"utils/agents/payment_refund_agent.py\").read_text()\n", + "_env_vars = (\n", + " 'os.environ.setdefault(\"ORDERS_TABLE\", \"{orders}\")\\n'\n", + " 'os.environ.setdefault(\"PAYMENTS_TABLE\", \"{payments}\")\\n'\n", + " 'os.environ.setdefault(\"REFUNDS_TABLE\", \"{refunds}\")\\n'\n", + ").format(**TABLE_NAMES)\n", + "pathlib.Path(\"payment_refund_agent.py\").write_text(\n", + " _template.replace(\"# TABLE_NAMES_PLACEHOLDER\\n\", _env_vars)\n", + ")\n", + "shutil.copy(\"utils/db.py\", \"db.py\")\n", + "pathlib.Path(\"payment_refund_requirements.txt\").write_text(\n", + " \"strands-agents[a2a]\\nfastapi\\nuvicorn\\nboto3\\n\"\n", + ")\n", + "for stale in [\".bedrock_agentcore.yaml\", \"Dockerfile\", \".dockerignore\"]:\n", + " p = pathlib.Path(stale)\n", + " if p.exists(): p.unlink()\n", + "\n", + "pmt_runtime = Runtime()\n", + "pmt_runtime.configure(\n", + " entrypoint=\"payment_refund_agent.py\",\n", + " auto_create_execution_role=True,\n", + " auto_create_ecr=True,\n", + " requirements_file=\"payment_refund_requirements.txt\",\n", + " region=AWS_REGION,\n", + " agent_name=f\"payment_refund_a2a_{timestamp}\",\n", + " protocol=\"A2A\",\n", + ")\n", + "pmt_launch = pmt_runtime.launch(auto_update_on_conflict=True)\n", + "pmt_agent_id = pmt_launch.agent_id\n", + "pmt_agent_arn = pmt_launch.agent_arn\n", + "print(f\"Agent ID : {pmt_agent_id}\")\n", + 
"print(f\"Agent ARN : {pmt_agent_arn}\")\n", + "\n", + "_resp = agentcore_client.get_agent_runtime(agentRuntimeId=pmt_agent_id)\n", + "pmt_role_arn = (_resp.get(\"executionRoleArn\") or _resp.get(\"roleArn\") or _resp.get(\"agentRuntimeRoleArn\"))\n", + "pmt_role_name = pmt_role_arn.split(\"/\")[-1]\n", + "iam_client.put_role_policy(\n", + " RoleName=pmt_role_name, PolicyName=\"DynamoDBToolsPolicy\",\n", + " PolicyDocument=json.dumps({\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [{\"Effect\": \"Allow\",\n", + " \"Action\": [\"dynamodb:GetItem\",\"dynamodb:PutItem\",\"dynamodb:UpdateItem\",\n", + " \"dynamodb:Scan\",\"dynamodb:Query\"],\n", + " \"Resource\": f\"arn:aws:dynamodb:{AWS_REGION}:{ACCOUNT_ID}:table/{TABLE_PREFIX}*\"}]\n", + " })\n", + ")\n", + "print(f\"DynamoDB policy attached: {pmt_role_name}\")\n", + "\n", + "while True:\n", + " status = agentcore_client.get_agent_runtime(agentRuntimeId=pmt_agent_id).get(\"status\")\n", + " if status == \"READY\":\n", + " print(f\"payment_refund_a2a READY: {pmt_agent_arn}\"); break\n", + " print(f\" status: {status}...\"); time.sleep(15)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-deploy-inventory", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Deploy: inventory_reserve_a2a ──────────────────────────────────────────\n", + "from bedrock_agentcore_starter_toolkit import Runtime\n", + "\n", + "_template = pathlib.Path(\"utils/agents/inventory_reserve_agent.py\").read_text()\n", + "_env_vars = (\n", + " 'os.environ.setdefault(\"INVENTORY_TABLE\", \"{inventory}\")\\n'\n", + " 'os.environ.setdefault(\"RESERVATIONS_TABLE\", \"{reservations}\")\\n'\n", + ").format(**TABLE_NAMES)\n", + "pathlib.Path(\"inventory_reserve_agent.py\").write_text(\n", + " _template.replace(\"# TABLE_NAMES_PLACEHOLDER\\n\", _env_vars)\n", + ")\n", + "shutil.copy(\"utils/db.py\", \"db.py\")\n", + "pathlib.Path(\"inventory_reserve_requirements.txt\").write_text(\n", + " 
\"strands-agents[a2a]\\nfastapi\\nuvicorn\\nboto3\\n\"\n", + ")\n", + "for stale in [\".bedrock_agentcore.yaml\", \"Dockerfile\", \".dockerignore\"]:\n", + " p = pathlib.Path(stale)\n", + " if p.exists(): p.unlink()\n", + "\n", + "inv_runtime = Runtime()\n", + "inv_runtime.configure(\n", + " entrypoint=\"inventory_reserve_agent.py\",\n", + " auto_create_execution_role=True,\n", + " auto_create_ecr=True,\n", + " requirements_file=\"inventory_reserve_requirements.txt\",\n", + " region=AWS_REGION,\n", + " agent_name=f\"inventory_reserve_a2a_{timestamp}\",\n", + " protocol=\"A2A\",\n", + ")\n", + "inv_launch = inv_runtime.launch(auto_update_on_conflict=True)\n", + "inv_agent_id = inv_launch.agent_id\n", + "inv_agent_arn = inv_launch.agent_arn\n", + "print(f\"Agent ID : {inv_agent_id}\")\n", + "print(f\"Agent ARN : {inv_agent_arn}\")\n", + "\n", + "_resp = agentcore_client.get_agent_runtime(agentRuntimeId=inv_agent_id)\n", + "inv_role_arn = (_resp.get(\"executionRoleArn\") or _resp.get(\"roleArn\") or _resp.get(\"agentRuntimeRoleArn\"))\n", + "inv_role_name = inv_role_arn.split(\"/\")[-1]\n", + "iam_client.put_role_policy(\n", + " RoleName=inv_role_name, PolicyName=\"DynamoDBToolsPolicy\",\n", + " PolicyDocument=json.dumps({\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [{\"Effect\": \"Allow\",\n", + " \"Action\": [\"dynamodb:GetItem\",\"dynamodb:PutItem\",\"dynamodb:UpdateItem\",\n", + " \"dynamodb:Scan\",\"dynamodb:Query\"],\n", + " \"Resource\": f\"arn:aws:dynamodb:{AWS_REGION}:{ACCOUNT_ID}:table/{TABLE_PREFIX}*\"}]\n", + " })\n", + ")\n", + "print(f\"DynamoDB policy attached: {inv_role_name}\")\n", + "\n", + "while True:\n", + " status = agentcore_client.get_agent_runtime(agentRuntimeId=inv_agent_id).get(\"status\")\n", + " if status == \"READY\":\n", + " print(f\"inventory_reserve_a2a READY: {inv_agent_arn}\"); break\n", + " print(f\" status: {status}...\"); time.sleep(15)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": 
"cell-deploy-shipping", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Deploy: shipping_update_a2a ────────────────────────────────────────────\n", + "from bedrock_agentcore_starter_toolkit import Runtime\n", + "\n", + "_template = pathlib.Path(\"utils/agents/shipping_update_agent.py\").read_text()\n", + "_env_vars = (\n", + " 'os.environ.setdefault(\"ORDERS_TABLE\", \"{orders}\")\\n'\n", + " 'os.environ.setdefault(\"SHIPMENTS_TABLE\", \"{shipments}\")\\n'\n", + ").format(**TABLE_NAMES)\n", + "pathlib.Path(\"shipping_update_agent.py\").write_text(\n", + " _template.replace(\"# TABLE_NAMES_PLACEHOLDER\\n\", _env_vars)\n", + ")\n", + "shutil.copy(\"utils/db.py\", \"db.py\")\n", + "pathlib.Path(\"shipping_update_requirements.txt\").write_text(\n", + " \"strands-agents[a2a]\\nfastapi\\nuvicorn\\nboto3\\n\"\n", + ")\n", + "for stale in [\".bedrock_agentcore.yaml\", \"Dockerfile\", \".dockerignore\"]:\n", + " p = pathlib.Path(stale)\n", + " if p.exists(): p.unlink()\n", + "\n", + "ship_runtime = Runtime()\n", + "ship_runtime.configure(\n", + " entrypoint=\"shipping_update_agent.py\",\n", + " auto_create_execution_role=True,\n", + " auto_create_ecr=True,\n", + " requirements_file=\"shipping_update_requirements.txt\",\n", + " region=AWS_REGION,\n", + " agent_name=f\"shipping_update_a2a_{timestamp}\",\n", + " protocol=\"A2A\",\n", + ")\n", + "ship_launch = ship_runtime.launch(auto_update_on_conflict=True)\n", + "ship_agent_id = ship_launch.agent_id\n", + "ship_agent_arn = ship_launch.agent_arn\n", + "print(f\"Agent ID : {ship_agent_id}\")\n", + "print(f\"Agent ARN : {ship_agent_arn}\")\n", + "\n", + "_resp = agentcore_client.get_agent_runtime(agentRuntimeId=ship_agent_id)\n", + "ship_role_arn = (_resp.get(\"executionRoleArn\") or _resp.get(\"roleArn\") or _resp.get(\"agentRuntimeRoleArn\"))\n", + "ship_role_name = ship_role_arn.split(\"/\")[-1]\n", + "iam_client.put_role_policy(\n", + " RoleName=ship_role_name, PolicyName=\"DynamoDBToolsPolicy\",\n", + " 
PolicyDocument=json.dumps({\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [{\"Effect\": \"Allow\",\n", + " \"Action\": [\"dynamodb:GetItem\",\"dynamodb:PutItem\",\"dynamodb:UpdateItem\",\n", + " \"dynamodb:Scan\",\"dynamodb:Query\"],\n", + " \"Resource\": f\"arn:aws:dynamodb:{AWS_REGION}:{ACCOUNT_ID}:table/{TABLE_PREFIX}*\"}]\n", + " })\n", + ")\n", + "print(f\"DynamoDB policy attached: {ship_role_name}\")\n", + "\n", + "while True:\n", + " status = agentcore_client.get_agent_runtime(agentRuntimeId=ship_agent_id).get(\"status\")\n", + " if status == \"READY\":\n", + " print(f\"shipping_update_a2a READY: {ship_agent_arn}\"); break\n", + " print(f\" status: {status}...\"); time.sleep(15)" + ] + }, + { + "cell_type": "markdown", + "id": "md-save-config", + "metadata": {}, + "source": [ + "### Save deployment config for downstream notebooks\n", + "\n", + "Writes `utils/a2a_agents_config.json` so that `07_planner_executor_live.ipynb` can load\n", + "the A2A agent ARNs and descriptions without manual copy-paste."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-save-config", + "metadata": {}, + "outputs": [], + "source": [ + "import pathlib\n", + "\n", + "a2a_config = {\n", + " \"timestamp\": timestamp,\n", + " \"agents\": {\n", + " \"payment_refund\": {\n", + " \"agent_id\": pmt_agent_id,\n", + " \"agent_arn\": pmt_agent_arn,\n", + " \"name\": \"PaymentRefundAgent\",\n", + " \"description\": \"Issues refunds with multi-step validation: verify order/payment, issue refund, confirm status.\",\n", + " \"skills\": [\"get_order_payment_info\", \"issue_refund\", \"get_refund_status\"]\n", + " },\n", + " \"inventory_reserve\": {\n", + " \"agent_id\": inv_agent_id,\n", + " \"agent_arn\": inv_agent_arn,\n", + " \"name\": \"InventoryReserveAgent\",\n", + " \"description\": \"Reserves inventory for orders to prevent overselling, with rollback support.\",\n", + " \"skills\": [\"reserve_inventory\", \"get_reservation_status\", \"release_reservation\"]\n", + " },\n", + " \"shipping_update\": {\n", + " \"agent_id\": ship_agent_id,\n", + " \"agent_arn\": ship_agent_arn,\n", + " \"name\": \"ShippingUpdateAgent\",\n", + " \"description\": \"Assigns carriers, creates shipments, and updates shipment status for orders.\",\n", + " \"skills\": [\"assign_carrier\", \"create_shipment\", \"update_shipment_status\"]\n", + " }\n", + " }\n", + "}\n", + "\n", + "config_path = pathlib.Path(\"utils/a2a_agents_config.json\")\n", + "config_path.write_text(json.dumps(a2a_config, indent=2))\n", + "print(f\"Saved: {config_path}\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-test", + "metadata": {}, + "source": [ + "---\n", + "## 4. 
Test\n", + "\n", + "The `a2a_call` helper matches the `bedrock-agentcore-starter-toolkit` invocation pattern:\n", + "- Passes `runtimeSessionId`\n", + "- Injects `Accept: text/event-stream, application/json` via boto3 event hook\n", + "- Handles both streaming and non-streaming EventStream responses" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-a2a-helper", + "metadata": {}, + "outputs": [], + "source": [ + "def a2a_call(agent_arn, prompt, verbose=False):\n", + " \"\"\"Invoke an A2A agent via invoke_agent_runtime using message/send.\"\"\"\n", + " session_id = str(uuid.uuid4())\n", + " message_id = str(uuid.uuid4())\n", + " payload = json.dumps({\n", + " \"jsonrpc\": \"2.0\",\n", + " \"id\": str(uuid.uuid4()),\n", + " \"method\": \"message/send\",\n", + " \"params\": {\n", + " \"message\": {\n", + " \"role\": \"user\",\n", + " \"messageId\": message_id,\n", + " \"parts\": [{\"kind\": \"text\", \"text\": prompt}]\n", + " }\n", + " }\n", + " })\n", + " if verbose:\n", + " print(f\"[a2a_call] ARN : {agent_arn}\")\n", + " print(f\"[a2a_call] session : {session_id}\")\n", + " print(f\"[a2a_call] payload : {payload}\")\n", + "\n", + " def _add_accept(request, **kwargs):\n", + " request.headers.add_header(\"Accept\", \"text/event-stream, application/json\")\n", + "\n", + " ac_data_client.meta.events.register_first(\n", + " \"before-sign.bedrock-agentcore.InvokeAgentRuntime\", _add_accept\n", + " )\n", + " try:\n", + " resp = ac_data_client.invoke_agent_runtime(\n", + " agentRuntimeArn=agent_arn,\n", + " qualifier=\"DEFAULT\",\n", + " runtimeSessionId=session_id,\n", + " contentType=\"application/json\",\n", + " payload=payload,\n", + " )\n", + " finally:\n", + " ac_data_client.meta.events.unregister(\n", + " \"before-sign.bedrock-agentcore.InvokeAgentRuntime\", _add_accept\n", + " )\n", + "\n", + " ct = resp.get(\"contentType\", \"\")\n", + " body = resp[\"response\"]\n", + "\n", + " if \"text/event-stream\" in ct:\n", + " texts = []\n", + " 
for line in body.iter_lines(chunk_size=1):\n", + " if line:\n", + " line = line.decode(\"utf-8\") if isinstance(line, bytes) else line\n", + " if verbose: print(f\"[stream] {line}\")\n", + " if line.startswith(\"data: \"):\n", + " try:\n", + " chunk = json.loads(line[6:])\n", + " texts.append(chunk if isinstance(chunk, str) else json.dumps(chunk))\n", + " except Exception:\n", + " texts.append(line[6:])\n", + " return \"\\n\".join(texts) or \"(empty streaming response)\"\n", + "\n", + " # Non-streaming: collect EventStream chunks\n", + " chunks = []\n", + " for event in body:\n", + " chunks.append(event.decode(\"utf-8\") if isinstance(event, bytes) else str(event))\n", + " raw = \"\".join(chunks)\n", + " if verbose: print(f\"[raw] {raw}\")\n", + " try:\n", + " data = json.loads(raw)\n", + " # message/send response: result.parts[]\n", + " parts = data.get(\"result\", {}).get(\"parts\", [])\n", + " if parts:\n", + " return \"\\n\".join(p.get(\"text\", \"\") for p in parts if p.get(\"kind\") == \"text\" or p.get(\"type\") == \"text\")\n", + " # fallback: result.status.message.parts (older spec)\n", + " parts = (data.get(\"result\", {}).get(\"status\", {}).get(\"message\", {}).get(\"parts\", []))\n", + " if parts:\n", + " return \"\\n\".join(p.get(\"text\", \"\") for p in parts if p.get(\"type\") == \"text\")\n", + " return json.dumps(data, indent=2)\n", + " except Exception:\n", + " return raw\n", + "\n", + "print(\"a2a_call helper ready (method: message/send).\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-test-payment", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Test: payment_refund_a2a ───────────────────────────────────────────────\n", + "print(\"=\" * 60); print(\"payment_refund_tool (A2A)\"); print(\"=\" * 60)\n", + "print(a2a_call(pmt_agent_arn,\n", + " \"Issue a full refund of $59.98 for order ORD-1001 due to customer request, \"\n", + " \"then confirm the refund status.\"))" + ] + }, + { + "cell_type": "code", + 
"execution_count": null, + "id": "cell-test-inventory", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Test: inventory_reserve_a2a ────────────────────────────────────────────\n", + "print(\"=\" * 60); print(\"inventory_reserve_tool (A2A)\"); print(\"=\" * 60)\n", + "print(a2a_call(inv_agent_arn,\n", + " \"Reserve 5 units of WIDGET-42 for order ORD-1004, then confirm the reservation status.\"))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-test-shipping", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Test: shipping_update_a2a ──────────────────────────────────────────────\n", + "print(\"=\" * 60); print(\"shipping_update_tool (A2A)\"); print(\"=\" * 60)\n", + "print(a2a_call(ship_agent_arn,\n", + " \"Create a shipment for order ORD-1001. Package weighs 2kg, ships to WA. Pick the best carrier.\"))" + ] + }, + { + "cell_type": "markdown", + "id": "md-cleanup", + "metadata": {}, + "source": [ + "---\n", + "## 5. Cleanup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-cleanup", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Cleanup: delete A2A runtimes, IAM roles, DynamoDB tables ──────────────\n", + "# Set CLEANUP_TIMESTAMP if cleaning up a different deployment.\n", + "CLEANUP_TIMESTAMP = \"\" # leave empty to use current session's timestamp\n", + "\n", + "_ts = CLEANUP_TIMESTAMP or timestamp\n", + "print(f\"Cleaning up deployment: {_ts}\\n\")\n", + "\n", + "# 1. Delete A2A runtimes\n", + "print(\"Deleting A2A runtimes...\")\n", + "for rt in agentcore_client.list_agent_runtimes().get(\"agentRuntimes\", []):\n", + " if _ts in rt.get(\"agentRuntimeName\", \"\"):\n", + " try:\n", + " agentcore_client.delete_agent_runtime(agentRuntimeId=rt[\"agentRuntimeId\"])\n", + " print(f\" Deleted: {rt['agentRuntimeName']}\")\n", + " except Exception as e:\n", + " print(f\" {rt['agentRuntimeId']}: {e}\")\n", + "\n", + "# 2. 
Delete IAM roles (execution roles created by the toolkit)\n", + "print(\"\\nDeleting IAM roles...\")\n", + "try:\n", + " paginator = iam_client.get_paginator(\"list_roles\")\n", + " for page in paginator.paginate():\n", + " for role in page[\"Roles\"]:\n", + " rname = role[\"RoleName\"]\n", + " if _ts not in rname:\n", + " continue\n", + " try:\n", + " for p in iam_client.list_role_policies(RoleName=rname)[\"PolicyNames\"]:\n", + " iam_client.delete_role_policy(RoleName=rname, PolicyName=p)\n", + " for p in iam_client.list_attached_role_policies(RoleName=rname)[\"AttachedPolicies\"]:\n", + " iam_client.detach_role_policy(RoleName=rname, PolicyArn=p[\"PolicyArn\"])\n", + " iam_client.delete_role(RoleName=rname)\n", + " print(f\" Deleted: {rname}\")\n", + " except Exception as e:\n", + " print(f\" {rname}: {e}\")\n", + "except Exception as e:\n", + " print(f\" IAM scan failed: {e}\")\n", + "\n", + "# 3. Delete DynamoDB tables\n", + "print(\"\\nDeleting DynamoDB tables...\")\n", + "_prefix = f\"tools-{_ts}-\"\n", + "for tname in ddb_client.list_tables().get(\"TableNames\", []):\n", + " if tname.startswith(_prefix):\n", + " try:\n", + " ddb_client.delete_table(TableName=tname)\n", + " print(f\" Deleted: {tname}\")\n", + " except Exception as e:\n", + " print(f\" {tname}: {e}\")\n", + "\n", + "# 4. Remove local temp files\n", + "print(\"\\nRemoving local temp files...\")\n", + "for f in [\"payment_refund_agent.py\", \"payment_refund_requirements.txt\",\n", + " \"inventory_reserve_agent.py\", \"inventory_reserve_requirements.txt\",\n", + " \"shipping_update_agent.py\", \"shipping_update_requirements.txt\",\n", + " \"db.py\", \".bedrock_agentcore.yaml\", \"Dockerfile\", \".dockerignore\"]:\n", + " p = pathlib.Path(f)\n", + " if p.exists():\n", + " p.unlink()\n", + " print(f\" Removed: {f}\")\n", + "\n", + "# 5. 
Config file\n", + "_cfg = pathlib.Path(\"utils/a2a_agents_config.json\")\n", + "if _cfg.exists():\n", + " _cfg.unlink()\n", + " print(f\" Removed: {_cfg}\")\n", + "\n", + "print(\"\\nCleanup complete.\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.14.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/00-deploy-sample-mcp-tools.ipynb b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/00-deploy-sample-mcp-tools.ipynb new file mode 100644 index 000000000..0442a924d --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/00-deploy-sample-mcp-tools.ipynb @@ -0,0 +1,985 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "cell-0", + "metadata": {}, + "source": [ + "# Deploy Sample MCP Tools\n", + "\n", + "## Overview\n", + "\n", + "This notebook deploys a set of **sample e-commerce MCP tools** that simulate a real order management platform. 
The tools cover the full customer journey — browsing inventory, placing orders, sending notifications, processing payments, and managing shipments.\n", + "\n", + "## What you will build\n", + "\n", + "Deploy **9 MCP tools** using Amazon Bedrock AgentCore Gateway + Lambda.\n", + "\n", + "By the end you will have:\n", + "- **9 MCP tools** running behind an AgentCore Gateway, each backed by an AWS Lambda function\n", + "- A **shared sample database** (DynamoDB) that all tools read from and write to\n", + "- End-to-end tests that exercise every deployed tool\n", + "\n", + "For A2A agents, see `00-deploy-sample-a2a-agents.ipynb`.\n", + "\n", + "## Architecture\n", + "\n", + "```\n", + "MCP Tools via Gateway (9)\n", + "──────────────────────────────────\n", + "AgentCore Gateway (MCP / HTTP)\n", + " │\n", + " ├─ Lambda: order-management\n", + " │ order_lookup_tool\n", + " │ order_update_tool\n", + " │ order_cancel_tool\n", + " │\n", + " ├─ Lambda: notification\n", + " │ email_send_tool\n", + " │ email_template_tool\n", + " │ sms_notify_tool\n", + " │\n", + " └─ Lambda: read-services\n", + " payment_status_tool\n", + " inventory_check_tool\n", + " shipping_track_tool\n", + "\n", + "Shared: DynamoDB (6 tables) ─── seeded from sample_db.py\n", + "```\n", + "\n", + "## Prerequisites\n", + "- AWS account with Bedrock model access enabled (`us-west-2`)\n", + "- IAM permissions: Lambda, DynamoDB, IAM, Cognito, Bedrock AgentCore\n", + "- Python 3.10+\n", + "\n", + "---\n", + "## Tools Overview\n", + "\n", + "| Tool | Protocol | Transport | Purpose |\n", + "|---|---|---|---|\n", + "| `order_lookup_tool` | MCP | `streamable_http` | Look up order details and list orders by customer |\n", + "| `order_update_tool` | MCP | `streamable_http` | Update order status or shipping address |\n", + "| `order_cancel_tool` | MCP | `streamable_http` | Cancel an order |\n", + "| `email_send_tool` | MCP | `streamable_http` | Send transactional emails to customers |\n", + "| `email_template_tool` | 
MCP | `streamable_http` | Manage reusable email templates |\n", + "| `sms_notify_tool` | MCP | `streamable_http` | Send SMS notifications |\n", + "| `payment_status_tool` | MCP | `streamable_http` | Look up payment status for an order |\n", + "| `inventory_check_tool` | MCP | `streamable_http` | Check available stock for one or more SKUs |\n", + "| `shipping_track_tool` | MCP | `streamable_http` | Track shipments and get delivery estimates |" + ] + }, + { + "cell_type": "markdown", + "id": "cell-1", + "metadata": {}, + "source": [ + "## Step 1: Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-2", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install strands-agents \"strands-agents[a2a]\" bedrock-agentcore bedrock-agentcore-starter-toolkit fastapi uvicorn requests nest_asyncio mcp -q" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-3", + "metadata": {}, + "outputs": [], + "source": [ + "import boto3, json, time, os, zipfile, io, sys, pathlib, shutil\n", + "from datetime import datetime\n", + "\n", + "\n", + "# Configuration \n", + "AWS_REGION = \"us-west-2\"\n", + "\n", + "# Set AWS credentials if not using Amazon SageMaker notebook\n", + "#AWS_PROFILE = \"configured-aws-profile\"\n", + "\n", + "MODEL_ID = \"us.anthropic.claude-sonnet-4-20250514-v1:0\"\n", + "timestamp = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", + "\n", + "session = boto3.Session(region_name=AWS_REGION)\n", + "lambda_client = session.client(\"lambda\")\n", + "iam_client = session.client(\"iam\")\n", + "cognito_client = session.client(\"cognito-idp\")\n", + "agentcore_client = session.client(\"bedrock-agentcore-control\")\n", + "ac_data_client = session.client(\"bedrock-agentcore\")\n", + "ACCOUNT_ID = session.client(\"sts\").get_caller_identity()[\"Account\"]\n", + "\n", + "print(f\"Account : {ACCOUNT_ID}\")\n", + "print(f\"Region : {AWS_REGION}\")\n", + "print(f\"Timestamp : {timestamp}\")\n", + "print(\"Clients 
ready.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "gq2u1b2tj1v", + "metadata": {}, + "outputs": [], + "source": [ + "import decimal, json\n", + "\n", + "ddb = session.resource(\"dynamodb\", region_name=AWS_REGION)\n", + "ddb_client = session.client(\"dynamodb\", region_name=AWS_REGION)\n", + "\n", + "TABLE_PREFIX = f\"tools-{timestamp}-\"\n", + "\n", + "# Tables used by MCP tools: orders, customers, payments, inventory, shipments, templates\n", + "TABLE_DEFS = [\n", + " (\"orders\", \"order_id\", None),\n", + " (\"customers\", \"customer_id\", None),\n", + " (\"payments\", \"payment_id\", \"order_id\"), # GSI: order_id-index\n", + " (\"inventory\", \"sku\", None),\n", + " (\"shipments\", \"shipment_id\", \"order_id\"), # GSI: order_id-index\n", + " (\"templates\", \"template_id\", None),\n", + "]\n", + "\n", + "TABLE_NAMES = {} # logical → actual table name\n", + "\n", + "for logical, pk, gsi_field in TABLE_DEFS:\n", + " tname = TABLE_PREFIX + logical\n", + " TABLE_NAMES[logical] = tname\n", + " kwargs = dict(\n", + " TableName=tname,\n", + " BillingMode=\"PAY_PER_REQUEST\",\n", + " KeySchema=[{\"AttributeName\": pk, \"KeyType\": \"HASH\"}],\n", + " AttributeDefinitions=[{\"AttributeName\": pk, \"AttributeType\": \"S\"}],\n", + " )\n", + " if gsi_field:\n", + " kwargs[\"AttributeDefinitions\"].append({\"AttributeName\": gsi_field, \"AttributeType\": \"S\"})\n", + " kwargs[\"GlobalSecondaryIndexes\"] = [{\n", + " \"IndexName\": f\"{gsi_field}-index\",\n", + " \"KeySchema\": [{\"AttributeName\": gsi_field, \"KeyType\": \"HASH\"}],\n", + " \"Projection\": {\"ProjectionType\": \"ALL\"},\n", + " }]\n", + " try:\n", + " ddb_client.create_table(**kwargs)\n", + " print(f\" Creating: {tname}\")\n", + " except ddb_client.exceptions.ResourceInUseException:\n", + " print(f\" Already exists: {tname}\")\n", + "\n", + "print(\"Waiting for tables to become ACTIVE...\")\n", + "for tname in TABLE_NAMES.values():\n", + " 
ddb_client.get_waiter(\"table_exists\").wait(TableName=tname)\n", + "print(\"All tables ACTIVE.\\n\")\n", + "\n", + "# ── Seed from utils/sample_db.py ───────────────────────────────────────────\n", + "from utils.sample_db import CUSTOMERS, ORDERS, PAYMENTS, INVENTORY, SHIPMENTS, EMAIL_TEMPLATES\n", + "\n", + "def _to_ddb(obj):\n", + " return json.loads(json.dumps(obj), parse_float=decimal.Decimal)\n", + "\n", + "seed_map = {\n", + " \"orders\": ORDERS,\n", + " \"customers\": CUSTOMERS,\n", + " \"payments\": PAYMENTS,\n", + " \"inventory\": INVENTORY,\n", + " \"shipments\": SHIPMENTS,\n", + " \"templates\": EMAIL_TEMPLATES,\n", + "}\n", + "\n", + "for logical, data in seed_map.items():\n", + " t = ddb.Table(TABLE_NAMES[logical])\n", + " with t.batch_writer() as batch:\n", + " for item in data.values():\n", + " batch.put_item(Item=_to_ddb(item))\n", + " print(f\" Seeded {len(data):2d} records → {TABLE_NAMES[logical]}\")\n", + "\n", + "print(\"\\nSample database ready.\")\n", + "print(\"\\nTable map:\")\n", + "for k, v in TABLE_NAMES.items():\n", + " print(f\" {k:15s} → {v}\")" + ] + }, + { + "cell_type": "markdown", + "id": "cell-4", + "metadata": {}, + "source": [ + "---\n", + "## Step 2: MCP Tools — Order Management\n", + "\n", + "**3 tools, 5 operations** — all served from one Lambda function, three separate Gateway targets.\n", + "\n", + "| Tool | Operations |\n", + "|---|---|\n", + "| `order_lookup_tool` | `get_order`, `list_orders` |\n", + "| `order_update_tool` | `update_order_status`, `update_shipping_addr` |\n", + "| `order_cancel_tool` | `cancel_order` |" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-6", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Lambda IAM role ────────────────────────────────────────────────────────\n", + "lambda_role_name = f\"ToolsLambdaRole-{timestamp}\"\n", + "\n", + "lambda_role_arn = iam_client.create_role(\n", + " RoleName=lambda_role_name,\n", + " 
AssumeRolePolicyDocument=json.dumps({\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [{\"Effect\": \"Allow\",\n", + " \"Principal\": {\"Service\": \"lambda.amazonaws.com\"},\n", + " \"Action\": \"sts:AssumeRole\"}]\n", + " })\n", + ")[\"Role\"][\"Arn\"]\n", + "\n", + "iam_client.attach_role_policy(\n", + " RoleName=lambda_role_name,\n", + " PolicyArn=\"arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole\"\n", + ")\n", + "iam_client.put_role_policy(\n", + " RoleName=lambda_role_name,\n", + " PolicyName=\"DynamoDBToolsPolicy\",\n", + " PolicyDocument=json.dumps({\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [{\n", + " \"Effect\": \"Allow\",\n", + " \"Action\": [\"dynamodb:GetItem\",\"dynamodb:PutItem\",\"dynamodb:UpdateItem\",\n", + " \"dynamodb:Scan\",\"dynamodb:Query\"],\n", + " \"Resource\": [f\"arn:aws:dynamodb:{AWS_REGION}:{ACCOUNT_ID}:table/{TABLE_PREFIX}*\"]\n", + " }]\n", + " })\n", + ")\n", + "print(f\"Lambda role: {lambda_role_arn}\")\n", + "print(\"Waiting 10s for IAM propagation...\")\n", + "time.sleep(10)" + ] + }, + { + "cell_type": "markdown", + "id": "cell-8", + "metadata": {}, + "source": [ + "### 2b. Cognito Auth + AgentCore Gateway\n", + "\n", + "One Cognito User Pool and one Gateway are shared by **all 9 MCP tools**. Each tool gets its own Gateway target pointing to its Lambda." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-9", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Cognito User Pool ──────────────────────────────────────────────────────\n", + "user_pool_id = cognito_client.create_user_pool(\n", + " PoolName=f\"tools-pool-{timestamp}\",\n", + " Policies={\"PasswordPolicy\": {\"MinimumLength\": 8, \"RequireUppercase\": False,\n", + " \"RequireLowercase\": False, \"RequireNumbers\": False,\n", + " \"RequireSymbols\": False}}\n", + ")[\"UserPool\"][\"Id\"]\n", + "print(f\"User Pool: {user_pool_id}\")\n", + "\n", + "resource_server_id = f\"tools-api-{timestamp}\"\n", + "cognito_client.create_resource_server(\n", + " UserPoolId=user_pool_id, Identifier=resource_server_id,\n", + " Name=f\"Tools API {timestamp}\",\n", + " Scopes=[{\"ScopeName\": \"read\", \"ScopeDescription\": \"Read access\"},\n", + " {\"ScopeName\": \"write\", \"ScopeDescription\": \"Write access\"}]\n", + ")\n", + "\n", + "app_client = cognito_client.create_user_pool_client(\n", + " UserPoolId=user_pool_id, ClientName=f\"tools-client-{timestamp}\",\n", + " GenerateSecret=True, AllowedOAuthFlows=[\"client_credentials\"],\n", + " AllowedOAuthFlowsUserPoolClient=True,\n", + " AllowedOAuthScopes=[f\"{resource_server_id}/read\", f\"{resource_server_id}/write\"]\n", + ")[\"UserPoolClient\"]\n", + "client_id = app_client[\"ClientId\"]\n", + "client_secret = cognito_client.describe_user_pool_client(\n", + " UserPoolId=user_pool_id, ClientId=client_id\n", + ")[\"UserPoolClient\"][\"ClientSecret\"]\n", + "\n", + "cognito_domain_prefix = f\"tools-{timestamp}\"\n", + "cognito_client.create_user_pool_domain(Domain=cognito_domain_prefix, UserPoolId=user_pool_id)\n", + "cognito_domain = f\"{cognito_domain_prefix}.auth.{AWS_REGION}.amazoncognito.com\"\n", + "print(f\"Client ID : {client_id}\")\n", + "print(f\"Domain : {cognito_domain}\")\n", + "\n", + "# ── Gateway IAM role ───────────────────────────────────────────────────────\n", + 
"gateway_role_name = f\"ToolsGatewayRole-{timestamp}\"\n", + "gateway_role_arn = iam_client.create_role(\n", + " RoleName=gateway_role_name,\n", + " AssumeRolePolicyDocument=json.dumps({\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [{\"Effect\": \"Allow\",\n", + " \"Principal\": {\"Service\": \"bedrock-agentcore.amazonaws.com\"},\n", + " \"Action\": \"sts:AssumeRole\"}]\n", + " })\n", + ")[\"Role\"][\"Arn\"]\n", + "iam_client.attach_role_policy(\n", + " RoleName=gateway_role_name,\n", + " PolicyArn=\"arn:aws:iam::aws:policy/BedrockAgentCoreFullAccess\"\n", + ")\n", + "iam_client.put_role_policy(\n", + " RoleName=gateway_role_name, PolicyName=\"LambdaInvokePolicy\",\n", + " PolicyDocument=json.dumps({\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [{\"Effect\": \"Allow\", \"Action\": \"lambda:InvokeFunction\",\n", + " \"Resource\": f\"arn:aws:lambda:{AWS_REGION}:{ACCOUNT_ID}:function:*-mcp-{timestamp}\"}]\n", + " })\n", + ")\n", + "print(f\"Gateway role: {gateway_role_arn}\")\n", + "time.sleep(10)\n", + "\n", + "# ── Create AgentCore Gateway ───────────────────────────────────────────────\n", + "gateway_name = f\"tools-mcp-gateway-{timestamp}\"\n", + "gateway_response = agentcore_client.create_gateway(\n", + " name=gateway_name, roleArn=gateway_role_arn, protocolType=\"MCP\",\n", + " protocolConfiguration={\"mcp\": {\"supportedVersions\": [\"2025-03-26\", \"2025-06-18\"]}},\n", + " authorizerType=\"CUSTOM_JWT\",\n", + " authorizerConfiguration={\n", + " \"customJWTAuthorizer\": {\n", + " \"discoveryUrl\": f\"https://cognito-idp.{AWS_REGION}.amazonaws.com/{user_pool_id}/.well-known/openid-configuration\",\n", + " \"allowedClients\": [client_id],\n", + " \"allowedScopes\": [f\"{resource_server_id}/read\", f\"{resource_server_id}/write\"]\n", + " }\n", + " }\n", + ")\n", + "gateway_id = gateway_response[\"gatewayId\"]\n", + "print(f\"Gateway ID: {gateway_id}\")\n", + "\n", + "while True:\n", + " resp = 
agentcore_client.get_gateway(gatewayIdentifier=gateway_id)\n", + " if resp[\"status\"] == \"READY\":\n", + " gateway_url = resp[\"gatewayUrl\"]\n", + " print(f\"Gateway URL: {gateway_url}\")\n", + " break\n", + " print(f\" status: {resp['status']}...\")\n", + " time.sleep(10)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ioks74tsof", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Deploy MCP tool groups (order mgmt, notification, read-services) ───────\n", + "# Runs after gateway_id and gateway_role_arn are available.\n", + "import importlib\n", + "import utils.mcp.order_management, utils.mcp.notification, utils.mcp.read_services\n", + "importlib.reload(utils.mcp.order_management)\n", + "importlib.reload(utils.mcp.notification)\n", + "importlib.reload(utils.mcp.read_services)\n", + "\n", + "from utils.mcp.order_management import deploy as _deploy_order_mgmt\n", + "from utils.mcp.notification import deploy as _deploy_notification\n", + "from utils.mcp.read_services import deploy as _deploy_read_services\n", + "\n", + "_common = dict(\n", + " lambda_client=lambda_client,\n", + " agentcore_client=agentcore_client,\n", + " lambda_role_arn=lambda_role_arn,\n", + " gateway_id=gateway_id,\n", + " gateway_role_arn=gateway_role_arn,\n", + " table_names=TABLE_NAMES,\n", + " timestamp=timestamp,\n", + ")\n", + "\n", + "print(\"=\" * 60)\n", + "_r = _deploy_order_mgmt(**_common)\n", + "order_fn_name = _r[\"lambda_fn_name\"]\n", + "order_lambda_arn = _r[\"lambda_arn\"]\n", + "order_target_id = _r[\"targets\"][\"order_lookup\"]\n", + "order_update_target_id = _r[\"targets\"][\"order_update\"]\n", + "order_cancel_target_id = _r[\"targets\"][\"order_cancel\"]\n", + "\n", + "print(\"=\" * 60)\n", + "_r = _deploy_notification(**_common)\n", + "email_fn_name = _r[\"lambda_fn_name\"]\n", + "email_lambda_arn = _r[\"lambda_arn\"]\n", + "email_target_id = _r[\"targets\"][\"email_send\"]\n", + "email_template_target_id = 
_r[\"targets\"][\"email_template\"]\n", + "sms_notify_target_id = _r[\"targets\"][\"sms_notify\"]\n", + "\n", + "print(\"=\" * 60)\n", + "_r = _deploy_read_services(**_common)\n", + "read_fn_name = _r[\"lambda_fn_name\"]\n", + "read_lambda_arn = _r[\"lambda_arn\"]\n", + "payment_status_target_id = _r[\"targets\"][\"payment_status\"]\n", + "inventory_check_target_id = _r[\"targets\"][\"inventory_check\"]\n", + "shipping_track_target_id = _r[\"targets\"][\"shipping_track\"]\n", + "\n", + "print(\"=\" * 60)\n", + "print(\"All MCP tool groups deployed.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e3eb7d0989", + "metadata": {}, + "outputs": [], + "source": [ + "import base64, requests\n", + "\n", + "# ── Cognito OAuth2 token (re-run if tests return 401 — expires after 1h) ──\n", + "credentials = base64.b64encode(f\"{client_id}:{client_secret}\".encode()).decode()\n", + "token_resp = requests.post(\n", + " f\"https://{cognito_domain}/oauth2/token\",\n", + " headers={\"Authorization\": f\"Basic {credentials}\",\n", + " \"Content-Type\": \"application/x-www-form-urlencoded\"},\n", + " data={\"grant_type\": \"client_credentials\",\n", + " \"scope\": f\"{resource_server_id}/read {resource_server_id}/write\"},\n", + " timeout=30\n", + ")\n", + "token_resp.raise_for_status()\n", + "access_token = token_resp.json()[\"access_token\"]\n", + "\n", + "\n", + "def _mcp_session():\n", + " h = {\"Authorization\": f\"Bearer {access_token}\", \"Content-Type\": \"application/json\"}\n", + " r = requests.post(gateway_url, headers=h, json={\n", + " \"jsonrpc\": \"2.0\", \"id\": 0, \"method\": \"initialize\",\n", + " \"params\": {\"protocolVersion\": \"2025-03-26\", \"capabilities\": {},\n", + " \"clientInfo\": {\"name\": \"notebook-client\", \"version\": \"1.0\"}}\n", + " }, timeout=30)\n", + " r.raise_for_status()\n", + " sid = r.headers.get(\"Mcp-Session-Id\")\n", + " if sid:\n", + " h[\"Mcp-Session-Id\"] = sid\n", + " requests.post(gateway_url, headers=h, 
json={\n", + " \"jsonrpc\": \"2.0\", \"method\": \"notifications/initialized\", \"params\": {}\n", + " }, timeout=10)\n", + " return h\n", + "\n", + "\n", + "def _parse_mcp_result(r):\n", + " ct = r.headers.get(\"content-type\", \"\")\n", + " if \"text/event-stream\" in ct:\n", + " for line in r.text.splitlines():\n", + " if line.startswith(\"data: \"):\n", + " data = json.loads(line[6:])\n", + " if \"result\" in data:\n", + " content = data[\"result\"].get(\"content\", [])\n", + " if content and content[0].get(\"type\") == \"text\":\n", + " try: return json.loads(content[0][\"text\"])\n", + " except: return content[0][\"text\"]\n", + " return data\n", + " return {}\n", + " data = r.json()\n", + " if \"result\" in data:\n", + " content = data[\"result\"].get(\"content\", [])\n", + " if content and content[0].get(\"type\") == \"text\":\n", + " try: return json.loads(content[0][\"text\"])\n", + " except: return content[0][\"text\"]\n", + " return data\n", + "\n", + "\n", + "_tool_name_cache = {}\n", + "\n", + "def _resolve_tool_name(h, short_name):\n", + " global _tool_name_cache\n", + " if short_name in _tool_name_cache:\n", + " return _tool_name_cache[short_name]\n", + " r = requests.post(gateway_url, headers=h, json={\n", + " \"jsonrpc\": \"2.0\", \"id\": 99, \"method\": \"tools/list\", \"params\": {}\n", + " }, timeout=30)\n", + " r.raise_for_status()\n", + " tools = r.json().get(\"result\", {}).get(\"tools\", [])\n", + " for t in tools:\n", + " suffix = t[\"name\"].split(\"___\", 1)[-1]\n", + " _tool_name_cache[suffix] = t[\"name\"]\n", + " if short_name not in _tool_name_cache:\n", + " raise ValueError(f\"Tool '{short_name}' not found. 
Available: {[t['name'] for t in tools]}\")\n", + " return _tool_name_cache[short_name]\n", + "\n", + "\n", + "def mcp_call(tool_name, arguments):\n", + " h = _mcp_session()\n", + " full_name = _resolve_tool_name(h, tool_name)\n", + " r = requests.post(gateway_url, headers=h, json={\n", + " \"jsonrpc\": \"2.0\", \"id\": 1, \"method\": \"tools/call\",\n", + " \"params\": {\"name\": full_name, \"arguments\": arguments}\n", + " }, timeout=30)\n", + " r.raise_for_status()\n", + " return _parse_mcp_result(r)\n", + "\n", + "\n", + "def list_mcp_tools():\n", + " h = _mcp_session()\n", + " r = requests.post(gateway_url, headers=h, json={\n", + " \"jsonrpc\": \"2.0\", \"id\": 1, \"method\": \"tools/list\", \"params\": {}\n", + " }, timeout=30)\n", + " r.raise_for_status()\n", + " return r.json()\n", + "\n", + "\n", + "print(\"Helpers ready: mcp_call, list_mcp_tools.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "212a66fe4d", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Test: Order Management ───────────────────────────────────────────────\n", + "print(\"=\" * 60); print(\"order_lookup_tool — get_order\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"get_order\", {\"order_id\": \"ORD-1002\"}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"order_lookup_tool — list_orders\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"list_orders\", {\"customer_email\": \"jane@example.com\"}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"order_update_tool — update_order_status\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"update_order_status\", {\"order_id\": \"ORD-1001\", \"status\": \"SHIPPED\"}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"order_update_tool — update_shipping_addr\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"update_shipping_addr\",\n", + " {\"order_id\": \"ORD-1004\", \"street\": \"999 New St\", \"city\": \"Denver\",\n", + " \"state\": \"CO\", \"zip\": \"80201\"}), 
indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"order_cancel_tool — cancel_order\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"cancel_order\", {\"order_id\": \"ORD-1004\", \"reason\": \"customer_request\"}), indent=2))\n" + ] + }, + { + "cell_type": "markdown", + "id": "cell-13", + "metadata": {}, + "source": [ + "---\n", + "## Step 3: MCP Tools — Notifications\n", + "\n", + "**3 tools, 7 operations** — all served from one Lambda function, three separate Gateway targets on the same shared Gateway.\n", + "\n", + "| Tool | Operations |\n", + "|---|---|\n", + "| `email_send_tool` | `send_email`, `send_bulk_email` |\n", + "| `email_template_tool` | `get_template`, `list_templates`, `create_template` |\n", + "| `sms_notify_tool` | `send_sms`, `send_bulk_sms` |" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-test-notifications", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Test: Notifications ──────────────────────────────────────────────────\n", + "print(\"=\" * 60); print(\"email_send_tool — send_email\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"send_email\", {\n", + " \"to\": \"bob@example.com\", \"template_id\": \"order_shipped\",\n", + " \"template_vars\": {\"customer_name\": \"Bob Jones\", \"order_id\": \"ORD-1002\",\n", + " \"tracking_number\": \"1Z999AA10123456784\"}\n", + "}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"email_send_tool — send_bulk_email\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"send_bulk_email\", {\n", + " \"recipients\": [\"jane@example.com\", \"alice@example.com\"],\n", + " \"subject\": \"Maintenance notice\", \"body\": \"Scheduled maintenance Sunday 2-4am PST.\"\n", + "}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"email_template_tool — list_templates\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"list_templates\", {}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"sms_notify_tool — send_sms\"); print(\"=\" * 60)\n", + 
"print(json.dumps(mcp_call(\"send_sms\", {\"to\": \"+15550001002\", \"body\": \"Your order ORD-1002 has shipped!\"}), indent=2))\n" + ] + }, + { + "cell_type": "markdown", + "id": "7qis1zzql4a", + "metadata": {}, + "source": [ + "---\n", + "## Step 4: MCP Tools — Read-Only Services\n", + "\n", + "**3 tools, 5 operations** — all served from one Lambda function, three separate Gateway targets.\n", + "\n", + "| Tool | Operations |\n", + "|---|---|\n", + "| `payment_status_tool` | `get_payment_status` |\n", + "| `inventory_check_tool` | `check_inventory`, `check_multiple_inventory` |\n", + "| `shipping_track_tool` | `track_shipment`, `estimate_delivery` |" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e6b63debc1", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Test: Read-Only Services ─────────────────────────────────────────────\n", + "print(\"=\" * 60); print(\"payment_status_tool — get_payment_status\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"get_payment_status\", {\"order_id\": \"ORD-1001\"}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"inventory_check_tool — check_multiple_inventory\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"check_multiple_inventory\",\n", + " {\"skus\": [\"WIDGET-42\", \"GIZMO-3\", \"GADGET-7\"]}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"shipping_track_tool — track_shipment\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"track_shipment\", {\"order_id\": \"ORD-1002\"}), indent=2))\n", + "\n", + "print(\"=\" * 60); print(\"shipping_track_tool — estimate_delivery\"); print(\"=\" * 60)\n", + "print(json.dumps(mcp_call(\"estimate_delivery\", {\"order_id\": \"ORD-1001\"}), indent=2))\n" + ] + }, + { + "cell_type": "markdown", + "id": "cell-27", + "metadata": {}, + "source": [ + "---\n", + "## Step 5: Summary — All 9 MCP Tools Deployed\n", + "\n", + "### MCP Tools (9) — AgentCore Gateway + Lambda — `streamable_http`\n", + "\n", + "| Step | Tool | Lambda | 
Operations |\n", + "|---|---|---|---|\n", + "| 2 | `order_lookup_tool` | order-management | get_order, list_orders |\n", + "| 2 | `order_update_tool` | order-management | update_order_status, update_shipping_addr |\n", + "| 2 | `order_cancel_tool` | order-management | cancel_order |\n", + "| 3 | `email_send_tool` | notification | send_email, send_bulk_email |\n", + "| 3 | `email_template_tool` | notification | get_template, list_templates, create_template |\n", + "| 3 | `sms_notify_tool` | notification | send_sms, send_bulk_sms |\n", + "| 4 | `payment_status_tool` | read-services | get_payment_status |\n", + "| 4 | `inventory_check_tool` | read-services | check_inventory, check_multiple_inventory |\n", + "| 4 | `shipping_track_tool` | read-services | track_shipment, estimate_delivery |\n", + "\n", + "### Sample Database — DynamoDB (6 tables)\n", + "\n", + "| Table | Seeded records |\n", + "|---|---|\n", + "| orders | 5 (PROCESSING, SHIPPED, DELIVERED, PENDING, CANCELLED) |\n", + "| customers | 4 |\n", + "| payments | 5 (CAPTURED ×3, PENDING ×1, REFUNDED ×1) |\n", + "| inventory | 4 SKUs (one out-of-stock) |\n", + "| shipments | 2 |\n", + "| templates | 3 email templates |\n", + "\n", + "For A2A agents, see `00-deploy-sample-a2a-agents.ipynb`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-28", + "metadata": {}, + "outputs": [], + "source": [ + "print(\"Resource Summary\")\n", + "print(\"=\" * 65)\n", + "print(f\"timestamp : {timestamp}\")\n", + "print(f\"Cognito User Pool : {user_pool_id}\")\n", + "print(f\"Cognito Domain : {cognito_domain}\")\n", + "print(f\"Gateway ID : {gateway_id}\")\n", + "print(f\"Gateway URL : {gateway_url}\")\n", + "print()\n", + "print(f\"[MCP Targets]\")\n", + "print(f\" order_lookup_target : {order_target_id}\")\n", + "print(f\" order_update_target : {order_update_target_id}\")\n", + "print(f\" order_cancel_target : {order_cancel_target_id}\")\n", + "print(f\" email_send_target : {email_target_id}\")\n", + "print(f\" email_template_target : {email_template_target_id}\")\n", + "print(f\" sms_notify_target : {sms_notify_target_id}\")\n", + "print(f\" payment_status_target : {payment_status_target_id}\")\n", + "print(f\" inventory_check_target : {inventory_check_target_id}\")\n", + "print(f\" shipping_track_target : {shipping_track_target_id}\")\n", + "print()\n", + "print(f\"[Lambda Functions]\")\n", + "print(f\" order-management : {order_lambda_arn}\")\n", + "print(f\" notification : {email_lambda_arn}\")\n", + "print(f\" read-services : {read_lambda_arn}\")\n", + "print()\n", + "print(f\"[IAM Roles]\")\n", + "print(f\" Lambda role : {lambda_role_name}\")\n", + "print(f\" Gateway role : {gateway_role_name}\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-save-config", + "metadata": {}, + "source": [ + "### Save deployment config for downstream notebooks\n", + "\n", + "Writes `utils/mcp_tools_config.json` so that `07_planner_executor_live.ipynb` can load\n", + "the Gateway URL, Cognito credentials, and tool definitions without manual copy-paste." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-save-config", + "metadata": {}, + "outputs": [], + "source": [ + "import pathlib\n", + "\n", + "# Fetch full tool definitions (with inputSchema) from the Gateway\n", + "gateway_tools_response = list_mcp_tools()\n", + "all_gateway_tools = gateway_tools_response.get(\"result\", {}).get(\"tools\", [])\n", + "print(f\"Fetched {len(all_gateway_tools)} tools from Gateway\")\n", + "\n", + "# Map gateway tools (target___toolname) back to our 9 tool groups.\n", + "# tool_names must match the operation names deployed above, otherwise the\n", + "# inputSchema lookup below falls back to an empty object schema.\n", + "TOOL_GROUP_DEFS = [\n", + " {\"name\": \"order_lookup_tool\", \"description\": \"Retrieve full order details by order ID including status, items, and customer info\",\n", + " \"tool_names\": [\"get_order\", \"list_orders\"]},\n", + " {\"name\": \"order_update_tool\", \"description\": \"Update order status or shipping address for an existing order\",\n", + " \"tool_names\": [\"update_order_status\", \"update_shipping_addr\"]},\n", + " {\"name\": \"order_cancel_tool\", \"description\": \"Cancel an order and trigger refund workflow if payment was already captured\",\n", + " \"tool_names\": [\"cancel_order\"]},\n", + " {\"name\": \"email_send_tool\", \"description\": \"Send transactional emails to customers including order confirmations and shipping updates\",\n", + " \"tool_names\": [\"send_email\", \"send_bulk_email\"]},\n", + " {\"name\": \"email_template_tool\", \"description\": \"Manage reusable email templates for order confirmations, shipping alerts, and promotions\",\n", + " \"tool_names\": [\"get_template\", \"list_templates\", \"create_template\"]},\n", + " {\"name\": \"sms_notify_tool\", \"description\": \"Send SMS notifications to customers for time-sensitive order and delivery alerts\",\n", + " \"tool_names\": [\"send_sms\", \"send_bulk_sms\"]},\n", + " {\"name\": \"payment_status_tool\", \"description\": \"Check payment status and verify charge details for an order\",\n", + " \"tool_names\": [\"get_payment_status\"]},\n", + " {\"name\": \"inventory_check_tool\", \"description\": \"Check real-time stock levels for one or more SKUs\",\n", + " \"tool_names\": [\"check_inventory\", \"check_multiple_inventory\"]},\n", + " {\"name\": \"shipping_track_tool\", \"description\": \"Track shipment status and estimate delivery dates using carrier tracking numbers\",\n", + " \"tool_names\": [\"track_shipment\", \"estimate_delivery\"]},\n", + "]\n", + "\n", + "# Build a lookup: short_name → full tool definition from Gateway\n", + "gateway_tool_lookup = {}\n", + "for gt in all_gateway_tools:\n", + " short = gt[\"name\"].split(\"___\", 1)[-1] if \"___\" in gt[\"name\"] else gt[\"name\"]\n", + " gateway_tool_lookup[short] = gt\n", + "\n", + "# Enrich each tool group with full tool definitions (including inputSchema)\n", + "tools_with_schemas = []\n", + "for group in TOOL_GROUP_DEFS:\n", + " full_tools = []\n", + " for tn in group[\"tool_names\"]:\n", + " if tn in gateway_tool_lookup:\n", + " gt = gateway_tool_lookup[tn]\n", + " full_tools.append({\n", + " \"name\": tn,\n", + " \"description\": gt.get(\"description\", tn),\n", + " \"inputSchema\": gt.get(\"inputSchema\", {\"type\": \"object\"})\n", + " })\n", + " else:\n", + " full_tools.append({\"name\": tn, \"description\": tn, \"inputSchema\": {\"type\": \"object\"}})\n", + " tools_with_schemas.append({\n", + " \"name\": group[\"name\"],\n", + " \"description\": group[\"description\"],\n", + " \"tool_names\": group[\"tool_names\"],\n", + " \"tools_full\": full_tools\n", + " })\n", + "\n", + "mcp_config = {\n", + " \"timestamp\": timestamp,\n", + " \"gateway_id\": gateway_id,\n", + " \"gateway_url\": gateway_url,\n", + " \"cognito_domain\": cognito_domain,\n", + " \"client_id\": client_id,\n", + " \"client_secret\": client_secret,\n", + " \"resource_server_id\": resource_server_id,\n", + " \"user_pool_id\": user_pool_id,\n", + " \"targets\": {\n", + " \"order_lookup\": order_target_id,\n", + " \"order_update\": 
order_update_target_id,\n", + " \"order_cancel\": order_cancel_target_id,\n", + " \"email_send\": email_target_id,\n", + " \"email_template\": email_template_target_id,\n", + " \"sms_notify\": sms_notify_target_id,\n", + " \"payment_status\": payment_status_target_id,\n", + " \"inventory_check\": inventory_check_target_id,\n", + " \"shipping_track\": shipping_track_target_id,\n", + " },\n", + " \"tools\": tools_with_schemas\n", + "}\n", + "\n", + "config_path = pathlib.Path(\"utils/mcp_tools_config.json\")\n", + "config_path.write_text(json.dumps(mcp_config, indent=2))\n", + "print(f\"Saved: {config_path}\")" + ] + }, + { + "cell_type": "markdown", + "id": "cell-29", + "metadata": {}, + "source": [ + "---\n", + "## Step 6: Cleanup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-30", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Cleanup: delete all MCP resources for a given timestamp ────────────────\n", + "# Set CLEANUP_TIMESTAMP if different from the current session's `timestamp`.\n", + "CLEANUP_TIMESTAMP = \"\" # leave empty to use current session's `timestamp`\n", + "\n", + "_ts = CLEANUP_TIMESTAMP or timestamp\n", + "print(f\"Cleaning up deployment: {_ts}\\n\")\n", + "\n", + "# ── 1. 
Gateway targets + gateway ────────────────────────────────────────────\n", + "print(\"Deleting Gateway targets + gateway...\")\n", + "try:\n", + " _gws = agentcore_client.list_gateways()[\"items\"]\n", + " _gw = next((g for g in _gws if _ts in g.get(\"name\", \"\")), None)\n", + " if _gw:\n", + " _gw_id = _gw[\"gatewayId\"]\n", + " try:\n", + " _targets = agentcore_client.list_gateway_targets(gatewayIdentifier=_gw_id)[\"items\"]\n", + " for _t in _targets:\n", + " try:\n", + " agentcore_client.delete_gateway_target(gatewayIdentifier=_gw_id, targetId=_t[\"targetId\"])\n", + " print(f\" Deleted target: {_t['name']}\")\n", + " except Exception as e:\n", + " print(f\" {_t['targetId']}: {e}\")\n", + " except Exception as e:\n", + " print(f\" list targets: {e}\")\n", + " time.sleep(5)\n", + " try:\n", + " agentcore_client.delete_gateway(gatewayIdentifier=_gw_id)\n", + " print(f\" Deleted gateway: {_gw['name']}\")\n", + " except Exception as e:\n", + " print(f\" delete gateway: {e}\")\n", + " else:\n", + " print(\" No matching gateway found\")\n", + "except Exception as e:\n", + " print(f\" list_gateways failed: {e}\")\n", + "\n", + "# ── 2. Lambda functions ──────────────────────────────────────────────────────\n", + "print(\"\\nDeleting Lambda functions...\")\n", + "for _fn in [f\"order-mgmt-mcp-{_ts}\", f\"notification-mcp-{_ts}\", f\"read-services-mcp-{_ts}\"]:\n", + " try:\n", + " lambda_client.delete_function(FunctionName=_fn)\n", + " print(f\" Deleted: {_fn}\")\n", + " except lambda_client.exceptions.ResourceNotFoundException:\n", + " print(f\" Not found: {_fn}\")\n", + " except Exception as e:\n", + " print(f\" {_fn}: {e}\")\n", + "\n", + "# ── 3. 
Cognito ──────────────────────────────────────────────────────────────\n", + "print(\"\\nDeleting Cognito...\")\n", + "try:\n", + " _pools = cognito_client.list_user_pools(MaxResults=60)[\"UserPools\"]\n", + " _pool = next((p for p in _pools if p[\"Name\"] == f\"tools-pool-{_ts}\"), None)\n", + " if _pool:\n", + " _up_id = _pool[\"Id\"]\n", + " try:\n", + " cognito_client.delete_user_pool_domain(Domain=f\"tools-{_ts}\", UserPoolId=_up_id)\n", + " except Exception:\n", + " pass\n", + " cognito_client.delete_user_pool(UserPoolId=_up_id)\n", + " print(f\" Deleted user pool: {_up_id}\")\n", + " else:\n", + " print(\" No matching user pool found\")\n", + "except Exception as e:\n", + " print(f\" {e}\")\n", + "\n", + "# ── 4. IAM roles ────────────────────────────────────────────────────────────\n", + "print(\"\\nDeleting IAM roles...\")\n", + "\n", + "def _delete_role(role_name, inline_policies, managed_arns):\n", + " try:\n", + " for p in inline_policies:\n", + " try:\n", + " iam_client.delete_role_policy(RoleName=role_name, PolicyName=p)\n", + " except Exception:\n", + " pass\n", + " for arn in managed_arns:\n", + " try:\n", + " iam_client.detach_role_policy(RoleName=role_name, PolicyArn=arn)\n", + " except Exception:\n", + " pass\n", + " try:\n", + " for p in iam_client.list_attached_role_policies(RoleName=role_name)[\"AttachedPolicies\"]:\n", + " iam_client.detach_role_policy(RoleName=role_name, PolicyArn=p[\"PolicyArn\"])\n", + " except Exception:\n", + " pass\n", + " try:\n", + " for p in iam_client.list_role_policies(RoleName=role_name)[\"PolicyNames\"]:\n", + " iam_client.delete_role_policy(RoleName=role_name, PolicyName=p)\n", + " except Exception:\n", + " pass\n", + " iam_client.delete_role(RoleName=role_name)\n", + " print(f\" Deleted: {role_name}\")\n", + " except iam_client.exceptions.NoSuchEntityException:\n", + " print(f\" Not found: {role_name}\")\n", + " except Exception as e:\n", + " print(f\" {role_name}: {e}\")\n", + "\n", + 
"_delete_role(f\"ToolsLambdaRole-{_ts}\",\n", + " [\"DynamoDBToolsPolicy\"],\n", + " [\"arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole\"])\n", + "_delete_role(f\"ToolsGatewayRole-{_ts}\",\n", + " [\"LambdaInvokePolicy\"],\n", + " [\"arn:aws:iam::aws:policy/BedrockAgentCoreFullAccess\"])\n", + "\n", + "# ── 5. DynamoDB tables ──────────────────────────────────────────────────────\n", + "print(\"\\nDeleting DynamoDB tables...\")\n", + "_prefix = f\"tools-{_ts}-\"\n", + "try:\n", + " _tables = ddb_client.list_tables()[\"TableNames\"]\n", + " for _tname in _tables:\n", + " if _tname.startswith(_prefix):\n", + " try:\n", + " ddb_client.delete_table(TableName=_tname)\n", + " print(f\" Deleted: {_tname}\")\n", + " except Exception as e:\n", + " print(f\" {_tname}: {e}\")\n", + "except Exception as e:\n", + " print(f\" {e}\")\n", + "\n", + "# ── 6. Local temp files ──────────────────────────────────────────────────────\n", + "print(\"\\nRemoving local temp files...\")\n", + "for _f in [\"db.py\", \".bedrock_agentcore.yaml\", \"Dockerfile\", \".dockerignore\"]:\n", + " _p = pathlib.Path(_f)\n", + " if _p.exists():\n", + " _p.unlink()\n", + " print(f\" Removed: {_f}\")\n", + "\n", + "# ── 7. 
Config file ────────────────────────────────────────────────────────────\n", + "_cfg = pathlib.Path(\"utils/mcp_tools_config.json\")\n", + "if _cfg.exists():\n", + " _cfg.unlink()\n", + " print(f\" Removed: {_cfg}\")\n", + "\n", + "print(\"\\nCleanup complete.\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.14.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/README.md b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/README.md new file mode 100644 index 000000000..b9d07087c --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/README.md @@ -0,0 +1,68 @@ +# Planner + Executor: Runtime Tool Discovery with AWS Agent Registry + +## Overview + +When building AI agents that need to discover and use tools at runtime, loading every +available tool into the LLM context upfront is expensive, slow, and wasteful. As your +tool catalog grows — across order management, payments, shipping, notifications, and +more — the token cost and latency of stuffing all tool schemas into every request +becomes a real bottleneck. + +This notebook demonstrates the **Planner + Executor pattern**, a two-phase approach to +runtime tool discovery that solves this problem: + +1. **Planner Agent** — receives a task, searches the AWS Agent Registry to discover + which tools are relevant, and outputs a minimal Tool Plan. It never executes + business logic — it only identifies what's needed. + +2. 
**Executor Agent** — loads only the tools specified in the Plan, creates live + connections to MCP servers and A2A agents, and executes the task step by step. + +This pattern delivers three key benefits: + +- **Runtime discovery** — agents find the right tools dynamically from the Registry + instead of hardcoding them, so new tools become available without redeploying agents +- **Token optimization** — the Executor's context contains only the tools it needs + (e.g., 2-3 out of 12), significantly reducing input tokens and cost compared to + loading everything upfront +- **Faster execution** — smaller context means faster LLM inference and fewer + irrelevant tool calls + +In this notebook you will use **real deployed services** — 9 MCP tools on AgentCore Gateway and +3 A2A agents on AgentCore Runtime — to run 3 end-to-end e-commerce scenarios. For each scenario, you will use the Planner agent to discover the required MCP servers and A2A agents at runtime, then use the Executor agent to execute the scenario based on the plan and tools the Planner provides. You will also analyze the token and cost savings that come with runtime tool discovery and execution. + +## Deploying MCP tools to Amazon AgentCore Gateway and A2A agents to Amazon AgentCore Runtime + +Run these notebooks first to deploy the tools and agents: +1. `00-deploy-sample-mcp-tools.ipynb` — deploys 9 MCP tools on AgentCore Gateway +2. `00-deploy-sample-a2a-agents.ipynb` — deploys 3 A2A agents on AgentCore Runtime + +Both notebooks save config files (`utils/mcp_tools_config.json`, `utils/a2a_agents_config.json`) +that this notebook loads automatically.
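
The two-phase flow described above can be sketched in a few lines of plain Python. This is an illustrative sketch only: the in-memory registry, the keyword matcher, and the tool records below are stand-ins for the real AWS Agent Registry search, not actual APIs.

```python
# Illustrative sketch of the Planner + Executor pattern (not the real AgentCore APIs).
# A tiny in-memory "registry" standing in for the AWS Agent Registry.
REGISTRY = [
    {"name": "order_lookup_tool",   "description": "order lookup"},
    {"name": "email_send_tool",     "description": "email send"},
    {"name": "payment_status_tool", "description": "payment status"},
]

def plan(task: str) -> list[str]:
    """Planner: identify the minimal set of relevant tools; no execution."""
    words = set(task.lower().split())
    return [r["name"] for r in REGISTRY
            if words & set(r["description"].split())]

def execute(task: str, tool_plan: list[str]) -> str:
    """Executor: load only the planned tools, then run the task with them."""
    # In the real notebook this is where live MCP/A2A clients are created.
    return f"completed {task!r} using {len(tool_plan)} of {len(REGISTRY)} tools"

tool_plan = plan("cancel the order and email the customer")
print(execute("order cancellation", tool_plan))  # uses 2 of 3 tools
```

The real Planner performs semantic search against the Registry rather than keyword matching, but the shape is the same: planning selects record IDs, and only those records' tool schemas ever enter the Executor's context.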
+ +**These MCP servers and A2A agents will be deployed to Amazon AgentCore once the above notebooks have been executed:** + +| Tool | Protocol | Purpose | +|---|---|---| +| `order_lookup_tool` | MCP | Look up order details and list orders by customer | +| `order_update_tool` | MCP | Update order status or shipping address | +| `order_cancel_tool` | MCP | Cancel an order | +| `email_send_tool` | MCP | Send transactional emails to customers | +| `email_template_tool` | MCP | Manage reusable email templates | +| `sms_notify_tool` | MCP | Send SMS notifications | +| `payment_status_tool` | MCP | Look up payment status for an order | +| `inventory_check_tool` | MCP | Check available stock for one or more SKUs | +| `shipping_track_tool` | MCP | Track shipments and get delivery estimates | +| `returns_processing_tool` | MCP (remote) | Process product returns and generate return labels | +| `loyalty_rewards_tool` | MCP (remote) | Manage loyalty points and redeem rewards | +| `payment_refund_tool` | A2A | Issue refunds with multi-step validation | +| `inventory_reserve_tool` | A2A | Reserve inventory with rollback support | +| `shipping_update_tool` | A2A | Create shipments with carrier selection and status updates | + +## Architecture + +![Planner + Executor architecture](images/planner-executor-architecture.png) + +## Tutorial + +- [Planner Executor Pattern](planner-executor.ipynb) \ No newline at end of file diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/images/planner-executor-architecture.png b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/images/planner-executor-architecture.png new file mode 100644 index 000000000..911f2d78a Binary files /dev/null and b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/images/planner-executor-architecture.png differ diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/planner-executor.ipynb
b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/planner-executor.ipynb new file mode 100644 index 000000000..6cb65dc77 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/planner-executor.ipynb @@ -0,0 +1,1076 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "md-title", + "metadata": {}, + "source": [ + "# Planner + Executor: Runtime Tool Discovery with AWS Agent Registry\n", + "\n", + "## Overview\n", + "\n", + "When building AI agents that need to discover and use tools at runtime, loading every\n", + "available tool into the LLM context upfront is expensive, slow, and wasteful. As your\n", + "tool catalog grows — across order management, payments, shipping, notifications, and\n", + "more — the token cost and latency of stuffing all tool schemas into every request\n", + "becomes a real bottleneck.\n", + "\n", + "This notebook demonstrates the **Planner + Executor pattern**, a two-phase approach to\n", + "runtime tool discovery that solves this problem:\n", + "\n", + "1. **Planner Agent** — receives a task, searches the AWS Agent Registry to discover\n", + " which tools are relevant, and outputs a minimal Tool Plan. It never executes\n", + " business logic — it only identifies what's needed.\n", + "\n", + "2. 
**Executor Agent** — loads only the tools specified in the Plan, creates live\n", + " connections to MCP servers and A2A agents, and executes the task step by step.\n", + "\n", + "This pattern delivers three key benefits:\n", + "\n", + "- **Runtime discovery** — agents find the right tools dynamically from the Registry\n", + " instead of hardcoding them, so new tools become available without redeploying agents\n", + "- **Token optimization** — the Executor's context contains only the tools it needs\n", + " (e.g., 2-3 out of 12), significantly reducing input tokens and cost compared to\n", + " loading everything upfront\n", + "- **Faster execution** — smaller context means faster LLM inference and fewer\n", + " irrelevant tool calls\n", + "\n", + "In this notebook you will use **real deployed services** — 9 MCP tools on AgentCore Gateway and\n", + "3 A2A agents on AgentCore Runtime — to run 3 end-to-end e-commerce scenarios. For each scenario, you will use the Planner agent to discover the required MCP servers and A2A agents at runtime, then use the Executor agent to execute the scenario based on the plan and tools the Planner provides. You will also analyze the token and cost savings that come with runtime tool discovery and execution.\n", + "\n", + "## Deploying MCP tools to Amazon AgentCore Gateway and A2A agents to Amazon AgentCore Runtime\n", + "\n", + "Run these notebooks first to deploy the tools and agents:\n", + "1. `00-deploy-sample-mcp-tools.ipynb` — deploys 9 MCP tools on AgentCore Gateway\n", + "2. 
`00-deploy-sample-a2a-agents.ipynb` — deploys 3 A2A agents on AgentCore Runtime\n", + "\n", + "Both notebooks save config files (`utils/mcp_tools_config.json`, `utils/a2a_agents_config.json`)\n", + "that this notebook loads automatically.\n", + "\n", + "**These MCP servers and A2A agents will be deployed to Amazon AgentCore once the above notebooks have been executed:**\n", + "\n", + "| Tool | Protocol | Purpose |\n", + "|---|---|---|\n", + "| `order_lookup_tool` | MCP | Look up order details and list orders by customer |\n", + "| `order_update_tool` | MCP | Update order status or shipping address |\n", + "| `order_cancel_tool` | MCP | Cancel an order |\n", + "| `email_send_tool` | MCP | Send transactional emails to customers |\n", + "| `email_template_tool` | MCP | Manage reusable email templates |\n", + "| `sms_notify_tool` | MCP | Send SMS notifications |\n", + "| `payment_status_tool` | MCP | Look up payment status for an order |\n", + "| `inventory_check_tool` | MCP | Check available stock for one or more SKUs |\n", + "| `shipping_track_tool` | MCP | Track shipments and get delivery estimates |\n", + "| `returns_processing_tool` | MCP (remote) | Process product returns and generate return labels |\n", + "| `loyalty_rewards_tool` | MCP (remote) | Manage loyalty points and redeem rewards |\n", + "| `payment_refund_tool` | A2A | Issue refunds with multi-step validation |\n", + "| `inventory_reserve_tool` | A2A | Reserve inventory with rollback support |\n", + "| `shipping_update_tool` | A2A | Create shipments with carrier selection and status updates |\n", + "\n", + "## Architecture\n", + "\n", + "![Planner + Executor architecture](images/planner-executor-architecture.png)\n", + "\n", + "## Sections\n", + "1. Setup\n", + "2. Obtain OAuth2 Tokens\n", + "3. Create Registry & Register All 12 Tools\n", + "4. Build Planner Agent\n", + "5. Build Executor Agent\n", + "6. Run Planner + Executor Locally (3 scenarios)\n", + "7. Token/Cost Comparison\n", + "8. 
Deploy to AgentCore Runtime\n", + "9. Invoke Deployed Agent\n", + "10. Cleanup" + ] + }, + { + "cell_type": "markdown", + "id": "md-setup", + "metadata": {}, + "source": [ + "---\n", + "## Step 1: Setup" + ] + }, + { + "cell_type": "markdown", + "id": "9227f0f3", + "metadata": {}, + "source": [ + "### Prerequisites\n", + "- AWS account with IAM credentials configured\n", + "- Python 3.10+\n", + "- boto3 >= 1.42.87\n", + "- IAM user or role with the permissions below (replace ACCOUNT_ID and REGION as needed)\n", + "\n", + "
\n", + "Required IAM policy (click to expand)\n", + "\n", + "```json\n", + "{\n", + " \"Version\": \"2012-10-17\",\n", + " \"Statement\": [\n", + " {\n", + " \"Sid\": \"AllowCreateRegistry\",\n", + " \"Effect\": \"Allow\",\n", + " \"Action\": [\"bedrock-agentcore:CreateRegistry\"],\n", + " \"Resource\": [\"arn:aws:bedrock-agentcore:us-west-2:ACCOUNT_ID:*\"]\n", + " },\n", + " {\n", + " \"Sid\": \"AllowGetUpdateDeleteRegistry\",\n", + " \"Effect\": \"Allow\",\n", + " \"Action\": [\n", + " \"bedrock-agentcore:GetRegistry\",\n", + " \"bedrock-agentcore:DeleteRegistry\"\n", + " ],\n", + " \"Resource\": [\"arn:aws:bedrock-agentcore:us-west-2:ACCOUNT_ID:registry/*\"]\n", + " },\n", + " {\n", + " \"Sid\": \"AllowCreateAndListRecords\",\n", + " \"Effect\": \"Allow\",\n", + " \"Action\": [\n", + " \"bedrock-agentcore:CreateRegistryRecord\",\n", + " \"bedrock-agentcore:SearchRegistryRecords\"\n", + " ],\n", + " \"Resource\": [\"arn:aws:bedrock-agentcore:us-west-2:ACCOUNT_ID:registry/*\"]\n", + " },\n", + " {\n", + " \"Sid\": \"AllowRecordOperations\",\n", + " \"Effect\": \"Allow\",\n", + " \"Action\": [\n", + " \"bedrock-agentcore:GetRegistryRecord\",\n", + " \"bedrock-agentcore:DeleteRegistryRecord\",\n", + " \"bedrock-agentcore:SubmitRegistryRecordForApproval\"\n", + " ],\n", + " \"Resource\": [\"arn:aws:bedrock-agentcore:us-west-2:ACCOUNT_ID:registry/*/record/*\"]\n", + " }\n", + " ]\n", + "}\n", + "```\n", + "\n", + "
\n", + "\n", + "**Note:** This notebook uses `bedrock-agentcore`, `bedrock-agentcore-starter-toolkit`, `strands-agents`, and `matplotlib`. These are installed automatically via `requirements.txt`." + ] + }, + { + "cell_type": "markdown", + "id": "md-auth0-setup", + "metadata": {}, + "source": [ + "### Auth0 Setup (Optional)\n", + "\n", + "To use OAuth-based registry search instead of IAM, set up Auth0:\n", + "\n", + "1. Create an Auth0 Account & Tenant\n", + "- Sign up at [auth0.com](https://auth0.com)\n", + "- Create a new tenant — note your **Auth0 Domain** (e.g., `your-tenant.us.auth0.com`)\n", + "\n", + "2. Register an API (Resource Server)\n", + "- Navigate to **Applications → APIs** in the Auth0 Dashboard\n", + "- Click **+ Create API**\n", + "- Set the **Identifier (Audience)** to: `https://bedrock-agentcore.us-west-2.amazonaws.com`\n", + "- Select **RS256** signing algorithm\n", + "\n", + "3. Configure .env\n", + "- Copy `03-Consumer/.env.example` to `03-Consumer/.env`\n", + "- Fill in `AUTH0_DOMAIN`, `AUTH0_CLIENT_ID`, `AUTH0_CLIENT_SECRET`\n", + "- Leave Auth0 fields empty to fall back to IAM auth\n" + ] + }, + { + "cell_type": "markdown", + "id": "6e7e7aa3", + "metadata": {}, + "source": [ + "#### Install the dependencies" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-install", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install -r requirements.txt" + ] + }, + { + "cell_type": "markdown", + "id": "b04162b4", + "metadata": {}, + "source": [ + "#### Connect to your AWS environment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-setup", + "metadata": {}, + "outputs": [], + "source": [ + "import boto3, json, time, os, sys, pathlib, math, uuid\n", + "from dotenv import load_dotenv\n", + "from datetime import datetime\n", + "from strands import Agent, tool\n", + "from strands.models import BedrockModel\n", + "\n", + "# ── Load .env for Auth0 config (optional) 
────────────────────────────────\n", + "load_dotenv(pathlib.Path.cwd() / \".env\")\n", + "\n", + "# Auth0 OAuth config (leave empty to use IAM auth for registry search)\n", + "AUTH0_DOMAIN = os.getenv(\"AUTH0_DOMAIN\", \"\")\n", + "AUTH0_CLIENT_ID = os.getenv(\"AUTH0_CLIENT_ID\", \"\")\n", + "AUTH0_CLIENT_SECRET = os.getenv(\"AUTH0_CLIENT_SECRET\", \"\")\n", + "AUTH0_AUDIENCE = os.getenv(\"AUTH0_AUDIENCE\", \"\")\n", + "USE_AUTH0 = bool(AUTH0_DOMAIN and AUTH0_CLIENT_ID and AUTH0_CLIENT_SECRET)\n", + "\n", + "print(f\"Auth mode: {'Auth0 OAuth' if USE_AUTH0 else 'IAM (default)'}\")\n", + "\n", + "# ── Config ─────────────────────────────────────────────────────────────────\n", + "\n", + "# Set AWS credentials if not using Amazon SageMaker notebook\n", + "#AWS_PROFILE = \"configured-aws-profile\"\n", + "\n", + "AWS_REGION = \"us-west-2\"\n", + "MODEL_ID = \"us.anthropic.claude-sonnet-4-20250514-v1:0\"\n", + "\n", + "# Claude Sonnet 4 pricing (USD per 1 000 tokens)\n", + "PRICE_INPUT_PER_1K = 0.003\n", + "PRICE_OUTPUT_PER_1K = 0.015\n", + "\n", + "session = boto3.Session(region_name=AWS_REGION)\n", + "\n", + "cp_client = session.client(\"bedrock-agentcore-control\")\n", + "dp_client = session.client(\"bedrock-agentcore\")\n", + "\n", + "\n", + "iam_client = session.client(\"iam\")\n", + "ACCOUNT_ID = session.client(\"sts\").get_caller_identity()[\"Account\"]\n", + "\n", + "def pp(data):\n", + " print(json.dumps(data, indent=2, default=str))\n", + "\n", + "def estimate_tokens(text: str) -> int:\n", + " return math.ceil(len(text) / 4)\n", + "\n", + "\n", + "print(f\"Account : {ACCOUNT_ID}\")\n", + "print(f\"Region : {AWS_REGION}\")" + ] + }, + { + "cell_type": "markdown", + "id": "affdbbec", + "metadata": {}, + "source": [ + "### Load deployment configs for MCP tools and A2A agents from prior deployment notebooks" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a08838cc", + "metadata": {}, + "outputs": [], + "source": [ + "mcp_config = 
json.loads(pathlib.Path(\"utils/mcp_tools_config.json\").read_text())\n", + "a2a_config = json.loads(pathlib.Path(\"utils/a2a_agents_config.json\").read_text())\n", + "\n", + "print(f\"MCP Gateway : {mcp_config['gateway_url']}\")\n", + "print(f\"A2A Agents : {list(a2a_config['agents'].keys())}\")\n", + "print(\"Clients ready.\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-token", + "metadata": {}, + "source": [ + "---\n", + "## Step 2: Obtain OAuth2 Tokens\n", + "\n", + "Get a Cognito access token for the MCP Gateway (always needed).\n", + "If Auth0 is configured, also obtain an Auth0 token for OAuth-based registry search.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-token", + "metadata": {}, + "outputs": [], + "source": [ + "import base64, requests\n", + "\n", + "# ── MCP Gateway token (Cognito — always needed for Gateway tools) ─────────\n", + "credentials = base64.b64encode(\n", + " f\"{mcp_config['client_id']}:{mcp_config['client_secret']}\".encode()\n", + ").decode()\n", + "\n", + "token_resp = requests.post(\n", + " f\"https://{mcp_config['cognito_domain']}/oauth2/token\",\n", + " headers={\"Authorization\": f\"Basic {credentials}\",\n", + " \"Content-Type\": \"application/x-www-form-urlencoded\"},\n", + " data={\"grant_type\": \"client_credentials\",\n", + " \"scope\": f\"{mcp_config['resource_server_id']}/read {mcp_config['resource_server_id']}/write\"},\n", + " timeout=30\n", + ")\n", + "token_resp.raise_for_status()\n", + "access_token = token_resp.json()[\"access_token\"]\n", + "print(f\"MCP Gateway token obtained ({len(access_token)} chars)\")\n", + "\n", + "# ── Auth0 token for Registry search (optional) ───────────────────────────\n", + "auth0_token = None\n", + "if USE_AUTH0:\n", + " auth0_resp = requests.post(\n", + " f\"https://{AUTH0_DOMAIN}/oauth/token\",\n", + " json={\n", + " \"grant_type\": \"client_credentials\",\n", + " \"client_id\": AUTH0_CLIENT_ID,\n", + " \"client_secret\": 
AUTH0_CLIENT_SECRET,\n", + " \"audience\": AUTH0_AUDIENCE,\n", + " },\n", + " timeout=30,\n", + " )\n", + " auth0_resp.raise_for_status()\n", + " auth0_token = auth0_resp.json()[\"access_token\"]\n", + " print(f\"Auth0 token obtained ({len(auth0_token)} chars)\")\n", + "else:\n", + " print(\"Auth0 not configured — using IAM for registry search\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "md-registry", + "metadata": {}, + "source": [ + "---\n", + "## Step 3: Create Registry & Register All 12 Tools\n", + "\n", + "Populate the Registry with all 12 deployed tools and agents so the Planner can discover them at runtime.\n", + "Each MCP record includes the Gateway URL for live tool execution. Each A2A record includes the agent card URL for direct agent invocation." + ] + }, + { + "cell_type": "markdown", + "id": "9a28c9b4", + "metadata": {}, + "source": [ + "### Create Registry" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-registry-create", + "metadata": {}, + "outputs": [], + "source": [ + "from utils.registry_records import create_and_approve_all_records\n", + "\n", + "# Create registry — Auth0 OAuth or IAM based on config\n", + "if USE_AUTH0:\n", + " discovery_url = f\"https://{AUTH0_DOMAIN}/.well-known/openid-configuration\"\n", + " reg = cp_client.create_registry(\n", + " name=\"plannerExecutorLiveDemo\",\n", + " description=\"Registry for live Planner+Executor demo with Auth0 OAuth\",\n", + " authorizerType=\"CUSTOM_JWT\",\n", + " authorizerConfiguration={\n", + " \"customJWTAuthorizer\": {\n", + " \"discoveryUrl\": discovery_url,\n", + " \"allowedAudience\": [AUTH0_AUDIENCE],\n", + " }\n", + " },\n", + " )\n", + "else:\n", + " reg = cp_client.create_registry(\n", + " name=\"plannerExecutorLiveDemo\",\n", + " description=\"Registry for live Planner+Executor demo\",\n", + " approvalConfiguration={\"autoApproval\": False}\n", + " )\n", + "\n", + "REGISTRY_ARN = reg[\"registryArn\"]\n", + "REGISTRY_ID = 
REGISTRY_ARN.split(\"/\")[-1]\n", + "print(f\"Registry created : {REGISTRY_ID}\")\n", + "print(f\"Registry ARN : {REGISTRY_ARN}\")\n", + "\n", + "# Wait for READY\n", + "while True:\n", + " r = cp_client.get_registry(registryId=REGISTRY_ID)\n", + " if r[\"status\"] == \"READY\":\n", + " print(\"Registry status : READY\"); break\n", + " if r[\"status\"] == \"CREATE_FAILED\":\n", + " print(f\"Registry FAILED: {r.get('statusReason', 'unknown')}\"); break\n", + " print(f\"Registry status : {r['status']} — waiting...\")\n", + " time.sleep(5)\n", + "\n", + "# Add MCP URL to allowedAudience for Auth0 registries\n", + "if USE_AUTH0:\n", + " mcp_url = f\"{AUTH0_AUDIENCE}/registry/{REGISTRY_ID}/mcp\"\n", + " registry_info = cp_client.get_registry(registryId=REGISTRY_ID)\n", + " jwt_config = registry_info[\"authorizerConfiguration\"][\"customJWTAuthorizer\"]\n", + " audience = list(set(jwt_config.get(\"allowedAudience\", []) + [mcp_url]))\n", + " cp_client.update_registry(\n", + " registryId=REGISTRY_ID,\n", + " authorizerConfiguration={\n", + " \"optionalValue\": {\n", + " \"customJWTAuthorizer\": {\n", + " \"discoveryUrl\": jwt_config[\"discoveryUrl\"],\n", + " \"allowedAudience\": audience,\n", + " }\n", + " }\n", + " },\n", + " )\n", + " # Wait for update\n", + " while True:\n", + " status = cp_client.get_registry(registryId=REGISTRY_ID)[\"status\"]\n", + " if status != \"UPDATING\": break\n", + " time.sleep(2)\n", + " print(f\"Added MCP URL to allowedAudience: {mcp_url}\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "39942ff8", + "metadata": {}, + "source": [ + "### Build registry record definitions from saved configs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-registry-records", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Build registry record definitions from saved configs ───────────────────\n", + "gateway_url = mcp_config[\"gateway_url\"]\n", + "\n", + "# 9 MCP tool records\n", + "TOOL_RECORDS = []\n", + "for t in 
mcp_config[\"tools\"]:\n", + " TOOL_RECORDS.append({\n", + " \"name\": t[\"name\"],\n", + " \"description\": t[\"description\"],\n", + " \"descriptorType\": \"MCP\",\n", + " \"descriptors\": {\n", + " \"mcp\": {\n", + " \"server\": {\n", + " \"inlineContent\": json.dumps({\n", + " \"name\": f\"gateway-mcp-server/{t['name']}\",\n", + " \"description\": t[\"description\"],\n", + " \"version\": \"1.0.0\",\n", + " \"websiteUrl\": gateway_url\n", + " })\n", + " },\n", + " \"tools\": {\n", + " \"inlineContent\": json.dumps({\n", + " \"tools\": t.get(\"tools_full\", [{\"name\": tn, \"description\": tn, \"inputSchema\": {\"type\": \"object\"}} for tn in t[\"tool_names\"]])\n", + " })\n", + " }\n", + " }\n", + " }\n", + " })\n", + "\n", + "# 3 A2A agent records\n", + "from urllib.parse import quote\n", + "for key, agent in a2a_config[\"agents\"].items():\n", + " escaped_arn = quote(agent[\"agent_arn\"], safe=\"\")\n", + " agent_url = f\"{AUTH0_AUDIENCE}/runtimes/{escaped_arn}/invocations/\"\n", + " agent_card = {\n", + " \"protocolVersion\": \"0.3\",\n", + " \"name\": agent[\"name\"],\n", + " \"description\": agent[\"description\"],\n", + " \"version\": \"1.0.0\",\n", + " \"url\": agent_url,\n", + " \"capabilities\": {\"streaming\": True},\n", + " \"skills\": [{\"id\": s, \"name\": s, \"description\": s, \"tags\": [s]} for s in agent.get(\"skills\", [])],\n", + " \"defaultInputModes\": [\"text/plain\"],\n", + " \"defaultOutputModes\": [\"text/plain\"],\n", + " }\n", + " TOOL_RECORDS.append({\n", + " \"name\": f\"{key}_tool\",\n", + " \"description\": agent[\"description\"],\n", + " \"descriptorType\": \"A2A\",\n", + " \"descriptors\": {\n", + " \"a2a\": {\n", + " \"agentCard\": {\n", + " \"schemaVersion\": \"0.3\",\n", + " \"inlineContent\": json.dumps(agent_card)\n", + " }\n", + " }\n", + " }\n", + " })\n", + "\n", + "print(f\"Defined {len(TOOL_RECORDS)} records: {len(mcp_config['tools'])} MCP + {len(a2a_config['agents'])} A2A\")" + ] + }, + { + "cell_type": "markdown", + 
"id": "67ff57ed", + "metadata": {}, + "source": [ + "### Insert registry record definitions into the AWS Registry" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-registry-approve", + "metadata": {}, + "outputs": [], + "source": [ + "# Create and approve all records (create → submit → approve)\n", + "from utils.registry_records import create_and_approve_all_records\n", + "\n", + "record_ids = create_and_approve_all_records(cp_client, REGISTRY_ID, TOOL_RECORDS)\n", + "\n", + "print(f\"\\nAll {len(record_ids)} records approved.\")\n", + "print(\"Waiting for search index to update (eventual consistency)...\")\n", + "time.sleep(15)\n", + "print(\"Registry ready for search.\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-planner", + "metadata": {}, + "source": [ + "---\n", + "## Step 4: Build the Planner Agent\n", + "\n", + "The Planner Agent has a single responsibility: given a task, search the Registry to discover which tools are needed and output a structured Tool Plan as JSON. It never executes business logic or calls tools directly — it only identifies the minimal set of tools required."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-planner", + "metadata": {}, + "outputs": [], + "source": [ + "@tool\n", + "def search_registry(query: str) -> str:\n", + " \"\"\"Search the AWS Agent Registry to discover available tools and agents.\n", + "\n", + " Use this tool to find tools that match a capability needed for the task.\n", + " Search once per distinct capability (e.g., search 'order update' separately\n", + " from 'email notification').\n", + "\n", + " Args:\n", + " query: Natural language description of the capability needed.\n", + "\n", + " Returns:\n", + " JSON list of matching registry records with available tool names.\n", + " \"\"\"\n", + " try:\n", + " if USE_AUTH0 and auth0_token:\n", + " # OAuth bearer token search via HTTP\n", + " import requests as _req\n", + " resp = _req.post(\n", + " f\"{AUTH0_AUDIENCE}/registry-records/search\",\n", + " headers={\"Content-Type\": \"application/json\",\n", + " \"Authorization\": f\"Bearer {auth0_token}\"},\n", + " json={\"searchQuery\": query, \"registryIds\": [REGISTRY_ARN], \"maxResults\": 3},\n", + " timeout=120,\n", + " )\n", + " resp.raise_for_status()\n", + " records = resp.json().get(\"registryRecords\", [])\n", + " else:\n", + " # IAM auth search via boto3\n", + " results = dp_client.search_registry_records(\n", + " registryIds=[REGISTRY_ARN],\n", + " searchQuery=query,\n", + " maxResults=3\n", + " )\n", + " records = results.get(\"registryRecords\", [])\n", + " if not records:\n", + " return json.dumps({\"message\": \"No matching tools found\", \"query\": query})\n", + " out = []\n", + " for rec in records:\n", + " entry = {\n", + " \"record_id\": rec[\"recordId\"],\n", + " \"name\": rec[\"name\"],\n", + " \"description\": rec.get(\"description\", \"\"),\n", + " \"descriptorType\": rec.get(\"descriptorType\", \"MCP\"),\n", + " }\n", + " if rec.get(\"descriptorType\") == \"A2A\":\n", + " ac = (rec.get(\"descriptors\", {}).get(\"a2a\", {})\n", + " .get(\"agentCard\", 
{}).get(\"inlineContent\", \"{}\"))\n", + " card = json.loads(ac)\n", + " entry[\"available_tools\"] = [s.get(\"id\", s.get(\"name\", \"\")) for s in card.get(\"skills\", [])]\n", + " else:\n", + " tc = (rec.get(\"descriptors\", {}).get(\"mcp\", {})\n", + " .get(\"tools\", {}).get(\"inlineContent\", \"{}\"))\n", + " entry[\"available_tools\"] = [t[\"name\"] for t in json.loads(tc).get(\"tools\", [])]\n", + " out.append(entry)\n", + " return json.dumps({\"results_count\": len(out), \"records\": out}, indent=2)\n", + " except Exception as ex:\n", + " return json.dumps({\"error\": str(ex)})\n", + "\n", + "\n", + "PLANNER_SYSTEM_PROMPT = \"\"\"You are a Planner agent. Your ONLY job is to analyse a task and\n", + "identify the minimal set of registry tools needed to complete it.\n", + "\n", + "Rules:\n", + "- Use search_registry to find relevant tools. Search once per distinct capability.\n", + "- Do NOT execute any business logic or make up data.\n", + "- Return ONLY a JSON object in exactly this format — no prose, no markdown fences:\n", + "\n", + "{\n", + " \"task\": \"\",\n", + " \"steps\": [\n", + " {\n", + " \"step\": 1,\n", + " \"description\": \"\",\n", + " \"record_id\": \"\",\n", + " \"record_name\": \"\",\n", + " \"descriptorType\": \"MCP or A2A\",\n", + " \"tool_name\": \"\"\n", + " }\n", + " ],\n", + " \"selected_record_ids\": [\"\", \"\"]\n", + "}\n", + "\n", + "selected_record_ids must contain only the unique record IDs used across all steps.\"\"\"\n", + "\n", + "planner_model = BedrockModel(model_id=MODEL_ID, region_name=AWS_REGION)\n", + "planner_agent = Agent(\n", + " model=planner_model,\n", + " tools=[search_registry],\n", + " system_prompt=PLANNER_SYSTEM_PROMPT\n", + ")\n", + "print(\"Planner agent created.\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-executor", + "metadata": {}, + "source": [ + "---\n", + "## Step 5: Build the Executor Agent\n", + "\n", + "The Executor Agent receives the Tool Plan from the Planner and dynamically builds 
live connections to only the planned tools.\n", + "For MCP tools, it connects to the Gateway via streamable HTTP with OAuth2 authentication.\n", + "For A2A agents, it invokes them via the AgentCore Runtime data plane.\n", + "No mocks — every tool call hits a real deployed service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-executor", + "metadata": {}, + "outputs": [], + "source": [ + "from strands.tools.mcp import MCPClient\n", + "from mcp.client.streamable_http import streamablehttp_client\n", + "from utils.tool_builder import parse_mcp_metadata, parse_a2a_metadata, create_a2a_tool\n", + "\n", + "def build_executor(tool_plan: dict) -> tuple:\n", + " \"\"\"Build an Executor agent with live tool connections from the Tool Plan.\"\"\"\n", + " selected_ids = tool_plan.get(\"selected_record_ids\", [])\n", + " fetched_schemas = {}\n", + " dynamic_tools = []\n", + " seen_mcp_urls = set()\n", + "\n", + " print(f\"Fetching {len(selected_ids)} record(s) from Registry...\")\n", + " for rid in selected_ids:\n", + " record = cp_client.get_registry_record(\n", + " registryId=REGISTRY_ID, recordId=rid)\n", + " record_name = record[\"name\"]\n", + " descriptor_type = record.get(\"descriptorType\", \"MCP\")\n", + " fetched_schemas[rid] = record\n", + "\n", + " if descriptor_type == \"MCP\":\n", + " meta = parse_mcp_metadata(record)\n", + " print(f\" [MCP] {record_name} tools: {meta['tool_names']}\")\n", + "\n", + " if meta[\"url\"] and meta[\"url\"] not in seen_mcp_urls:\n", + " headers = {\"Authorization\": f\"Bearer {access_token}\"}\n", + " mcp_client = MCPClient(\n", + " lambda u=meta[\"url\"], h=headers: streamablehttp_client(u, headers=h))\n", + " dynamic_tools.append(mcp_client)\n", + " seen_mcp_urls.add(meta[\"url\"])\n", + "\n", + " elif descriptor_type == \"A2A\":\n", + " meta = parse_a2a_metadata(record)\n", + " print(f\" [A2A] {record_name} skills: {meta['skills']}\")\n", + "\n", + " if meta[\"url\"]:\n", + " # Resolve agent ARN from 
the URL\n", + " agent_arn = None\n", + " for a in a2a_config[\"agents\"].values():\n", + " if a[\"agent_arn\"] in meta[\"url\"] or quote(a[\"agent_arn\"], safe=\"\") in meta[\"url\"]:\n", + " agent_arn = a[\"agent_arn\"]\n", + " break\n", + "\n", + " if agent_arn:\n", + " dynamic_tools.append(\n", + " create_a2a_tool(\n", + " record_name,\n", + " record.get(\"description\", \"\"),\n", + " agent_arn,\n", + " meta[\"skills\"],\n", + " dp_client))\n", + "\n", + " executor_model = BedrockModel(model_id=MODEL_ID, region_name=AWS_REGION)\n", + " executor_agent = Agent(\n", + " model=executor_model,\n", + " tools=dynamic_tools,\n", + " system_prompt=(\n", + " \"You are an Executor agent. Execute this Tool Plan step by step \"\n", + " \"using the provided tools. After all steps complete, provide a \"\n", + " \"concise summary of what was accomplished.\\n\\n\"\n", + " f\"Tool Plan:\\n{json.dumps(tool_plan, indent=2)}\"\n", + " )\n", + " )\n", + " return executor_agent, fetched_schemas\n", + "\n", + "print(\"Executor builder ready (live connections).\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-run", + "metadata": {}, + "source": [ + "---\n", + "## Step 6: Run Planner + Executor Locally\n", + "\n", + "Run the full Planner → Executor pipeline against three real e-commerce scenarios. Each scenario exercises a different combination of MCP tools and A2A agents, demonstrating how the pattern adapts to varying task requirements." 
+ ] + }, + { + "cell_type": "markdown", + "id": "md-helper", + "metadata": {}, + "source": [ + "### Orchestration Helper" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-orchestrate", + "metadata": {}, + "outputs": [], + "source": [ + "def run_planner_executor(task: str):\n", + " \"\"\"Run the full Planner → Executor pipeline for a task.\"\"\"\n", + " print(\"=\" * 65)\n", + " print(\"PHASE 1 — PLANNER\")\n", + " print(\"=\" * 65)\n", + " print(f\"Task: {task}\\n\")\n", + "\n", + " planner_response = planner_agent(task)\n", + " planner_raw = planner_response.message[\"content\"][0][\"text\"]\n", + "\n", + " clean = planner_raw.strip()\n", + " if \"```\" in clean:\n", + " # Extract content between first pair of ``` fences\n", + " parts = clean.split(\"```\")\n", + " for part in parts[1:]:\n", + " p = part.strip()\n", + " if p.startswith(\"json\"):\n", + " p = p[4:].strip()\n", + " if p.startswith(\"{\"):\n", + " clean = p\n", + " break\n", + " # Fallback: find JSON object in the raw text\n", + " import re\n", + " if not clean.startswith(\"{\"):\n", + " match = re.search(r'\\{.*\\}', clean, re.DOTALL)\n", + " if match:\n", + " clean = match.group()\n", + " tool_plan = json.loads(clean.strip())\n", + "\n", + " print(\"\\nTool Plan:\")\n", + " pp(tool_plan)\n", + " print(f\"\\nPlanner selected {len(tool_plan['selected_record_ids'])} record(s) \"\n", + " f\"out of {len(TOOL_RECORDS)} in the registry.\")\n", + "\n", + " print(\"\\n\" + \"=\" * 65)\n", + " print(\"PHASE 2 — EXECUTOR\")\n", + " print(\"=\" * 65)\n", + "\n", + " executor_agent, fetched_schemas = build_executor(tool_plan)\n", + " print(f\"\\nExecuting {len(tool_plan['steps'])} step(s)...\\n\")\n", + "\n", + " executor_response = executor_agent(\n", + " f\"Execute the following task using the Tool Plan provided: {task}\"\n", + " )\n", + "\n", + " print(\"\\n\" + \"=\" * 65)\n", + " print(\"FINAL RESULT\")\n", + " print(\"=\" * 65)\n", + " result_text = 
executor_response.message[\"content\"][0][\"text\"]\n", + " print(result_text)\n", + "\n", + " return tool_plan, fetched_schemas, result_text" + ] + }, + { + "cell_type": "markdown", + "id": "md-scenario1", + "metadata": {}, + "source": [ + "### Scenario 1: Cancel Order + Full Refund\n", + "\n", + "Customer wants to cancel order ORD-1002 and get a full refund. This should exercise\n", + "the order cancellation MCP tool and the payment refund A2A agent." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-scenario1", + "metadata": {}, + "outputs": [], + "source": [ + "plan1, schemas1, result1 = run_planner_executor(\n", + " \"Cancel order ORD-1002 and issue a full refund to the customer.\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "md-scenario2", + "metadata": {}, + "source": [ + "### Scenario 2: Inventory Reserve + Stock Check\n", + "\n", + "Reserve inventory for a new order and verify stock levels. Exercises the inventory\n", + "reserve A2A agent and the inventory check MCP tool." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-scenario2", + "metadata": {}, + "outputs": [], + "source": [ + "plan2, schemas2, result2 = run_planner_executor(\n", + " \"Reserve 5 units of WIDGET-42 for order ORD-1004, then check the current inventory level for WIDGET-42.\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "md-scenario3", + "metadata": {}, + "source": [ + "### Scenario 3: Ship Order + Email Notification\n", + "\n", + "Create a shipment for an order and send a confirmation email. Exercises the shipping\n", + "update A2A agent, order update MCP tool, and email send MCP tool." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-scenario3", + "metadata": {}, + "outputs": [], + "source": [ + "plan3, schemas3, result3 = run_planner_executor(\n", + " \"Ship order ORD-1001: assign a carrier for a 2kg package to WA, create the shipment, \"\n", + " \"update the order status to SHIPPED, and send a shipping confirmation email to the customer.\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "md-cost", + "metadata": {}, + "source": [ + "---\n", + "## Step 7: Token/Cost Comparison\n", + "\n", + "Quantify the token and cost savings of the Planner+Executor approach. The baseline loads all 12 tool schemas into the LLM context upfront. The optimised approach loads only the 2-3 tools the Planner selected, plus a small overhead for the Planner's own Registry search calls." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-cost-chart", + "metadata": {}, + "outputs": [], + "source": [ + "# ── Token & Cost Comparison ──────────────────────────────────────────────\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "\n", + "# Use the last scenario's plan for the comparison\n", + "tool_plan = plan3\n", + "fetched_schemas = schemas3\n", + "\n", + "# Full tool schema payload (baseline — all 12 records)\n", + "all_schemas_text = json.dumps(TOOL_RECORDS)\n", + "\n", + "# Planned tool schema payload (only fetched records)\n", + "planned_schemas_text = json.dumps([\n", + " {\"name\": fetched_schemas[rid][\"name\"],\n", + " \"description\": fetched_schemas[rid].get(\"description\", \"\"),\n", + " \"descriptors\": fetched_schemas[rid].get(\"descriptors\", {})}\n", + " for rid in tool_plan[\"selected_record_ids\"]\n", + "])\n", + "\n", + "# Planner overhead\n", + "planner_overhead_text = PLANNER_SYSTEM_PROMPT + json.dumps(tool_plan)\n", + "\n", + "tokens_all = estimate_tokens(all_schemas_text)\n", + "tokens_planned = estimate_tokens(planned_schemas_text)\n", + "tokens_planner = 
estimate_tokens(planner_overhead_text)\n", + "tokens_optimised = tokens_planned + tokens_planner\n", + "\n", + "reduction_pct = ((tokens_all - tokens_optimised) / tokens_all * 100) if tokens_all > 0 else 0\n", + "\n", + "cost_all = (tokens_all / 1000) * PRICE_INPUT_PER_1K\n", + "cost_planned = (tokens_planned / 1000) * PRICE_INPUT_PER_1K\n", + "cost_planner = (tokens_planner / 1000) * PRICE_INPUT_PER_1K\n", + "cost_optimised = cost_planned + cost_planner\n", + "cost_saved_pct = ((cost_all - cost_optimised) / cost_all * 100) if cost_all > 0 else 0\n", + "\n", + "# ── Stacked Bar Charts ───────────────────────────────────────────────────\n", + "fig, axes = plt.subplots(1, 2, figsize=(12, 5))\n", + "fig.suptitle(\"Planner+Executor vs. Load-All-Tools Upfront\", fontsize=14, fontweight=\"bold\")\n", + "\n", + "labels = [\"Baseline\\n(all 12 tools)\", \"Optimised\\n(Planner+Executor)\"]\n", + "x = np.arange(len(labels))\n", + "width = 0.45\n", + "\n", + "# Left: Token comparison\n", + "ax1 = axes[0]\n", + "ax1.bar(0, tokens_all, width, color=\"#e74c3c\", label=\"Baseline (all tools)\")\n", + "ax1.bar(1, tokens_planned, width, color=\"#2ecc71\", label=\"Executor (planned tools)\")\n", + "ax1.bar(1, tokens_planner, width, bottom=tokens_planned, color=\"#f39c12\", label=\"Planner overhead\")\n", + "ax1.set_title(\"Input Tokens\")\n", + "ax1.set_ylabel(\"Tokens (estimated)\")\n", + "ax1.set_xticks(x)\n", + "ax1.set_xticklabels(labels)\n", + "ax1.yaxis.set_major_formatter(plt.FuncFormatter(lambda v, _: f\"{int(v):,}\"))\n", + "ax1.text(0, tokens_all + tokens_all * 0.02, f\"{tokens_all:,}\", ha=\"center\", va=\"bottom\", fontsize=9)\n", + "ax1.text(1, tokens_optimised + tokens_all * 0.02, f\"{tokens_optimised:,}\\n({reduction_pct:.0f}% less)\", ha=\"center\", va=\"bottom\", fontsize=9, color=\"#2e7d32\")\n", + "ax1.legend(fontsize=8, loc=\"upper right\")\n", + "\n", + "# Right: Cost comparison\n", + "ax2 = axes[1]\n", + "ax2.bar(0, cost_all, width, color=\"#e74c3c\", 
label=\"Baseline (all tools)\")\n", + "ax2.bar(1, cost_planned, width, color=\"#2ecc71\", label=\"Executor (planned tools)\")\n", + "ax2.bar(1, cost_planner, width, bottom=cost_planned, color=\"#f39c12\", label=\"Planner overhead\")\n", + "ax2.set_title(\"Estimated Input Cost (USD)\")\n", + "ax2.set_ylabel(\"Cost (USD)\")\n", + "ax2.set_xticks(x)\n", + "ax2.set_xticklabels(labels)\n", + "ax2.yaxis.set_major_formatter(plt.FuncFormatter(lambda v, _: f\"${v:.4f}\"))\n", + "ax2.text(0, cost_all + cost_all * 0.02, f\"${cost_all:.4f}\", ha=\"center\", va=\"bottom\", fontsize=9)\n", + "ax2.text(1, cost_optimised + cost_all * 0.02, f\"${cost_optimised:.4f}\\n({cost_saved_pct:.0f}% less)\", ha=\"center\", va=\"bottom\", fontsize=9, color=\"#2e7d32\")\n", + "ax2.legend(fontsize=8, loc=\"upper right\")\n", + "\n", + "plt.tight_layout()\n", + "plt.show()\n", + "\n", + "print(f\"\\nTools loaded — Baseline: {len(TOOL_RECORDS)} | Optimised: {len(tool_plan['selected_record_ids'])}\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-cleanup", + "metadata": {}, + "source": [ + "---\n", + "## Step 8: Cleanup\n", + "\n", + "Clean up the Registry created by this notebook. The MCP tools and A2A agents deployed by the prerequisite notebooks are left running — manage them from their respective notebooks." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cell-cleanup", + "metadata": {}, + "outputs": [], + "source": [ + "# 1.
Delete registry records + registry\n", + "print(\"Cleaning up Registry...\")\n", + "for rid in record_ids:\n", + " try:\n", + " cp_client.delete_registry_record(registryId=REGISTRY_ID, recordId=rid)\n", + " print(f\" Deleted record: {rid}\")\n", + " except Exception as e:\n", + " print(f\" Record {rid}: {e}\")\n", + "time.sleep(5)\n", + "try:\n", + " cp_client.delete_registry(registryId=REGISTRY_ID)\n", + " print(f\" Deleted registry: {REGISTRY_ID}\")\n", + "except Exception as e:\n", + " print(f\" Registry: {e}\")\n", + "\n", + "# 2. Remove local files\n", + "print(\"\\nCleaning up local files...\")\n", + "for f in [\"planner_executor_live_comparison.png\"]:\n", + " p = pathlib.Path(f)\n", + " if p.exists(): p.unlink(); print(f\" Removed: {f}\")\n", + "\n", + "print(\"\\nCleanup complete! MCP tools and A2A agents are still running (managed by their own notebooks).\")" + ] + }, + { + "cell_type": "markdown", + "id": "md-summary", + "metadata": {}, + "source": [ + "---\n", + "## What We Demonstrated\n", + "\n", + "This notebook showed how the Planner + Executor pattern enables efficient runtime tool discovery:\n", + "\n", + "1. Registered 12 tools (9 MCP + 3 A2A) in a single AWS Agent Registry\n", + "2. The Planner searched the Registry and selected only the tools needed for each task\n", + "3. The Executor built live connections to real MCP Gateway targets and A2A agents on AgentCore Runtime\n", + "4. Ran 3 end-to-end e-commerce scenarios — order cancellation with refund, inventory reservation, and shipment creation with email notification\n", + "5. Measured token and cost savings vs. loading all tools upfront\n", + "\n", + "The key takeaway: as your tool catalog grows, this pattern keeps agent execution fast and cost-efficient by loading only what's needed, while the Registry ensures new tools are discoverable without redeploying agents." 
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.14.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/requirements.txt b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/requirements.txt new file mode 100644 index 000000000..91cf40fb0 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/requirements.txt @@ -0,0 +1,8 @@ +mcp>=1.10.0 +boto3>=1.42.87 +botocore>=1.42.87 +bedrock-agentcore +bedrock-agentcore-starter-toolkit>=0.1.21 +strands-agents +strands-agents-tools +matplotlib \ No newline at end of file diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/__init__.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/__init__.py new file mode 100644 index 000000000..8b1378917 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/__init__.py @@ -0,0 +1 @@ + diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/__init__.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/inventory_reserve_agent.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/inventory_reserve_agent.py new file mode 100644 index 000000000..d3705aeb4 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/inventory_reserve_agent.py @@ -0,0 +1,102 @@ 
+import os, json, uuid +import db +from strands import Agent, tool +from strands.models import BedrockModel +from strands.multiagent.a2a import A2AServer +from fastapi import FastAPI +import uvicorn + +AWS_REGION = os.environ.get("AWS_REGION", "us-west-2") +MODEL_ID = os.environ.get("MODEL_ID", "us.anthropic.claude-sonnet-4-20250514-v1:0") +RUNTIME_URL = os.environ.get("AGENTCORE_RUNTIME_URL", "http://127.0.0.1:9000/") + +# TABLE_NAMES_PLACEHOLDER + + +@tool +def reserve_inventory(order_id: str, sku: str, quantity: int) -> str: + """Reserve inventory units for an order. + Args: + order_id: Order this reservation is for (e.g. ORD-1001). + sku: Product SKU (e.g. WIDGET-42). + quantity: Units to reserve. + Returns: + JSON with reservation_id and updated stock. + """ + item = db.get_item("INVENTORY_TABLE", {"sku": sku}) + if not item: + return json.dumps({"error": f"SKU '{sku}' not found"}) + available = item["stock"] - item["reserved"] + if quantity > available: + return json.dumps({"error": f"Insufficient stock: requested {quantity}, available {available}"}) + db.update_attrs("INVENTORY_TABLE", {"sku": sku}, + {"reserved": item["reserved"] + quantity}) + res_id = f"RES-{str(uuid.uuid4())[:8].upper()}" + record = {"reservation_id": res_id, "order_id": order_id, "sku": sku, + "quantity": quantity, "status": "ACTIVE"} + db.put_item("RESERVATIONS_TABLE", record) + return json.dumps({**record, "remaining_available": available - quantity}) + + +@tool +def release_reservation(reservation_id: str) -> str: + """Release a reservation and return units to available stock. + Args: + reservation_id: Reservation ID from reserve_inventory. + Returns: + JSON confirming release. 
+ """ + res = db.get_item("RESERVATIONS_TABLE", {"reservation_id": reservation_id}) + if not res: + return json.dumps({"error": f"Reservation {reservation_id} not found"}) + if res["status"] == "RELEASED": + return json.dumps({"error": "Already released"}) + item = db.get_item("INVENTORY_TABLE", {"sku": res["sku"]}) + if item: + db.update_attrs("INVENTORY_TABLE", {"sku": res["sku"]}, + {"reserved": max(0, item["reserved"] - res["quantity"])}) + db.update_attrs("RESERVATIONS_TABLE", {"reservation_id": reservation_id}, + {"status": "RELEASED"}) + return json.dumps({"reservation_id": reservation_id, "status": "RELEASED", + "sku": res["sku"], "quantity_released": res["quantity"]}) + + +@tool +def get_reservation_status(reservation_id: str) -> str: + """Check reservation status and current stock levels. + Args: + reservation_id: Reservation ID. + Returns: + JSON with reservation and stock details. + """ + res = db.get_item("RESERVATIONS_TABLE", {"reservation_id": reservation_id}) + if not res: + return json.dumps({"error": f"Reservation {reservation_id} not found"}) + item = db.get_item("INVENTORY_TABLE", {"sku": res["sku"]}) or {} + return json.dumps({**res, "current_stock": item.get("stock", 0), + "current_reserved": item.get("reserved", 0)}) + + +model = BedrockModel(model_id=MODEL_ID, region_name=AWS_REGION) +agent = Agent( + model=model, + tools=[reserve_inventory, release_reservation, get_reservation_status], + system_prompt=( + "You are an inventory reservation specialist. Check stock before reserving. " + "Support rollback via release_reservation if downstream steps fail." 
+ ), + name="InventoryReserveAgent", + description="Reserves inventory units for orders with rollback support.", +) + +server = A2AServer(agent=agent, http_url=RUNTIME_URL, serve_at_root=True) + +app = FastAPI() +app.mount("/", server.to_fastapi_app()) + +@app.get("/ping") +def ping(): + return {"status": "healthy"} + +if __name__ == "__main__": + uvicorn.run(app, host=os.environ.get("HOST", "0.0.0.0"), port=int(os.environ.get("PORT", "9000"))) # nosec B104 diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/payment_refund_agent.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/payment_refund_agent.py new file mode 100644 index 000000000..abb1182a5 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/payment_refund_agent.py @@ -0,0 +1,100 @@ +import os, json, uuid +import db +from strands import Agent, tool +from strands.models import BedrockModel +from strands.multiagent.a2a import A2AServer +from fastapi import FastAPI +import uvicorn + +AWS_REGION = os.environ.get("AWS_REGION", "us-west-2") +MODEL_ID = os.environ.get("MODEL_ID", "us.anthropic.claude-sonnet-4-20250514-v1:0") +RUNTIME_URL = os.environ.get("AGENTCORE_RUNTIME_URL", "http://127.0.0.1:9000/") + +# TABLE_NAMES_PLACEHOLDER + + +@tool +def get_order_payment_info(order_id: str) -> str: + """Look up order and its payment before deciding refund amount. + Args: + order_id: The order ID (e.g. ORD-1001). + Returns: + JSON with order status and payment details. 
+ """ + order = db.get_item("ORDERS_TABLE", {"order_id": order_id}) + if not order: + return json.dumps({"error": f"Order {order_id} not found"}) + pays = db.query_gsi("PAYMENTS_TABLE", "order_id-index", "order_id", order_id) + return json.dumps({"order_id": order_id, "order_status": order["status"], + "payment": pays[0] if pays else None}) + + +@tool +def issue_refund(order_id: str, amount: float, reason: str = "customer_request") -> str: + """Issue a refund after validating the order and payment status. + Args: + order_id: Order to refund (e.g. ORD-1001). + amount: Refund amount in USD. + reason: Reason for refund. + Returns: + JSON with refund_id and status. + """ + order = db.get_item("ORDERS_TABLE", {"order_id": order_id}) + if not order: + return json.dumps({"error": f"Order {order_id} not found"}) + pays = db.query_gsi("PAYMENTS_TABLE", "order_id-index", "order_id", order_id) + if not pays: + return json.dumps({"error": "No payment found for order"}) + pay = pays[0] + if pay["status"] == "REFUNDED": + return json.dumps({"error": "Order already fully refunded"}) + if pay["status"] == "PENDING": + return json.dumps({"error": "Payment not yet captured"}) + if amount > pay["amount"]: + return json.dumps({"error": f"Refund ${amount} exceeds captured ${pay['amount']}"}) + refund_id = f"REF-{str(uuid.uuid4())[:8].upper()}" + item = {"refund_id": refund_id, "order_id": order_id, "payment_id": pay["payment_id"], + "gateway": pay["gateway"], "amount": amount, "reason": reason, + "status": "COMPLETED", "note": "[DEMO] Refund not actually processed"} + db.put_item("REFUNDS_TABLE", item) + return json.dumps(item) + + +@tool +def get_refund_status(refund_id: str) -> str: + """Check the status of a previously issued refund. + Args: + refund_id: The refund ID (e.g. REF-AB12CD34). + Returns: + JSON with refund details. 
+ """ + refund = db.get_item("REFUNDS_TABLE", {"refund_id": refund_id}) + if not refund: + return json.dumps({"error": f"Refund {refund_id} not found"}) + return json.dumps(refund) + + +model = BedrockModel(model_id=MODEL_ID, region_name=AWS_REGION) +agent = Agent( + model=model, + tools=[get_order_payment_info, issue_refund, get_refund_status], + system_prompt=( + "You are a payment refund specialist. Always call get_order_payment_info first " + "to verify the order and payment before issuing a refund. " + "Confirm the refund by calling get_refund_status after issue_refund." + ), + name="PaymentRefundAgent", + description="Issues refunds with multi-step validation: verify order/payment, issue refund, confirm status.", +) + +server = A2AServer(agent=agent, http_url=RUNTIME_URL, serve_at_root=True) + +app = FastAPI() +app.mount("/", server.to_fastapi_app()) + +@app.get("/ping") +def ping(): + return {"status": "healthy"} + +if __name__ == "__main__": + uvicorn.run(app, host=os.environ.get("HOST", "0.0.0.0"), port=int(os.environ.get("PORT", "9000"))) # nosec B104 diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/shipping_update_agent.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/shipping_update_agent.py new file mode 100644 index 000000000..0ba3f34af --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/agents/shipping_update_agent.py @@ -0,0 +1,121 @@ +import os, json, uuid +import db +from strands import Agent, tool +from strands.models import BedrockModel +from strands.multiagent.a2a import A2AServer +from fastapi import FastAPI +import uvicorn +from datetime import date, timedelta + +AWS_REGION = os.environ.get("AWS_REGION", "us-west-2") +MODEL_ID = os.environ.get("MODEL_ID", "us.anthropic.claude-sonnet-4-20250514-v1:0") +RUNTIME_URL = os.environ.get("AGENTCORE_RUNTIME_URL", "http://127.0.0.1:9000/") + +# TABLE_NAMES_PLACEHOLDER + +CARRIERS 
= { + "UPS": {"avg_days": 3, "prefix": "1Z"}, + "FedEx": {"avg_days": 2, "prefix": "FX"}, + "USPS": {"avg_days": 5, "prefix": "94"}, + "DHL": {"avg_days": 4, "prefix": "JD"}, +} + + +@tool +def assign_carrier(order_id: str, weight_kg: float = 1.0, destination_state: str = "") -> str: + """Recommend the best carrier for an order based on weight and destination. + Args: + order_id: Order ID. + weight_kg: Package weight in kg. + destination_state: US state code (e.g. WA). + Returns: + JSON with recommended carrier, days, and cost estimate. + """ + if weight_kg > 30: + carrier, reason = "FedEx", "heavy parcel specialist" + elif destination_state in ("HI", "AK", "PR"): + carrier, reason = "USPS", "best coverage for remote destinations" + elif weight_kg < 0.5: + carrier, reason = "USPS", "cost-optimal for lightweight packages" + else: + carrier, reason = "UPS", "best ground network for continental US" + info = CARRIERS[carrier] + cost = round(3.50 + weight_kg * 1.20 + (2.0 if destination_state in ("HI", "AK", "PR") else 0), 2) + return json.dumps({"order_id": order_id, "recommended_carrier": carrier, + "reason": reason, "estimated_days": info["avg_days"], + "estimated_cost_usd": cost}) + + +@tool +def create_shipment(order_id: str, carrier: str = "UPS", service: str = "GROUND") -> str: + """Create a shipment for an order and write it to DynamoDB. + Args: + order_id: Order to ship (e.g. ORD-1001). + carrier: UPS, FedEx, USPS, or DHL. + service: GROUND, EXPRESS, or OVERNIGHT. + Returns: + JSON with shipment_id and tracking_number. 
+ """ + order = db.get_item("ORDERS_TABLE", {"order_id": order_id}) + if not order: + return json.dumps({"error": f"Order {order_id} not found"}) + existing = db.query_gsi("SHIPMENTS_TABLE", "order_id-index", "order_id", order_id) + if existing: + return json.dumps({"error": f"Shipment already exists for order {order_id}"}) + info = CARRIERS.get(carrier, CARRIERS["UPS"]) + days_adj = {"GROUND": 0, "EXPRESS": -1, "OVERNIGHT": info["avg_days"] - 1} + days = max(1, info["avg_days"] + days_adj.get(service.upper(), 0)) + est = (date.today() + timedelta(days=days)).isoformat() + ship_id = f"SHIP-{str(uuid.uuid4())[:8].upper()}" + tracking = f"{info['prefix']}{str(uuid.uuid4()).replace('-','')[:16].upper()}" + item = {"shipment_id": ship_id, "order_id": order_id, "carrier": carrier, + "service": service.upper(), "tracking_number": tracking, + "status": "CREATED", "estimated_delivery": est, + "note": "[DEMO] Shipment not actually created"} + db.put_item("SHIPMENTS_TABLE", item) + db.update_attrs("ORDERS_TABLE", {"order_id": order_id}, {"status": "SHIPPED"}) + return json.dumps(item) + + +@tool +def update_shipment_status(shipment_id: str, status: str) -> str: + """Update the status of an existing shipment. + Args: + shipment_id: Shipment ID from create_shipment. + status: New status: CREATED, IN_TRANSIT, OUT_FOR_DELIVERY, DELIVERED, EXCEPTION. + Returns: + JSON confirming the update. + """ + VALID = {"CREATED", "IN_TRANSIT", "OUT_FOR_DELIVERY", "DELIVERED", "EXCEPTION"} + shipment = db.get_item("SHIPMENTS_TABLE", {"shipment_id": shipment_id}) + if not shipment: + return json.dumps({"error": f"Shipment {shipment_id} not found"}) + if status.upper() not in VALID: + return json.dumps({"error": f"Invalid status. 
Must be one of {sorted(VALID)}"}) + db.update_attrs("SHIPMENTS_TABLE", {"shipment_id": shipment_id}, {"status": status.upper()}) + return json.dumps({"shipment_id": shipment_id, "status": status.upper(), "updated": True}) + + +model = BedrockModel(model_id=MODEL_ID, region_name=AWS_REGION) +agent = Agent( + model=model, + tools=[assign_carrier, create_shipment, update_shipment_status], + system_prompt=( + "You are a shipping coordinator. Use assign_carrier to pick the best carrier, " + "then create_shipment to book it, then update_shipment_status to confirm." + ), + name="ShippingUpdateAgent", + description="Creates shipments with carrier selection and status tracking.", +) + +server = A2AServer(agent=agent, http_url=RUNTIME_URL, serve_at_root=True) + +app = FastAPI() +app.mount("/", server.to_fastapi_app()) + +@app.get("/ping") +def ping(): + return {"status": "healthy"} + +if __name__ == "__main__": + uvicorn.run(app, host=os.environ.get("HOST", "0.0.0.0"), port=int(os.environ.get("PORT", "9000"))) # nosec B104 diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/db.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/db.py new file mode 100644 index 000000000..108fd7d13 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/db.py @@ -0,0 +1,68 @@ +""" +DynamoDB helper bundled into every Lambda zip and A2A agent. +Table names are injected via environment variables (ORDERS_TABLE, etc.). 
+""" +import boto3, os, json +from decimal import Decimal +from boto3.dynamodb.conditions import Attr, Key + +_ddb = boto3.resource("dynamodb", region_name=os.environ.get("AWS_DEFAULT_REGION", "us-west-2")) + +def _table(env_key: str): + return _ddb.Table(os.environ[env_key]) + +def _from_ddb(obj): + """Decimal → float recursively (DynamoDB stores numbers as Decimal).""" + if isinstance(obj, Decimal): + return float(obj) + if isinstance(obj, dict): + return {k: _from_ddb(v) for k, v in obj.items()} + if isinstance(obj, list): + return [_from_ddb(i) for i in obj] + return obj + +def _to_ddb(obj): + """float → Decimal recursively (required for DynamoDB puts/updates).""" + return json.loads(json.dumps(obj), parse_float=Decimal) + +# ── Read helpers ─────────────────────────────────────────────────────────── + +def get_item(env_key: str, key: dict): + """Fetch a single item by primary key. Returns None if not found.""" + resp = _table(env_key).get_item(Key=key) + return _from_ddb(resp.get("Item")) + +def scan_all(env_key: str) -> list: + """Full table scan — fine for small demo tables.""" + return _from_ddb(_table(env_key).scan().get("Items", [])) + +def scan_filter(env_key: str, attr: str, val) -> list: + """Scan with a simple equality filter on any attribute.""" + resp = _table(env_key).scan(FilterExpression=Attr(attr).eq(val)) + return _from_ddb(resp.get("Items", [])) + +def query_gsi(env_key: str, index: str, key_attr: str, key_val: str) -> list: + """Query a GSI (e.g. 
order_id-index on payments/shipments).""" + resp = _table(env_key).query( + IndexName=index, + KeyConditionExpression=Key(key_attr).eq(key_val), + ) + return _from_ddb(resp.get("Items", [])) + +# ── Write helpers ────────────────────────────────────────────────────────── + +def put_item(env_key: str, item: dict): + """Insert or replace an item.""" + _table(env_key).put_item(Item=_to_ddb(item)) + +def update_attrs(env_key: str, key: dict, attrs: dict): + """Update specific attributes on an existing item.""" + expr = "SET " + ", ".join(f"#a{i}=:v{i}" for i in range(len(attrs))) + names = {f"#a{i}": k for i, k in enumerate(attrs)} + values = {f":v{i}": _to_ddb(v) for i, v in enumerate(attrs.values())} + _table(env_key).update_item( + Key=key, + UpdateExpression=expr, + ExpressionAttributeNames=names, + ExpressionAttributeValues=values, + ) \ No newline at end of file diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/__init__.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/notification.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/notification.py new file mode 100644 index 000000000..98589d5f5 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/notification.py @@ -0,0 +1,154 @@ +""" +Deploy notification MCP tools (Lambda + AgentCore Gateway targets). 
+ +Tools deployed: + email_send_tool — send_email, send_bulk_email + email_template_tool — get_template, list_templates, create_template + sms_notify_tool — send_sms, send_bulk_sms +""" +import io, time, zipfile, pathlib + +_HERE = pathlib.Path(__file__).parent +_LAMBDA = (_HERE.parent / "tools" / "notification.py").read_text() +_DB = (_HERE.parent / "db.py").read_text() + + +def _make_zip(): + buf = io.BytesIO() + with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf: + zf.writestr("lambda_function.py", _LAMBDA) + zf.writestr("db.py", _DB) + return buf.getvalue() + + +def _wait_lambda(client, name): + while client.get_function(FunctionName=name)["Configuration"]["State"] != "Active": + time.sleep(2) + + +def _wait_target(agentcore_client, gateway_id, target_id): + while agentcore_client.get_gateway_target( + gatewayIdentifier=gateway_id, targetId=target_id)["status"] != "READY": + time.sleep(5) + + +def _create_target(agentcore_client, gateway_id, name, lambda_arn, tools): + tid = agentcore_client.create_gateway_target( + gatewayIdentifier=gateway_id, + name=name, + targetConfiguration={"mcp": {"lambda": { + "lambdaArn": lambda_arn, + "toolSchema": {"inlinePayload": tools} + }}}, + credentialProviderConfigurations=[{"credentialProviderType": "GATEWAY_IAM_ROLE"}] + )["targetId"] + _wait_target(agentcore_client, gateway_id, tid) + return tid + + +def deploy(*, lambda_client, agentcore_client, lambda_role_arn, + gateway_id, gateway_role_arn, table_names, timestamp): + """ + Deploy notification Lambda + 3 gateway targets. 
+ + Returns: + dict with lambda_fn_name, lambda_arn, targets + (keys: email_send, email_template, sms_notify) + """ + fn_name = f"notification-mcp-{timestamp}" + env = { + "TEMPLATES_TABLE": table_names["templates"], + } + + print("Deploying notification Lambda...") + lambda_arn = lambda_client.create_function( + FunctionName=fn_name, Runtime="python3.13", Role=lambda_role_arn, + Handler="lambda_function.lambda_handler", Code={"ZipFile": _make_zip()}, + Timeout=30, Environment={"Variables": env}, + Description="Notification MCP Lambda", + )["FunctionArn"] + _wait_lambda(lambda_client, fn_name) + lambda_client.add_permission( + FunctionName=fn_name, StatementId=f"gateway-invoke-{timestamp}", + Action="lambda:InvokeFunction", Principal=gateway_role_arn, + ) + print(f" Lambda ready : {lambda_arn}") + + print("Creating gateway targets...") + email_send_id = _create_target(agentcore_client, gateway_id, + f"email-send-{timestamp}", lambda_arn, [ + {"name": "send_email", + "description": "Send a transactional email to a recipient", + "inputSchema": {"type": "object", + "properties": { + "to": {"type": "string", "description": "Recipient email"}, + "subject": {"type": "string"}, + "body": {"type": "string"}, + "template_id": {"type": "string", "description": "Optional template ID"}, + "template_vars": {"type": "object", "description": "Variables for template substitution"} + }, + "required": ["to"]}}, + {"name": "send_bulk_email", + "description": "Send the same email to a list of recipients", + "inputSchema": {"type": "object", + "properties": { + "recipients": {"type": "array", "items": {"type": "string"}}, + "subject": {"type": "string"}, + "body": {"type": "string"} + }, + "required": ["recipients", "subject", "body"]}}, + ]) + print(f" email_send_tool ready : {email_send_id}") + + email_template_id = _create_target(agentcore_client, gateway_id, + f"email-template-{timestamp}", lambda_arn, [ + {"name": "get_template", + "description": "Fetch an email template by ID", + 
"inputSchema": {"type": "object", + "properties": {"template_id": {"type": "string"}}, + "required": ["template_id"]}}, + {"name": "list_templates", + "description": "List all available email templates", + "inputSchema": {"type": "object", "properties": {}}}, + {"name": "create_template", + "description": "Create a new email template", + "inputSchema": {"type": "object", + "properties": { + "template_id": {"type": "string"}, + "subject": {"type": "string"}, + "body": {"type": "string"} + }, + "required": ["subject", "body"]}}, + ]) + print(f" email_template_tool ready : {email_template_id}") + + sms_id = _create_target(agentcore_client, gateway_id, + f"sms-notify-{timestamp}", lambda_arn, [ + {"name": "send_sms", + "description": "Send an SMS to a single recipient", + "inputSchema": {"type": "object", + "properties": { + "to": {"type": "string", "description": "E.164 phone number"}, + "body": {"type": "string", "description": "SMS body (max 160 chars)"} + }, + "required": ["to", "body"]}}, + {"name": "send_bulk_sms", + "description": "Send an SMS to multiple recipients", + "inputSchema": {"type": "object", + "properties": { + "recipients": {"type": "array", "items": {"type": "string"}}, + "body": {"type": "string"} + }, + "required": ["recipients", "body"]}}, + ]) + print(f" sms_notify_tool ready : {sms_id}") + + return { + "lambda_fn_name": fn_name, + "lambda_arn": lambda_arn, + "targets": { + "email_send": email_send_id, + "email_template": email_template_id, + "sms_notify": sms_id, + }, + } diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/order_management.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/order_management.py new file mode 100644 index 000000000..c0cde785f --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/order_management.py @@ -0,0 +1,143 @@ +""" +Deploy order management MCP tools (Lambda + AgentCore Gateway targets). 
+ +Tools deployed: + order_lookup_tool — get_order, list_orders + order_update_tool — update_order_status, update_shipping_addr + order_cancel_tool — cancel_order +""" +import io, time, zipfile, pathlib + +_HERE = pathlib.Path(__file__).parent +_LAMBDA = (_HERE.parent / "tools" / "order_management.py").read_text() +_DB = (_HERE.parent / "db.py").read_text() + + +def _make_zip(): + buf = io.BytesIO() + with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf: + zf.writestr("lambda_function.py", _LAMBDA) + zf.writestr("db.py", _DB) + return buf.getvalue() + + +def _wait_lambda(client, name): + while client.get_function(FunctionName=name)["Configuration"]["State"] != "Active": + time.sleep(2) + + +def _wait_target(agentcore_client, gateway_id, target_id): + while agentcore_client.get_gateway_target( + gatewayIdentifier=gateway_id, targetId=target_id)["status"] != "READY": + time.sleep(5) + + +def _create_target(agentcore_client, gateway_id, name, lambda_arn, tools): + tid = agentcore_client.create_gateway_target( + gatewayIdentifier=gateway_id, + name=name, + targetConfiguration={"mcp": {"lambda": { + "lambdaArn": lambda_arn, + "toolSchema": {"inlinePayload": tools} + }}}, + credentialProviderConfigurations=[{"credentialProviderType": "GATEWAY_IAM_ROLE"}] + )["targetId"] + _wait_target(agentcore_client, gateway_id, tid) + return tid + + +def deploy(*, lambda_client, agentcore_client, lambda_role_arn, + gateway_id, gateway_role_arn, table_names, timestamp): + """ + Deploy order management Lambda + 3 gateway targets. 
+ + Returns: + dict with lambda_fn_name, lambda_arn, targets + (keys: order_lookup, order_update, order_cancel) + """ + fn_name = f"order-mgmt-mcp-{timestamp}" + env = { + "ORDERS_TABLE": table_names["orders"], + "CUSTOMERS_TABLE": table_names["customers"], + "SHIPMENTS_TABLE": table_names["shipments"], + "PAYMENTS_TABLE": table_names.get("payments", ""), + } + + print("Deploying order management Lambda...") + lambda_arn = lambda_client.create_function( + FunctionName=fn_name, Runtime="python3.13", Role=lambda_role_arn, + Handler="lambda_function.lambda_handler", Code={"ZipFile": _make_zip()}, + Timeout=30, Environment={"Variables": env}, + Description="Order management MCP Lambda", + )["FunctionArn"] + _wait_lambda(lambda_client, fn_name) + lambda_client.add_permission( + FunctionName=fn_name, StatementId=f"gateway-invoke-{timestamp}", + Action="lambda:InvokeFunction", Principal=gateway_role_arn, + ) + print(f" Lambda ready : {lambda_arn}") + + print("Creating gateway targets...") + lookup_id = _create_target(agentcore_client, gateway_id, + f"order-lookup-{timestamp}", lambda_arn, [ + {"name": "get_order", + "description": "Fetch full order details by order ID", + "inputSchema": {"type": "object", + "properties": {"order_id": {"type": "string", "description": "Order ID e.g. 
ORD-1001"}}, + "required": ["order_id"]}}, + {"name": "list_orders", + "description": "List orders, optionally filtered by customer email or status", + "inputSchema": {"type": "object", + "properties": { + "customer_email": {"type": "string", "description": "Filter by customer email"}, + "status": {"type": "string", "description": "PENDING|PROCESSING|SHIPPED|DELIVERED|CANCELLED"} + }}}, + ]) + print(f" order_lookup_tool ready : {lookup_id}") + + update_id = _create_target(agentcore_client, gateway_id, + f"order-update-{timestamp}", lambda_arn, [ + {"name": "update_order_status", + "description": "Change the status of an order", + "inputSchema": {"type": "object", + "properties": { + "order_id": {"type": "string"}, + "status": {"type": "string", "description": "PENDING|PROCESSING|SHIPPED|DELIVERED|CANCELLED|RETURNED"} + }, + "required": ["order_id", "status"]}}, + {"name": "update_shipping_addr", + "description": "Update delivery address before an order ships", + "inputSchema": {"type": "object", + "properties": { + "order_id": {"type": "string"}, + "street": {"type": "string"}, + "city": {"type": "string"}, + "state": {"type": "string"}, + "zip": {"type": "string"} + }, + "required": ["order_id", "street", "city", "state", "zip"]}}, + ]) + print(f" order_update_tool ready : {update_id}") + + cancel_id = _create_target(agentcore_client, gateway_id, + f"order-cancel-{timestamp}", lambda_arn, [ + {"name": "cancel_order", + "description": "Cancel an order by ID; triggers refund if payment was captured", + "inputSchema": {"type": "object", + "properties": { + "order_id": {"type": "string"}, + "reason": {"type": "string", "description": "Cancellation reason (default: customer_request)"} + }, + "required": ["order_id"]}}, + ]) + print(f" order_cancel_tool ready : {cancel_id}") + + return { + "lambda_fn_name": fn_name, + "lambda_arn": lambda_arn, + "targets": { + "order_lookup": lookup_id, + "order_update": update_id, + "order_cancel": cancel_id, + }, + } diff --git 
a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/read_services.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/read_services.py new file mode 100644 index 000000000..c125c7110 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/mcp/read_services.py @@ -0,0 +1,133 @@ +""" +Deploy read-only service MCP tools (Lambda + AgentCore Gateway targets). + +Tools deployed: + payment_status_tool — get_payment_status + inventory_check_tool — check_inventory, check_multiple_inventory + shipping_track_tool — track_shipment, estimate_delivery +""" +import io, time, zipfile, pathlib + +_HERE = pathlib.Path(__file__).parent +_LAMBDA = (_HERE.parent / "tools" / "read_services.py").read_text() +_DB = (_HERE.parent / "db.py").read_text() + + +def _make_zip(): + buf = io.BytesIO() + with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf: + zf.writestr("lambda_function.py", _LAMBDA) + zf.writestr("db.py", _DB) + return buf.getvalue() + + +def _wait_lambda(client, name): + while client.get_function(FunctionName=name)["Configuration"]["State"] != "Active": + time.sleep(2) + + +def _wait_target(agentcore_client, gateway_id, target_id): + while agentcore_client.get_gateway_target( + gatewayIdentifier=gateway_id, targetId=target_id)["status"] != "READY": + time.sleep(5) + + +def _create_target(agentcore_client, gateway_id, name, lambda_arn, tools): + tid = agentcore_client.create_gateway_target( + gatewayIdentifier=gateway_id, + name=name, + targetConfiguration={"mcp": {"lambda": { + "lambdaArn": lambda_arn, + "toolSchema": {"inlinePayload": tools} + }}}, + credentialProviderConfigurations=[{"credentialProviderType": "GATEWAY_IAM_ROLE"}] + )["targetId"] + _wait_target(agentcore_client, gateway_id, tid) + return tid + + +def deploy(*, lambda_client, agentcore_client, lambda_role_arn, + gateway_id, gateway_role_arn, table_names, timestamp): + """ + Deploy read-only services Lambda + 3 
gateway targets. + + Returns: + dict with lambda_fn_name, lambda_arn, targets + (keys: payment_status, inventory_check, shipping_track) + """ + fn_name = f"read-services-mcp-{timestamp}" + env = { + "PAYMENTS_TABLE": table_names["payments"], + "INVENTORY_TABLE": table_names["inventory"], + "ORDERS_TABLE": table_names["orders"], + "SHIPMENTS_TABLE": table_names["shipments"], + } + + print("Deploying read-services Lambda...") + lambda_arn = lambda_client.create_function( + FunctionName=fn_name, Runtime="python3.13", Role=lambda_role_arn, + Handler="lambda_function.lambda_handler", Code={"ZipFile": _make_zip()}, + Timeout=30, Environment={"Variables": env}, + Description="Read-only services MCP Lambda", + )["FunctionArn"] + _wait_lambda(lambda_client, fn_name) + lambda_client.add_permission( + FunctionName=fn_name, StatementId=f"gateway-invoke-{timestamp}", + Action="lambda:InvokeFunction", Principal=gateway_role_arn, + ) + print(f" Lambda ready : {lambda_arn}") + + print("Creating gateway targets...") + payment_id = _create_target(agentcore_client, gateway_id, + f"payment-status-{timestamp}", lambda_arn, [ + {"name": "get_payment_status", + "description": "Get payment status and details for an order or by payment ID", + "inputSchema": {"type": "object", + "properties": { + "order_id": {"type": "string", "description": "Order ID e.g. ORD-1001"}, + "payment_id": {"type": "string", "description": "Payment ID e.g. PAY-001"} + }}}, + ]) + print(f" payment_status_tool ready : {payment_id}") + + inventory_id = _create_target(agentcore_client, gateway_id, + f"inventory-check-{timestamp}", lambda_arn, [ + {"name": "check_inventory", + "description": "Check stock availability for a single SKU", + "inputSchema": {"type": "object", + "properties": {"sku": {"type": "string", "description": "Product SKU e.g. 
WIDGET-42"}}, + "required": ["sku"]}}, + {"name": "check_multiple_inventory", + "description": "Check stock availability for multiple SKUs in one call", + "inputSchema": {"type": "object", + "properties": {"skus": {"type": "array", "items": {"type": "string"}}}, + "required": ["skus"]}}, + ]) + print(f" inventory_check_tool ready : {inventory_id}") + + shipping_id = _create_target(agentcore_client, gateway_id, + f"shipping-track-{timestamp}", lambda_arn, [ + {"name": "track_shipment", + "description": "Track a shipment by shipment_id or order_id", + "inputSchema": {"type": "object", + "properties": { + "shipment_id": {"type": "string"}, + "order_id": {"type": "string"} + }}}, + {"name": "estimate_delivery", + "description": "Get estimated delivery date and carrier info for an order", + "inputSchema": {"type": "object", + "properties": {"order_id": {"type": "string"}}, + "required": ["order_id"]}}, + ]) + print(f" shipping_track_tool ready : {shipping_id}") + + return { + "lambda_fn_name": fn_name, + "lambda_arn": lambda_arn, + "targets": { + "payment_status": payment_id, + "inventory_check": inventory_id, + "shipping_track": shipping_id, + }, + } diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/registry_records.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/registry_records.py new file mode 100644 index 000000000..559f09874 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/registry_records.py @@ -0,0 +1,170 @@ +"""Utility functions for managing AWS Agent Registry records. + +Provides helpers to create, submit for approval, and approve registry records +with polling loops that wait for each status transition to complete. 
+ +Usage: + from utils.registry_records import create_and_approve_all_records + + record_ids = create_and_approve_all_records(cp_client, REGISTRY_ID, records) +""" + +import time + +# --------------------------------------------------------------------------- +# Module-level configuration +# --------------------------------------------------------------------------- +POLL_INTERVAL = 3 # seconds between status checks +MAX_RETRIES = 20 # max polling attempts before raising +DEBUG = False # set to True to print full record payloads + + +def _wait_for_status(cp_client, registry_id, record_id, target_status, label=""): + """Poll get_registry_record until status matches *target_status*.""" + for attempt in range(MAX_RETRIES): + rec = cp_client.get_registry_record( + registryId=registry_id, recordId=record_id + ) + status = rec["status"] + if status == target_status: + print(f" {label}{record_id}: {status}") + return rec + print(f" {label}{record_id}: {status} - waiting for {target_status}...") + time.sleep(POLL_INTERVAL) + raise TimeoutError( + f"Record {record_id} did not reach {target_status} after " + f"{MAX_RETRIES * POLL_INTERVAL}s (last status: {status})" + ) + + +# --------------------------------------------------------------------------- +# Record creation +# --------------------------------------------------------------------------- +def create_record(cp_client, registry_id, name, description, descriptor_type, + descriptors, record_version="1.0"): + """Create a single registry record and wait for DRAFT status. + + Returns the record ID. 
+ """ + import json as _json + record_payload = { + "registryId": registry_id, + "name": name, + "description": description, + "descriptorType": descriptor_type, + "descriptors": descriptors, + "recordVersion": record_version, + } + print(f" Creating: {name} ...") + if DEBUG: + print(f" Payload: {_json.dumps(record_payload, indent=2)}") + try: + resp = cp_client.create_registry_record( + registryId=registry_id, + name=name, + description=description, + descriptorType=descriptor_type, + descriptors=descriptors, + recordVersion=record_version, + ) + except Exception as e: + print(f" FAILED creating record: {name}") + print(f" Record: {_json.dumps(record_payload, indent=2)}") + if hasattr(e, "response"): + meta = e.response.get("ResponseMetadata", {}) + print(f" Request ID: {meta.get('RequestId', 'N/A')}") + print(f" HTTP Status: {meta.get('HTTPStatusCode', 'N/A')}") + print(f" Error: {e.response.get('Error', {})}") + raise + record_id = resp["recordArn"].split("/")[-1] + _wait_for_status(cp_client, registry_id, record_id, "DRAFT", label="Created ") + return record_id + + +def create_all_records(cp_client, registry_id, records): + """Create multiple registry records and wait for each to reach DRAFT. + + *records* is a list of dicts, each with keys: + name, description, descriptorType, descriptors, record_version (optional) + + Returns a list of record IDs. 
+ """ + record_ids = [] + for rec in records: + rid = create_record( + cp_client, registry_id, + name=rec["name"], + description=rec["description"], + descriptor_type=rec.get("descriptorType", rec.get("descriptor_type")), + descriptors=rec["descriptors"], + record_version=rec.get("record_version", "1.0"), + ) + record_ids.append(rid) + return record_ids + + +# --------------------------------------------------------------------------- +# Approval workflow +# --------------------------------------------------------------------------- +def submit_for_approval(cp_client, registry_id, record_id): + """Submit a record for approval and wait for PENDING_APPROVAL status.""" + cp_client.submit_registry_record_for_approval( + registryId=registry_id, recordId=record_id + ) + _wait_for_status(cp_client, registry_id, record_id, "PENDING_APPROVAL", + label="Submitted ") + + +def approve_record(cp_client, registry_id, record_id): + """Approve a record and wait for APPROVED status.""" + cp_client.update_registry_record_status( + registryId=registry_id, recordId=record_id, + status="APPROVED", + statusReason="Approved via notebook", + ) + _wait_for_status(cp_client, registry_id, record_id, "APPROVED", + label="Approved ") + + +def submit_all_for_approval(cp_client, registry_id, record_ids): + """Submit multiple records for approval, waiting for each.""" + for rid in record_ids: + submit_for_approval(cp_client, registry_id, rid) + + +def approve_all_records(cp_client, registry_id, record_ids): + """Approve multiple records, waiting for each.""" + for rid in record_ids: + approve_record(cp_client, registry_id, rid) + + +# --------------------------------------------------------------------------- +# Convenience: create + full approval in one call +# --------------------------------------------------------------------------- +def create_and_approve_record(cp_client, registry_id, name, description, + descriptor_type, descriptors, record_version="1.0"): + """Create a record, submit for 
approval, and approve it. + + Returns the record ID. + """ + record_id = create_record( + cp_client, registry_id, name, description, + descriptor_type, descriptors, record_version, + ) + submit_for_approval(cp_client, registry_id, record_id) + approve_record(cp_client, registry_id, record_id) + return record_id + + +def create_and_approve_all_records(cp_client, registry_id, records): + """Create, submit, and approve multiple records. + + *records* is a list of dicts, each with keys: + name, description, descriptorType, descriptors, record_version (optional) + + Returns a list of record IDs. + """ + record_ids = create_all_records(cp_client, registry_id, records) + submit_all_for_approval(cp_client, registry_id, record_ids) + approve_all_records(cp_client, registry_id, record_ids) + return record_ids diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/sample_db.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/sample_db.py new file mode 100644 index 000000000..9a9032fcd --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/sample_db.py @@ -0,0 +1,94 @@ +# Sample data — written to DynamoDB by the setup cell below. +# Also imported locally during seeding; not bundled into Lambda zips at runtime. 
+ +CUSTOMERS = { + "CUST-001": {"customer_id": "CUST-001", "name": "Jane Smith", "email": "jane@example.com", "phone": "+15550001001"}, + "CUST-002": {"customer_id": "CUST-002", "name": "Bob Jones", "email": "bob@example.com", "phone": "+15550001002"}, + "CUST-003": {"customer_id": "CUST-003", "name": "Alice Chen", "email": "alice@example.com", "phone": "+15550001003"}, + "CUST-004": {"customer_id": "CUST-004", "name": "Dave Kim", "email": "dave@example.com", "phone": "+15550001004"}, +} + +ORDERS = { + "ORD-1001": { + "order_id": "ORD-1001", "customer_id": "CUST-001", "status": "PROCESSING", + "items": [{"sku": "WIDGET-42", "qty": 2, "price": 29.99}, + {"sku": "GADGET-7", "qty": 1, "price": 0.00}], + "total": 59.98, + "shipping_address": {"street": "123 Main St", "city": "Seattle", "state": "WA", "zip": "98101"}, + "created_at": "2026-03-15T10:00:00Z", "payment_id": "PAY-001", + }, + "ORD-1002": { + "order_id": "ORD-1002", "customer_id": "CUST-002", "status": "SHIPPED", + "items": [{"sku": "GADGET-7", "qty": 1, "price": 99.00}], + "total": 99.00, + "shipping_address": {"street": "456 Oak Ave", "city": "Portland", "state": "OR", "zip": "97201"}, + "created_at": "2026-03-14T09:30:00Z", "payment_id": "PAY-002", + }, + "ORD-1003": { + "order_id": "ORD-1003", "customer_id": "CUST-003", "status": "DELIVERED", + "items": [{"sku": "DOOHICKEY-9", "qty": 3, "price": 49.99}], + "total": 149.97, + "shipping_address": {"street": "789 Pine Rd", "city": "San Francisco", "state": "CA", "zip": "94101"}, + "created_at": "2026-03-10T14:00:00Z", "payment_id": "PAY-003", + }, + "ORD-1004": { + "order_id": "ORD-1004", "customer_id": "CUST-004", "status": "PENDING", + "items": [{"sku": "WIDGET-42", "qty": 1, "price": 29.99}], + "total": 29.99, + "shipping_address": {"street": "321 Elm St", "city": "Austin", "state": "TX", "zip": "78701"}, + "created_at": "2026-03-18T08:00:00Z", "payment_id": "PAY-004", + }, + "ORD-1005": { + "order_id": "ORD-1005", "customer_id": "CUST-001", "status": 
"CANCELLED", + "items": [{"sku": "GADGET-7", "qty": 1, "price": 49.99}], + "total": 49.99, + "shipping_address": {"street": "123 Main St", "city": "Seattle", "state": "WA", "zip": "98101"}, + "created_at": "2026-03-12T11:00:00Z", "payment_id": "PAY-005", + }, +} + +PAYMENTS = { + "PAY-001": {"payment_id": "PAY-001", "order_id": "ORD-1001", "amount": 59.98, "status": "CAPTURED", "gateway": "stripe"}, + "PAY-002": {"payment_id": "PAY-002", "order_id": "ORD-1002", "amount": 99.00, "status": "CAPTURED", "gateway": "stripe"}, + "PAY-003": {"payment_id": "PAY-003", "order_id": "ORD-1003", "amount": 149.97, "status": "CAPTURED", "gateway": "braintree"}, + "PAY-004": {"payment_id": "PAY-004", "order_id": "ORD-1004", "amount": 29.99, "status": "PENDING", "gateway": "stripe"}, + "PAY-005": {"payment_id": "PAY-005", "order_id": "ORD-1005", "amount": 49.99, "status": "REFUNDED", "gateway": "stripe"}, +} + +INVENTORY = { + "WIDGET-42": {"sku": "WIDGET-42", "name": "Widget Pro 42", "stock": 150, "reserved": 3, "warehouse": "WH-WEST"}, + "GADGET-7": {"sku": "GADGET-7", "name": "Gadget Series 7", "stock": 45, "reserved": 1, "warehouse": "WH-EAST"}, + "GIZMO-3": {"sku": "GIZMO-3", "name": "Gizmo v3", "stock": 0, "reserved": 0, "warehouse": "WH-WEST"}, + "DOOHICKEY-9": {"sku": "DOOHICKEY-9", "name": "Doohickey Mark IX", "stock": 200, "reserved": 3, "warehouse": "WH-CENTRAL"}, +} + +SHIPMENTS = { + "SHIP-001": { + "shipment_id": "SHIP-001", "order_id": "ORD-1002", "carrier": "UPS", + "tracking_number": "1Z999AA10123456784", "status": "IN_TRANSIT", + "estimated_delivery": "2026-03-20", "last_update": "2026-03-17T18:00:00Z", + }, + "SHIP-002": { + "shipment_id": "SHIP-002", "order_id": "ORD-1003", "carrier": "FedEx", + "tracking_number": "7489023480237", "status": "DELIVERED", + "estimated_delivery": "2026-03-13", "last_update": "2026-03-13T14:22:00Z", + }, +} + +EMAIL_TEMPLATES = { + "order_confirmation": { + "template_id": "order_confirmation", + "subject": "Order {order_id} 
Confirmed", + "body": "Dear {customer_name}, your order {order_id} has been confirmed. Total: ${total}. Thank you!", + }, + "order_shipped": { + "template_id": "order_shipped", + "subject": "Order {order_id} Has Shipped", + "body": "Dear {customer_name}, your order {order_id} is on its way! Tracking: {tracking_number}.", + }, + "refund_issued": { + "template_id": "refund_issued", + "subject": "Refund Issued for Order {order_id}", + "body": "Dear {customer_name}, a refund of ${amount} has been issued for order {order_id}.", + }, +} \ No newline at end of file diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tool_builder.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tool_builder.py new file mode 100644 index 000000000..a66ccc651 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tool_builder.py @@ -0,0 +1,183 @@ +"""Utilities for building live Strands tools from AWS Agent Registry records. + +Provides helpers to parse MCP/A2A registry record metadata and create +dynamic tool connections for the Executor agent. +""" + +import json +import uuid +from strands import tool + + +def parse_mcp_metadata(record): + """Extract connection metadata from an MCP registry record. + + Args: + record: A registry record dict from get_registry_record. 
+ + Returns: + dict with keys: url (str), tool_names (list[str]) + """ + descriptors = record.get("descriptors", {}) + mcp = descriptors.get("mcp", {}) + + # Parse websiteUrl from server descriptor + url = "" + try: + server_info = json.loads( + mcp.get("server", {}).get("inlineContent", "{}")) + url = server_info.get("websiteUrl", "") + except (json.JSONDecodeError, AttributeError): + pass + + # Parse tool names from tools descriptor + tool_names = [] + try: + tools_info = json.loads( + mcp.get("tools", {}).get("inlineContent", "{}")) + tool_names = [t["name"] for t in tools_info.get("tools", [])] + except (json.JSONDecodeError, AttributeError, KeyError): + pass + + return {"url": url, "tool_names": tool_names} + + +def parse_a2a_metadata(record): + """Extract connection metadata from an A2A registry record. + + Args: + record: A registry record dict from get_registry_record. + + Returns: + dict with keys: url (str), skills (list[str]) + """ + descriptors = record.get("descriptors", {}) + a2a = descriptors.get("a2a", {}) + + url = "" + skills = [] + try: + card = json.loads( + a2a.get("agentCard", {}).get("inlineContent", "{}")) + url = card.get("url", "") + skills = [ + s.get("id", s.get("name", "")) + for s in card.get("skills", []) + ] + except (json.JSONDecodeError, AttributeError): + pass + + return {"url": url, "skills": skills} + + +def create_a2a_tool(name, description, agent_arn, skills, ac_data_client): + """Create a Strands @tool function that invokes an A2A agent. + + The returned function sends an A2A message/send JSON-RPC request via + invoke_agent_runtime (SigV4 auth) and handles both streaming and + non-streaming responses. + + Args: + name: Tool name (used as the function name for the LLM). + description: Tool description shown to the LLM. + agent_arn: The AgentCore Runtime ARN of the A2A agent. + skills: List of skill names the agent supports. + ac_data_client: A boto3 bedrock-agentcore data plane client. 
+ + Returns: + A @tool-decorated callable. + """ + skill_list = ", ".join(skills) if skills else "general tasks" + + def _add_accept(request, **kwargs): + request.headers.add_header( + "Accept", "text/event-stream, application/json") + + @tool + def a2a_invoke(task: str) -> str: + """Invoke an A2A agent with a task. + Args: + task: The task or question to send. + Returns: + The agent's response. + """ + session_id = str(uuid.uuid4()) + payload = json.dumps({ + "jsonrpc": "2.0", + "id": str(uuid.uuid4()), + "method": "message/send", + "params": { + "message": { + "role": "user", + "messageId": str(uuid.uuid4()), + "parts": [{"kind": "text", "text": task}] + } + } + }) + + ac_data_client.meta.events.register_first( + "before-sign.bedrock-agentcore.InvokeAgentRuntime", + _add_accept) + try: + resp = ac_data_client.invoke_agent_runtime( + agentRuntimeArn=agent_arn, + qualifier="DEFAULT", + runtimeSessionId=session_id, + contentType="application/json", + payload=payload) + finally: + ac_data_client.meta.events.unregister( + "before-sign.bedrock-agentcore.InvokeAgentRuntime", + _add_accept) + + ct = resp.get("contentType", "") + body = resp["response"] + + # Streaming SSE response + if "text/event-stream" in ct: + texts = [] + for line in body.iter_lines(chunk_size=1): + if line: + line = (line.decode("utf-8") + if isinstance(line, bytes) else line) + if line.startswith("data: "): + try: + chunk = json.loads(line[6:]) + texts.append( + chunk if isinstance(chunk, str) + else json.dumps(chunk)) + except Exception: + texts.append(line[6:]) + return "\n".join(texts) or "(empty streaming response)" + + # Non-streaming: collect EventStream chunks + chunks = [] + for event in body: + chunks.append( + event.decode("utf-8") + if isinstance(event, bytes) else str(event)) + raw = "".join(chunks) + try: + data = json.loads(raw) + parts = (data.get("result", {}) + .get("status", {}) + .get("message", {}) + .get("parts", [])) + if not parts: + parts = data.get("result", 
{}).get("parts", []) + texts = [p.get("text", "") for p in parts + if p.get("kind") == "text"] + if texts: + return "\n".join(texts) + return json.dumps(data, indent=2) + except Exception: + return raw + + a2a_invoke.__name__ = name + a2a_invoke.__doc__ = ( + f"{description}\n" + f"Available skills: {skill_list}\n" + f"Args:\n task: The task to send.\n" + f"Returns:\n The agent's response." + ) + return a2a_invoke diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/__init__.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/notification.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/notification.py new file mode 100644 index 000000000..a0296dea5 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/notification.py @@ -0,0 +1,56 @@ +import json, uuid +import db + +def lambda_handler(event, context): + tool_name = context.client_context.custom.get("bedrockAgentCoreToolName", "") + if "___" in tool_name: + tool_name = tool_name.split("___", 1)[1] + + if tool_name == "send_email": + to = event.get("to", "") + subject = event.get("subject", "") + body = event.get("body", "") + tmpl_id = event.get("template_id", "") + if tmpl_id: + tmpl = db.get_item("TEMPLATES_TABLE", {"template_id": tmpl_id}) + if tmpl: + vars_ = event.get("template_vars", {}) + subject = subject or tmpl["subject"].format_map(vars_) + body = body or tmpl["body"].format_map(vars_) + return {"message_id": str(uuid.uuid4())[:8], "to": to, "subject": subject, + "status": "DELIVERED", "note": "[MOCK] Email not actually sent"} + + elif tool_name == "send_bulk_email": + recipients = event.get("recipients", []) + subject = event.get("subject", "(no subject)") + return {"sent_count": len(recipients), + 
"results": [{"to": r, "status": "DELIVERED"} for r in recipients], + "note": "[MOCK] Emails not actually sent"} + + elif tool_name == "get_template": + tmpl_id = event.get("template_id", "") + tmpl = db.get_item("TEMPLATES_TABLE", {"template_id": tmpl_id}) + return tmpl if tmpl else {"error": f"Template '{tmpl_id}' not found"} + + elif tool_name == "list_templates": + return {"templates": db.scan_all("TEMPLATES_TABLE")} + + elif tool_name == "create_template": + tmpl_id = event.get("template_id", str(uuid.uuid4())[:8]) + item = {"template_id": tmpl_id, + "subject": event.get("subject", ""), + "body": event.get("body", "")} + db.put_item("TEMPLATES_TABLE", item) + return {"created": True, "template": item} + + elif tool_name == "send_sms": + return {"message_id": str(uuid.uuid4())[:8], "to": event.get("to", ""), + "status": "DELIVERED", "note": "[MOCK] SMS not actually sent"} + + elif tool_name == "send_bulk_sms": + recipients = event.get("recipients", []) + return {"sent_count": len(recipients), + "results": [{"to": r, "status": "DELIVERED"} for r in recipients], + "note": "[MOCK] SMS not actually sent"} + + return {"error": f"Unknown tool: {tool_name}"} diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/order_management.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/order_management.py new file mode 100644 index 000000000..96d0265be --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/order_management.py @@ -0,0 +1,69 @@ +import json +import db + +def lambda_handler(event, context): + tool_name = context.client_context.custom.get("bedrockAgentCoreToolName", "") + if "___" in tool_name: + tool_name = tool_name.split("___", 1)[1] + + if tool_name == "get_order": + order_id = event.get("order_id", "") + order = db.get_item("ORDERS_TABLE", {"order_id": order_id}) + if not order: + return {"error": f"Order {order_id} not found"} + customer = 
db.get_item("CUSTOMERS_TABLE", {"customer_id": order["customer_id"]}) or {} + ships = db.query_gsi("SHIPMENTS_TABLE", "order_id-index", "order_id", order_id) + return {**order, "customer": customer, "shipment": ships[0] if ships else None} + + elif tool_name == "list_orders": + email = event.get("customer_email", "") + status = event.get("status", "").upper() + if email: + customers = db.scan_filter("CUSTOMERS_TABLE", "email", email) + if not customers: + return {"orders": [], "count": 0} + orders = db.scan_filter("ORDERS_TABLE", "customer_id", customers[0]["customer_id"]) + else: + orders = db.scan_all("ORDERS_TABLE") + if status: + orders = [o for o in orders if o.get("status") == status] + return {"orders": orders, "count": len(orders)} + + elif tool_name == "update_order_status": + order_id = event.get("order_id", "") + new_status = event.get("status", "").upper() + VALID = {"PENDING","PROCESSING","SHIPPED","DELIVERED","CANCELLED","RETURNED"} + if not db.get_item("ORDERS_TABLE", {"order_id": order_id}): + return {"error": f"Order {order_id} not found"} + if new_status not in VALID: + return {"error": f"Invalid status. 
Must be one of {sorted(VALID)}"} + db.update_attrs("ORDERS_TABLE", {"order_id": order_id}, {"status": new_status}) + return {"order_id": order_id, "status": new_status, "updated": True} + + elif tool_name == "update_shipping_addr": + order_id = event.get("order_id", "") + order = db.get_item("ORDERS_TABLE", {"order_id": order_id}) + if not order: + return {"error": f"Order {order_id} not found"} + if order.get("status") in ("SHIPPED", "DELIVERED", "CANCELLED"): + return {"error": f"Cannot update address — order is already {order['status']}"} + addr = {k: event.get(k, "") for k in ("street", "city", "state", "zip")} + db.update_attrs("ORDERS_TABLE", {"order_id": order_id}, {"shipping_address": addr}) + return {"order_id": order_id, "shipping_address": addr, "updated": True} + + elif tool_name == "cancel_order": + order_id = event.get("order_id", "") + reason = event.get("reason", "customer_request") + order = db.get_item("ORDERS_TABLE", {"order_id": order_id}) + if not order: + return {"error": f"Order {order_id} not found"} + if order.get("status") in ("SHIPPED", "DELIVERED", "CANCELLED"): + return {"error": f"Cannot cancel — current status is {order['status']}"} + db.update_attrs("ORDERS_TABLE", {"order_id": order_id}, + {"status": "CANCELLED", "cancel_reason": reason}) + pays = db.query_gsi("PAYMENTS_TABLE", "order_id-index", "order_id", order_id) + refund_triggered = bool(pays and pays[0].get("status") == "CAPTURED") + return {"order_id": order_id, "status": "CANCELLED", + "reason": reason, "refund_triggered": refund_triggered} + + return {"error": f"Unknown tool: {tool_name}"} diff --git a/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/read_services.py b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/read_services.py new file mode 100644 index 000000000..93f56aff3 --- /dev/null +++ b/01-tutorials/10-Agent-Registry/01-advanced/planner-executor-pattern/utils/tools/read_services.py @@ -0,0 +1,62 @@ 
+import json +import db + +def lambda_handler(event, context): + tool_name = context.client_context.custom.get("bedrockAgentCoreToolName", "") + if "___" in tool_name: + tool_name = tool_name.split("___", 1)[1] + + if tool_name == "get_payment_status": + order_id = event.get("order_id", "") + payment_id = event.get("payment_id", "") + if order_id: + items = db.query_gsi("PAYMENTS_TABLE", "order_id-index", "order_id", order_id) + payment = items[0] if items else None + elif payment_id: + payment = db.get_item("PAYMENTS_TABLE", {"payment_id": payment_id}) + else: + return {"error": "Provide order_id or payment_id"} + return payment if payment else {"error": "Payment not found"} + + elif tool_name == "check_inventory": + sku = event.get("sku", "") + item = db.get_item("INVENTORY_TABLE", {"sku": sku}) + if not item: + return {"error": f"SKU '{sku}' not found"} + return {**item, "available": item["stock"] - item["reserved"]} + + elif tool_name == "check_multiple_inventory": + results = [] + for sku in event.get("skus", []): + item = db.get_item("INVENTORY_TABLE", {"sku": sku}) + results.append({**item, "available": item["stock"] - item["reserved"]} + if item else {"sku": sku, "error": "not found"}) + return {"items": results, "count": len(results)} + + elif tool_name == "track_shipment": + shipment_id = event.get("shipment_id", "") + order_id = event.get("order_id", "") + if shipment_id: + shipment = db.get_item("SHIPMENTS_TABLE", {"shipment_id": shipment_id}) + elif order_id: + items = db.query_gsi("SHIPMENTS_TABLE", "order_id-index", "order_id", order_id) + shipment = items[0] if items else None + else: + return {"error": "Provide shipment_id or order_id"} + return shipment if shipment else {"error": "Shipment not found — order may not have shipped yet"} + + elif tool_name == "estimate_delivery": + order_id = event.get("order_id", "") + order = db.get_item("ORDERS_TABLE", {"order_id": order_id}) + if not order: + return {"error": f"Order {order_id} not found"} + items 
= db.query_gsi("SHIPMENTS_TABLE", "order_id-index", "order_id", order_id) + shipment = items[0] if items else None + if shipment: + return {"order_id": order_id, "order_status": order["status"], + "estimated_delivery": shipment["estimated_delivery"], + "carrier": shipment["carrier"], "tracking": shipment["tracking_number"]} + return {"order_id": order_id, "order_status": order["status"], + "estimated_delivery": None, "note": "Shipment not yet created"} + + return {"error": f"Unknown tool: {tool_name}"}
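The registry-parsing helpers in `tool_builder.py` assume a record whose `descriptors` embed JSON documents as `inlineContent` strings. A minimal, self-contained sketch of what `parse_a2a_metadata` extracts — the sample record below is hypothetical (its URL and skill names are illustrative, not from a live registry), and the extraction mirrors the logic in `tool_builder.py` rather than importing it:

```python
import json

# Hypothetical A2A registry record; the shape (descriptors -> a2a ->
# agentCard -> inlineContent holding a JSON agent card) mirrors what
# parse_a2a_metadata expects. Values are illustrative only.
record = {
    "descriptors": {
        "a2a": {
            "agentCard": {
                "inlineContent": json.dumps({
                    "url": "https://example.invalid/a2a",
                    "skills": [
                        {"id": "issue_refund", "name": "Issue Refund"},
                        {"name": "Check Refund Status"},  # no "id": falls back to "name"
                    ],
                })
            }
        }
    }
}

# Same extraction logic as parse_a2a_metadata: decode the inline agent
# card, then pull the endpoint URL and one identifier per skill.
card = json.loads(
    record["descriptors"]["a2a"]["agentCard"]["inlineContent"])
meta = {
    "url": card.get("url", ""),
    "skills": [s.get("id", s.get("name", ""))
               for s in card.get("skills", [])],
}
print(meta)
# {'url': 'https://example.invalid/a2a',
#  'skills': ['issue_refund', 'Check Refund Status']}
```

A dict produced this way is what the Executor would feed (together with the record's runtime ARN) into `create_a2a_tool` to build a live Strands tool.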