diff --git a/agents-and-function-calling/open-source-agents/README.md b/agents-and-function-calling/open-source-agents/README.md index a4ef2fa20..eb398ba75 100644 --- a/agents-and-function-calling/open-source-agents/README.md +++ b/agents-and-function-calling/open-source-agents/README.md @@ -2,6 +2,25 @@ +## Contents + +### [AG2](ag2/) +Multi-agent framework with native Amazon Bedrock support. +Examples: single agent, tool use, multi-agent GroupChat — all using +`LLMConfig(api_type="bedrock")` with no wrapper libraries. + +### [CrewAI](crew.ai/) +CrewAI agent examples with Amazon Bedrock. + +### [LangChain](LangChain/) +LangChain agent examples with Amazon Bedrock. + +### [LangGraph](langgraph/) +LangGraph agent examples with Amazon Bedrock. + +### [LlamaIndex](llamaindex/) +LlamaIndex agent examples with Amazon Bedrock. + ## Contributing We welcome community contributions! Please ensure your sample aligns with AWS [best practices](https://aws.amazon.com/architecture/well-architected/), and please update the **Contents** section of this README file with a link to your sample, along with a description. diff --git a/agents-and-function-calling/open-source-agents/ag2/README.md b/agents-and-function-calling/open-source-agents/ag2/README.md new file mode 100644 index 000000000..8752faeb1 --- /dev/null +++ b/agents-and-function-calling/open-source-agents/ag2/README.md @@ -0,0 +1,43 @@ +# AG2 with Amazon Bedrock + +[AG2](https://ag2.ai/) (formerly AutoGen) is an open-source multi-agent framework with +**native Amazon Bedrock support**. Unlike other frameworks that require wrapper libraries, +AG2 connects to Bedrock directly via `LLMConfig(api_type="bedrock")`. 
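A quick look at what "native support" means in practice, before the examples below: the connection is just a few configuration fields, with no API key anywhere. This is a sketch using plain Python data; the model ID and region are example values to substitute for your own.

```python
# Sketch: the Bedrock settings from Quick Start as a plain dict, the
# same fields the LLMConfig call takes. Model ID and region are
# example values; substitute your own.
bedrock_entry = {
    "api_type": "bedrock",
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "aws_region": "us-east-1",
}

# Note what is absent: no api_key field. Credentials are resolved
# through the standard AWS chain (IAM role, env vars, or
# ~/.aws/credentials), so nothing secret lives in the config.
assert "api_key" not in bedrock_entry
```

AG2 configurations are commonly passed around as a list of such entries, so the same agent can be offered multiple Bedrock models or regions.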
+
+## Examples
+
+| Notebook | Description |
+|----------|-------------|
+| [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) | Basic single agent with Bedrock |
+| [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) | Function calling with `register_for_llm` |
+| [ag2-multi-agent-bedrock.ipynb](ag2-multi-agent-bedrock.ipynb) | Multi-agent GroupChat orchestration |
+
+## Why AG2 + Bedrock?
+
+- **Native support**: `LLMConfig(api_type="bedrock")` — no LangChain or wrapper needed
+- **AWS credential chain**: IAM roles, environment variables, or `~/.aws/credentials`
+- **All Bedrock models**: Claude, Llama, Mistral, Titan, Command R+
+- **Multi-agent**: GroupChat with automatic speaker selection
+- **500K+ monthly PyPI downloads**: Active community with frequent releases
+
+## Quick Start
+
+```bash
+pip install "ag2[openai]"
+```
+
+```python
+from autogen import AssistantAgent, UserProxyAgent, LLMConfig
+
+llm_config = LLMConfig(
+    api_type="bedrock",
+    model="anthropic.claude-3-sonnet-20240229-v1:0",
+    aws_region="us-east-1",
+)
+```
+
+## Resources
+
+- [AG2 Documentation](https://docs.ag2.ai/)
+- [AG2 GitHub](https://github.com/ag2ai/ag2)
+- [AG2 Bedrock Guide](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock)
diff --git a/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb b/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb
new file mode 100644
index 000000000..e8c616e86
--- /dev/null
+++ b/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb
@@ -0,0 +1,243 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# AG2 Multi-Agent GroupChat with Amazon Bedrock\n",
+    "\n",
+    "## Overview\n",
+    "\n",
+    "[AG2](https://ag2.ai/) (formerly AutoGen) provides a powerful **GroupChat** feature for multi-agent orchestration. 
The `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context — no hardcoded routing graphs or handoff logic required.\n", + "\n", + "In this notebook, we'll create a multi-agent research team using AG2's GroupChat with Amazon Bedrock as the LLM backend.\n", + "\n", + "## Context\n", + "\n", + "AG2's GroupChat is its flagship multi-agent feature. Unlike frameworks that require explicit handoff definitions or routing graphs, AG2's `GroupChatManager` uses the LLM itself to determine which agent should speak next based on:\n", + "- Each agent's name and system message\n", + "- The current conversation history\n", + "- The task at hand\n", + "\n", + "This makes it easy to add or remove agents without rewriting orchestration code. Combined with native Bedrock support, you get enterprise-grade multi-agent systems with AWS IAM authentication, VPC endpoints, and CloudTrail logging — all without wrapper libraries.\n", + "\n", + "## Prerequisites\n", + "\n", + "- An AWS account with Amazon Bedrock model access enabled\n", + "- Python 3.10+\n", + "- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)\n", + "- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "Install the AG2 package." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -q ag2[openai]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Code\n", + "\n", + "### Configure Bedrock\n", + "\n", + "Set up the native Bedrock connection." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, LLMConfig\n", + "\n", + "# Native Bedrock support — no OpenAI key needed\n", + "# ---- ⚠️ Update region for your AWS setup ⚠️ ----\n", + "llm_config = LLMConfig(\n", + " api_type=\"bedrock\",\n", + " model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n", + " aws_region=\"us-east-1\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Specialist Agents\n", + "\n", + "We'll create a team of three specialist agents, each with a distinct role:\n", + "- **Researcher**: Gathers information and provides factual analysis\n", + "- **Writer**: Creates clear, well-structured content from research findings\n", + "- **Critic**: Reviews content for accuracy and completeness, and terminates when satisfied" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with llm_config:\n", + " researcher = AssistantAgent(\n", + " name=\"Researcher\",\n", + " system_message=(\n", + " \"You are a research analyst. Search for information and provide \"\n", + " \"factual analysis. Focus on key data points and trends. \"\n", + " \"Cite sources when possible.\"\n", + " ),\n", + " )\n", + " writer = AssistantAgent(\n", + " name=\"Writer\",\n", + " system_message=(\n", + " \"You are a technical writer. Take research findings and create \"\n", + " \"clear, well-structured summaries for business stakeholders. \"\n", + " \"Use bullet points and concise language.\"\n", + " ),\n", + " )\n", + " critic = AssistantAgent(\n", + " name=\"Critic\",\n", + " system_message=(\n", + " \"You review content for accuracy, completeness, and clarity. \"\n", + " \"Provide constructive feedback. 
When the output meets quality \"\n", + " \"standards, say TERMINATE to end the conversation.\"\n", + " ),\n", + " )\n", + " user_proxy = UserProxyAgent(\n", + " name=\"user\",\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=0,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set Up GroupChat\n", + "\n", + "The `GroupChat` collects agents into a group, and the `GroupChatManager` orchestrates the conversation. The `speaker_selection_method=\"auto\"` setting lets the LLM decide which agent speaks next based on context.\n", + "\n", + "The `max_round` parameter limits the total number of turns to prevent runaway conversations." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "groupchat = GroupChat(\n", + " agents=[user_proxy, researcher, writer, critic],\n", + " messages=[],\n", + " max_round=8,\n", + " speaker_selection_method=\"auto\",\n", + ")\n", + "manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run the Multi-Agent Conversation\n", + "\n", + "The user proxy sends the initial request to the GroupChatManager, which then automatically routes messages to the appropriate agents. Watch the conversation to see how:\n", + "1. The **Researcher** gathers information\n", + "2. The **Writer** creates a structured summary\n", + "3. The **Critic** reviews and provides feedback or approves" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "result = user_proxy.initiate_chat(\n", + " manager,\n", + " message=(\n", + " \"Research the current state of generative AI adoption in enterprise. 
\"\n", + " \"Write a brief executive summary with key trends and challenges.\"\n", + " ),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Understanding the Output\n", + "\n", + "The GroupChatManager automatically routed the conversation through the agents:\n", + "- The **Researcher** provided data points and analysis\n", + "- The **Writer** structured the findings into an executive summary\n", + "- The **Critic** reviewed the output and said TERMINATE when satisfied\n", + "\n", + "The key advantage of AG2's GroupChat: the `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context. You don't need to define explicit routing logic or handoff patterns." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Best Practices\n", + "\n", + "- **Speaker selection**: Use `\"auto\"` for LLM-based routing, `\"round_robin\"` for predictable sequential flow\n", + "- **Max rounds**: Set `max_round` to prevent runaway conversations — 6-10 is a good starting range\n", + "- **Termination**: Include `TERMINATE` in one agent's system message to end gracefully\n", + "- **Agent count**: 3-5 agents works well; more agents increases speaker selection complexity\n", + "- **Distinct roles**: Give each agent a clear, non-overlapping system message for better routing\n", + "- **Native Bedrock advantages**: No OpenAI key, AWS IAM auth, supports Claude/Llama/Mistral/Titan, enterprise-grade security with VPC endpoints and CloudTrail logging\n", + "\n", + "## Next Steps\n", + "\n", + "- **Single agent**: See [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) for the basic setup\n", + "- **Tool use**: See [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) for function calling with Bedrock\n", + "- **AG2 GroupChat Guide**: [docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat](https://docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat)\n", + 
"- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/)\n", + "- **AG2 Bedrock Guide**: [docs.ag2.ai/docs/user-guide/models/amazon-bedrock](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock)\n", + "\n", + "## Cleanup\n", + "\n", + "No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb b/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb new file mode 100644 index 000000000..eb5f44261 --- /dev/null +++ b/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb @@ -0,0 +1,172 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# AG2 Single Agent with Amazon Bedrock\n", + "\n", + "## Overview\n", + "\n", + "[AG2](https://ag2.ai/) (formerly AutoGen) is an open-source multi-agent framework with **native Amazon Bedrock support** — no wrapper libraries or OpenAI API keys required.\n", + "\n", + "In this notebook, we'll create a simple conversational agent using AG2 with Amazon Bedrock as the LLM backend.\n", + "\n", + "## Context\n", + "\n", + "AG2 is a community-driven fork of AutoGen with 400K+ monthly PyPI downloads. 
Its key differentiator for Bedrock users is `LLMConfig(api_type=\"bedrock\")` — native integration without wrapper libraries like LangChain's ChatBedrock.\n", + "\n", + "This means you can use any Bedrock-supported model (Claude, Llama, Mistral, Titan, Command R+) with the standard AWS credential chain — IAM roles, environment variables, or `~/.aws/credentials`.\n", + "\n", + "## Prerequisites\n", + "\n", + "- An AWS account with Amazon Bedrock model access enabled\n", + "- Python 3.10+\n", + "- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)\n", + "- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "Install the AG2 package. The `[openai]` extra includes the required dependencies for LLM integration." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -q ag2[openai]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Code\n", + "\n", + "### Configure AG2 with Amazon Bedrock\n", + "\n", + "AG2 supports Bedrock natively via `LLMConfig(api_type=\"bedrock\")`. This uses the default AWS credential chain — IAM roles, environment variables, or `~/.aws/credentials` — so no API keys need to be hardcoded." 
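To make that resolution order concrete, here is a simplified sketch of the default chain. The real lookup is performed by botocore and consults more sources (for example SSO token caches and the EC2/ECS metadata endpoints); this only illustrates the precedence that AG2 inherits.

```python
import os

def credential_source(env=None):
    """Simplified sketch of the AWS default credential chain.
    The real chain (botocore) checks additional sources; this only
    shows the precedence among the three mentioned above."""
    env = os.environ if env is None else env
    # Explicit environment variables take priority.
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return "environment variables"
    # Next, a named profile from the shared credentials file.
    if "AWS_PROFILE" in env:
        return "named profile in ~/.aws/credentials"
    # Finally, the default profile or an attached IAM role.
    return "shared credentials file or attached IAM role"

# Environment variables win even when a profile is also configured:
source = credential_source(
    {"AWS_ACCESS_KEY_ID": "...", "AWS_SECRET_ACCESS_KEY": "...", "AWS_PROFILE": "dev"}
)
```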
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from autogen import AssistantAgent, UserProxyAgent, LLMConfig\n", + "\n", + "# Native Bedrock support — no OpenAI key needed\n", + "# ---- ⚠️ Update region for your AWS setup ⚠️ ----\n", + "llm_config = LLMConfig(\n", + " api_type=\"bedrock\",\n", + " model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n", + " aws_region=\"us-east-1\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Agents\n", + "\n", + "AG2 uses a two-agent pattern: an `AssistantAgent` (LLM-powered reasoning) and a `UserProxyAgent` (executes tools, provides human input).\n", + "\n", + "The `with llm_config:` context manager applies the Bedrock configuration to all agents created within it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with llm_config:\n", + " assistant = AssistantAgent(\n", + " name=\"assistant\",\n", + " system_message=\"You are a helpful AI assistant. Answer questions concisely.\",\n", + " )\n", + " user_proxy = UserProxyAgent(\n", + " name=\"user\",\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=1,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run the Conversation\n", + "\n", + "The `initiate_chat` method sends a message from the `UserProxyAgent` to the `AssistantAgent` and starts the conversation loop. The assistant will use Amazon Bedrock (Claude) to generate its response." 
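Before making the real call, the turn-taking that `initiate_chat` drives can be pictured with a toy loop. This is purely an illustration of the pattern (no AG2 or Bedrock involved), not AG2's actual implementation.

```python
def initiate_chat_sketch(opening, assistant_fn, max_consecutive_auto_reply=1):
    """Toy model of the conversation loop: the proxy sends the opening
    message, the assistant replies, and the proxy auto-replies until
    max_consecutive_auto_reply is reached. Illustration only."""
    history = [{"role": "user", "content": opening}]
    auto_replies = 0
    while True:
        # The assistant sees the full history when generating a reply.
        history.append({"role": "assistant", "content": assistant_fn(history)})
        if auto_replies >= max_consecutive_auto_reply:
            break
        # The proxy's automatic reply keeps the conversation going.
        history.append({"role": "user", "content": "Please continue."})
        auto_replies += 1
    return history

# Stub assistant: replies with the turn number instead of calling an LLM.
demo = initiate_chat_sketch("What is Bedrock?", lambda h: f"turn {len(h)}")
# Roles alternate: user, assistant, user, assistant
```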
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "result = user_proxy.initiate_chat(\n", + " assistant,\n", + " message=\"What are the main benefits of using Amazon Bedrock for enterprise AI?\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Best Practices\n", + "\n", + "- **Use IAM roles** over hardcoded credentials for production deployments\n", + "- **Set `max_consecutive_auto_reply`** to prevent infinite conversation loops\n", + "- **Use `human_input_mode=\"NEVER\"`** for automated pipelines, `\"TERMINATE\"` for interactive use\n", + "- **Region selection**: Choose the AWS region closest to your workload for lower latency\n", + "- **Model selection**: AG2 supports all Bedrock models — use Claude for complex reasoning, Llama for open-source flexibility, Mistral for multilingual tasks\n", + "\n", + "## Next Steps\n", + "\n", + "- **Tool use**: See [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) for function calling with Bedrock\n", + "- **Multi-agent**: See [ag2-multi-agent-bedrock.ipynb](ag2-multi-agent-bedrock.ipynb) for GroupChat orchestration\n", + "- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/)\n", + "- **AG2 GitHub**: [github.com/ag2ai/ag2](https://github.com/ag2ai/ag2)\n", + "- **AG2 Bedrock Guide**: [docs.ag2.ai/docs/user-guide/models/amazon-bedrock](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock)\n", + "\n", + "## Cleanup\n", + "\n", + "No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook." 
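One small building block worth knowing before moving on: agents can end a conversation early via a termination check. A callable of the following shape can be passed to an agent's `is_termination_msg` parameter; the TERMINATE sign-off convention here is an assumption borrowed from the multi-agent notebook, so match it to whatever your system messages request.

```python
def is_termination_msg(message: dict) -> bool:
    """Return True when a reply signs off with TERMINATE.
    Sketch of a termination check; adapt the sign-off convention
    to your own agents' system messages."""
    # Tool-only messages can have content=None, so guard before strip().
    content = (message.get("content") or "").strip()
    return content.endswith("TERMINATE")

assert is_termination_msg({"content": "Summary approved. TERMINATE"})
assert not is_termination_msg({"content": "Needs another pass."})
```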
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb b/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb new file mode 100644 index 000000000..679a199f9 --- /dev/null +++ b/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb @@ -0,0 +1,221 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# AG2 Tool Use with Amazon Bedrock\n", + "\n", + "## Overview\n", + "\n", + "[AG2](https://ag2.ai/) (formerly AutoGen) supports tool/function calling with Amazon Bedrock models natively. 
This notebook demonstrates AG2's dual registration pattern — `register_for_llm` (the LLM decides **when** to call a tool) and `register_for_execution` (the UserProxy **executes** it).\n", + "\n", + "## Context\n", + "\n", + "AG2's tool calling approach separates **tool selection** from **tool execution**:\n", + "- The `AssistantAgent` (backed by a Bedrock model) decides which tool to call and with what arguments\n", + "- The `UserProxyAgent` actually executes the function and returns the result\n", + "\n", + "This separation gives full control over tool execution — you can add sandboxing, logging, approval flows, or rate limiting at the execution layer without modifying the LLM configuration.\n", + "\n", + "## Prerequisites\n", + "\n", + "- An AWS account with Amazon Bedrock model access enabled\n", + "- Python 3.10+\n", + "- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)\n", + "- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "Install the AG2 package." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -q ag2[openai]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Code\n", + "\n", + "### Configure Bedrock\n", + "\n", + "Set up the native Bedrock connection using AG2's `LLMConfig`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from typing import Annotated\n", + "from autogen import AssistantAgent, UserProxyAgent, LLMConfig\n", + "\n", + "# Native Bedrock support — no OpenAI key needed\n", + "# ---- ⚠️ Update region for your AWS setup ⚠️ ----\n", + "llm_config = LLMConfig(\n", + " api_type=\"bedrock\",\n", + " model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n", + " aws_region=\"us-east-1\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Define Tools\n", + "\n", + "Define Python functions that will be available to the agent. Use `Annotated` type hints to provide parameter descriptions — Bedrock models use these to understand the tool schema." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def get_stock_price(\n", + " symbol: Annotated[str, \"Stock ticker symbol (e.g., AMZN, GOOGL)\"],\n", + ") -> dict:\n", + " \"\"\"Look up the current stock price for a given ticker symbol.\"\"\"\n", + " # Simulated data for demo purposes\n", + " prices = {\n", + " \"AMZN\": {\"price\": 186.45, \"change\": \"+2.3%\"},\n", + " \"GOOGL\": {\"price\": 175.20, \"change\": \"-0.8%\"},\n", + " \"MSFT\": {\"price\": 420.15, \"change\": \"+1.1%\"},\n", + " }\n", + " return prices.get(symbol.upper(), {\"error\": f\"Unknown symbol: {symbol}\"})\n", + "\n", + "\n", + "def get_company_info(\n", + " company: Annotated[str, \"Company name to look up\"],\n", + ") -> str:\n", + " \"\"\"Get brief information about a company.\"\"\"\n", + " info = {\n", + " \"amazon\": \"Amazon.com, Inc. — e-commerce, cloud computing (AWS), AI, streaming.\",\n", + " \"google\": \"Alphabet Inc. — search, advertising, cloud, AI research.\",\n", + " \"microsoft\": \"Microsoft Corp. 
— software, cloud (Azure), gaming, AI.\",\n", + " }\n", + " return info.get(company.lower(), f\"No info available for {company}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Agents and Register Tools\n", + "\n", + "AG2 uses a **dual registration** pattern:\n", + "1. `register_for_llm` — tells the AssistantAgent (LLM) that this tool exists and how to call it\n", + "2. `register_for_execution` — tells the UserProxyAgent to execute the function when the LLM requests it\n", + "\n", + "This separation means the LLM decides **when** to call a tool, but the UserProxy controls **how** it's executed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with llm_config:\n", + " assistant = AssistantAgent(\n", + " name=\"FinanceAssistant\",\n", + " system_message=\"You are a financial assistant. Use the available tools to look up stock data and company information. Provide clear analysis based on the data.\",\n", + " )\n", + " user_proxy = UserProxyAgent(\n", + " name=\"user\",\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=5,\n", + " )\n", + "\n", + "# Register tools for the LLM (tool schema) and for execution (function call)\n", + "assistant.register_for_llm(description=\"Look up current stock price\")(get_stock_price)\n", + "assistant.register_for_llm(description=\"Get company information\")(get_company_info)\n", + "user_proxy.register_for_execution()(get_stock_price)\n", + "user_proxy.register_for_execution()(get_company_info)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run the Conversation\n", + "\n", + "The assistant will use Bedrock (Claude) to decide which tools to call, and the user proxy will execute them. Watch the conversation to see the tool calls happening automatically." 
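As an aside on how the model learns about these tools: the `Annotated` metadata and docstrings can be read back with standard `typing` utilities. The sketch below only shows where the descriptions come from; AG2's real schema builder is more thorough than this.

```python
from typing import Annotated, get_type_hints

def schema_sketch(fn):
    """Sketch of turning Annotated hints and a docstring into a tool
    schema. Illustration only; AG2's actual schema builder differs."""
    hints = get_type_hints(fn, include_extras=True)
    params = {}
    for name, hint in hints.items():
        if name == "return":
            continue
        metadata = getattr(hint, "__metadata__", ("",))  # Annotated extras
        base = getattr(hint, "__origin__", hint)         # underlying type
        params[name] = {"type": base.__name__, "description": metadata[0]}
    return {"name": fn.__name__, "description": fn.__doc__, "parameters": params}

def get_stock_price(
    symbol: Annotated[str, "Stock ticker symbol (e.g., AMZN)"],
) -> dict:
    """Look up the current stock price for a given ticker symbol."""
    return {}

schema = schema_sketch(get_stock_price)
```

Both the parameter description and the docstring end up in the schema, which is why the best practices below insist on writing them.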
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "result = user_proxy.initiate_chat(\n", + " assistant,\n", + " message=\"Compare Amazon and Google — show me their stock prices and company info.\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Best Practices\n", + "\n", + "- **Use `Annotated` type hints** for parameter descriptions — Bedrock models use these for tool schemas\n", + "- **Keep tools simple and focused** — one function per responsibility\n", + "- **Use `max_consecutive_auto_reply`** to limit tool call loops and prevent runaway execution\n", + "- **Return structured data** (dict/JSON) from tools for consistent parsing\n", + "- **Add docstrings** to tool functions — the LLM uses these to understand when to call each tool\n", + "- **Error handling**: Return error messages as data (not exceptions) so the LLM can reason about failures\n", + "\n", + "## Next Steps\n", + "\n", + "- **Multi-agent**: See [ag2-multi-agent-bedrock.ipynb](ag2-multi-agent-bedrock.ipynb) for GroupChat orchestration with tools\n", + "- **Single agent**: See [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) for the basic setup\n", + "- **AG2 Tool Use Guide**: [docs.ag2.ai/docs/user-guide/basic-concepts/tools](https://docs.ag2.ai/docs/user-guide/basic-concepts/tools)\n", + "- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/)\n", + "\n", + "## Cleanup\n", + "\n", + "No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook." 
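As a small appendix to the best practices above, the "return errors as data" advice can be captured in a wrapper applied before registering a tool. The decorator name is illustrative, not part of AG2's API.

```python
import functools

def errors_as_data(tool_fn):
    """Wrap a tool so the LLM receives a structured error instead of
    the conversation crashing on an exception. Sketch only."""
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        try:
            return tool_fn(*args, **kwargs)
        except Exception as exc:  # deliberate broad catch at the tool boundary
            return {"error": f"{type(exc).__name__}: {exc}"}
    return wrapper

@errors_as_data
def divide(a: float, b: float) -> dict:
    """Divide a by b."""
    return {"result": a / b}

assert divide(6, 3) == {"result": 2.0}
assert divide(1, 0) == {"error": "ZeroDivisionError: division by zero"}
```

Because the failure comes back as ordinary data, the model can notice the error and retry with different arguments rather than halting the chat.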
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/docs/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.md b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.md new file mode 100644 index 000000000..0eb584538 --- /dev/null +++ b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.md @@ -0,0 +1,155 @@ +--- +tags: + - Agents/ Function Calling + - Open Source/ AG2 +--- + +!!! tip inline end "[Open in github](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb){:target="_blank"}" + +