From 32d5b148c99cda4485eb70327ef5b39e907c1436 Mon Sep 17 00:00:00 2001 From: Faridun Mirzoev Date: Fri, 27 Mar 2026 11:55:25 -0700 Subject: [PATCH] feat: add AG2 framework examples with native Bedrock support Add AG2 (formerly AutoGen) to the open-source agents section with 3 notebooks demonstrating native Amazon Bedrock integration via LLMConfig(api_type="bedrock"): - ag2-single-agent-bedrock: basic conversational agent - ag2-tool-use-bedrock: function calling with dual registration pattern - ag2-multi-agent-bedrock: GroupChat multi-agent orchestration Includes corresponding markdown docs and updated parent README. --- .../open-source-agents/README.md | 19 ++ .../open-source-agents/ag2/README.md | 43 ++++ .../ag2/ag2-multi-agent-bedrock.ipynb | 243 ++++++++++++++++++ .../ag2/ag2-single-agent-bedrock.ipynb | 172 +++++++++++++ .../ag2/ag2-tool-use-bedrock.ipynb | 221 ++++++++++++++++ .../ag2/ag2-multi-agent-bedrock.md | 155 +++++++++++ .../ag2/ag2-single-agent-bedrock.md | 104 ++++++++ .../ag2/ag2-tool-use-bedrock.md | 143 +++++++++++ 8 files changed, 1100 insertions(+) create mode 100644 agents-and-function-calling/open-source-agents/ag2/README.md create mode 100644 agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb create mode 100644 agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb create mode 100644 agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb create mode 100644 docs/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.md create mode 100644 docs/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.md create mode 100644 docs/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.md diff --git a/agents-and-function-calling/open-source-agents/README.md b/agents-and-function-calling/open-source-agents/README.md index a4ef2fa20..eb398ba75 100644 --- a/agents-and-function-calling/open-source-agents/README.md +++ 
b/agents-and-function-calling/open-source-agents/README.md @@ -2,6 +2,25 @@ +## Contents + +### [AG2](ag2/) +Multi-agent framework with native Amazon Bedrock support. +Examples: single agent, tool use, multi-agent GroupChat — all using +`LLMConfig(api_type="bedrock")` with no wrapper libraries. + +### [CrewAI](crew.ai/) +CrewAI agent examples with Amazon Bedrock. + +### [LangChain](LangChain/) +LangChain agent examples with Amazon Bedrock. + +### [LangGraph](langgraph/) +LangGraph agent examples with Amazon Bedrock. + +### [LlamaIndex](llamaindex/) +LlamaIndex agent examples with Amazon Bedrock. + ## Contributing We welcome community contributions! Please ensure your sample aligns with AWS [best practices](https://aws.amazon.com/architecture/well-architected/), and please update the **Contents** section of this README file with a link to your sample, along with a description. diff --git a/agents-and-function-calling/open-source-agents/ag2/README.md b/agents-and-function-calling/open-source-agents/ag2/README.md new file mode 100644 index 000000000..8752faeb1 --- /dev/null +++ b/agents-and-function-calling/open-source-agents/ag2/README.md @@ -0,0 +1,43 @@ +# AG2 with Amazon Bedrock + +[AG2](https://ag2.ai/) (formerly AutoGen) is an open-source multi-agent framework with +**native Amazon Bedrock support**. Unlike other frameworks that require wrapper libraries, +AG2 connects to Bedrock directly via `LLMConfig(api_type="bedrock")`. + +## Examples + +| Notebook | Description | +|----------|-------------| +| [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) | Basic single agent with Bedrock | +| [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) | Function calling with `register_for_llm` | +| [ag2-multi-agent-bedrock.ipynb](ag2-multi-agent-bedrock.ipynb) | Multi-agent GroupChat orchestration | + +## Why AG2 + Bedrock? 
+ +- **Native support**: `LLMConfig(api_type="bedrock")` — no LangChain or wrapper needed +- **AWS credential chain**: IAM roles, environment variables, or `~/.aws/credentials` +- **All Bedrock models**: Claude, Llama, Mistral, Titan, Command R+ +- **Multi-agent**: GroupChat with automatic speaker selection +- **400K+ monthly PyPI downloads**: Active community with frequent releases + +## Quick Start + +```bash +pip install "ag2[bedrock]" +``` + +```python +from autogen import AssistantAgent, UserProxyAgent, LLMConfig + +llm_config = LLMConfig( + api_type="bedrock", + model="anthropic.claude-3-sonnet-20240229-v1:0", + aws_region="us-east-1", +) +``` + +## Resources + +- [AG2 Documentation](https://docs.ag2.ai/) +- [AG2 GitHub](https://github.com/ag2ai/ag2) +- [AG2 Bedrock Guide](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock) diff --git a/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb b/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb new file mode 100644 index 000000000..e8c616e86 --- /dev/null +++ b/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb @@ -0,0 +1,243 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# AG2 Multi-Agent GroupChat with Amazon Bedrock\n", + "\n", + "## Overview\n", + "\n", + "[AG2](https://ag2.ai/) (formerly AutoGen) provides a powerful **GroupChat** feature for multi-agent orchestration. The `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context — no hardcoded routing graphs or handoff logic required.\n", + "\n", + "In this notebook, we'll create a multi-agent research team using AG2's GroupChat with Amazon Bedrock as the LLM backend.\n", + "\n", + "## Context\n", + "\n", + "AG2's GroupChat is its flagship multi-agent feature. 
Unlike frameworks that require explicit handoff definitions or routing graphs, AG2's `GroupChatManager` uses the LLM itself to determine which agent should speak next based on:\n", + "- Each agent's name and system message\n", + "- The current conversation history\n", + "- The task at hand\n", + "\n", + "This makes it easy to add or remove agents without rewriting orchestration code. Combined with native Bedrock support, you get enterprise-grade multi-agent systems with AWS IAM authentication, VPC endpoints, and CloudTrail logging — all without wrapper libraries.\n", + "\n", + "## Prerequisites\n", + "\n", + "- An AWS account with Amazon Bedrock model access enabled\n", + "- Python 3.10+\n", + "- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)\n", + "- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "Install the AG2 package. The `[bedrock]` extra includes `boto3`, which the native Bedrock client requires." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -q ag2[bedrock]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Code\n", + "\n", + "### Configure Bedrock\n", + "\n", + "Set up the native Bedrock connection." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, LLMConfig\n", + "\n", + "# Native Bedrock support — no OpenAI key needed\n", + "# ---- ⚠️ Update region for your AWS setup ⚠️ ----\n", + "llm_config = LLMConfig(\n", + " api_type=\"bedrock\",\n", + " model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n", + " aws_region=\"us-east-1\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Specialist Agents\n", + "\n", + "We'll create a team of three specialist agents, each with a distinct role:\n", + "- **Researcher**: Gathers information and provides factual analysis\n", + "- **Writer**: Creates clear, well-structured content from research findings\n", + "- **Critic**: Reviews content for accuracy and completeness, and terminates when satisfied" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with llm_config:\n", + " researcher = AssistantAgent(\n", + " name=\"Researcher\",\n", + " system_message=(\n", + " \"You are a research analyst. Search for information and provide \"\n", + " \"factual analysis. Focus on key data points and trends. \"\n", + " \"Cite sources when possible.\"\n", + " ),\n", + " )\n", + " writer = AssistantAgent(\n", + " name=\"Writer\",\n", + " system_message=(\n", + " \"You are a technical writer. Take research findings and create \"\n", + " \"clear, well-structured summaries for business stakeholders. \"\n", + " \"Use bullet points and concise language.\"\n", + " ),\n", + " )\n", + " critic = AssistantAgent(\n", + " name=\"Critic\",\n", + " system_message=(\n", + " \"You review content for accuracy, completeness, and clarity. \"\n", + " \"Provide constructive feedback. 
When the output meets quality \"\n", + " \"standards, say TERMINATE to end the conversation.\"\n", + " ),\n", + " )\n", + " user_proxy = UserProxyAgent(\n", + " name=\"user\",\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=0,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set Up GroupChat\n", + "\n", + "The `GroupChat` collects agents into a group, and the `GroupChatManager` orchestrates the conversation. The `speaker_selection_method=\"auto\"` setting lets the LLM decide which agent speaks next based on context.\n", + "\n", + "The `max_round` parameter limits the total number of turns to prevent runaway conversations." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "groupchat = GroupChat(\n", + " agents=[user_proxy, researcher, writer, critic],\n", + " messages=[],\n", + " max_round=8,\n", + " speaker_selection_method=\"auto\",\n", + ")\n", + "manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run the Multi-Agent Conversation\n", + "\n", + "The user proxy sends the initial request to the GroupChatManager, which then automatically routes messages to the appropriate agents. Watch the conversation to see how:\n", + "1. The **Researcher** gathers information\n", + "2. The **Writer** creates a structured summary\n", + "3. The **Critic** reviews and provides feedback or approves" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "result = user_proxy.initiate_chat(\n", + " manager,\n", + " message=(\n", + " \"Research the current state of generative AI adoption in enterprise. 
\"\n", + " \"Write a brief executive summary with key trends and challenges.\"\n", + " ),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Understanding the Output\n", + "\n", + "The GroupChatManager automatically routed the conversation through the agents:\n", + "- The **Researcher** provided data points and analysis\n", + "- The **Writer** structured the findings into an executive summary\n", + "- The **Critic** reviewed the output and said TERMINATE when satisfied\n", + "\n", + "The key advantage of AG2's GroupChat: the `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context. You don't need to define explicit routing logic or handoff patterns." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Best Practices\n", + "\n", + "- **Speaker selection**: Use `\"auto\"` for LLM-based routing, `\"round_robin\"` for predictable sequential flow\n", + "- **Max rounds**: Set `max_round` to prevent runaway conversations — 6-10 is a good starting range\n", + "- **Termination**: Include `TERMINATE` in one agent's system message to end gracefully\n", + "- **Agent count**: 3-5 agents works well; more agents increases speaker selection complexity\n", + "- **Distinct roles**: Give each agent a clear, non-overlapping system message for better routing\n", + "- **Native Bedrock advantages**: No OpenAI key, AWS IAM auth, supports Claude/Llama/Mistral/Titan, enterprise-grade security with VPC endpoints and CloudTrail logging\n", + "\n", + "## Next Steps\n", + "\n", + "- **Single agent**: See [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) for the basic setup\n", + "- **Tool use**: See [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) for function calling with Bedrock\n", + "- **AG2 GroupChat Guide**: [docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat](https://docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat)\n", + 
"- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/)\n", + "- **AG2 Bedrock Guide**: [docs.ag2.ai/docs/user-guide/models/amazon-bedrock](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock)\n", + "\n", + "## Cleanup\n", + "\n", + "No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb b/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb new file mode 100644 index 000000000..eb5f44261 --- /dev/null +++ b/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb @@ -0,0 +1,172 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# AG2 Single Agent with Amazon Bedrock\n", + "\n", + "## Overview\n", + "\n", + "[AG2](https://ag2.ai/) (formerly AutoGen) is an open-source multi-agent framework with **native Amazon Bedrock support** — no wrapper libraries or OpenAI API keys required.\n", + "\n", + "In this notebook, we'll create a simple conversational agent using AG2 with Amazon Bedrock as the LLM backend.\n", + "\n", + "## Context\n", + "\n", + "AG2 is a community-driven fork of AutoGen with 400K+ monthly PyPI downloads. 
Its key differentiator for Bedrock users is `LLMConfig(api_type=\"bedrock\")` — native integration without wrapper libraries like LangChain's ChatBedrock.\n", + "\n", + "This means you can use any Bedrock-supported model (Claude, Llama, Mistral, Titan, Command R+) with the standard AWS credential chain — IAM roles, environment variables, or `~/.aws/credentials`.\n", + "\n", + "## Prerequisites\n", + "\n", + "- An AWS account with Amazon Bedrock model access enabled\n", + "- Python 3.10+\n", + "- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)\n", + "- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "Install the AG2 package. The `[bedrock]` extra installs `boto3` and the other dependencies required for the native Bedrock client." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -q ag2[bedrock]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Code\n", + "\n", + "### Configure AG2 with Amazon Bedrock\n", + "\n", + "AG2 supports Bedrock natively via `LLMConfig(api_type=\"bedrock\")`. This uses the default AWS credential chain — IAM roles, environment variables, or `~/.aws/credentials` — so no API keys need to be hardcoded." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from autogen import AssistantAgent, UserProxyAgent, LLMConfig\n", + "\n", + "# Native Bedrock support — no OpenAI key needed\n", + "# ---- ⚠️ Update region for your AWS setup ⚠️ ----\n", + "llm_config = LLMConfig(\n", + " api_type=\"bedrock\",\n", + " model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n", + " aws_region=\"us-east-1\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Agents\n", + "\n", + "AG2 uses a two-agent pattern: an `AssistantAgent` (LLM-powered reasoning) and a `UserProxyAgent` (executes tools, provides human input).\n", + "\n", + "The `with llm_config:` context manager applies the Bedrock configuration to all agents created within it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with llm_config:\n", + " assistant = AssistantAgent(\n", + " name=\"assistant\",\n", + " system_message=\"You are a helpful AI assistant. Answer questions concisely.\",\n", + " )\n", + " user_proxy = UserProxyAgent(\n", + " name=\"user\",\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=1,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run the Conversation\n", + "\n", + "The `initiate_chat` method sends a message from the `UserProxyAgent` to the `AssistantAgent` and starts the conversation loop. The assistant will use Amazon Bedrock (Claude) to generate its response." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "result = user_proxy.initiate_chat(\n", + " assistant,\n", + " message=\"What are the main benefits of using Amazon Bedrock for enterprise AI?\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Best Practices\n", + "\n", + "- **Use IAM roles** over hardcoded credentials for production deployments\n", + "- **Set `max_consecutive_auto_reply`** to prevent infinite conversation loops\n", + "- **Use `human_input_mode=\"NEVER\"`** for automated pipelines, `\"TERMINATE\"` for interactive use\n", + "- **Region selection**: Choose the AWS region closest to your workload for lower latency\n", + "- **Model selection**: AG2 supports all Bedrock models — use Claude for complex reasoning, Llama for open-source flexibility, Mistral for multilingual tasks\n", + "\n", + "## Next Steps\n", + "\n", + "- **Tool use**: See [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) for function calling with Bedrock\n", + "- **Multi-agent**: See [ag2-multi-agent-bedrock.ipynb](ag2-multi-agent-bedrock.ipynb) for GroupChat orchestration\n", + "- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/)\n", + "- **AG2 GitHub**: [github.com/ag2ai/ag2](https://github.com/ag2ai/ag2)\n", + "- **AG2 Bedrock Guide**: [docs.ag2.ai/docs/user-guide/models/amazon-bedrock](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock)\n", + "\n", + "## Cleanup\n", + "\n", + "No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook." 
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb b/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb new file mode 100644 index 000000000..679a199f9 --- /dev/null +++ b/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb @@ -0,0 +1,221 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# AG2 Tool Use with Amazon Bedrock\n", + "\n", + "## Overview\n", + "\n", + "[AG2](https://ag2.ai/) (formerly AutoGen) supports tool/function calling with Amazon Bedrock models natively. 
This notebook demonstrates AG2's dual registration pattern — `register_for_llm` (the LLM decides **when** to call a tool) and `register_for_execution` (the UserProxy **executes** it).\n", + "\n", + "## Context\n", + "\n", + "AG2's tool calling approach separates **tool selection** from **tool execution**:\n", + "- The `AssistantAgent` (backed by a Bedrock model) decides which tool to call and with what arguments\n", + "- The `UserProxyAgent` actually executes the function and returns the result\n", + "\n", + "This separation gives full control over tool execution — you can add sandboxing, logging, approval flows, or rate limiting at the execution layer without modifying the LLM configuration.\n", + "\n", + "## Prerequisites\n", + "\n", + "- An AWS account with Amazon Bedrock model access enabled\n", + "- Python 3.10+\n", + "- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)\n", + "- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "Install the AG2 package. The `[bedrock]` extra includes `boto3`, which the native Bedrock client requires." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install -q ag2[bedrock]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Code\n", + "\n", + "### Configure Bedrock\n", + "\n", + "Set up the native Bedrock connection using AG2's `LLMConfig`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from typing import Annotated\n", + "from autogen import AssistantAgent, UserProxyAgent, LLMConfig\n", + "\n", + "# Native Bedrock support — no OpenAI key needed\n", + "# ---- ⚠️ Update region for your AWS setup ⚠️ ----\n", + "llm_config = LLMConfig(\n", + " api_type=\"bedrock\",\n", + " model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n", + " aws_region=\"us-east-1\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Define Tools\n", + "\n", + "Define Python functions that will be available to the agent. Use `Annotated` type hints to provide parameter descriptions — Bedrock models use these to understand the tool schema." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def get_stock_price(\n", + " symbol: Annotated[str, \"Stock ticker symbol (e.g., AMZN, GOOGL)\"],\n", + ") -> dict:\n", + " \"\"\"Look up the current stock price for a given ticker symbol.\"\"\"\n", + " # Simulated data for demo purposes\n", + " prices = {\n", + " \"AMZN\": {\"price\": 186.45, \"change\": \"+2.3%\"},\n", + " \"GOOGL\": {\"price\": 175.20, \"change\": \"-0.8%\"},\n", + " \"MSFT\": {\"price\": 420.15, \"change\": \"+1.1%\"},\n", + " }\n", + " return prices.get(symbol.upper(), {\"error\": f\"Unknown symbol: {symbol}\"})\n", + "\n", + "\n", + "def get_company_info(\n", + " company: Annotated[str, \"Company name to look up\"],\n", + ") -> str:\n", + " \"\"\"Get brief information about a company.\"\"\"\n", + " info = {\n", + " \"amazon\": \"Amazon.com, Inc. — e-commerce, cloud computing (AWS), AI, streaming.\",\n", + " \"google\": \"Alphabet Inc. — search, advertising, cloud, AI research.\",\n", + " \"microsoft\": \"Microsoft Corp. 
— software, cloud (Azure), gaming, AI.\",\n", + " }\n", + " return info.get(company.lower(), f\"No info available for {company}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Agents and Register Tools\n", + "\n", + "AG2 uses a **dual registration** pattern:\n", + "1. `register_for_llm` — tells the AssistantAgent (LLM) that this tool exists and how to call it\n", + "2. `register_for_execution` — tells the UserProxyAgent to execute the function when the LLM requests it\n", + "\n", + "This separation means the LLM decides **when** to call a tool, but the UserProxy controls **how** it's executed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with llm_config:\n", + " assistant = AssistantAgent(\n", + " name=\"FinanceAssistant\",\n", + " system_message=\"You are a financial assistant. Use the available tools to look up stock data and company information. Provide clear analysis based on the data.\",\n", + " )\n", + " user_proxy = UserProxyAgent(\n", + " name=\"user\",\n", + " human_input_mode=\"NEVER\",\n", + " max_consecutive_auto_reply=5,\n", + " )\n", + "\n", + "# Register tools for the LLM (tool schema) and for execution (function call)\n", + "assistant.register_for_llm(description=\"Look up current stock price\")(get_stock_price)\n", + "assistant.register_for_llm(description=\"Get company information\")(get_company_info)\n", + "user_proxy.register_for_execution()(get_stock_price)\n", + "user_proxy.register_for_execution()(get_company_info)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run the Conversation\n", + "\n", + "The assistant will use Bedrock (Claude) to decide which tools to call, and the user proxy will execute them. Watch the conversation to see the tool calls happening automatically." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "result = user_proxy.initiate_chat(\n", + " assistant,\n", + " message=\"Compare Amazon and Google — show me their stock prices and company info.\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Best Practices\n", + "\n", + "- **Use `Annotated` type hints** for parameter descriptions — Bedrock models use these for tool schemas\n", + "- **Keep tools simple and focused** — one function per responsibility\n", + "- **Use `max_consecutive_auto_reply`** to limit tool call loops and prevent runaway execution\n", + "- **Return structured data** (dict/JSON) from tools for consistent parsing\n", + "- **Add docstrings** to tool functions — the LLM uses these to understand when to call each tool\n", + "- **Error handling**: Return error messages as data (not exceptions) so the LLM can reason about failures\n", + "\n", + "## Next Steps\n", + "\n", + "- **Multi-agent**: See [ag2-multi-agent-bedrock.ipynb](ag2-multi-agent-bedrock.ipynb) for GroupChat orchestration with tools\n", + "- **Single agent**: See [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) for the basic setup\n", + "- **AG2 Tool Use Guide**: [docs.ag2.ai/docs/user-guide/basic-concepts/tools](https://docs.ag2.ai/docs/user-guide/basic-concepts/tools)\n", + "- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/)\n", + "\n", + "## Cleanup\n", + "\n", + "No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook." 
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/docs/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.md b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.md new file mode 100644 index 000000000..0eb584538 --- /dev/null +++ b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.md @@ -0,0 +1,155 @@ +--- +tags: + - Agents/ Function Calling + - Open Source/ AG2 +--- + +!!! tip inline end "[Open in github](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb){:target="_blank"}" + +

# AG2 Multi-Agent GroupChat with Amazon Bedrock

## Overview

[AG2](https://ag2.ai/) (formerly AutoGen) provides a powerful **GroupChat** feature for multi-agent orchestration. The `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context — no hardcoded routing graphs or handoff logic required.

In this notebook, we'll create a multi-agent research team using AG2's GroupChat with Amazon Bedrock as the LLM backend.

## Context

AG2's GroupChat is its flagship multi-agent feature. Unlike frameworks that require explicit handoff definitions or routing graphs, AG2's `GroupChatManager` uses the LLM itself to determine which agent should speak next based on:

- Each agent's name and system message
- The current conversation history
- The task at hand

This makes it easy to add or remove agents without rewriting orchestration code. Combined with native Bedrock support, you get enterprise-grade multi-agent systems with AWS IAM authentication, VPC endpoints, and CloudTrail logging — all without wrapper libraries.
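To make the selection step concrete, here is a plain-Python sketch of the kind of next-speaker prompt a manager can assemble from agent metadata and conversation history. This is an illustration only, not AG2's actual implementation; the function name and dict shapes are hypothetical.

```python
# Illustrative sketch only — not AG2's internal code. It mimics how an
# "auto" speaker-selection step could turn agent roles plus recent history
# into a prompt asking the LLM to name the next speaker.

def build_speaker_selection_prompt(agents: list[dict], history: list[dict]) -> str:
    """Format agent roles and recent messages into a next-speaker prompt."""
    roles = "\n".join(f"{a['name']}: {a['system_message']}" for a in agents)
    transcript = "\n".join(f"{m['name']}: {m['content']}" for m in history[-5:])
    return (
        "You are coordinating a conversation between these agents:\n"
        f"{roles}\n\nConversation so far:\n{transcript}\n\n"
        "Reply with only the name of the next agent to speak."
    )

agents = [
    {"name": "Researcher", "system_message": "You are a research analyst."},
    {"name": "Writer", "system_message": "You are a technical writer."},
]
history = [{"name": "user", "content": "Research GenAI adoption."}]
print(build_speaker_selection_prompt(agents, history))
```

Because the routing decision is driven by each agent's name and system message, well-differentiated system messages directly improve speaker selection.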

## Prerequisites

- An AWS account with Amazon Bedrock model access enabled
- Python 3.10+
- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)
- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region

## Setup

Install the AG2 package. The `[bedrock]` extra includes `boto3`, which the native Bedrock client requires.

```python
%pip install -q ag2[bedrock]
```

## Code

### Configure Bedrock

Set up the native Bedrock connection.

```python
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, LLMConfig

# Native Bedrock support — no OpenAI key needed
# ---- ⚠️ Update region for your AWS setup ⚠️ ----
llm_config = LLMConfig(
    api_type="bedrock",
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    aws_region="us-east-1",
)
```

### Create Specialist Agents

We'll create a team of three specialist agents, each with a distinct role:

- **Researcher**: Gathers information and provides factual analysis
- **Writer**: Creates clear, well-structured content from research findings
- **Critic**: Reviews content for accuracy and completeness, and terminates when satisfied

```python
with llm_config:
    researcher = AssistantAgent(
        name="Researcher",
        system_message=(
            "You are a research analyst. Search for information and provide "
            "factual analysis. Focus on key data points and trends. "
            "Cite sources when possible."
        ),
    )
    writer = AssistantAgent(
        name="Writer",
        system_message=(
            "You are a technical writer. Take research findings and create "
            "clear, well-structured summaries for business stakeholders. "
            "Use bullet points and concise language."
        ),
    )
    critic = AssistantAgent(
        name="Critic",
        system_message=(
            "You review content for accuracy, completeness, and clarity. "
            "Provide constructive feedback. When the output meets quality "
            "standards, say TERMINATE to end the conversation."
        ),
    )
    user_proxy = UserProxyAgent(
        name="user",
        human_input_mode="NEVER",
        max_consecutive_auto_reply=0,
    )
```

### Set Up GroupChat

The `GroupChat` collects agents into a group, and the `GroupChatManager` orchestrates the conversation. The `speaker_selection_method="auto"` setting lets the LLM decide which agent speaks next based on context, and the `max_round` parameter caps the total number of turns to prevent runaway conversations.

```python
groupchat = GroupChat(
    agents=[user_proxy, researcher, writer, critic],
    messages=[],
    max_round=8,
    speaker_selection_method="auto",
)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)
```

### Run the Multi-Agent Conversation

The user proxy sends the initial request to the GroupChatManager, which then automatically routes messages to the appropriate agents.

```python
result = user_proxy.initiate_chat(
    manager,
    message=(
        "Research the current state of generative AI adoption in enterprise. "
        "Write a brief executive summary with key trends and challenges."
    ),
)
```

### Understanding the Output

+ +The GroupChatManager automatically routed the conversation through the agents: +- The **Researcher** provided data points and analysis +- The **Writer** structured the findings into an executive summary +- The **Critic** reviewed the output and said TERMINATE when satisfied + +The key advantage of AG2's GroupChat: the `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context. You don't need to define explicit routing logic or handoff patterns. + +

## Best Practices

+ +- **Speaker selection**: Use `"auto"` for LLM-based routing, `"round_robin"` for a predictable sequential flow +- **Max rounds**: Set `max_round` to prevent runaway conversations — 6-10 is a good starting range +- **Termination**: Include `TERMINATE` in one agent's system message to end gracefully +- **Agent count**: 3-5 agents work well; more agents increase speaker-selection complexity +- **Distinct roles**: Give each agent a clear, non-overlapping system message for better routing +- **Native Bedrock advantages**: No OpenAI key required, AWS IAM auth, support for Claude/Llama/Mistral/Titan, and enterprise-grade security with VPC endpoints and CloudTrail logging + +
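The round-limit and termination practices above can also be expressed in code. A minimal sketch of a reusable termination predicate (AG2 agents accept such a callable via the `is_termination_msg` constructor parameter; the function name here is our own):

```python
def ends_with_terminate(message: dict) -> bool:
    """Return True when an agent signals completion with TERMINATE."""
    content = message.get("content") or ""
    return content.strip().endswith("TERMINATE")

# Hypothetical usage when constructing the group's user proxy:
# user_proxy = UserProxyAgent(name="user", is_termination_msg=ends_with_terminate, ...)
```

A predicate like this keeps the stop condition in one place instead of relying only on `max_round`.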

## Next Steps

+ +- **Single agent**: See [ag2-single-agent-bedrock.ipynb](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb) for the basic setup +- **Tool use**: See [ag2-tool-use-bedrock.ipynb](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb) for function calling with Bedrock +- **AG2 GroupChat Guide**: [docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat](https://docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat) +- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/) +- **AG2 Bedrock Guide**: [docs.ag2.ai/docs/user-guide/models/amazon-bedrock](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock) + +

## Cleanup

+ +No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook. diff --git a/docs/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.md b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.md new file mode 100644 index 000000000..3ab34b883 --- /dev/null +++ b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.md @@ -0,0 +1,104 @@ +--- +tags: + - Agents/ Function Calling + - Open Source/ AG2 +--- + +!!! tip inline end "[Open in github](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb){:target="_blank"}" + +

# AG2 Single Agent with Amazon Bedrock

+ +

## Overview

+ +[AG2](https://ag2.ai/) (formerly AutoGen) is an open-source multi-agent framework with **native Amazon Bedrock support** — no wrapper libraries or OpenAI API keys required. + +In this notebook, we'll create a simple conversational agent using AG2 with Amazon Bedrock as the LLM backend. + +

## Context

+ +AG2 is a community-driven fork of AutoGen with 400K+ monthly PyPI downloads. Its key differentiator for Bedrock users is `LLMConfig(api_type="bedrock")` — native integration without wrapper libraries like LangChain's ChatBedrock. + +This means you can use any Bedrock-supported model (Claude, Llama, Mistral, Titan, Command R+) with the standard AWS credential chain — IAM roles, environment variables, or `~/.aws/credentials`. + +
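As a sketch of that flexibility, switching model families is a one-line change to the config. The Llama model ID below is illustrative; confirm the model IDs enabled in your region and account before using it:

```python
from autogen import LLMConfig

# Same AWS credential chain, different Bedrock model family
# (model ID is illustrative; check the Bedrock console for availability)
llama_config = LLMConfig(
    api_type="bedrock",
    model="meta.llama3-70b-instruct-v1:0",
    aws_region="us-east-1",
)
```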

## Prerequisites

+ +- An AWS account with Amazon Bedrock model access enabled +- Python 3.10+ +- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`) +- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region + +

## Setup

+ +Install the AG2 package. The `[bedrock]` extra pulls in `boto3` and the other dependencies required for the native Bedrock integration. + +```python +%pip install -q ag2[bedrock] +``` + +

## Code

+ +

### Configure AG2 with Amazon Bedrock

+ +AG2 supports Bedrock natively via `LLMConfig(api_type="bedrock")`. This uses the default AWS credential chain — IAM roles, environment variables, or `~/.aws/credentials` — so no API keys need to be hardcoded. + +```python +from autogen import AssistantAgent, UserProxyAgent, LLMConfig + +# Native Bedrock support — no OpenAI key needed +# ---- ⚠️ Update region for your AWS setup ⚠️ ---- +llm_config = LLMConfig( + api_type="bedrock", + model="anthropic.claude-3-sonnet-20240229-v1:0", + aws_region="us-east-1", +) +``` + +

### Create Agents

+ +AG2 uses a two-agent pattern: an `AssistantAgent` (LLM-powered reasoning) and a `UserProxyAgent` (executes tools, provides human input). + +The `with llm_config:` context manager applies the Bedrock configuration to all agents created within it. + +```python +with llm_config: + assistant = AssistantAgent( + name="assistant", + system_message="You are a helpful AI assistant. Answer questions concisely.", + ) + user_proxy = UserProxyAgent( + name="user", + human_input_mode="NEVER", + max_consecutive_auto_reply=1, + code_execution_config=False,  # no code execution needed for this demo + ) +``` + +

### Run the Conversation

+ +The `initiate_chat` method sends a message from the `UserProxyAgent` to the `AssistantAgent` and starts the conversation loop. The assistant will use Amazon Bedrock (Claude) to generate its response. + +```python +result = user_proxy.initiate_chat( + assistant, + message="What are the main benefits of using Amazon Bedrock for enterprise AI?", +) +``` + +

## Best Practices

+ +- **Use IAM roles** over hardcoded credentials for production deployments +- **Set `max_consecutive_auto_reply`** to prevent infinite conversation loops +- **Use `human_input_mode="NEVER"`** for automated pipelines and `"TERMINATE"` for interactive use +- **Region selection**: Choose the AWS region closest to your workload for lower latency +- **Model selection**: AG2's Bedrock integration covers the major model families — use Claude for complex reasoning, Llama for open-source flexibility, Mistral for multilingual tasks + +
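The auto-reply and input-mode practices above translate into a small change to the `UserProxyAgent` construction for interactive sessions. A sketch (the parameter values are illustrative, not from the notebook):

```python
from autogen import UserProxyAgent

# Interactive variant: prompt a human before the chat ends,
# with an auto-reply cap as a safety net against loops
user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=3,
    code_execution_config=False,  # this sketch runs no generated code
)
```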

## Next Steps

+ +- **Tool use**: See [ag2-tool-use-bedrock.ipynb](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb) for function calling with Bedrock +- **Multi-agent**: See [ag2-multi-agent-bedrock.ipynb](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb) for GroupChat orchestration +- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/) +- **AG2 GitHub**: [github.com/ag2ai/ag2](https://github.com/ag2ai/ag2) +- **AG2 Bedrock Guide**: [docs.ag2.ai/docs/user-guide/models/amazon-bedrock](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock) + +

## Cleanup

+ +No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook. diff --git a/docs/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.md b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.md new file mode 100644 index 000000000..5b7d4b247 --- /dev/null +++ b/docs/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.md @@ -0,0 +1,143 @@ +--- +tags: + - Agents/ Function Calling + - Open Source/ AG2 +--- + +!!! tip inline end "[Open in github](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-tool-use-bedrock.ipynb){:target="_blank"}" + +

# AG2 Tool Use with Amazon Bedrock

+ +

## Overview

+ +[AG2](https://ag2.ai/) (formerly AutoGen) supports tool/function calling with Amazon Bedrock models natively. This notebook demonstrates AG2's dual registration pattern — `register_for_llm` (the LLM decides **when** to call a tool) and `register_for_execution` (the UserProxy **executes** it). + +

## Context

+ +AG2's tool calling approach separates **tool selection** from **tool execution**: +- The `AssistantAgent` (backed by a Bedrock model) decides which tool to call and with what arguments +- The `UserProxyAgent` actually executes the function and returns the result + +This separation gives full control over tool execution — you can add sandboxing, logging, approval flows, or rate limiting at the execution layer without modifying the LLM configuration. + +
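For instance, logging can be layered in by wrapping a tool before it is registered for execution, with no change to the LLM-facing schema. A minimal sketch under that assumption (the wrapper and tool below are illustrative, not part of the notebook):

```python
import functools

def with_call_log(func, log):
    """Wrap a tool so every invocation is recorded before it runs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.append({"tool": func.__name__, "args": args, "kwargs": kwargs})
        return func(*args, **kwargs)
    return wrapper

# Illustrative tool and audit trail
calls: list = []

def get_stock_price(symbol: str) -> dict:
    return {"symbol": symbol, "price": 186.45}

logged_get_stock_price = with_call_log(get_stock_price, calls)
# The UserProxyAgent would then register the wrapped function, e.g.:
# user_proxy.register_for_execution(name="get_stock_price")(logged_get_stock_price)
```

The same pattern extends to approval flows or rate limiting: the wrapper runs on the execution side, so the model's view of the tool is unchanged.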

## Prerequisites

+ +- An AWS account with Amazon Bedrock model access enabled +- Python 3.10+ +- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`) +- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region + +

## Setup

+ +Install the AG2 package with the `[bedrock]` extra, which pulls in `boto3` for the native Bedrock integration. + +```python +%pip install -q ag2[bedrock] +``` + +

## Code

+ +

### Configure Bedrock

+ +Set up the native Bedrock connection using AG2's `LLMConfig`. + +```python +from typing import Annotated +from autogen import AssistantAgent, UserProxyAgent, LLMConfig + +# Native Bedrock support — no OpenAI key needed +# ---- ⚠️ Update region for your AWS setup ⚠️ ---- +llm_config = LLMConfig( + api_type="bedrock", + model="anthropic.claude-3-sonnet-20240229-v1:0", + aws_region="us-east-1", +) +``` + +

### Define Tools

+ +Define Python functions that will be available to the agent. Use `Annotated` type hints to provide parameter descriptions — Bedrock models use these to understand the tool schema. + +```python +def get_stock_price( + symbol: Annotated[str, "Stock ticker symbol (e.g., AMZN, GOOGL)"], +) -> dict: + """Look up the current stock price for a given ticker symbol.""" + # Simulated data for demo purposes + prices = { + "AMZN": {"price": 186.45, "change": "+2.3%"}, + "GOOGL": {"price": 175.20, "change": "-0.8%"}, + "MSFT": {"price": 420.15, "change": "+1.1%"}, + } + return prices.get(symbol.upper(), {"error": f"Unknown symbol: {symbol}"}) + + +def get_company_info( + company: Annotated[str, "Company name to look up"], +) -> str: + """Get brief information about a company.""" + info = { + "amazon": "Amazon.com, Inc. — e-commerce, cloud computing (AWS), AI, streaming.", + "google": "Alphabet Inc. — search, advertising, cloud, AI research.", + "microsoft": "Microsoft Corp. — software, cloud (Azure), gaming, AI.", + } + return info.get(company.lower(), f"No info available for {company}") +``` + +

### Create Agents and Register Tools

+ +AG2 uses a **dual registration** pattern: +1. `register_for_llm` — tells the AssistantAgent (LLM) that this tool exists and how to call it +2. `register_for_execution` — tells the UserProxyAgent to execute the function when the LLM requests it + +This separation means the LLM decides **when** to call a tool, but the UserProxy controls **how** it's executed. + +```python +with llm_config: + assistant = AssistantAgent( + name="FinanceAssistant", + system_message="You are a financial assistant. Use the available tools to look up stock data and company information. Provide clear analysis based on the data.", + ) + user_proxy = UserProxyAgent( + name="user", + human_input_mode="NEVER", + max_consecutive_auto_reply=5, + code_execution_config=False,  # registered tools still run; only code-block execution is disabled + ) + +# Register tools for the LLM (tool schema) and for execution (function call) +assistant.register_for_llm(description="Look up current stock price")(get_stock_price) +assistant.register_for_llm(description="Get company information")(get_company_info) +user_proxy.register_for_execution()(get_stock_price) +user_proxy.register_for_execution()(get_company_info) +``` + +

### Run the Conversation

+ +The assistant will use Bedrock (Claude) to decide which tools to call, and the user proxy will execute them. + +```python +result = user_proxy.initiate_chat( + assistant, + message="Compare Amazon and Google — show me their stock prices and company info.", +) +``` + +

## Best Practices

+ +- **Use `Annotated` type hints** for parameter descriptions — Bedrock models use these for tool schemas +- **Keep tools simple and focused** — one function per responsibility +- **Use `max_consecutive_auto_reply`** to limit tool call loops and prevent runaway execution +- **Return structured data** (dict/JSON) from tools for consistent parsing +- **Add docstrings** to tool functions — the LLM uses these to understand when to call each tool +- **Error handling**: Return error messages as data (not exceptions) so the LLM can reason about failures + +
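The error-handling bullet above can be sketched concretely: catch the failure inside the tool and return it as data. The function below is a hypothetical example, not one of the notebook's tools:

```python
def safe_price_lookup(symbol: str) -> dict:
    """Return either a result or a structured error the LLM can reason about."""
    prices = {"AMZN": 186.45, "MSFT": 420.15}
    try:
        return {"ok": True, "symbol": symbol.upper(), "price": prices[symbol.upper()]}
    except KeyError:
        # Surface the failure as data rather than raising, so the agent can
        # explain the problem or ask the user for a valid symbol
        return {"ok": False, "error": f"Unknown symbol: {symbol}"}
```

An uncaught exception would end the tool call abruptly; a structured error keeps the conversation going.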

## Next Steps

+ +- **Multi-agent**: See [ag2-multi-agent-bedrock.ipynb](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-multi-agent-bedrock.ipynb) for GroupChat orchestration with tools +- **Single agent**: See [ag2-single-agent-bedrock.ipynb](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/agents-and-function-calling/open-source-agents/ag2/ag2-single-agent-bedrock.ipynb) for the basic setup +- **AG2 Tool Use Guide**: [docs.ag2.ai/docs/user-guide/basic-concepts/tools](https://docs.ag2.ai/docs/user-guide/basic-concepts/tools) +- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/) + +

## Cleanup

+ +No resources to clean up — this notebook uses only local compute and Bedrock API calls. To stop incurring Bedrock charges, simply stop running the notebook.