19 changes: 19 additions & 0 deletions agents-and-function-calling/open-source-agents/README.md
Original file line number Diff line number Diff line change
@@ -2,6 +2,25 @@



## Contents

### [AG2](ag2/)
Multi-agent framework with native Amazon Bedrock support.
Examples: single agent, tool use, multi-agent GroupChat — all using
`LLMConfig(api_type="bedrock")` with no wrapper libraries.

### [CrewAI](crew.ai/)
CrewAI agent examples with Amazon Bedrock.

### [LangChain](LangChain/)
LangChain agent examples with Amazon Bedrock.

### [LangGraph](langgraph/)
LangGraph agent examples with Amazon Bedrock.

### [LlamaIndex](llamaindex/)
LlamaIndex agent examples with Amazon Bedrock.

## Contributing

We welcome community contributions! Please ensure your sample aligns with AWS [best practices](https://aws.amazon.com/architecture/well-architected/), and please update the **Contents** section of this README file with a link to your sample, along with a description.
43 changes: 43 additions & 0 deletions agents-and-function-calling/open-source-agents/ag2/README.md
@@ -0,0 +1,43 @@
# AG2 with Amazon Bedrock

[AG2](https://ag2.ai/) (formerly AutoGen) is an open-source multi-agent framework with
**native Amazon Bedrock support**. Unlike other frameworks that require wrapper libraries,
AG2 connects to Bedrock directly via `LLMConfig(api_type="bedrock")`.

## Examples

| Notebook | Description |
|----------|-------------|
| [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) | Basic single agent with Bedrock |
| [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) | Function calling with `register_for_llm` |
| [ag2-multi-agent-bedrock.ipynb](ag2-multi-agent-bedrock.ipynb) | Multi-agent GroupChat orchestration |

## Why AG2 + Bedrock?

- **Native support**: `LLMConfig(api_type="bedrock")` — no LangChain or wrapper needed
- **AWS credential chain**: IAM roles, environment variables, or `~/.aws/credentials`
- **All Bedrock models**: Claude, Llama, Mistral, Titan, Command R+
- **Multi-agent**: GroupChat with automatic speaker selection
- **500K+ monthly PyPI downloads**: Active community with frequent releases

## Quick Start

```bash
pip install "ag2[bedrock]"
```

```python
from autogen import AssistantAgent, UserProxyAgent, LLMConfig

llm_config = LLMConfig(
api_type="bedrock",
model="anthropic.claude-3-sonnet-20240229-v1:0",
aws_region="us-east-1",
)
```
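Building on the config above, a minimal end-to-end sketch mirroring the pattern used in the notebooks (this assumes AWS credentials with Bedrock model access are already configured, so it will only run in a credentialed environment):

```python
from autogen import AssistantAgent, UserProxyAgent, LLMConfig

# Native Bedrock connection — update the region for your AWS setup
llm_config = LLMConfig(
    api_type="bedrock",
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    aws_region="us-east-1",
)

# Agents created inside the config context use the Bedrock backend
with llm_config:
    assistant = AssistantAgent(
        name="assistant",
        system_message="You are a helpful assistant.",
    )

# A user proxy that relays one message and does not auto-reply
user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
)

result = user_proxy.initiate_chat(
    assistant,
    message="Summarize what Amazon Bedrock is in one sentence.",
)
```

The examples in the notebooks below extend this same pattern to tool registration and multi-agent GroupChat.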

## Resources

- [AG2 Documentation](https://docs.ag2.ai/)
- [AG2 GitHub](https://github.com/ag2ai/ag2)
- [AG2 Bedrock Guide](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock)
@@ -0,0 +1,243 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AG2 Multi-Agent GroupChat with Amazon Bedrock\n",
"\n",
"## Overview\n",
"\n",
"[AG2](https://ag2.ai/) (formerly AutoGen) provides a powerful **GroupChat** feature for multi-agent orchestration. The `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context — no hardcoded routing graphs or handoff logic required.\n",
"\n",
"In this notebook, we'll create a multi-agent research team using AG2's GroupChat with Amazon Bedrock as the LLM backend.\n",
"\n",
"## Context\n",
"\n",
"AG2's GroupChat is its flagship multi-agent feature. Unlike frameworks that require explicit handoff definitions or routing graphs, AG2's `GroupChatManager` uses the LLM itself to determine which agent should speak next based on:\n",
"- Each agent's name and system message\n",
"- The current conversation history\n",
"- The task at hand\n",
"\n",
"This makes it easy to add or remove agents without rewriting orchestration code. Combined with native Bedrock support, you get enterprise-grade multi-agent systems with AWS IAM authentication, VPC endpoints, and CloudTrail logging — all without wrapper libraries.\n",
"\n",
"## Prerequisites\n",
"\n",
"- An AWS account with Amazon Bedrock model access enabled\n",
"- Python 3.10+\n",
"- AWS credentials configured (IAM role, environment variables, or `~/.aws/credentials`)\n",
"- Model access granted for `anthropic.claude-3-sonnet-20240229-v1:0` in your AWS region"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Install the AG2 package."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -q ag2[bedrock]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Code\n",
"\n",
"### Configure Bedrock\n",
"\n",
"Set up the native Bedrock connection."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, LLMConfig\n",
"\n",
"# Native Bedrock support — no OpenAI key needed\n",
"# ---- ⚠️ Update region for your AWS setup ⚠️ ----\n",
"llm_config = LLMConfig(\n",
" api_type=\"bedrock\",\n",
" model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n",
" aws_region=\"us-east-1\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Specialist Agents\n",
"\n",
"We'll create a team of three specialist agents, each with a distinct role:\n",
"- **Researcher**: Gathers information and provides factual analysis\n",
"- **Writer**: Creates clear, well-structured content from research findings\n",
"- **Critic**: Reviews content for accuracy and completeness, and terminates when satisfied"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with llm_config:\n",
" researcher = AssistantAgent(\n",
" name=\"Researcher\",\n",
" system_message=(\n",
" \"You are a research analyst. Search for information and provide \"\n",
" \"factual analysis. Focus on key data points and trends. \"\n",
" \"Cite sources when possible.\"\n",
" ),\n",
" )\n",
" writer = AssistantAgent(\n",
" name=\"Writer\",\n",
" system_message=(\n",
" \"You are a technical writer. Take research findings and create \"\n",
" \"clear, well-structured summaries for business stakeholders. \"\n",
" \"Use bullet points and concise language.\"\n",
" ),\n",
" )\n",
" critic = AssistantAgent(\n",
" name=\"Critic\",\n",
" system_message=(\n",
" \"You review content for accuracy, completeness, and clarity. \"\n",
" \"Provide constructive feedback. When the output meets quality \"\n",
" \"standards, say TERMINATE to end the conversation.\"\n",
" ),\n",
" )\n",
" user_proxy = UserProxyAgent(\n",
" name=\"user\",\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=0,\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set Up GroupChat\n",
"\n",
"The `GroupChat` collects agents into a group, and the `GroupChatManager` orchestrates the conversation. The `speaker_selection_method=\"auto\"` setting lets the LLM decide which agent speaks next based on context.\n",
"\n",
"The `max_round` parameter limits the total number of turns to prevent runaway conversations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"groupchat = GroupChat(\n",
" agents=[user_proxy, researcher, writer, critic],\n",
" messages=[],\n",
" max_round=8,\n",
" speaker_selection_method=\"auto\",\n",
")\n",
"manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the Multi-Agent Conversation\n",
"\n",
"The user proxy sends the initial request to the GroupChatManager, which then automatically routes messages to the appropriate agents. Watch the conversation to see how:\n",
"1. The **Researcher** gathers information\n",
"2. The **Writer** creates a structured summary\n",
"3. The **Critic** reviews and provides feedback or approves"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"result = user_proxy.initiate_chat(\n",
" manager,\n",
" message=(\n",
" \"Research the current state of generative AI adoption in enterprise. \"\n",
" \"Write a brief executive summary with key trends and challenges.\"\n",
" ),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Understanding the Output\n",
"\n",
"The GroupChatManager automatically routed the conversation through the agents:\n",
"- The **Researcher** provided data points and analysis\n",
"- The **Writer** structured the findings into an executive summary\n",
"- The **Critic** reviewed the output and said TERMINATE when satisfied\n",
"\n",
"The key advantage of AG2's GroupChat: the `GroupChatManager` uses the LLM to automatically select the next speaker based on conversation context. You don't need to define explicit routing logic or handoff patterns."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Best Practices\n",
"\n",
"- **Speaker selection**: Use `\"auto\"` for LLM-based routing, `\"round_robin\"` for predictable sequential flow\n",
"- **Max rounds**: Set `max_round` to prevent runaway conversations — 6-10 is a good starting range\n",
"- **Termination**: Include `TERMINATE` in one agent's system message to end gracefully\n",
"- **Agent count**: 3-5 agents work well; more agents increase speaker selection complexity\n",
"- **Distinct roles**: Give each agent a clear, non-overlapping system message for better routing\n",
"- **Native Bedrock advantages**: No OpenAI key, AWS IAM auth, supports Claude/Llama/Mistral/Titan, enterprise-grade security with VPC endpoints and CloudTrail logging\n",
"\n",
"## Next Steps\n",
"\n",
"- **Single agent**: See [ag2-single-agent-bedrock.ipynb](ag2-single-agent-bedrock.ipynb) for the basic setup\n",
"- **Tool use**: See [ag2-tool-use-bedrock.ipynb](ag2-tool-use-bedrock.ipynb) for function calling with Bedrock\n",
"- **AG2 GroupChat Guide**: [docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat](https://docs.ag2.ai/docs/user-guide/basic-concepts/orchestration/group-chat)\n",
"- **AG2 Documentation**: [docs.ag2.ai](https://docs.ag2.ai/)\n",
"- **AG2 Bedrock Guide**: [docs.ag2.ai/docs/user-guide/models/amazon-bedrock](https://docs.ag2.ai/docs/user-guide/models/amazon-bedrock)\n",
"\n",
"## Cleanup\n",
"\n",
"No resources to clean up — this notebook uses only local compute and on-demand Bedrock API calls, which are billed per invocation. Charges stop as soon as you stop invoking the model."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}