
Production Agent Patterns

The same agent implemented across six major frameworks, each with production deployment configs, for direct like-for-like comparison.

License: MIT · PRs Welcome

🎯 What This Repo Provides

  1. Reference Implementations: Same agent built across 6 frameworks for direct comparison
  2. Production Configs: Real deployment code, not localhost demos
  3. MCP Servers: Reusable tool integration patterns
  4. Cost Analysis: Actual production cost data
  5. Evaluation Suites: Standard testing patterns

📊 The 7 Pillars Framework

This repository implements the 7 Pillars of Production Agent Systems:

| Pillar | What It Solves |
| --- | --- |
| Orchestration | Multi-agent coordination |
| Memory | State persistence across sessions |
| Guardrails | Safety and validation |
| Observability | Tracing and debugging |
| Security | Access control and audit |
| Cost Management | Token and resource budgets |
| Lifecycle (AgentOps) | CI/CD for agents |

πŸ—οΈ Repository Structure

production-agent-patterns/
├── docs/                           # Framework documentation
│   ├── 7-pillars.md                # The core framework
│   ├── provider-comparison.md      # Detailed matrix
│   └── decision-guide.md           # How to choose
│
├── agents/                         # Reference agent implementations
│   └── research-assistant/         # Primary reference agent
│       ├── openai-sdk/             # OpenAI Agents SDK
│       ├── claude-sdk/             # Claude Agent SDK
│       ├── langgraph/              # LangGraph
│       ├── aws-strands/            # AWS Strands
│       ├── google-adk/             # Google ADK
│       └── oracle-adk/             # Oracle ADK
│
├── mcp-servers/                    # Model Context Protocol servers
│   ├── template/                   # Starter template
│   └── database-connector/         # PostgreSQL/MySQL example
│
├── deployment/                     # Infrastructure as Code
│   ├── aws/                        # Terraform + CDK
│   ├── gcp/                        # Terraform
│   ├── azure/                      # Bicep + Terraform
│   ├── oracle/                     # OCI Terraform
│   └── docker/                     # Local development
│
├── monitoring/                     # Observability setup
│   ├── langfuse/                   # Open-source tracing
│   └── dashboards/                 # Grafana dashboards
│
└── evaluation/                     # Testing and benchmarks
    ├── test-suites/                # Standard evaluation patterns
    └── benchmarks/                 # Performance baselines

🚀 Quick Start

1. Clone and Setup

git clone https://github.com/frankxai/production-agent-patterns.git
cd production-agent-patterns

# Choose your framework
cd agents/research-assistant/openai-sdk  # or claude-sdk, etc.

# Install dependencies
pip install -r requirements.txt

# Set environment variables
cp .env.example .env
# Edit .env with your API keys

2. Run the Reference Agent

# OpenAI version
python main.py "Research the latest developments in quantum computing"

# Claude version
cd ../claude-sdk
python main.py "Research the latest developments in quantum computing"

3. Compare Implementations

All implementations produce the same output format, making direct comparison possible.
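To make that comparison concrete, the shared output contract can be sketched as a small Python type. This is a hypothetical illustration following the spec's `output_format` fields (`summary`, `key_findings`, `sources`), not code taken from the repo:

```python
# Hypothetical sketch of the shared output contract; field names mirror the
# framework-agnostic spec's output_format (summary, key_findings, sources).
from dataclasses import dataclass


@dataclass
class ResearchReport:
    summary: str             # 2-3 paragraph synthesis
    key_findings: list[str]  # bullet points
    sources: list[str]       # cited URLs

    def validate(self) -> bool:
        """Minimal structural check: a report needs a summary and at least one source."""
        return bool(self.summary) and bool(self.sources)


report = ResearchReport(
    summary="Quantum error correction progressed notably this year...",
    key_findings=["Logical qubit counts doubled", "New surface-code variants"],
    sources=["https://example.com/qec-review"],
)
print(report.validate())  # True
```

Because every implementation emits the same shape, a single validator like this can score all six frameworks.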

🔧 The Research Assistant Agent

Our reference agent is a Research Assistant that:

  • Searches the web for information
  • Reads and summarizes documents
  • Synthesizes findings into reports
  • Cites sources

This agent is complex enough to demonstrate all 7 pillars while simple enough to understand.

Agent Specification (Framework-Agnostic)

agent:
  name: ResearchAssistant
  description: Researches topics and produces synthesized reports

  tools:
    - web_search: Search the web for information
    - fetch_url: Retrieve and parse web pages
    - summarize: Condense long documents

  memory:
    short_term: conversation context
    long_term: research history, user preferences

  guardrails:
    input: content_policy, pii_detection
    output: citation_required, format_validation

  output_format:
    - summary: 2-3 paragraph synthesis
    - key_findings: bullet points
    - sources: cited URLs

📦 MCP Server Templates

Build once, use with any agent framework.

Database Connector

# mcp-servers/database-connector/server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("database-connector")

@mcp.tool()
async def query_database(sql: str) -> dict:
    """Execute a read-only SQL query."""
    # execute_safe_query (implemented elsewhere in this server)
    # rejects anything other than a single SELECT statement.
    return execute_safe_query(sql)
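The server above delegates to `execute_safe_query`, whose implementation lives elsewhere in the repo. As a hedged illustration of what such a read-only guard might look like, here is a self-contained sketch using `sqlite3` (the repo's example targets PostgreSQL/MySQL; sqlite is used here only so the snippet runs standalone):

```python
# Hypothetical implementation of a read-only query guard; the real
# database-connector presumably uses PostgreSQL/MySQL, not sqlite3.
import sqlite3

FORBIDDEN = ("insert", "update", "delete", "drop", "alter", "create", "grant")


def execute_safe_query(sql: str, db_path: str = ":memory:") -> dict:
    """Run a query only if it is a single read-only SELECT statement."""
    stmt = sql.strip().rstrip(";")
    lowered = stmt.lower()
    if not lowered.startswith("select") or ";" in stmt:
        raise ValueError("Only single SELECT statements are allowed")
    if any(word in lowered.split() for word in FORBIDDEN):
        raise ValueError("Query contains a forbidden keyword")
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(stmt)
        columns = [c[0] for c in cur.description]
        return {"columns": columns, "rows": cur.fetchall()}


print(execute_safe_query("SELECT 1 AS answer"))
# {'columns': ['answer'], 'rows': [(1,)]}
```

Keyword denylists are a coarse filter; a production guard would also use a read-only database role so the database itself enforces the policy.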

🌐 Deployment

AWS (Bedrock + AgentCore)

cd deployment/aws
terraform init
terraform apply -var="agent_name=research-assistant"

Google Cloud (Vertex AI)

cd deployment/gcp
terraform init
terraform apply

Azure (AI Foundry)

cd deployment/azure
az deployment group create \
  --resource-group myResourceGroup \
  --template-file main.bicep

Oracle (OCI + ADK)

cd deployment/oracle
terraform init
terraform apply -var="compartment_id=ocid1.compartment..."

Local Development

cd deployment/docker
docker-compose up

📈 Monitoring

Langfuse Setup

cd monitoring/langfuse
docker-compose up -d

# Open http://localhost:3000
# Default credentials: admin / admin
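Langfuse records each agent run as a trace with one span per tool call (its Python SDK provides decorator-based instrumentation for this). The underlying idea can be shown with a stdlib-only sketch; this is a generic illustration of span collection, not the Langfuse API:

```python
# Generic tracing sketch (not the Langfuse API): wrap each tool call and
# record its name, latency, and output size into an in-memory trace.
import functools
import time

TRACE: list[dict] = []  # collected spans, newest last


def traced(fn):
    """Decorator that records one span per call of fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "span": fn.__name__,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
            "output_chars": len(str(result)),
        })
        return result
    return wrapper


@traced
def web_search(query: str) -> str:
    return f"results for {query}"


web_search("quantum computing")
print(TRACE[0]["span"])  # web_search
```

In a real setup the spans would be shipped to the Langfuse backend started by `docker-compose` above instead of a local list.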

Import Dashboards

cd monitoring/dashboards
./import-to-grafana.sh

🧪 Evaluation

Run Test Suite

cd evaluation
pytest test-suites/ -v --benchmark

Test Categories

| Suite | Tests |
| --- | --- |
| Functional | Agent produces correct outputs |
| Safety | Guardrails block bad inputs/outputs |
| Performance | Latency and throughput |
| Cost | Token usage tracking |
| Adversarial | Red team attack resistance |
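Tests in the functional and safety suites might look like the following sketch. All names here (`run_agent`, the test functions) are hypothetical stand-ins, not the repo's actual suite:

```python
# Hypothetical examples in the style of evaluation/test-suites/.
# run_agent is a stub standing in for invoking any of the six implementations.
def run_agent(prompt: str) -> dict:
    # A real suite would call the framework under test here.
    return {
        "summary": "Stubbed synthesis.",
        "key_findings": ["finding one"],
        "sources": ["https://example.com"],
    }


def test_functional_output_shape():
    """Functional: the agent emits exactly the agreed output fields."""
    out = run_agent("Research quantum computing")
    assert set(out) == {"summary", "key_findings", "sources"}


def test_safety_citation_required():
    """Safety: the citation_required guardrail means sources must be non-empty."""
    out = run_agent("Research quantum computing")
    assert out["sources"], "every report must cite at least one source"


# pytest collects test_* functions automatically; calling them directly
# works as a quick smoke test.
test_functional_output_shape()
test_safety_citation_required()
```

Because every implementation shares one output format, the same suite runs unchanged against all of them.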

💰 Cost Comparison

Based on 1,000 research queries (as of January 2026):

| Framework | Avg Cost/Query | Latency (p50) | Notes |
| --- | --- | --- | --- |
| OpenAI SDK | $0.08 | 2.3s | GPT-4o |
| Claude SDK | $0.07 | 2.1s | Claude Sonnet 4 |
| AWS Bedrock | $0.06 | 2.5s | Claude via Bedrock |
| Google ADK | $0.07 | 2.4s | Gemini 2.0 |
| Oracle ADK | $0.05 | 2.6s | Cohere Command A |

Costs include all API calls, tool usage, and retry logic.
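For intuition on how per-query figures of this magnitude arise, here is the back-of-envelope arithmetic. The token counts and per-million-token prices below are illustrative assumptions, not measurements behind the table above:

```python
# Illustrative arithmetic only: token counts and prices are assumptions,
# not data from this repo's cost table.
def cost_per_query(input_tokens: int, output_tokens: int,
                   in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one query given per-million-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000


# e.g. a research query with ~20k input tokens (prompt + tool results) and
# ~2k output tokens, at assumed prices of $2.50/M input and $10/M output:
print(round(cost_per_query(20_000, 2_000, 2.50, 10.0), 4))  # 0.07
```

Input tokens dominate here, which is why retrieval trimming and summarizing tool results before re-prompting are common cost levers.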

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Priority Areas

  1. Additional agent implementations (AutoGen, CrewAI)
  2. More MCP server examples
  3. Cost optimization patterns
  4. Evaluation benchmark improvements

📚 Related Resources

📄 License

MIT License - see LICENSE for details.


Built with ❤️ by Frank | AI Architect

Helping you ship production agents, not just demos.
