smuttumu/AgenticAI_foundry


AgenticAI Foundry 🤖


MIT Professional Education: Applied Generative AI for Digital Transformation
Interactive demos for understanding AI economics, multi-agent systems, and agent integration


🎯 What's Included

| Demo | Module | Description | API Key? |
|------|--------|-------------|----------|
| 💰 LLM Cost Explorer | Module 1 | Calculate and compare LLM API costs across providers | No |
| 🤖 Multi-Agent Demo | Module 2 | Watch three AI agents collaborate (CrewAI) | Optional |
| 🔗 LangChain Agent Demo | Module 2 | Single agent with web search tool (LangChain) | Optional |
| 🔌 MCP Explorer | Module 3 | Understand the Model Context Protocol — how AI agents connect to tools | No |

More demos will be added as the course progresses.


🚀 Quick Start

Option 1: Docker (Recommended)

# Clone the repository
git clone https://github.com/dlwhyte/AgenticAI_foundry.git
cd AgenticAI_foundry

# Build and run
docker build -t agenticai-foundry .
docker run -p 8501:8501 agenticai-foundry

Open http://localhost:8501

Option 2: Python

# Clone and install
git clone https://github.com/dlwhyte/AgenticAI_foundry.git
cd AgenticAI_foundry
pip install -r requirements.txt

# Run
streamlit run Home.py

✨ Demo Details

💰 LLM Cost Explorer (Module 1)

The same AI transaction can cost between $1 and $230 — a 230x variance!

  • Real-time Token Counter — Uses OpenAI's tiktoken
  • Multi-Model Comparison — 10+ models from OpenAI, Anthropic, Google
  • Scale Analysis — See costs from 1K to 1M API calls
  • Export Results — CSV, JSON for assignments

Assignment: Use this to analyze model pricing at scale for your write-up.
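The scale math the explorer performs can be sketched in a few lines. The per-million-token prices below are placeholder assumptions for illustration only, not current provider pricing:

```python
# Illustrative token-cost math; the prices below are hypothetical
# placeholders, not real provider rates.
PRICE_PER_M_TOKENS = {  # (input $, output $) per 1M tokens
    "small-model": (0.15, 0.60),
    "large-model": (15.00, 75.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one call: tokens divided by 1M, times the per-million price."""
    in_price, out_price = PRICE_PER_M_TOKENS[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# The same 2,000-in / 500-out transaction, scaled to 1M calls per month:
for model in PRICE_PER_M_TOKENS:
    print(f"{model}: ${cost_usd(model, 2000, 500) * 1_000_000:,.0f}/month")
```

Even with made-up prices the pattern holds: per-call costs look negligible until multiplied by production call volumes, which is exactly what the Scale Analysis view shows.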

🤖 Multi-Agent Demo (Module 2)

Watch three agents collaborate: Researcher → Writer → Editor

  • Three Collaborating Agents — Sequential task handoff via CrewAI
  • Dual Provider Support — Ollama (free, local) or OpenAI (paid, cloud)
  • Live Agent Activity — Watch agents hand off work in real-time
  • CLI Support — Run from command line or Streamlit

Assignment: Observe agent specialization, telemetry, and collaboration patterns.
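The sequential handoff is easy to see with the LLM calls stubbed out. This is an illustrative sketch, not the actual CrewAI code (that lives in crews/research_crew.py); each stub stands in for one agent's LLM-backed step:

```python
# Sequential handoff: each agent consumes the previous agent's output.
# The string transforms below are stand-ins for LLM calls.

def researcher(topic: str) -> str:
    return f"Notes on {topic}: point A, point B"    # gather information

def writer(notes: str) -> str:
    return f"Draft based on [{notes}]"              # turn notes into prose

def editor(draft: str) -> str:
    return draft.replace("Draft", "Final article")  # polish the draft

def run_pipeline(topic: str) -> str:
    result = topic
    for agent in (researcher, writer, editor):      # Researcher -> Writer -> Editor
        result = agent(result)
    return result

print(run_pipeline("AI in healthcare"))
```

The point of the pattern: each agent only needs to understand its own job plus the artifact handed to it, which is why specialization via role prompts works.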

🔗 LangChain Agent Demo (Module 2)

Single agent with tools: Think → Search → Answer

  • Single Agent + Tools — Contrast with CrewAI's multi-agent approach
  • Real-Time Web Search — Get current crypto prices via DuckDuckGo
  • ReAct Pattern — Watch the agent think, act, and observe
  • Same Provider Options — Works with Ollama or OpenAI

Assignment: Compare single-agent vs multi-agent patterns.
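The ReAct loop itself is simple enough to sketch without any framework. Here both the search tool and the "reasoning" are stubbed; in the demo, the LLM does the reasoning and DuckDuckGo is the tool:

```python
# Minimal ReAct-style loop: Reason -> Act -> Observe, repeated until
# the agent decides it has enough information to answer. A real agent
# delegates the reasoning step to an LLM.

def fake_search(query: str) -> str:
    return f"Search result for '{query}' (canned, illustrative)"

def react_agent(question: str, max_steps: int = 3) -> str:
    observations = []
    for _ in range(max_steps):
        if observations:                             # Reason: enough info?
            return f"Answer: {observations[-1]}"     # yes -> answer
        observations.append(fake_search(question))   # Act + Observe
    return "Answer: could not find out"

print(react_agent("What is the current BTC price?"))
```

Contrast this with the multi-agent pipeline above: one agent loops over tool calls instead of handing work to a peer.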

🔌 MCP Explorer (Module 3)

MCP is USB-C for AI — one standard protocol connecting agents to any tool.

  • Step-by-Step Scenarios — Walk through real MCP interactions (calendar, Spotify, Salesforce, DevOps)
  • Protocol Messages — See the actual JSON-RPC requests and responses
  • MCP vs Alternatives — Side-by-side comparison with Zapier and custom APIs
  • Integration Framework — Understand when to use which approach

Assignment: Supports Q3 (integration), Q4 (safety), and the overall proposal design.

No API key required — this is an educational simulation tool.
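For a feel of what the explorer's protocol view shows: MCP traffic is JSON-RPC 2.0, and tool invocations use the tools/call method with a tool name plus arguments. The calendar tool name and its arguments below are hypothetical:

```python
import json

# Shape of an MCP tool-call request (JSON-RPC 2.0). The method name
# follows MCP's tools/call convention; the tool itself is made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "calendar_create_event",  # hypothetical tool
        "arguments": {"title": "Demo sync", "start": "2025-01-15T10:00"},
    },
}
print(json.dumps(request, indent=2))
```

The server replies with a JSON-RPC response carrying the same id, which is what lets one agent juggle many in-flight tool calls over a single connection.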


🤖 Multi-Agent Demo Setup

The Multi-Agent and LangChain demos need an AI "brain." You have two options:

What is Ollama?

Ollama lets you run powerful AI models locally on your own computer — for free, with no data leaving your machine.

| Feature | Ollama (Local) | OpenAI (Cloud) |
|---------|----------------|----------------|
| Cost | Free | ~$0.01/run |
| Privacy | Data stays local | Data sent to cloud |
| Speed | Depends on your hardware | Consistently fast |
| Internet | Not required | Required |
| Setup | Install + download model | Just need API key |

Option A: Ollama (Free, Local) — Recommended for Learning

# 1. Install Ollama
# macOS:
brew install ollama
# Linux:
curl -fsSL https://ollama.ai/install.sh | sh
# Windows: Download from https://ollama.ai

# 2. Download an AI model (2 GB, takes 2-5 min)
ollama pull llama3.2

# 3. Start the Ollama server (keep this running)
ollama serve

# 4. Install Python dependencies (if running outside Docker)
pip install -r requirements-crewai.txt

Option B: OpenAI (Paid, Cloud) — Faster Results

# 1. Get an API key from platform.openai.com
# 2. Set it in your environment
export OPENAI_API_KEY="sk-your-key-here"

# 3. Install Python dependencies (if running outside Docker)
pip install -r requirements-crewai.txt

📚 Documentation

| Guide | Best For | What It Covers |
|-------|----------|----------------|
| Beginner's Guide | Absolute beginners | Full explanations of every technology, step-by-step setup, glossary |
| LLM Cost Guide | Module 1 | Token economics, model selection, cost drivers |
| Multi-Agent Guide | Module 2 | CrewAI vs LangChain, single-agent vs multi-agent patterns |
| MCP Guide | Module 3 | Understanding the Model Context Protocol |
| CrewAI Setup | Quick reference | Commands, troubleshooting, CLI usage |
| Docker Guide | Container users | Docker-specific setup |

New to AI agents? Start with the Beginner's Guide — it explains everything from scratch.


πŸ“ Project Structure

AgenticAI_foundry/
├── Home.py                        # Landing page — course hub
├── pages/
│   ├── 1_LLM_Cost_Calculator.py   # Cost calculator (Module 1)
│   ├── 2_Multi_Agent_Demo.py      # CrewAI multi-agent demo (Module 2)
│   ├── 3_LangChain_Agent_Demo.py  # LangChain tool agent (Module 2)
│   └── 4_MCP_Explorer.py          # MCP protocol explorer (Module 3)
├── crews/                         # 🧠 CrewAI multi-agent logic
│   ├── __init__.py
│   └── research_crew.py           # Agent definitions & orchestration
├── agents/                        # 🔗 LangChain single-agent logic
│   ├── __init__.py
│   └── crypto_agent.py            # Web search agent for crypto prices
├── docs/
│   ├── BEGINNERS_GUIDE.md         # Comprehensive beginner tutorial
│   ├── LLM_COST_GUIDE.md          # Module 1: Token economics & cost analysis
│   ├── MULTI_AGENT_GUIDE.md       # Module 2: CrewAI vs LangChain patterns
│   ├── MCP_GUIDE.md               # Module 3: Model Context Protocol
│   ├── CREWAI_SETUP.md            # Quick setup reference
│   └── DOCKER_GUIDE.md            # Docker setup guide
├── Dockerfile
├── requirements.txt               # Base Streamlit dependencies
├── requirements-crewai.txt        # CrewAI + LangChain dependencies
└── README.md

📊 Module Connections

Module 1: LLM Cost Explorer

The same AI transaction can cost between $1 and $230 — a 230x variance!

Use this tool to understand token economics and model pricing before scaling AI in your org.

Module 2: Multi-Agent Systems

Watch agents collaborate: Researcher → Writer → Editor

See multi-agent orchestration (CrewAI) and single-agent reasoning (LangChain) side by side.

CrewAI vs LangChain — Two Approaches

| Aspect | CrewAI (Multi-Agent) | LangChain (Tool Agent) |
|--------|----------------------|------------------------|
| Metaphor | Team of employees | Single agent with tools |
| Pattern | Sequential handoff | ReAct (Reason + Act) |
| Example | Research → Write → Edit | Question → Search → Answer |
| Best For | Complex workflows | Real-time data retrieval |

How CrewAI Specializes Agents

Agent(
    role="Research Analyst",             # Job title
    goal="Gather info about {topic}",    # What to achieve
    backstory="You are an experienced "  # Shapes behavior
              "researcher with expertise...",
    llm=llm,
)

CrewAI combines these attributes with task instructions to construct prompts sent to the LLM. See crews/research_crew.py for the full implementation.
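To make that combination concrete, here is a simplified template sketch — not CrewAI's actual prompt wording, just an illustration of the idea that role, goal, backstory, and task all end up as LLM instructions:

```python
# Simplified illustration of folding role/goal/backstory plus a task
# description into one prompt. CrewAI's real template differs; see
# crews/research_crew.py for what the demo actually runs.

def build_prompt(role: str, goal: str, backstory: str, task: str) -> str:
    return (
        f"You are a {role}. {backstory}\n"
        f"Your goal: {goal}\n"
        f"Current task: {task}"
    )

print(build_prompt(
    role="Research Analyst",
    goal="Gather info about AI in healthcare",
    backstory="You are an experienced researcher with expertise...",
    task="Research AI in healthcare",
))
```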

Module 3: MCP & Agent Integration

MCP is USB-C for AI — one standard protocol connecting agents to any tool.

The MCP Explorer walks you through how agents connect to external tools (calendars, CRMs, monitoring systems) using a standardized protocol, and compares this approach to alternatives like Zapier and custom APIs.

MCP vs Other Approaches

| Aspect | Zapier / n8n | Custom APIs | MCP |
|--------|--------------|-------------|-----|
| Complexity | Low (no-code) | High (custom dev) | Medium (standard) |
| AI Awareness | None — trigger/action | Manual integration | Native AI support |
| Context / Memory | No | Build it yourself | Built-in |
| Best For | Simple automations | Unique business logic | AI agent ecosystems |

🖥️ CLI Usage

The Multi-Agent Demo also works from the command line:

# With Ollama (free)
python -m crews.research_crew --provider ollama --task "Research AI in healthcare"

# With OpenAI
python -m crews.research_crew --provider openai --task "Research AI in healthcare"

# Check your setup
python -m crews.research_crew --check

πŸ› οΈ Technologies

| Technology | What It Does | Learn More |
|------------|--------------|------------|
| Streamlit | Web app framework | Creates the UI |
| CrewAI | Multi-agent orchestration | Coordinates agents |
| Ollama | Local LLM runtime | Runs AI on your machine |
| LangChain | LLM integrations | Connects to AI providers |
| Plotly | Interactive charts | Visualizes cost data |
| Docker | Containerization | Easy deployment |

❓ Troubleshooting

Quick Fixes

| Problem | Solution |
|---------|----------|
| "Ollama not running" | Run `ollama serve` in a terminal |
| "Model not found" | Run `ollama pull llama3.2` |
| "Out of memory" | Try a smaller model: `ollama pull phi3` |
| "Slow responses" | Normal for local AI; try OpenAI for speed |
| "Import errors" | Run `pip install crewai langchain-community` |

For detailed troubleshooting, see Beginner's Guide — Troubleshooting.


📄 License

MIT License — see LICENSE


MIT Professional Education | Applied Generative AI for Digital Transformation
Demos work locally — API keys optional (Ollama mode)
