# MIT Professional Education: Applied Generative AI for Digital Transformation
Interactive demos for understanding AI economics, multi-agent systems, and agent integration
| Demo | Module | Description | API Key? |
|---|---|---|---|
| 💰 LLM Cost Explorer | Module 1 | Calculate and compare LLM API costs across providers | No |
| 🤖 Multi-Agent Demo | Module 2 | Watch three AI agents collaborate (CrewAI) | Optional |
| 🔍 LangChain Agent Demo | Module 2 | Single agent with a web search tool (LangChain) | Optional |
| 🔌 MCP Explorer | Module 3 | Understand the Model Context Protocol and how AI agents connect to tools | No |
More demos will be added as the course progresses.
```bash
# Clone the repository
git clone https://github.com/dlwhyte/AgenticAI_foundry.git
cd AgenticAI_foundry

# Build and run (the app will be served at http://localhost:8501)
docker build -t agenticai-foundry .
docker run -p 8501:8501 agenticai-foundry
```

Or run locally without Docker:

```bash
# Clone and install
git clone https://github.com/dlwhyte/AgenticAI_foundry.git
cd AgenticAI_foundry
pip install -r requirements.txt

# Run
streamlit run Home.py
```

The same AI transaction can cost anywhere from $1 to $230 – over a 200x variance!
- Real-time Token Counter – uses OpenAI's tiktoken
- Multi-Model Comparison – 10+ models from OpenAI, Anthropic, Google
- Scale Analysis – see costs from 1K to 1M API calls
- Export Results – CSV, JSON for assignments

Assignment: Use this to analyze model pricing at scale for your write-up.
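The arithmetic behind the explorer is simple: tokens in and out, scaled by per-million-token prices. Here is a minimal sketch – the model names and prices below are made-up placeholders, not current provider rates:

```python
# Illustrative cost comparison. Prices are PLACEHOLDERS, not real
# provider rates -- always check each provider's pricing page.
PRICE_PER_MILLION = {              # (input $, output $) per 1M tokens
    "budget-model":  (0.15, 0.60),
    "premium-model": (15.00, 75.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one API call: token counts scaled by per-million-token prices."""
    in_price, out_price = PRICE_PER_MILLION[model]
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

for model in PRICE_PER_MILLION:
    cost = request_cost(model, input_tokens=2_000, output_tokens=1_000)
    print(f"{model}: ${cost:.4f} per call")
```

The same two-line formula, applied at scale, is what produces the large cost spreads the demo visualizes.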
Watch three agents collaborate: Researcher → Writer → Editor

- Three Collaborating Agents – sequential task handoff via CrewAI
- Dual Provider Support – Ollama (free, local) or OpenAI (paid, cloud)
- Live Agent Activity – watch agents hand off work in real time
- CLI Support – run from the command line or Streamlit

Assignment: Observe agent specialization, telemetry, and collaboration patterns.
Single agent with tools: Think → Search → Answer

- Single Agent + Tools – contrast with CrewAI's multi-agent approach
- Real-Time Web Search – get current crypto prices via DuckDuckGo
- ReAct Pattern – watch the agent think, act, and observe
- Same Provider Options – works with Ollama or OpenAI

Assignment: Compare single-agent vs multi-agent patterns.
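The ReAct pattern the demo visualizes can be sketched without any framework: the agent alternates between reasoning, calling a tool, and observing the result. In this sketch the search tool is a stub (the real demo calls DuckDuckGo through LangChain); it only illustrates the loop shape:

```python
def stub_search(query: str) -> str:
    """Stand-in for the DuckDuckGo tool -- returns a canned observation."""
    return f"Top result for '{query}': BTC is trading around $X (placeholder)."

def react_loop(question: str, max_steps: int = 3) -> str:
    """Think -> Act -> Observe, until the agent decides it can answer."""
    observation = ""
    for step in range(1, max_steps + 1):
        print(f"Thought {step}: I should search for '{question}'")  # Reason
        observation = stub_search(question)                         # Act
        print(f"Observation {step}: {observation}")                 # Observe
        if "Top result" in observation:  # a real LLM decides when to stop
            break
    return f"Final answer drawn from: {observation}"

print(react_loop("current bitcoin price"))
```

In the actual demo an LLM produces the thought, chooses the tool, and decides when to stop; the loop structure is the same.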
MCP is USB-C for AI: one standard protocol connecting agents to any tool.

- Step-by-Step Scenarios – walk through real MCP interactions (calendar, Spotify, Salesforce, DevOps)
- Protocol Messages – see the actual JSON-RPC requests and responses
- MCP vs Alternatives – side-by-side comparison with Zapier and custom APIs
- Integration Framework – understand when to use which approach

Assignment: Supports Q3 (integration), Q4 (safety), and the overall proposal design.

No API key required – this is an educational simulation tool.
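Under the hood, MCP messages are JSON-RPC 2.0. A sketch of the kind of request an agent sends to invoke a server-side tool – the tool name and arguments here are invented for illustration, not taken from a real MCP server:

```python
import json

# A JSON-RPC 2.0 "tools/call" request, as used by MCP.
# Tool name and arguments are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "calendar_create_event",
        "arguments": {"title": "Demo review", "date": "2025-06-01"},
    },
}
print(json.dumps(request, indent=2))
```

The MCP Explorer shows full request/response pairs like this for each scenario.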
The Multi-Agent and LangChain demos need an AI "brain." You have two options:
Ollama lets you run powerful AI models locally on your own computer – for free, with no data leaving your machine.
| Feature | Ollama (Local) | OpenAI (Cloud) |
|---|---|---|
| Cost | Free | ~$0.01/run |
| Privacy | Data stays local | Data sent to cloud |
| Speed | Depends on your hardware | Consistently fast |
| Internet | Not required | Required |
| Setup | Install + download model | Just need API key |
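One way a demo could pick between the two providers automatically – this is an assumed helper for illustration, not code from this repo – is to probe the local Ollama server and fall back to OpenAI if it is not running:

```python
import urllib.request
import urllib.error

def choose_provider(timeout: float = 0.5) -> str:
    """Return 'ollama' if a local server answers on its default port, else 'openai'."""
    try:
        # Ollama's HTTP API listens on localhost:11434 by default.
        urllib.request.urlopen("http://localhost:11434/api/tags", timeout=timeout)
        return "ollama"
    except (urllib.error.URLError, OSError):
        return "openai"

print(f"Using provider: {choose_provider()}")
```

This mirrors the trade-off in the table: local and free when Ollama is up, cloud and fast otherwise.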
```bash
# 1. Install Ollama
# macOS:
brew install ollama
# Linux:
curl -fsSL https://ollama.ai/install.sh | sh
# Windows: download from https://ollama.ai

# 2. Download an AI model (2 GB, takes 2-5 min)
ollama pull llama3.2

# 3. Start the Ollama server (keep this running)
ollama serve

# 4. Install Python dependencies (if running outside Docker)
pip install -r requirements-crewai.txt
```

To use OpenAI instead:

```bash
# 1. Get an API key from platform.openai.com
# 2. Set it in your environment
export OPENAI_API_KEY="sk-your-key-here"

# 3. Install Python dependencies (if running outside Docker)
pip install -r requirements-crewai.txt
```

| Guide | Best For | What It Covers |
|---|---|---|
| Beginner's Guide | Absolute beginners | Full explanations of every technology, step-by-step setup, glossary |
| LLM Cost Guide | Module 1 | Token economics, model selection, cost drivers |
| Multi-Agent Guide | Module 2 | CrewAI vs LangChain, single-agent vs multi-agent patterns |
| MCP Guide | Module 3 | Understanding the Model Context Protocol |
| CrewAI Setup | Quick reference | Commands, troubleshooting, CLI usage |
| Docker Guide | Container users | Docker-specific setup |
New to AI agents? Start with the Beginner's Guide – it explains everything from scratch.
```
AgenticAI_foundry/
├── Home.py                          # Landing page – course hub
├── pages/
│   ├── 1_LLM_Cost_Calculator.py     # Cost calculator (Module 1)
│   ├── 2_Multi_Agent_Demo.py        # CrewAI multi-agent demo (Module 2)
│   ├── 3_LangChain_Agent_Demo.py    # LangChain tool agent (Module 2)
│   └── 4_MCP_Explorer.py            # MCP protocol explorer (Module 3)
├── crews/                           # 🧠 CrewAI multi-agent logic
│   ├── __init__.py
│   └── research_crew.py             # Agent definitions & orchestration
├── agents/                          # 🔍 LangChain single-agent logic
│   ├── __init__.py
│   └── crypto_agent.py              # Web search agent for crypto prices
├── docs/
│   ├── BEGINNERS_GUIDE.md           # Comprehensive beginner tutorial
│   ├── LLM_COST_GUIDE.md            # Module 1: Token economics & cost analysis
│   ├── MULTI_AGENT_GUIDE.md         # Module 2: CrewAI vs LangChain patterns
│   ├── MCP_GUIDE.md                 # Module 3: Model Context Protocol
│   ├── CREWAI_SETUP.md              # Quick setup reference
│   └── DOCKER_GUIDE.md              # Docker setup guide
├── Dockerfile
├── requirements.txt                 # Base Streamlit dependencies
├── requirements-crewai.txt          # CrewAI + LangChain dependencies
└── README.md
```
The same AI transaction can cost anywhere from $1 to $230 – over a 200x variance!
Use this tool to understand token economics and model pricing before scaling AI in your org.
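Scale is where that variance bites. A quick sketch – the per-call costs below are illustrative placeholders, not real quotes – of how a per-call difference compounds across the demo's 1K-to-1M range:

```python
# Placeholder per-call costs -- illustrative only, not real provider quotes.
COST_PER_CALL = {"cheap-model": 0.0005, "expensive-model": 0.10}

for calls in (1_000, 100_000, 1_000_000):
    row = ", ".join(
        f"{model}: ${cost * calls:,.2f}" for model, cost in COST_PER_CALL.items()
    )
    print(f"{calls:>9,} calls -> {row}")
```

A gap that looks negligible on one call becomes a budget line item at a million.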
Watch agents collaborate: Researcher → Writer → Editor
See multi-agent orchestration (CrewAI) and single-agent reasoning (LangChain) side by side.
| Aspect | CrewAI (Multi-Agent) | LangChain (Tool Agent) |
|---|---|---|
| Metaphor | Team of employees | Single agent with tools |
| Pattern | Sequential handoff | ReAct (Reason + Act) |
| Example | Research β Write β Edit | Question β Search β Answer |
| Best For | Complex workflows | Real-time data retrieval |
```python
Agent(
    role="Research Analyst",               # Job title
    goal="Gather info about {topic}",      # What to achieve
    backstory="You are an experienced "    # Shapes behavior
              "researcher with expertise...",
    llm=llm,
)
```

CrewAI combines these attributes with task instructions to construct the prompts sent to the LLM. See crews/research_crew.py for the full implementation.
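The sequential process itself can be pictured without the framework: each agent's output becomes the next agent's input. A framework-free sketch of the Researcher → Writer → Editor handoff, with stub functions standing in for LLM calls:

```python
def researcher(topic: str) -> str:
    return f"Notes on {topic}: point A, point B."   # stub for an LLM call

def writer(notes: str) -> str:
    return f"Draft article based on [{notes}]"      # stub for an LLM call

def editor(draft: str) -> str:
    return draft.replace("Draft", "Polished")       # stub for an LLM call

def run_sequential_crew(topic: str) -> str:
    """Sequential handoff: each stage consumes the previous stage's output."""
    output = topic
    for agent in (researcher, writer, editor):
        output = agent(output)   # hand off to the next specialist
    return output

print(run_sequential_crew("AI in healthcare"))
```

CrewAI adds the prompting, retries, and telemetry around this shape; the handoff pattern is the same.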
MCP is USB-C for AI: one standard protocol connecting agents to any tool.
The MCP Explorer walks you through how agents connect to external tools (calendars, CRMs, monitoring systems) using a standardized protocol, and compares this approach to alternatives like Zapier and custom APIs.
| Aspect | Zapier / n8n | Custom APIs | MCP |
|---|---|---|---|
| Complexity | Low (no-code) | High (custom dev) | Medium (standard) |
| AI Awareness | None (trigger/action only) | Manual integration | Native AI support |
| Context / Memory | No | Build it yourself | Built-in |
| Best For | Simple automations | Unique business logic | AI agent ecosystems |
The Multi-Agent Demo also works from the command line:

```bash
# With Ollama (free)
python -m crews.research_crew --provider ollama --task "Research AI in healthcare"

# With OpenAI
python -m crews.research_crew --provider openai --task "Research AI in healthcare"

# Check your setup
python -m crews.research_crew --check
```

| Technology | What It Is | What It Does Here |
|---|---|---|
| Streamlit | Web app framework | Creates the UI |
| CrewAI | Multi-agent orchestration | Coordinates agents |
| Ollama | Local LLM runtime | Runs AI on your machine |
| LangChain | LLM integrations | Connects to AI providers |
| Plotly | Interactive charts | Visualizes cost data |
| Docker | Containerization | Easy deployment |
| Problem | Solution |
|---|---|
| "Ollama not running" | Run `ollama serve` in a terminal |
| "Model not found" | Run `ollama pull llama3.2` |
| "Out of memory" | Try a smaller model: `ollama pull phi3` |
| "Slow responses" | Normal for local AI; try OpenAI for speed |
| "Import errors" | Run `pip install crewai langchain-community` |

For detailed troubleshooting, see Beginner's Guide → Troubleshooting.
MIT License – see LICENSE
MIT Professional Education | Applied Generative AI for Digital Transformation
Demos work locally – API keys optional (Ollama mode)