Add 14 MAS coding exercises (no CrewAI) + workshop refresh #9
Open
jayjirayut wants to merge 7 commits into kwarodom:master
Conversation
AutoGen v0.4 split the package into autogen_agentchat and autogen_core. The bare `autogen` import no longer resolves under pyautogen 0.10, so verify_setup.py fails 1 of its 9 checks even on a healthy install. Switch the check to autogen_agentchat (the installed top-level module) so verification reflects reality.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
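The switched check can be sketched like this (a hypothetical reconstruction of the verify_setup.py probe, not its actual code; `check_module` is an illustrative helper):

```python
# Probe the module that actually exists under pyautogen 0.10+
# (autogen_agentchat) instead of the legacy bare `autogen` name.
import importlib.util

def check_module(name: str) -> bool:
    """Return True if `name` is importable, without fully importing it."""
    return importlib.util.find_spec(name) is not None

# Old check (fails on a healthy v0.4 install): check_module("autogen")
# New check (reflects what pip actually installed):
autogen_ok = check_module("autogen_agentchat")
print("autogen_agentchat:", "OK" if autogen_ok else "MISSING")
```

`find_spec` avoids executing the package's import-time side effects, which keeps the verification script fast even when a check fails.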
The two state-machine and parallel-swarm workshop scripts no longer need CrewAI — Ex 2 uses LangGraph + ChatAnthropic directly, Ex 3 uses anthropic.AsyncAnthropic. Switching the LLM client to Claude removes a heavy dependency tree (crewai + crewai-tools pull in ~70 transitive packages) and aligns the workshop with the no-CrewAI direction of the new exercises in this branch. requirements.txt: drop crewai>=1.13.0 and crewai-tools>=1.13.0. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reimplements the three Multi-Agent Systems workshop exercises (from week9_2_multi_agent_theory_workshop.pdf) without CrewAI, exposing the coordination mechanics directly:

- ex1_sequential_pipeline.py: Pattern A (organisational structuring). Researcher -> Writer -> Editor chain via dataclass Agent + raw Anthropic SDK. Each agent's output is templated into the next agent's task; total time stacks linearly.
- ex2_graph_routing.py: Pattern B (centralised planning; the modern equivalent of Contract Net). LangGraph state-machine routing with a classifier node + conditional edges to billing/technical/general specialists. A confidence score in shared state enables stretch goal kwarodom#1 (escalation).
- ex3_parallel_swarm.py: Pattern C (distributed sensing / stigmergy). asyncio.gather across 5 building-audit specialists, with a shared results list as the modern blackboard. Reports parallel vs hypothetical sequential time so the speedup is observable, not just claimed.

All three use claude-haiku-4-5-20251001 and write deterministic outputs beside the script. Validated end-to-end: Ex 1 ran in 35.0s (15.0 + 13.9 + 6.1, perfectly linear), Ex 2 routed 3/3 tickets at 95% confidence each, Ex 3 hit 4.4x speedup vs sequential.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
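The Pattern C timing comparison can be sketched framework-free. The specialist names and the sleep stand-in below are illustrative stubs replacing the real anthropic.AsyncAnthropic calls, so the concurrency effect is visible without an API key:

```python
# Fan out N specialist "audits" with asyncio.gather, then compare wall
# time against the sum of per-task times (the hypothetical serial cost).
import asyncio
import time

SPECIALISTS = ["hvac", "lighting", "envelope", "plug_loads", "controls"]

async def audit(domain: str) -> tuple[str, float]:
    start = time.perf_counter()
    await asyncio.sleep(0.1)  # stand-in for one LLM round trip
    return domain, time.perf_counter() - start

async def main() -> None:
    start = time.perf_counter()
    results = await asyncio.gather(*(audit(d) for d in SPECIALISTS))
    wall = time.perf_counter() - start
    sequential = sum(t for _, t in results)  # what a serial run would cost
    print(f"parallel {wall:.2f}s vs sequential {sequential:.2f}s "
          f"({sequential / wall:.1f}x speedup)")

asyncio.run(main())
```

Because the tasks are I/O-bound (network round trips in the real script), gather overlaps their waits and wall time approaches the slowest single task rather than the sum.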
Implements all 14 hands-on exercises from week9/week9_3_mas_coding_exercises.pdf, organised into a tiered folder structure under week11/exercises/. Each exercise is self-contained in its own folder (exNN_name/exNN_name.py).
Tiers and frameworks:
Beginner — core concepts, raw API calls, 2-agent dialogues
01 two_agent_dialogue Anthropic
02 research_brief Anthropic (was CrewAI)
03 langgraph_router LangGraph
Easy — 3-agent pipelines, state machines, parallel calls
04 3agent_product_pipeline Anthropic (was CrewAI)
05 langgraph_faq_loop LangGraph
06 parallel_fact_checker Anthropic + asyncio
Intermediate — tool use, group chats, escalation logic
07 ag2_code_review AutoGen v0.4+
08 langgraph_escalation LangGraph
09 market_research_swarm Anthropic asyncio (was CrewAI)
Advanced — self-healing, negotiation, hybrid frameworks
10 langgraph_self_healing LangGraph
11 negotiation_agents Anthropic
12 hybrid_energy_optimizer Anthropic + LangGraph (was CrewAI+LG)
Expert — autonomous systems, long-horizon planning, full MAS
13 autonomous_research LangGraph + tool_use
14 hotel_ops_command_center AutoGen + Anthropic (was AutoGen+CrewAI)
Two deliberate deviations from the PDF:
1. No CrewAI. The five exercises that originally specified CrewAI
(02, 04, 09, 12, 14) use a `dataclass Agent` with `system_prompt()`
and `run()` methods instead, calling the Anthropic SDK directly. The
coordination patterns (sequential pipelines, parallel-then-synthesis,
nested sub-crews) are preserved without the framework dependency.
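The replacement pattern might look roughly like this. It is a minimal sketch: the field names and the pluggable `call` hook are illustrative, and the LLM is stubbed so the agent-to-agent chaining runs offline; the actual exercises call the Anthropic SDK directly inside `run()`:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str
    goal: str
    call: Callable[[str, str], str]  # (system_prompt, task) -> reply

    def system_prompt(self) -> str:
        return f"You are a {self.role}. Your goal: {self.goal}"

    def run(self, task: str) -> str:
        # In the exercises this is an anthropic.Anthropic().messages.create call.
        return self.call(self.system_prompt(), task)

def fake_llm(system: str, user: str) -> str:
    # Stub that echoes which agent handled which task.
    return f"[{system.split('.')[0]}] handled: {user[:40]}"

researcher = Agent("researcher", "gather facts", fake_llm)
writer = Agent("writer", "draft prose", fake_llm)

# Sequential pipeline: each agent's output becomes the next agent's task.
notes = researcher.run("Summarise MAS coordination patterns")
draft = writer.run(f"Write a brief from these notes:\n{notes}")
print(draft)
```

The whole hand-off is the two lines at the bottom, which is the point: nothing is hidden behind a framework's process abstraction.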
2. AutoGen v0.4+ instead of v0.2. The PDF code uses
`import autogen; autogen.GroupChat`, which doesn't exist in the
modern split-package release. Ex 07 uses
`autogen_agentchat.teams.RoundRobinGroupChat`; Ex 14 uses
`SelectorGroupChat` for LLM-driven turn taking.
`autogen-ext[anthropic]` is now required.
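For intuition, round-robin turn taking with a max-message cutoff can be simulated without the framework. The agent names and canned replies below are placeholders; the real Ex 07 uses `autogen_agentchat.teams.RoundRobinGroupChat` with `MaxMessageTermination`:

```python
from itertools import cycle

def round_robin_chat(agents, opening, max_messages):
    """Agents speak in fixed order until the message cap is hit."""
    transcript = [("user", opening)]
    turn = cycle(agents)
    while len(transcript) < max_messages:
        speaker = next(turn)
        prev_speaker, prev_msg = transcript[-1]
        transcript.append(
            (speaker, f"{speaker} responds to {prev_speaker}: {prev_msg[:30]}")
        )
    return transcript

log = round_robin_chat(["coder", "reviewer", "tester"], "Review this diff", 7)
for speaker, msg in log:
    print(f"{speaker}: {msg}")
```

`SelectorGroupChat` (Ex 14) replaces the fixed `cycle` with an LLM call that picks the next speaker from the transcript, which is the only structural difference between the two team types.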
Validation: 8 of 14 ran end-to-end against Claude Haiku 4.5 (Ex 01, 02,
03, 06, 07, 08, 13, 14). The remaining 6 use patterns already validated
by those runs.
README.md indexes all 14 exercises with framework, key concept, and
setup instructions.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three views of the autonomous research agent (Planner-Executor-Critic loop) for ex13_autonomous_research:

- graph.png + graph.mmd: auto-generated LangGraph topology via graph.get_graph().draw_mermaid_png(). Shows the planner -> researcher (self-loop) -> synthesiser -> critic -> finalise flow, with conditional vs unconditional edges distinguished.
- DIAGRAMS.md: hand-authored architecture and sequence diagrams that surface what the auto-diagram cannot: the bound tools (market_data, competitor_lookup, roi_calc), the ResState TypedDict schema, and a step-by-step sequence of one execution. Tools aren't graph nodes (they're attached to the LLM via llm.bind_tools), so the auto-diagram alone doesn't tell the full story.

Sources:
- LangGraph 1.1.6 graph rendering API (draw_mermaid, draw_mermaid_png)
- mermaid.ink for PNG rendering (used by draw_mermaid_png by default)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
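The Mermaid source for that topology looks roughly like the hand-built sketch below. This is an approximation of what draw_mermaid emits, assembled from the node and edge list in the commit message; the critic -> planner back edge is an assumption implied by the loop, and the committed graph.mmd (generated by LangGraph) may use different labels:

```python
# Emit a Mermaid graph definition from a (source, target, conditional) edge
# list. Dashed arrows (-.->)  mark conditional edges, solid (-->)  unconditional.
EDGES = [
    ("planner", "researcher", False),
    ("researcher", "researcher", True),   # self-loop: keep researching
    ("researcher", "synthesiser", True),
    ("synthesiser", "critic", False),
    ("critic", "finalise", True),
    ("critic", "planner", True),          # assumed: critic sends work back
]

def to_mermaid(edges) -> str:
    lines = ["graph TD"]
    for src, dst, conditional in edges:
        arrow = "-.->" if conditional else "-->"
        lines.append(f"    {src} {arrow} {dst}")
    return "\n".join(lines)

print(to_mermaid(EDGES))
```

Pasting the output into any Mermaid renderer (e.g. the mermaid.ink service that draw_mermaid_png uses by default) gives a quick visual check against the committed graph.png.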
Ex 07 (Code Review Committee) and Ex 14 (Hotel Ops Command Center) use
the modern AutoGen v0.4+ split-package API:
- autogen-agentchat for AssistantAgent, RoundRobinGroupChat,
SelectorGroupChat, MaxMessageTermination
- autogen-ext[anthropic] for AnthropicChatCompletionClient
These were installed manually during development but never written to
requirements.txt, so a clean install would fail Ex 07 and Ex 14.
pyautogen>=0.10.0 is left in place — it's a stub package now, but
removing it is a separate concern.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Several exercises write their LLM output to disk for inspection (e.g. ex02_brief_output.md, ex09_swarm_output.md, ex3_audit_output.json). These are non-deterministic by design — every run produces different phrasing, different findings — so committing them adds noise without adding signal.

Patterns added:
- ex*_output.md        # ex02/04/09/12/14 brief/swarm/pipeline outputs
- ex*_output.json      # ex3_audit_output.json
- ex*_brief_output.md  # explicit pattern for brief outputs
- ex*_audit_output.*   # explicit pattern for audit outputs (md + json)

Deterministic outputs like Ex 13's graph.png and graph.mmd (generated from graph structure, not LLM) are NOT ignored.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
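A quick sanity check of those patterns with stdlib fnmatch (an approximation: gitignore globbing differs in edge cases such as directory separators and negation, but the filenames here have no slashes):

```python
# Verify the ignore patterns catch the non-deterministic LLM outputs
# while leaving Ex 13's deterministic graph artifacts alone.
from fnmatch import fnmatch

IGNORED = [
    "ex*_output.md",
    "ex*_output.json",
    "ex*_brief_output.md",
    "ex*_audit_output.*",
]

def is_ignored(name: str) -> bool:
    return any(fnmatch(name, pat) for pat in IGNORED)

assert is_ignored("ex02_brief_output.md")
assert is_ignored("ex3_audit_output.json")
assert not is_ignored("graph.png")   # deterministic, stays committed
assert not is_ignored("graph.mmd")
print("ignore patterns behave as described")
```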
What

Adds 14 hands-on multi-agent system coding exercises (Beginner → Expert) in a tiered folder structure under `week11/exercises/`, plus refreshes the three `week9/` workshop exercises to drop CrewAI. The five exercises that originally specified CrewAI now use a `dataclass Agent` + raw Anthropic SDK pattern instead. Adds architecture diagrams for Ex 13 (autonomous research agent with tool use).
Why

- Dropping CrewAI removes ~70 transitive packages from the venv (`crewai` + `crewai-tools` pull in chromadb, lancedb, instructor, etc.) without losing any coordination pattern the exercises teach.
- The `dataclass Agent` pattern exposes how output flows agent-to-agent in 3 lines instead of hiding it inside `Process.sequential` — better for understanding what's actually happening.
- The PDF's AutoGen code (`import autogen; autogen.GroupChat`) doesn't resolve under the installed pyautogen 0.10. Ex 07 and Ex 14 are ported to the v0.4+ split-package API (`autogen_agentchat.teams`, `autogen_ext.models.anthropic`).
- The Ex 13 Planner-Executor-Critic loop with bound tools is non-obvious from code alone; auto-generated topology plus hand-authored architecture and sequence diagrams help.
Changes

`week9/` — workshop refresh (commits 1–3)
- `verify_setup.py`: check `autogen` → `autogen_agentchat`
- `ex2_LangGraphSupportGraph.py`: drop `crewai` import, switch LLM to `ChatAnthropic`
- `ex3_ParallelSwarm.py`: drop `crewai` import, keep `anthropic.AsyncAnthropic`
- `requirements.txt`: drop `crewai>=1.13.0`, `crewai-tools>=1.13.0`
- `ex1_sequential_pipeline.py` (new): `dataclass Agent` sequential pipeline
- `ex2_graph_routing.py` (new): LangGraph state-machine routing
- `ex3_parallel_swarm.py` (new): `asyncio.gather` over 5 building-audit specialists

`week11/exercises/` — 14 new exercises (commit 4)
- `ex01_two_agent_dialogue/`, `ex02_research_brief/`, `ex03_langgraph_router/`, `ex04_3agent_product_pipeline/`, `ex05_langgraph_faq_loop/`, `ex06_parallel_fact_checker/`, `ex07_ag2_code_review/`, `ex08_langgraph_escalation/`, `ex09_market_research_swarm/`, `ex10_langgraph_self_healing/`, `ex11_negotiation_agents/`, `ex12_hybrid_energy_optimizer/`, `ex13_autonomous_research/`, `ex14_hotel_ops_command_center/`
- `week11/exercises/README.md` indexes all 14 with framework, key concept, and setup instructions.
Ex 13 diagrams (commit 5)
- `graph.png` (21 KB) + `graph.mmd` — auto-generated from `graph.get_graph().draw_mermaid_png()`
- `DIAGRAMS.md` — three views: graph topology, system architecture (state schema + tool bindings), execution sequence
Validation

- 8 of 14 exercises run end-to-end against `claude-haiku-4-5-20251001` (Ex 01, 02, 03, 06, 07, 08, 13, 14), including Ex 13's bound tools (`market_data`, `competitor_lookup`, `roi_calc`). The remaining 6 use patterns already validated by those runs.
- Workshop refresh validated: Ex 1 ran in 35.0s (linear stacking), Ex 2 routed 3/3 tickets at 95% confidence each, Ex 3 hit 4.4x speedup vs sequential.
Notes

- Only `ANTHROPIC_API_KEY` is needed (no other provider keys).
- Ex 07 and Ex 14 require `autogen-ext[anthropic]` (already added during development; not yet in `requirements.txt` — flagging as a known follow-up).
- `load_dotenv()` walks up from the CWD, so exercises must be run from the repo root for `.env` to resolve. Documented in `week11/exercises/README.md`.

🤖 Generated with Claude Code