Run multiple AI agents on one repo. Zero merge conflicts. Zero duplicate work.
Quick Start • How Is This Different? • Features • API • CLI • Contributing • Full Docs
When multiple AI agents (Claude, GPT, Gemini, Cursor, Copilot, Ollama) work on the same codebase, they step on each other — duplicate tasks, merge conflicts, lost context. Agent Coordinator is a lightweight HTTP server that gives your agents atomic task claiming, lease-based file locks, a message bus, and a real-time dashboard so they can work together without chaos.
Works with any LLM. Works on any project. No framework lock-in — it's just an HTTP API.
Real-time dashboard: agent roster, Kanban task board, file locks, streaming output
```shell
git clone https://github.com/mkalkere/agent-coordinator.git
cd agent-coordinator
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
agent-os init --name my-project
agent-os serve
```

Or with Docker: `docker compose up -d`
Then create agents and start them:
```shell
agent-os agent create dev-1 --preset developer
agent-os agent create rev-1 --preset reviewer
agent-os claude start --agent dev-1   # Terminal 2
agent-os claude start --agent rev-1   # Terminal 3
```

The developer claims tasks, writes code, and creates PRs. The reviewer auto-reviews them. All coordination happens through the HTTP API — agents don't need to know about each other.
Most AI agent frameworks handle orchestration — what agents do. Agent Coordinator handles coordination — how agents share resources without conflicts.
| | Agent Coordinator | CrewAI | LangGraph | gstack |
|---|---|---|---|---|
| Independent agents, shared filesystem | Core purpose | Different model | Different model | No |
| Atomic task claiming | Yes | No | No | No |
| File lock leases | Yes | No | No | No |
| Agent health monitoring | Yes (auto-reclaim) | No | No | No |
| Provider agnostic | Any LLM | Mostly | Yes (via LangChain) | Claude Code |
| Framework lock-in | None (HTTP API) | Yes | Yes (LangChain) | Yes |
| Dashboard | Built-in | Paid add-on | LangSmith (separate) | No |
Agent Coordinator is infrastructure, not a framework. Any tool that can make HTTP requests can be a coordinated agent.
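Since any HTTP client can participate, an agent needs nothing beyond a request library. As a hedged sketch — the `/tasks/claim` route and the `agent_id` payload field below are illustrative assumptions, not the documented API (check `/docs` on a running server for the real schema) — a Python agent might build its claim request like this:

```python
import json
import urllib.request

# NOTE: the endpoint path and payload field here are assumptions
# for illustration; consult the Swagger docs at /docs for the
# coordinator's actual routes and schemas.
BASE_URL = "http://localhost:9889"

def build_claim_request(agent_id: str) -> urllib.request.Request:
    """Build a POST request asking the coordinator to atomically
    assign the next available task to this agent."""
    body = json.dumps({"agent_id": agent_id}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/tasks/claim",          # hypothetical route
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_claim_request("dev-1")
print(req.full_url, req.get_method())
```

The same pattern works from any language or from `curl` in a shell script, which is the point: no SDK is required.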
- Atomic task claiming — `INSERT...SELECT` ensures no two agents grab the same work
- Lease-based file locks — auto-expire, no deadlocks, even if an agent crashes
- Message bus — priority levels, acknowledgments, TTL, event subscriptions
- Health monitoring — stale agents detected at 30 min, resources reclaimed at 60 min
- Git worktree per agent — automatic workspace isolation, no merge conflicts
- Hierarchical memory — L1 (agent-local), L2 (shared), L3 (cross-project)
- 41 built-in skills — markdown-based, loaded on demand, provider-agnostic
- 5 agent presets — developer, reviewer, investigator, analyst, research
- Cost tracking — per-agent budgets with auto-model-downgrade
- FastAPI + SQLite (WAL mode) — no external database, single-file deployment
- 17 API routers — agents, tasks, locks, messages, memory, teams, dashboard, and more
- Real-time dashboard — agent status, Kanban board, locks, streaming output
- Swagger docs — interactive API explorer at `/docs`
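The atomic claiming bullet above can be sketched in plain SQLite. The table and column names here are illustrative assumptions, not the coordinator's actual schema; the idea is that a `UNIQUE` constraint makes the `INSERT...SELECT` race-safe, so only one agent's claim can land:

```python
import sqlite3

# Illustrative sketch of the INSERT...SELECT claiming pattern;
# table names and columns are assumptions, not the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE claims (task_id INTEGER UNIQUE, agent TEXT);
    INSERT INTO tasks (id, title) VALUES (1, 'fix flaky test');
""")

def claim(agent: str) -> bool:
    """Atomically claim task 1. The UNIQUE constraint on task_id
    guarantees at most one insert succeeds; later attempts are
    silently ignored and report rowcount 0."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO claims (task_id, agent) "
        "SELECT id, ? FROM tasks WHERE id = 1",
        (agent,),
    )
    conn.commit()
    return cur.rowcount == 1

print(claim("dev-1"), claim("rev-1"))  # first wins, second gets False
```

Because the check and the write happen in one statement, there is no window where two agents can both see the task as unclaimed.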
```
Claude ──┐                     ┌── GPT
         │    ┌────────────┐   │
Gemini ──┼───►│   Agent    │◄──┼── Cursor
         │    │   Coord.   │   │
Ollama ──┘    │   :9889    │◄──┘── Copilot
              │            │
              │   Tasks    │
              │   Locks    │
              │  Messages  │
              │   Memory   │
              │ Dashboard  │
              │            │
              │   SQLite   │
              └────────────┘
```
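The lock-lease semantics in the feature list can be sketched in a few lines (the function and field names here are hypothetical, not the coordinator's implementation): a lock is held only until its lease expires, so files held by a crashed agent free themselves automatically.

```python
# Minimal sketch of lease-based locking; names are illustrative,
# not the coordinator's schema. For simplicity, renewal by the
# current holder is not modeled.
leases: dict[str, tuple[str, float]] = {}  # path -> (agent, expires_at)

def acquire(path: str, agent: str, ttl: float, now: float) -> bool:
    """Grant the lock if the path is free or its lease has expired."""
    holder = leases.get(path)
    if holder is None or holder[1] <= now:
        leases[path] = (agent, now + ttl)
        return True
    return False

print(acquire("src/app.py", "dev-1", ttl=60, now=0.0))   # True: free
print(acquire("src/app.py", "rev-1", ttl=60, now=30.0))  # False: still leased
print(acquire("src/app.py", "rev-1", ttl=60, now=90.0))  # True: lease expired
```

No explicit release is required for crash safety: once `expires_at` passes, the next acquirer simply overwrites the stale lease.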
```
agent-coordinator/
├── src/agent_os/server/   # FastAPI server (17 routers)
├── src/agent_os/          # CLI, agent runner, memory, workspace
├── .os/skills/            # 41 built-in skills
├── tests/                 # 4900+ tests
└── docs/                  # Documentation
```
Contributions welcome! The short version:
```shell
git checkout -b feat/your-feature
# write code + tests
python3 -m pytest tests/ -v
pre-commit run --all-files
gh pr create
```

Every PR goes through a 2-round review before merge. We use Conventional Commits.
See CONTRIBUTING.md for the full guide — architecture, code quality rules, and PR process.
Report a bug | Request a feature
If you're an AI agent in a workspace with `.os/`, read `.os/README.md` for your operating protocol.
MIT © 2026 Mallikarjuna Kalkere
