Free, open-source, code-free orchestration for multi-agent workflows.
Quickstart · Hello World · Examples
If you want to quickly try an agent flow, why are you still setting up a Python project, wiring graph nodes, defining state types, and writing routing functions before you can even iterate on the prompts?
That is the problem tama is trying to solve.
With tama, agents and skills are just Markdown files:
- agents live in `AGENT.md`
- skills live in `SKILL.md`
- orchestration is declared in YAML frontmatter
- routing can be an explicit FSM instead of hidden in code
By "code-free," we mean no graph/orchestration code for the workflow itself. You define the system in files instead of assembling it in Python.
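For illustration, a minimal agent file could look like this. The `name` and `pattern` fields mirror the FSM example shown later in this README; the rest of the file is a sketch, with the Markdown body standing in for the agent's prompt:

```markdown
---
# AGENT.md frontmatter: declares the agent and its orchestration pattern.
# `name` and `pattern` appear elsewhere in this README; anything beyond
# them is a hypothetical sketch, not a documented schema.
name: triage
pattern: react
---

You are a triage agent. Read the customer's message and decide
whether it is a billing issue or a technical issue.
```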
- No scaffold code for agent flows. Use `tama init` and `tama add`, then start writing prompts.
- Prompts as files. Agents and skills are human-readable, diffable, and easy to reorganize.
- Deterministic routing. Use `fsm` when control flow should belong to the runtime, not the model.
- Built-in patterns. `react`, `fsm`, `scatter`, `critic`, `reflexion`, `debate`, `plan-execute`, and more.
- Tracing included. Inspect which agents ran, which tools were called, and which skills were loaded.
- Rust runtime. `tamad` is a native binary, not a Python orchestrator.
This is a real workflow:
```yaml
---
name: support
pattern: fsm
initial: triage
states:
  triage:
    - billing: billing-agent
    - technical: tech-agent
  billing-agent:
    - done: ~
    - escalate: triage
  tech-agent: ~
---
```

Instead of writing routing code, you declare the transitions.
```shell
tama init my-project
cd my-project
tama add fsm support
tama add react triage
tama add react billing-agent
tama add react tech-agent
```

Then edit the generated AGENT.md files and run:

```shell
tama run "Customer says they were double charged and want a refund"
```

Or start with the docs:
- Quickstart: https://tama.mlops.ninja/getting-started/quickstart/
- Hello World: Deep Research: https://tama.mlops.ninja/getting-started/hello-world-deep-research/
The best current example is the deep research workflow in examples/00-deep-research.
It combines:
- `fsm` for the outer review loop
- `scatter` for fan-out research
- `react` workers for focused web research
- `memory` for retry-aware state
- `files` for writing `report.md`
Read the step-by-step walkthrough here:
https://tama.mlops.ninja/getting-started/hello-world-deep-research/
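To give a feel for how those pieces compose, here is a hypothetical sketch of the outer review loop. The state names and transitions are invented for illustration and are not the actual `examples/00-deep-research` definition; see the walkthrough above for the real files:

```yaml
# Hypothetical sketch only. Uses the same frontmatter fields as the
# support FSM in this README (name, pattern, initial, states); the
# specific states and comments are assumptions.
---
name: deep-research
pattern: fsm
initial: plan
states:
  plan:
    - ready: research       # a scatter agent fans out to react workers
  research:
    - draft: review
  review:
    - revise: research      # retry loop; memory carries prior attempts
    - approved: ~           # a files skill writes report.md
---
```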
```shell
tama init <name>           # create a new project
tama add <pattern> <name>  # scaffold an agent
tama add skill <name>      # scaffold a skill
tama lint                  # validate the project
tama run "your task"       # execute the entrypoint agent
```

A freshly initialized project looks like:

```text
my-project/
├── tama.toml
├── agents/
│   └── my-project-agent/
│       └── AGENT.md
└── skills/
```
Larger projects usually look like:
```text
my-project/
├── tama.toml
├── agents/
│   ├── pipeline/
│   │   └── AGENT.md
│   ├── worker/
│   │   └── AGENT.md
│   └── reviewer/
│       └── AGENT.md
└── skills/
    ├── search-web/
    │   └── SKILL.md
    └── memory/
        └── SKILL.md
```
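Skills are declared the same way as agents. A minimal `SKILL.md` might look like this; the README only establishes that skills live in `SKILL.md` with YAML frontmatter, so the `name` and `description` fields and the body are a sketch, not a documented schema:

```markdown
---
# SKILL.md sketch. `name` mirrors the agent files; `description`
# is an assumed field for illustration.
name: search-web
description: Search the web and return a short list of sources.
---

Use this skill when the task needs fresh information from the web.
```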
Built-in patterns currently include:
`oneshot`, `react`, `scatter`, `parallel`, `fsm`, `critic`, `reflexion`, `constitutional`, `chain-of-verification`, `plan-execute`, `debate`, `best-of-n`, `human`
- Introduction: https://tama.mlops.ninja/getting-started/introduction/
- Installation: https://tama.mlops.ninja/getting-started/installation/
- Quickstart: https://tama.mlops.ninja/getting-started/quickstart/
- Hello World: Deep Research: https://tama.mlops.ninja/getting-started/hello-world-deep-research/
tama is usable now, but still early.
The best way to help is to:
- try a real workflow
- report DX pain
- file issues when something is unclear or broken
- tell us which examples feel useful and which feel toy-like