fleet-rlm provides secure, cloud-sandboxed recursive language model workflows with a Web UI, API, and optional MCP server. The default product path is Modal-backed. The Daytona path is experimental but plugs into the same workspace and transport contract rather than acting as a separate app.
This documentation is for both:

- users operating fleet-rlm locally or in deployment workflows
- contributors building integrations, extending runtime behavior, or maintaining the codebase
```shell
uv init
uv add fleet-rlm
uv run fleet web
```

Then open http://localhost:8000.
Next steps:
- Installation
- Runtime settings
- LiteLLM proxy model availability
- Deploying the API server
- Troubleshooting
When docs conflict with implementation, treat these as authoritative:
- CLI truth: `uv run fleet-rlm --help` and `uv run fleet --help`
- API truth: `openapi.yaml`
- WebSocket truth: `src/fleet_rlm/api/routers/ws/api.py`
Historical docs are archived and non-operational:
- Legacy planning docs are stored under the local-only `plans/archive/docs-legacy/` path.