DenoClaw

License: MIT

DenoClaw is a Deno-native runtime for brokered multi-agent workflows. It runs the control plane on Deno Deploy, agent runtimes as dedicated Deno Deploy apps, and arbitrary code execution in Deno Sandbox.

The architecture is intentionally split into three layers:

Deploy : Broker App (Deno Deploy) → Agent Apps (Deno Deploy) → Sandbox
Local  : Process                   → Workers                  → Subprocess (`Deno.Command`)

That split is the core of the system, not an implementation detail:

  • Broker is the control plane. It owns routing, auth, LLM proxying, cron, tunnel coordination, and durable task state.
  • Agent apps run the agent runtime. Agents are reactive HTTP services with per-agent KV-backed state, not long-running daemons.
  • Sandbox executes arbitrary code and tool calls under an isolated runtime boundary with explicit permissions.

The local runtime mirrors the same model with a main process, one Worker per agent, and one subprocess per tool execution. The goal is the same semantics in dev and deploy, with transport and isolation swapped to the local equivalents.

Why this architecture

Deno Deploy for the Broker

The Broker needs to stay awake, own public ingress, schedule cron jobs, hold durable coordination state, and mediate all agent-to-agent traffic. Deno Deploy fits that control-plane role well: HTTP-native, KV-native, and always-on.

Deploy agent apps for agent runtimes

Agents are a good match for dedicated Deno Deploy apps because they are request-driven, can benefit from warm isolate reuse, and need bound state, but they should not own schedulers or durable message transport. DenoClaw treats agent apps as reactive endpoints, not daemon processes.
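The "reactive endpoint with KV-backed state" shape can be sketched like this. The `Store` interface, message shape, and key layout are assumptions made for the example, not DenoClaw's schema; storage is abstracted so the handler stays testable.

```typescript
// Hypothetical agent-app sketch: a request-driven handler that loads
// per-agent state, reacts, and persists — no daemon, no scheduler.

interface Store {
  get(key: string[]): Promise<unknown>;
  set(key: string[], value: unknown): Promise<void>;
}

function makeHandler(agentId: string, store: Store) {
  return async (req: Request): Promise<Response> => {
    const { text } = await req.json();
    const count = ((await store.get(["agent", agentId, "turns"])) as number) ?? 0;
    await store.set(["agent", agentId, "turns"], count + 1);
    return Response.json({ agent: agentId, turn: count + 1, echo: text });
  };
}

// In deploy, the Store would be backed by Deno KV:
//   const kv = await Deno.openKv();
//   Deno.serve(makeHandler("alice", {
//     get: async (k) => (await kv.get(k)).value,
//     set: async (k, v) => { await kv.set(k, v); },
//   }));
```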

Sandbox for code execution

Tool execution and arbitrary code are isolated from both the Broker and the agent runtime. This keeps the control plane clean, keeps agent logic portable, and makes the security boundary explicit. Sandbox permissions are derived from tool requirements intersected with agent policy.
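The "tool requirements intersected with agent policy" rule can be expressed as a pure set intersection. The `Permissions` shape and function name below are illustrative, not DenoClaw's actual types.

```typescript
// Hypothetical sketch: a sandbox grant is whatever the tool requests
// AND the agent policy allows — never more than either side.

interface Permissions {
  net: string[];   // allowed hosts
  read: string[];  // allowed paths
  env: string[];   // allowed environment variables
}

function intersectPermissions(tool: Permissions, policy: Permissions): Permissions {
  const keep = (a: string[], b: string[]) => a.filter((x) => b.includes(x));
  return {
    net: keep(tool.net, policy.net),
    read: keep(tool.read, policy.read),
    env: keep(tool.env, policy.env),
  };
}

// A tool asking for more than the agent policy allows gets clipped:
const granted = intersectPermissions(
  { net: ["api.example.com", "internal.host"], read: ["/tmp"], env: ["HOME"] },
  { net: ["api.example.com"], read: ["/tmp"], env: [] },
);
// granted.net is ["api.example.com"]; granted.env is []
```

Intersection (rather than union or tool-wins) means a misconfigured tool manifest can never widen the boundary the agent policy set.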

Why Deno is a strong fit

DenoClaw uses the same runtime family across local and deploy paths:

Need                 Deno primitive
-------------------  ---------------------------------
HTTP server          Deno.serve()
Durable state        Deno.openKv()
Scheduled work       Deno.cron() (Broker/local only)
Local agent runtime  new Worker()
Local code exec      Deno.Command
Cloud code exec      Deno Sandbox
Network I/O          fetch() + Deno.upgradeWebSocket()
Observability        Built-in OpenTelemetry support
Tests                Deno.test()

That matters because DenoClaw needs no Node.js compatibility layer, no separate deploy runtime, and no second language or runtime for orchestration. The same TypeScript codebase spans CLI, local workers, broker deploy, agent deploy apps, and Sandbox-oriented execution paths.

Core architectural advantages

  • One mental model across local and deploy. Broker / Agent / Execution maps cleanly to Process / Worker / Subprocess in local mode.
  • Clear isolation boundaries. Control plane, agent runtime, and code execution are separate concerns.
  • Centralized security and observability. LLM calls, tool requests, A2A routing, traces, and auth flow through the Broker.
  • Reactive agents instead of hidden daemons. The system does not depend on unsupported deployed-agent-runtime patterns such as Deno.cron() or kv.listenQueue() inside agent apps.
  • Deno-native end to end. No Node.js dependency chain and no split runtime model.

Quickstart

# Install Deno 2.7+
curl -fsSL https://deno.land/install.sh | sh

# Copy environment template and fill in your keys
cp .env.example .env

# Guided setup
denoclaw init

# Run the local runtime without watch
deno task start

# Run the local runtime with watch
deno task dev

# Interactive agent session
denoclaw dev --agent alice

# Send a one-off message
denoclaw dev --agent alice -m "Hello DenoClaw"

Postgres analytics (optional)

KV handles real-time metrics out of the box. For historical analytics and the full dashboard experience, add Postgres:

docker-compose up -d                # Local Postgres 17
deno task db:generate               # Generate Prisma client
deno task db:push                   # Push schema to local DB

Set DATABASE_URL in .env (see .env.example). Without it, analytics is a no-op and everything runs on KV alone.
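The "no-op without DATABASE_URL" pattern can be sketched as a factory that degrades gracefully. The `Analytics` interface and `makeAnalytics` name are assumptions for illustration, not DenoClaw's actual module.

```typescript
// Hypothetical sketch of optional Postgres analytics: without a
// DATABASE_URL, every call is a cheap no-op and KV carries everything.

interface Analytics {
  enabled: boolean;
  record(event: string): Promise<void>;
}

function makeAnalytics(databaseUrl: string | undefined): Analytics {
  if (!databaseUrl) {
    // No Postgres configured: analytics silently does nothing.
    return { enabled: false, record: async () => {} };
  }
  return {
    enabled: true,
    record: async (event) => {
      // A real implementation would write to Postgres (e.g. via Prisma) here.
      void event;
    },
  };
}

// const analytics = makeAnalytics(Deno.env.get("DATABASE_URL"));
```

Callers invoke `analytics.record(...)` unconditionally; the configuration decision is made once, at construction.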

Deploy

A single command provisions everything and deploys:

deno task deploy

This automatically:

  1. Creates the broker app on Deno Deploy (if needed)
  2. Provisions and assigns a shared KV database
  3. Provisions and assigns a Prisma Postgres database
  4. Sets environment variables (API tokens, secrets)
  5. Deploys the code to production

Naming is derived automatically (denoclaw-broker, denoclaw-broker-kv, denoclaw-broker-db). No manual configuration needed.

To publish agent definitions after deploy:

deno task publish           # All agents
deno task publish alice     # Single agent

For the operator workflow and current deploy status, see docs/users/setup-broker-and-agent-deploy.md.

Development

deno task dev       # Backend with watch
deno task start     # Backend without watch
deno task dashboard:dev     # Standalone dashboard with Vite dev server
deno task dashboard:preview # Preview the built dashboard locally
deno task test      # Source test suite (default)
deno task test:e2e  # End-to-end tests
deno task test:all  # Source + tests/ suites
deno task check     # Type-check
deno task lint      # Lint
deno task fmt       # Format

deno task dashboard:dev runs the UI on http://127.0.0.1:3001 with the manual login flow, so it can connect directly to a remote broker without a local runtime. Enter the broker URL and optional bearer token on /login.

Project Docs

Repository layout

src/
├── agent/          # Agent loop, runtime, memory, skills, tools
├── llm/            # LLM providers and routing
├── messaging/      # Channels, sessions, A2A protocol
├── orchestration/  # Broker, gateway, auth, relay, transports
├── cli/            # CLI entrypoints and setup flows
├── config/         # Config loading and validation
├── shared/         # Shared errors, helpers, logging, types
└── telemetry/      # Metrics and tracing
docs/
├── architecture-distributed.md
├── adr-001-*.md → adr-014-*.md
└── plans/

See docs/architecture-distributed.md, the ADRs in docs/, and CLAUDE.md for the project conventions and architectural decisions.

License

MIT

