The Autonomous Edge Intelligence System — built in Go, runs anywhere.
```
╔══════════════════════════════════════════════════════════════╗
║     You → AssistClaw → Any LLM → Tools + Skills + Memory     ║
║                              ↑                               ║
║        WhatsApp · Telegram · Discord · Slack · Web UI        ║
╚══════════════════════════════════════════════════════════════╝
```
Most AI agents are either too simple or too heavy. AssistClaw hits the sweet spot:
| | AssistClaw | Typical Python Agent |
|---|---|---|
| Startup time | ~50ms | 2–5s |
| Memory footprint | ~40 MB | 400–1500 MB |
| LLM providers | 15+ | 1–3 |
| Runs on Raspberry Pi | ✅ | ❌ |
| Token optimization | Graph-first (~66% savings) | None |
| Built-in security | Guardrail + Audit log | None |
| Self-hostable | ✅ | ✅ |
```
                  ┌─────────────────────────────────┐
                  │            AssistClaw           │
                  │                                 │
Channels ──────►  │  ┌──────────┐  ┌─────────────┐  │
  WhatsApp        │  │  Runner  │  │  Security   │  │
  Telegram        │  │  (agent  │  │  Guardrail  │  │
  Discord         │  │   loop)  │  │  Audit Log  │  │
  Slack           │  └────┬─────┘  └─────────────┘  │
  Web/REST/WS     │       │                         │
                  │  ┌────▼──────────────────────┐  │
                  │  │        Tool Graph         │  │
                  │  │ bash · web · files · MCP  │  │
                  │  └────┬──────────────────────┘  │
                  │       │                         │
                  │  ┌────▼──────────────────────┐  │
                  │  │       3-Tier Memory       │  │
                  │  │ Working│Episodic│Semantic │  │
                  │  └───────────────────────────┘  │
                  └────────┬────────────────────────┘
                           │
            ┌──────────────▼──────────────┐
            │      Any LLM Provider       │
            │ OpenAI · Anthropic · Ollama │
            │ Bedrock · Groq · Mistral …  │
            └─────────────────────────────┘
```
No vector databases, no cloud services. Everything local.
| Tier | Storage | What for |
|---|---|---|
| Working | In-RAM | Active conversation context |
| Episodic | SQLite FTS5 | Full-text search across all sessions |
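As a rough sketch of how the first two tiers cooperate (illustrative Go, not AssistClaw's actual API), a bounded working tier can evict old turns into an episodic store that remains searchable:

```go
package main

import (
	"fmt"
	"strings"
)

// Memory holds a bounded working tier; overflow is archived to the
// episodic tier, which stays searchable across sessions.
type Memory struct {
	working  []string // active conversation context (RAM)
	episodic []string // archived turns (stand-in for SQLite FTS5)
	capacity int
}

func (m *Memory) Add(turn string) {
	m.working = append(m.working, turn)
	if len(m.working) > m.capacity {
		// Evict the oldest turn from working memory into episodic storage.
		m.episodic = append(m.episodic, m.working[0])
		m.working = m.working[1:]
	}
}

// Search scans the episodic tier; a real FTS5 index would rank matches.
func (m *Memory) Search(query string) []string {
	var hits []string
	for _, turn := range m.episodic {
		if strings.Contains(strings.ToLower(turn), strings.ToLower(query)) {
			hits = append(hits, turn)
		}
	}
	return hits
}

func main() {
	m := &Memory{capacity: 2}
	m.Add("user: deploy the web server")
	m.Add("user: check disk usage")
	m.Add("user: restart nginx")
	// The evicted first turn is found in the episodic tier.
	fmt.Println(m.Search("deploy"))
}
```

In AssistClaw the episodic tier is backed by SQLite FTS5 rather than a linear scan, so search stays fast across all sessions.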
Skills aren't flat files — they're lazy-loaded graphs. The agent reads only the nodes it needs.
```
coding/
├── INDEX          ← agent reads this first (50 tokens)
├── python.md      ← loaded only if query is Python
├── debugging.md   ← loaded only if agent needs debug help
└── testing.md     ← loaded only if tests are mentioned
```
Traditional skills: send all skill content every turn.
AssistClaw: send the index → agent traverses only what it needs. ~66% token reduction.
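A minimal sketch of the idea in Go (file names follow the tree above; the keyword matching is illustrative, not AssistClaw's actual traversal logic):

```go
package main

import (
	"fmt"
	"strings"
)

// skillNodes maps node names to content; on disk these would be the
// markdown files inside a skill directory.
var skillNodes = map[string]string{
	"python.md":    "Python conventions ...",
	"debugging.md": "Debugging playbook ...",
	"testing.md":   "Testing guide ...",
}

// index maps trigger keywords to nodes, mirroring the ~50-token INDEX
// the agent always reads first.
var index = map[string]string{
	"python": "python.md",
	"debug":  "debugging.md",
	"test":   "testing.md",
}

// loadSkills returns only the nodes whose triggers appear in the query,
// instead of concatenating every skill file into the prompt.
func loadSkills(query string) []string {
	var loaded []string
	q := strings.ToLower(query)
	for trigger, node := range index {
		if strings.Contains(q, trigger) {
			loaded = append(loaded, node)
		}
	}
	return loaded
}

func main() {
	// Only debugging.md and python.md are pulled into context.
	fmt.Println(loadSkills("help me debug this Python script"))
}
```

The point of the INDEX is that only trigger metadata is always in context; full node content enters the prompt on demand.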
Production-grade runtime protection — no configuration needed to get started.
- Guardrail: pre/post/tool-call checks for prompt injection, PII leakage, dangerous bash commands
- Audit Log: every tool call + skill read logged with HMAC hash chain (tamper-evident)
`assistclaw security verify` detects exactly which log entry was tampered with.
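The hash-chain idea can be sketched in a few lines of Go (key handling and entry format are simplified; AssistClaw's real log records more metadata per event):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// mac chains each entry to its predecessor: the MAC covers the previous
// MAC plus the entry body, so editing any entry breaks every later link.
func mac(key, prev, entry []byte) []byte {
	h := hmac.New(sha256.New, key)
	h.Write(prev)
	h.Write(entry)
	return h.Sum(nil)
}

// chain computes the MAC for every entry in order.
func chain(key []byte, entries []string) [][]byte {
	macs := make([][]byte, len(entries))
	prev := []byte{}
	for i, e := range entries {
		macs[i] = mac(key, prev, []byte(e))
		prev = macs[i]
	}
	return macs
}

// verify recomputes the chain and returns the index of the first entry
// whose stored MAC no longer matches, or -1 if the log is intact.
func verify(key []byte, entries []string, macs [][]byte) int {
	prev := []byte{}
	for i, e := range entries {
		if !hmac.Equal(mac(key, prev, []byte(e)), macs[i]) {
			return i
		}
		prev = macs[i]
	}
	return -1
}

func main() {
	key := []byte("audit-key")
	entries := []string{"tool:bash ls", "skill:read python.md", "tool:web fetch"}
	macs := chain(key, entries)
	entries[1] = "skill:read secrets.md" // tamper with one entry
	fmt.Println(verify(key, entries, macs)) // prints 1: the altered entry
}
```

Because each MAC covers the previous one, changing a single byte anywhere invalidates that entry and every later link, which is what makes the log tamper-evident.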
```yaml
security:
  mode: enforce   # monitor | enforce | strict
  pii_mask: true  # [REDACTED:email] in logs
```

Auto-routes each prompt to the right model by complexity.
```
Simple  "what's 2+2?"   → gpt-4o-mini  (fast, cheap)
Complex code review     → claude-opus  (powerful)
```
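A toy version of the routing decision (the keyword/length heuristic is illustrative; the real Plano proxy scores prompts more carefully):

```go
package main

import (
	"fmt"
	"strings"
)

// routeModel sends short prompts without code-ish keywords to a cheap
// model and everything else to a strong one. The heuristic and model
// names here are examples only.
func routeModel(prompt string) string {
	p := strings.ToLower(prompt)
	for _, kw := range []string{"code", "review", "debug", "refactor", "prove"} {
		if strings.Contains(p, kw) {
			return "anthropic/claude-opus-4"
		}
	}
	if len(strings.Fields(p)) <= 12 {
		return "openai/gpt-4o-mini"
	}
	return "anthropic/claude-opus-4"
}

func main() {
	fmt.Println(routeModel("what's 2+2?"))                // cheap model
	fmt.Println(routeModel("review this code for races")) // strong model
}
```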
Works as both an MCP server (expose your agent to Claude Desktop / Cursor) and a client (consume external MCP servers as skill nodes).
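MCP messages travel as JSON-RPC 2.0; a client asking a server for its tool index looks roughly like this (minimal sketch, request envelope only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rpcRequest is the JSON-RPC 2.0 envelope MCP messages are wrapped in.
type rpcRequest struct {
	JSONRPC string `json:"jsonrpc"`
	ID      int    `json:"id"`
	Method  string `json:"method"`
	Params  any    `json:"params,omitempty"`
}

// encode marshals a request to the one-object-per-message wire form.
func encode(r rpcRequest) (string, error) {
	b, err := json.Marshal(r)
	return string(b), err
}

func main() {
	// A client asking an MCP server to enumerate its tools.
	out, _ := encode(rpcRequest{JSONRPC: "2.0", ID: 1, Method: "tools/list"})
	fmt.Println(out) // {"jsonrpc":"2.0","id":1,"method":"tools/list"}
}
```

`assistclaw mcp serve` speaks this protocol over stdio; external servers registered as clients surface to the agent as skill nodes.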
| Channel | Notes |
|---|---|
| WhatsApp | Multi-device, no QR scanning |
| Telegram | Bot API |
| Discord | Bot |
| Slack | App |
| REST + WebSocket | Self-hosted gateway |
| Web UI | Built-in |
C++ bridge for Camera (OpenCV) and Audio (PortAudio) — runs natively on Raspberry Pi 5.
One-liner (Linux / macOS):

```bash
curl -fsSL https://raw.githubusercontent.com/hridesh-net/AssistClaw/main/install.sh | bash
```

Or build from source:

```bash
git clone https://github.com/hridesh-net/AssistClaw.git
cd AssistClaw && make build
```

Uninstall anytime:

```bash
curl -fsSL https://raw.githubusercontent.com/hridesh-net/AssistClaw/main/uninstall.sh | bash
```

```bash
assistclaw onboard
```

Interactive wizard — picks your LLM provider, configures channels, and sets up Plano routing if you want it.
```bash
# Interactive REPL
assistclaw agent

# Background daemon
assistclaw start --daemon

# Single message
assistclaw agent --message "Summarize this repo"
```

Location: `~/.assistclaw/assistclaw.yaml`
```yaml
# ─── LLM Provider ────────────────────────────────────────────
providers:
  anthropic:
    api_key: "sk-ant-..."
    default_model: "claude-3-5-haiku-20241022"
  # Or OpenAI, Ollama, Bedrock, Groq, Mistral, DeepSeek ...

# ─── Smart Routing (optional) ────────────────────────────────
plano:
  enabled: true
  endpoint: "http://localhost:12000/v1"
  preferences:
    - description: "Simple queries"
      prefer_model: "openai/gpt-4o-mini"
    - description: "Complex code/reasoning"
      prefer_model: "anthropic/claude-opus-4"

# ─── Security ────────────────────────────────────────────────
security:
  mode: enforce  # monitor | enforce | strict
  pii_mask: true

# ─── MCP (optional) ──────────────────────────────────────────
mcp:
  server:
    enabled: true
    transport: stdio
  clients:
    - name: filesystem
      command: "npx @modelcontextprotocol/server-filesystem /home"

# ─── Messaging Channels (optional) ──────────────────────────
channels:
  telegram:
    bot_token: "..."
  discord:
    bot_token: "..."
```

Core
| Command | What it does |
|---|---|
| `assistclaw onboard` | Interactive setup wizard |
| `assistclaw agent` | Start REPL session |
| `assistclaw agent --message "..."` | Single-shot message |
| `assistclaw start --daemon` | Launch as background service |
| `assistclaw stop` | Stop background service |
| `assistclaw status` | Show PID, uptime, connected channels |
| `assistclaw restart` | Restart service |
Skills

| Command | What it does |
|---|---|
| `assistclaw skills list` | Show installed skills |
| `assistclaw skills install <name>` | Install a skill |
| `assistclaw skills remove <name>` | Remove a skill |
MCP

| Command | What it does |
|---|---|
| `assistclaw mcp serve` | MCP server over stdio (for Claude Desktop / Cursor) |
| `assistclaw mcp serve --transport http` | MCP server over HTTP-SSE (port 5173) |
| `assistclaw mcp add --name n --command cmd` | Register external MCP server |
| `assistclaw mcp list-tools` | Compact tool index |
| `assistclaw mcp status` | Server + client status |
Claude Desktop / Cursor config:

```json
{
  "mcpServers": {
    "assistclaw": {
      "command": "assistclaw",
      "args": ["mcp", "serve"]
    }
  }
}
```

Security
| Command | What it does |
|---|---|
| `assistclaw security status` | Guardrail mode, log size, event count |
| `assistclaw security verify` | Verify HMAC chain — detects any tampering |
| `assistclaw security report` | Events by type, tool, skill, and actor |
| `assistclaw security tail` | Live audit event stream |
Providers & Memory

| Command | What it does |
|---|---|
| `assistclaw providers list` | List LLM providers + available models |
| `assistclaw memory search <query>` | Search conversation history |
| `assistclaw tools list` | List all agent tools |
| | OpenClaw | NanoClaw | ZeroClaw | AssistClaw |
|---|---|---|---|---|
| Language | Python / TypeScript | TypeScript | Rust | Go |
| Footprint | High (>1 GB) | Minimal | Ultra-light (<5 MB) | Light (~40 MB) |
| Providers | Managed | Anthropic mostly | 22+ providers | 15+ providers |
| Smart Routing | ❌ | ❌ | ❌ | ✅ Plano proxy |
| MCP | ❌ | ❌ | ❌ | ✅ Server + Client |
| Security | Known vulnerabilities | ✅ Container isolation | ✅ Strict allowlists | ✅ Guardrail + Audit Log |
| Hardware | Basic | ❌ | ❌ | ✅ C++ Sensing Bridge |
| Channels | Limited | ❌ | ❌ | ✅ WA/TG/Discord/Slack |
| Raspberry Pi | ❌ (Too heavy) | ✅ | ✅ | ✅ (Native ARM64) |
| Doc | Description |
|---|---|
| ASSISTCLAW.md | Full feature reference and architecture deep dive |
| CHANGELOG.md | What's new in each release |
| CONTRIBUTING.md | Developer guide and architecture overview |
| doc/ | Additional docs, assets, and guides |
Made with contrib.rocks.
Licensed under the MIT License. See LICENSE for details.
AssistClaw — The Autonomous Edge Intelligence System
Built for the edge. Ready for production. Open forever.
