```
█████╗ ██╗██████╗ ███████╗███╗ ██╗
██╔══██╗██║██╔══██╗██╔════╝████╗ ██║
███████║██║██║ ██║█████╗ ██╔██╗ ██║
██╔══██║██║██║ ██║██╔══╝ ██║╚██╗██║
██║ ██║██║██████╔╝███████╗██║ ╚████║
╚═╝ ╚═╝╚═╝╚═════╝ ╚══════╝╚═╝ ╚═══╝
```
Autonomous AI Operating System
1,500+ Skills • 89+ Tools • 14+ Providers • AGPL-3.0
Windows • Linux • WSL • macOS (API Mode)
Self-Healing • Browser Automation • Terminal Control • Persistent Memory
Website · Contact · Discord · Download
v3.18 — live dropdown UX · real PC control · smart model failover · anti-confabulation

Type `/` for 63 commands or `@` for 61 tools with instant dropdown. Open/close apps, change volume, and control your PC for real — no more fake responses. Smart per-model failover with free-tier defaults. See changelog below.
Most AI agents answer questions. Aiden executes work.
- Runs on your machine — local-first, no telemetry, no cloud required
- Controls your desktop — vision loop, mouse, keyboard, window management
- Automates any browser — navigate, click, extract via playwright-cli
- Learns from every session — writes skills from successes, lessons from failures
- Works fully offline — Ollama support, zero cloud dependency
- One command to start — `npx aiden-os` installs, configures, runs everything
Aiden is a local-first AI operating system. It runs entirely on your machine — no cloud account required, no telemetry, no data leaving your hardware unless you configure a cloud provider. It ships with a signed Windows installer, and runs in headless API mode on Linux, WSL, and macOS. Features: 1,400+ composable skills, 80+ built-in tools, a 6-layer memory architecture, self-healing provider routing, and the ability to control your screen, browse the web, run code, send emails, manage files, and hold a full conversation — offline via Ollama.
| Platform | GUI app | API + CLI | Skills available |
|---|---|---|---|
| Windows 10/11 | ✅ signed installer | ✅ | All 1,400+ (including Windows-only skills) |
| Linux | — | ✅ headless | ~1380 (Windows-only skills auto-skipped) |
| WSL 2 | — | ✅ headless | ~1380 (Windows-only skills auto-skipped) |
| macOS | — | ✅ headless | ~1380 (Windows-only skills auto-skipped) |
Windows-only skills (clipboard history, Defender, OneNote, Outlook COM, registry, etc.) are tagged `platform: windows` and are silently skipped on other platforms at load time.
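A minimal sketch of that load-time filter (function and field names here are illustrative, not the actual loader API):

```python
import sys

def load_skills(manifests, platform=sys.platform):
    """Keep only skills whose platform tag matches the host (illustrative sketch)."""
    loaded = []
    for m in manifests:
        tag = m.get("platform")          # e.g. "windows"; absent means cross-platform
        if tag == "windows" and platform != "win32":
            continue                     # silently skip Windows-only skills
        loaded.append(m["name"])
    return loaded

skills = [
    {"name": "outlook-com", "platform": "windows"},
    {"name": "web-search"},
]
print(load_skills(skills, platform="linux"))   # → ['web-search']
```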
```shell
npx aiden-os
```
That's it. Node.js 18+ is the only prerequisite. On first run it asks which AI provider you want (Groq is free), validates your key, saves config to `~/.aiden/app/`, and starts both the server and CLI together in one terminal. Subsequent runs skip the wizard and go straight to the assistant.
Or install globally for the aiden command:
```shell
npm install -g aiden-os
aiden
```
- Node.js 18+
- Git
- Ollama (optional, for offline mode): ollama.ai
```powershell
irm aiden.taracod.com/install.ps1 | iex
```
Or download the signed installer manually. Windows 10/11, 64-bit, ~500 MB disk space.
```shell
curl -fsSL aiden.taracod.com/install.sh | bash
```
```shell
git clone https://github.com/taracodlabs/aiden.git
cd aiden
npm install
cp .env.example .env
# Edit .env — add at minimum one API key (Groq is free: console.groq.com)

# Terminal 1 — build and start server
npm run build
npm start

# Terminal 2 — start CLI
npm run cli
```
```shell
git pull
npm run build
```
```shell
npm start
```
Windows

Open Settings → Apps (or Control Panel → Programs) and uninstall Aiden. To also remove user data:
```powershell
Remove-Item -Recurse -Force "$env:APPDATA\aiden"
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\aiden"
```
Linux / macOS / WSL
```shell
curl -fsSL aiden.taracod.com/uninstall.sh | bash
```
Or manually:
```shell
rm -rf ~/.local/share/aiden ~/.config/aiden
npm uninstall -g aiden-os   # if installed via npm
```
```shell
GROQ_API_KEY=your_key_here   # free at console.groq.com/keys
```
Set AIDEN_HEADLESS=true to suppress the Electron GUI when running the packaged app.
Once Aiden is running, type these in the chat prompt:
| First thing to do | What to type |
|---|---|
| See all available commands | /help |
| Check which AI provider is active | /switch |
| See your daily token budget | /budget |
| Browse available skills | /skills |
| Install a skill from the registry | /install <skill-name> |
| Open the web UI in a browser | navigate to localhost:4200/ui |
| Check model availability | /models |
Ask anything in plain English — no special syntax needed for regular tasks:
summarize the PDF on my desktop
open chrome and search for latest AI news
close spotify
take a screenshot and describe what you see
what files did I download today
Type / to browse all 63 commands with instant search. Type @ to select any of 61 tools directly.
"Cannot find module" or TypeScript errors
```shell
npm run build   # always rebuild after git pull
```
"npm run serve" not found
There is no serve script. Use npm start instead.
Server not responding
```shell
# Check if server is running on port 4200
netstat -ano | findstr :4200   # Windows
lsof -i :4200                  # Linux/macOS
```
Ollama not connecting
```shell
ollama serve              # make sure Ollama is running
ollama pull qwen2.5:7b    # pull your chosen model
```
Changing Ollama model or inference settings (no recompile needed — edit .env):
```shell
OLLAMA_MODEL=qwen2.5:7b
OLLAMA_TEMPERATURE=0.3
OLLAMA_CONTEXT_LENGTH=4096
OLLAMA_NUM_GPU=99
```
Use with any OpenAI client (Open WebUI, Chatbox, Cursor, etc.)
Base URL: http://localhost:4200
API Key: none required
Model: aiden-3.13
Full command palette, 1,400+ skills, 89+ tools, automatic provider routing (Groq → OpenRouter → Ollama). Runs in any terminal.
Full chat interface with live activity panel. Local-first, connects to Ollama or any of 15+ cloud providers via your own API key.
6-layer memory visualized — every conversation, task, and learned pattern becomes a node in the knowledge graph. Fully local, persisted to disk, searchable.
| Category | What Aiden does |
|---|---|
| Inference & providers | Local Ollama (Llama 3, Mistral, Qwen, Gemma, Phi…) with optional cloud fallback to OpenAI, Anthropic, Groq, Cerebras, NVIDIA NIM, OpenRouter, and more — 15+ providers including custom OpenAI-compatible endpoints |
| 80+ tools | Web search, file read/write, shell execution, Playwright browser automation (open_browser, browser_click, browser_type, browser_extract, browser_get_url), screen capture & OCR, calendar, email (IMAP/SMTP), code execution sandbox, clipboard, LocalSend LAN transfer, system info |
| 1,400+ skills | Composable plugins each with a SKILL.md prompt, tool implementations, and optional sandbox runner — install per-session or globally. Includes: LocalSend (AirDrop-style LAN transfer), Decepticon security scanner (opt-in), and more |
| Subagent swarm | Spawn N parallel agents on any task; vote, merge, or pick the best result automatically |
| 6-layer memory | Episodic (in-context), BM25 keyword, vector semantic, procedural (skill), goal tracking, and LESSONS.md permanent-failure moat that grows every session |
| Voice | Speech-to-text (Groq → OpenAI → local Whisper.cpp) + text-to-speech (Edge TTS → ElevenLabs → Windows SAPI); full offline voice loop |
| Channel adapters | Discord, Slack, Telegram, WhatsApp, Email, Webhook, Twilio — any channel triggers the same agent loop |
| Computer use | Screenshots, screen state reader, GUI automation via keyboard/mouse when asked — full OS control mode |
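The subagent swarm's vote/merge step can be sketched roughly like this (a toy illustration; the real subagent API and result shapes are internal to Aiden):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def swarm(task, agents, strategy="vote"):
    """Run N agent callables in parallel and combine their results (illustrative)."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = list(pool.map(lambda agent: agent(task), agents))
    if strategy == "vote":                        # pick the most common answer
        return Counter(results).most_common(1)[0][0]
    if strategy == "merge":                       # concatenate unique answers
        return " | ".join(dict.fromkeys(results))
    return results[0]                             # "best": first result as placeholder

# Three mock agents; two agree, so "vote" picks their answer
agents = [lambda t: "42", lambda t: "42", lambda t: "41"]
print(swarm("meaning of life?", agents))   # → 42
```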
| Feature | Aiden | Hermes | OpenClaw |
|---|---|---|---|
| Windows native installer | ✅ | ❌ | ❌ |
| Desktop OS control | ✅ vision + mouse + keyboard | ❌ | ❌ |
| One-command install | ✅ `npx aiden-os` | ❌ | ❌ |
| OpenAI-compatible API | ✅ `/v1/chat/completions` | ❌ | ❌ |
| agentskills.io skills | ✅ 1500+ | ✅ | ✅ 13K+ |
| Offline (Ollama) | ✅ | ✅ | ✅ |
| Local web dashboard | ✅ | ✅ | ✅ |
| Pro license system | ✅ | ❌ | ❌ |
| Zero CVEs | ✅ | ❌ | ❌ |
| License | AGPL-3.0 | MIT | MIT |
```
User input (any channel)
       │
       ▼
┌─────────────┐
│   Planner   │ ← breaks task into steps
└──────┬──────┘
       │
       ▼
┌─────────────┐     ┌──────────────────┐
│ Agent loop  │────▶│  Tool dispatcher │──▶ 80+ tools
│  agentLoop  │     └──────────────────┘
└──────┬──────┘
       │
       ▼
┌─────────────────────────────────┐
│        Memory (6 layers)        │
│  episodic · BM25 · vector ·     │
│  procedural · goal · LESSONS.md │
└─────────────────────────────────┘
       │
       ▼
┌─────────────┐
│  Provider   │ ← self-healing chain, 15+ providers
│   router    │
└─────────────┘
       │
       ▼
Response (streamed to originating channel)
```
See ARCHITECTURE.md for a full layer-by-layer breakdown, data flow diagrams, and the skill system design.
Copy .env.example to .env in the Aiden data directory.
```shell
cp .env.example .env
```
Key environment variables:
| Variable | Default | Notes |
|---|---|---|
| `OLLAMA_HOST` | `http://127.0.0.1:11434` | Override if Ollama runs on a different host/port |
| `OLLAMA_MODEL` | `mistral-nemo:12b` | Default chat model |
| `ANTHROPIC_API_KEY` | — | Optional cloud fallback |
| `OPENAI_API_KEY` | — | Optional cloud fallback |
| `GROQ_API_KEY` | — | Free tier: fast Llama 3 inference |
| `DAILY_BUDGET_USD` | `5.00` | Hard cap on daily cloud API spend |
See .env.example for the full list of ~90 variables covering voice, messaging integrations, search, computer use, and more.
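The `DAILY_BUDGET_USD` cap behaves roughly like a daily accumulator with a hard ceiling. A hypothetical sketch (class and method names are invented for illustration, not Aiden's actual code):

```python
import datetime

class BudgetGuard:
    """Hard daily cap on cloud spend (sketch of the DAILY_BUDGET_USD behaviour)."""
    def __init__(self, daily_budget_usd=5.00):
        self.budget = daily_budget_usd
        self.spent = 0.0
        self.day = datetime.date.today()

    def charge(self, cost_usd):
        today = datetime.date.today()
        if today != self.day:                  # new day: counter resets
            self.day, self.spent = today, 0.0
        if self.spent + cost_usd > self.budget:
            raise RuntimeError("daily cloud budget exceeded — falling back to local")
        self.spent += cost_usd

guard = BudgetGuard(daily_budget_usd=0.10)
guard.charge(0.06)          # within budget
try:
    guard.charge(0.06)      # would exceed the 0.10 cap
except RuntimeError as e:
    print(e)
```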
Aiden exposes an OpenAI-compatible API at localhost:4200. Point any OpenAI client at Aiden to get the full 89-tool agent instead of raw GPT:
| Setting | Value |
|---|---|
| Base URL | http://localhost:4200 |
| API Key | (none required locally) |
| Model | aiden-3.13 |
Works with: Open WebUI · LibreChat · Chatbox · Continue.dev · Cursor · TypingMind · any app using the OpenAI SDK.
```python
# Python example — zero config
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4200", api_key="none")
response = client.chat.completions.create(
    model="aiden-3.13",
    messages=[{"role": "user", "content": "search news about AI agents"}]
)
print(response.choices[0].message.content)
```
Optional: set `AIDEN_API_KEY=your-secret` in .env to require Bearer token authentication.
Aiden includes an opt-in Docker sandbox backend that runs shell_exec and run_python tool calls inside isolated containers instead of directly on the host.
- Docker Desktop (Windows/macOS) or Docker Engine (Linux)
| `AIDEN_SANDBOX_MODE` | Behaviour |
|---|---|
| `off` (default) | Tools run on the host — no Docker required |
| `auto` | Try Docker first; silently fall back to host if Docker is unavailable |
| `strict` | Require Docker — error if Docker is not available |
```shell
# In .env
AIDEN_SANDBOX_MODE=auto
```
Or toggle live from the Aiden CLI without restarting:
```
/sandbox auto     # switch to auto mode
/sandbox strict   # require Docker
/sandbox off      # disable
/sandbox status   # show current mode + Docker availability
/sandbox build    # pre-build the container image
```
- `--network=none` — no outbound network access (configurable per-call)
- `--memory=512m --cpus=1` — hard resource caps
- `--read-only --tmpfs /tmp` — immutable FS, only `/tmp` is writable
- `--rm` — container removed immediately after each tool call
- Host `workspace/` bind-mounted at `/workspace` so results are accessible
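Assembled as a `docker run` invocation, those flags would look roughly like the following sketch (the image name and workspace path are assumptions, not values from Aiden's config):

```python
def sandbox_args(image="aiden-sandbox", workspace="/home/user/workspace"):
    """Assemble the docker run flags listed above (illustrative; image name assumed)."""
    return [
        "docker", "run", "--rm",               # remove container after each tool call
        "--network=none",                      # no outbound network access
        "--memory=512m", "--cpus=1",           # hard resource caps
        "--read-only", "--tmpfs", "/tmp",      # immutable FS, writable /tmp only
        "-v", f"{workspace}:/workspace",       # bind-mount host workspace
        image,
    ]

print(" ".join(sandbox_args()))
```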
| Command | Description |
|---|---|
| `npx aiden-os` | Install, configure, and start (recommended) |
| `npm start` | Start API server (port 4200) |
| `npm run cli` | Start TUI (connect to running server) |
| `npm run build` | Rebuild after source changes |
| `aiden --reconfigure` | Re-run setup wizard, change providers |
| Command | Description |
|---|---|
| `/help` | Show all commands |
| `/switch <provider>` | Change primary provider live |
| `/budget` | Show daily token spend + remaining |
| `/budget set <n>` | Set daily limit in USD |
| `/memory` | View distilled facts and memory stats |
| `/memory search <q>` | Search remembered facts |
| `/profile` | View structured user profile |
| `/failed [reason]` | Teach Aiden from a wrong answer |
| `/skills` | List loaded skills |
| `/install <skill>` | Install from community registry |
| `/publish <skill>` | Publish skill to registry |
| `/skills validate <n>` | Validate agentskills.io compliance |
| `/sandbox status` | Docker sandbox mode |
| `/sandbox auto` | Enable sandboxed shell/python |
| `/permissions` | View permission mode |
| `/permissions ask` | Require approval for destructive ops |
| `/permissions allow` | Allow all operations silently |
| `/retry` | Retry last query |
| `/exit` | Save memory and exit |
Both the terminal TUI and the browser dashboard (localhost:4200/ui) expose the full feature set. Use whichever fits your workflow.
| Feature | Terminal CLI | Browser (localhost:4200/ui) |
|---|---|---|
| Chat | ✅ inline prompt | ✅ chat panel |
| Streaming responses | ✅ token-by-token | ✅ live SSE |
| Markdown rendering | ✅ | ✅ |
| Slash commands | ✅ `/help`, `/switch`, `/budget`… | ✅ same commands |
| `/` command dropdown | ✅ instant, 63 commands | 🔜 v3.19 |
| `@` tool picker | ✅ instant, 61 tools | 🔜 v3.19 |
| Provider panel | `/switch` | ✅ Providers tab |
| Memory panel | `/memory` | ✅ Memory tab |
| Skills panel | `/skills` | ✅ Skills tab |
| Plugin hooks | ✅ | ✅ |
| MCP server mode | `aiden mcp` | — |
| OpenAI-compatible API | — | ✅ localhost:4200/v1 |
Contributions are welcome — see CONTRIBUTING.md for the full guide.
Quickstart:
```shell
git clone https://github.com/taracodlabs/aiden.git
cd aiden
npm install
cp .env.example .env   # add at minimum one API key (Groq is free: console.groq.com/keys)
npm run build
npm start              # server on :4200
npm run cli            # TUI in a second terminal
```
- Bug fixes and new skills are the easiest entry points
- All contributors sign the CLA once via PR comment
- Follow Conventional Commits
- Run `npx tsc --noEmit` before opening a PR
| Discord | discord.gg/gMZ3hUnQTm — chat, support, share what you build |
| Skills registry | agentskills.io — 1,500+ community skills |
| Bug reports & features | github.com/taracodlabs/aiden/issues |
| Star the repo | github.com/taracodlabs/aiden ⭐ |
| npm | npm install -g aiden-os |
| Sponsor | github.com/sponsors/taracodlabs |
| Document | Description |
|---|---|
| ARCHITECTURE.md | Layer-by-layer breakdown, data flow diagrams, skill system design |
| CONTRIBUTING.md | How to contribute — skills, tools, providers, docs |
| docs/ROADMAP.md | Planned features and milestone tracker |
| docs/mcp/ | MCP server setup — Claude Code, Cursor, VS Code integration |
| .env.example | All ~90 environment variables with descriptions |
| workspace-templates/ | Starter workspace configs and example plugins |
| Download installer | github.com/taracodlabs/aiden-releases/releases/latest |
| Releases & changelog | github.com/taracodlabs/aiden-releases |
| License | AGPL-3.0 core · Apache-2.0 skills |
Detailed guides coming in v3.19. Short version:
- Skills — Aiden is fully compatible with agentskills.io. Any Hermes or OpenClaw skill with a valid `skill.json` manifest loads automatically via `/install <name>`.
- API clients — Aiden exposes an OpenAI-compatible API at `localhost:4200/v1`. If you pointed your client at another agent, update the base URL and you're done.
- Config / env — Most standard keys (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GROQ_API_KEY`, etc.) are recognized as-is. Copy your existing `.env` and Aiden picks them up on first start.
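For reference, a minimal manifest might look like the sketch below. These field names are an assumption based on common agentskills.io-style manifests, not a confirmed schema; consult the skill development guide for the actual format:

```json
{
  "name": "hello-skill",
  "version": "1.0.0",
  "description": "Example skill manifest (hypothetical field names)"
}
```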
Live dropdown UX (Hermes-style)
- Type `/` for instant command dropdown (63 commands)
- Type `@` for tool dropdown (61 tools)
- Prefix-match filter, arrow nav, Tab to select, Esc to close
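The prefix-match behaviour can be sketched in a few lines (illustrative only; the real dropdown lives in the TUI):

```python
def dropdown(query, items):
    """Prefix-match filter for the / and @ dropdowns (illustrative sketch)."""
    q = query.lstrip("/@").lower()            # strip the trigger character
    return [item for item in items if item.lower().startswith(q)]

commands = ["help", "switch", "skills", "sandbox", "budget"]
print(dropdown("/s", commands))   # → ['switch', 'skills', 'sandbox']
```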
Real PC control
- `close chrome/spotify/notepad` → actually closes via taskkill
- `increase/decrease volume by N` → actually changes the volume
- `mute/unmute` → actually toggles mute
- 30+ app name → exe map
- `system_volume` detects intent from any natural input
YouTube auto-plays
- `play X on youtube` → opens browser, auto-clicks first result
- Bypasses the fast-path that was blocking it
Anti-confabulation rules
- SOUL.md updated: never claim actions completed without tool calls
- InstantAction shortcuts that faked actions removed
- Honest fallback messages when providers fail
Smart provider failover
- 3-strike rule: provider disabled for 15 min after 3 rate limits
- Permanent disable on 401/403 (invalid key)
- All cloud failed → automatic Ollama fallback
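The 3-strike rule amounts to simple per-provider bookkeeping. A sketch under stated assumptions (class, method names, and status-code mapping are illustrative, not Aiden's actual implementation):

```python
import time

class ProviderHealth:
    """3-strike failover sketch: 3 rate limits → 15-min cooldown; 401/403 → permanent."""
    COOLDOWN = 15 * 60

    def __init__(self):
        self.strikes = {}
        self.disabled_until = {}               # provider → unix time (inf = permanent)

    def record(self, provider, status):
        if status in (401, 403):               # invalid key: disable permanently
            self.disabled_until[provider] = float("inf")
        elif status == 429:                    # rate limited: count a strike
            self.strikes[provider] = self.strikes.get(provider, 0) + 1
            if self.strikes[provider] >= 3:
                self.disabled_until[provider] = time.time() + self.COOLDOWN
                self.strikes[provider] = 0

    def pick(self, chain):
        for p in chain:                        # first healthy provider in the chain
            if time.time() >= self.disabled_until.get(p, 0):
                return p
        return "ollama"                        # all cloud failed: local fallback

h = ProviderHealth()
for _ in range(3):
    h.record("groq", 429)                      # three rate limits in a row
print(h.pick(["groq", "openrouter"]))          # → openrouter
```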
Smart model selection
- Free tier defaults per provider (Llama 70B free, Gemini 2.5 Flash, etc.)
- Per-model failover within provider before marking provider rate-limited
- Override with `PROVIDER_MODEL` env var
- `/models` command shows per-provider table with FREE/PAID badges
Server logs no longer leak into chat
- `console.log` redirected to stderr
- CLI output is clean even with both in same terminal
Skill loader fix
- 1,484 skills now load (was blocking 1,445 due to overly broad patterns)
Known issues — fixing in v3.19
- Cross-provider failover not always reliable (Groq may not try other providers)
- Real-time state queries (now playing, open tabs) need dedicated tools
- YouTube auto-click occasionally fails on slow-loading pages
Local web dashboard
- Browser UI at localhost:4200/ui — no terminal needed
- Chat, Providers, Memory, Skills panels
- Live SSE streaming, markdown rendering
Plugin system
- Drop `workspace/plugins/*.js` → auto-loads
- preTool/postTool hooks, custom tool registration
- Session lifecycle hooks (onSessionStart/onSessionEnd)
- Hot-reload with /plugins reload
- Examples: audit-log.js, hello-tool.js
MCP server mode
- Expose Aiden's tools to Claude Code, Cursor, VS Code
- Run: node dist-bundle/cli.js mcp
- 28 safe tools exposed by default
- MCP_ALLOW_DESTRUCTIVE=true for full tool access
- Config examples in docs/mcp/
Bug fixes
- Dashboard chat showed "(no response)" for every message — SSE event field names in the browser client (`ev.type`) didn't match the server's wire format (`ev.token`, `ev.done`, `ev.tool`). All event handlers rewritten to match actual shapes.
- SOUL.md provider honesty: removed stale BayOfAssets reference, added explicit rule against claiming Ollama when running on Groq/OpenRouter.
One-command install
- `npx aiden-os` — zero-install launcher; works on Windows, macOS, Linux (Node.js 18+)
- `aiden-os` npm package bootstraps `aiden-runtime` automatically, no git clone needed
- Setup wizard on first run with `--reconfigure` flag to re-run anytime
Security
- Shell blocklist — dangerous commands flagged before execution
- Permission mode — explicit user approval gate for destructive actions
- Token budget enforcement — per-request ceiling to prevent runaway loops
Memory
- Conflict resolution — contradictory memories detected and reconciled automatically
- `/memory` command — inspect, edit, and prune the memory store from the CLI
UX
- Aiden branded banner replaces DevOS; orange `#FF6B35` identity throughout CLI
- `--reconfigure` flag to re-run first-time setup without reinstalling
Browser & Automation
- Centralised Playwright session (`core/playwrightBridge.ts`) — single persistent Chromium context shared across all browser tools, idle auto-close after 5 min, clean shutdown on SIGINT/SIGTERM
- `browser_get_url` — new tool to read the URL currently loaded in the browser
- All browser tools now in `ALLOWED_TOOLS` and `NO_RETRY_TOOLS`; `send_file_local`/`receive_file_local` added to planner allow-list
Community & OSS
- `CONTRIBUTING.md`, issue templates (bug, feature, skill submission), CLA workflow
- Public roadmap (`docs/ROADMAP.md`), architecture docs (`docs/ARCHITECTURE.md`), skill development guide
- GitHub labels automated + 5 good-first-issues pinned
New skills
- LocalSend — AirDrop-style LAN file transfer (`send_file_local`/`receive_file_local`); works over WiFi with no cloud
- Security scanner — opt-in Decepticon integration with safety guards for scanning your own servers
Security
- 9 npm audit vulnerabilities resolved (safe + vitest chain)
- Security headers + rate limiting on `aiden.taracod.com` landing worker (CSP, HSTS, X-Frame-Options)
Ecosystem & Interoperability
- OpenAI-compatible API — `/v1/chat/completions` + `/v1/models`. Point Open WebUI, LibreChat, Cursor, or any OpenAI SDK at `localhost:4200` and get Aiden's full 89-tool agent (not just raw LLM inference)
- agentskills.io compatibility — skills now ship with a `skill.json` manifest. Compatible with Hermes, OpenClaw, and any agentskills.io agent. 1,515 existing skills backfilled automatically
- Streaming tool output — shell commands, Python scripts, and browser extraction stream live progress lines as they execute. Set `AIDEN_SHOW_TOOL_OUTPUT=false` to suppress
Community & Intelligence
- Public skill registry — `/install <skill>` pulls from skills.taracod.com; browse with `/skills registry <query>`; publish with `/publish <skill>`
- Deep GEPA — learns from failures, not just successes; `/failed` analyzes the exchange trace, writes a permanent lesson to `LESSONS.md`, and degrades the responsible skill's confidence; skills failing 3× are auto-deprecated
- Honcho user modeling — structured cross-session profile (identity, projects, goals, preferences); only the relevant slice injected per query; view and edit with `/profile`
- Docker sandbox — opt-in sandboxed `shell_exec` and `run_python` execution; `AIDEN_SANDBOX_MODE=auto|strict|off`; containers run `--network=none --memory=512m --cpus=1 --read-only`
- GitHub CI/CD — TypeScript type-check + full build + secret scan on every PR to main
- CODEOWNERS — sensitive files auto-request maintainer review on every PR
- Sponsor button — Razorpay + GitHub Sponsors
Memory & Agents
- Post-task skill writer (GEPA-lite) — writes a new skill after every multi-step success
- Session-end memory distillation — 5–15 durable facts extracted per session
- Progressive token budget — tool names only; schema loaded on demand
- Real parallel subagents — isolated context, LLM synthesis pass
- Streaming verbs — "Pondering…", "Hunting…" in real time
- Real scheduler — `remind me in N minutes` actually waits
- Path C-lite — YouTube/Google/DDG/Bing search + click first result
- Electron auto-updater
- Identity honesty — transparent about inference provider
- Capacity fallback — auto-switches provider on 503/rate-limit
Custom provider routing
- Full support for custom OpenAI-compatible endpoints via `customProviders` in `devos.config.json` — add any endpoint with a `baseUrl`, `apiKey`, and `model`; no code changes required
- Fixed silent Groq fallback bug in `callLLM`: custom providers now correctly route to their configured `baseUrl` instead of falling back to the Groq URL
- Fixed `raceProviders` pin-first logic: `primaryProvider` is now resolved from the `customProviders` list when not found in `providers.apis`
- Fixed health/status endpoint (`/api/providers`) to include custom providers in the returned list, tier-sorted
BayOfAssets Claude Haiku 4.5 as default primary
- Swapped default primary provider to BayOfAssets Claude Haiku 4.5 (`claude-haiku-4-5`) at tier 1
- Groq and Gemini remain as tier-2 fallback chain
Memory & greeting
- Fixed `buildGreetingPreamble` double-label bug: `"Active goals: Active goals:\n..."` → compact single-line goal titles
- Added empty-string guard on greeting reply: blank preamble no longer produces `"Currently tracking: . What do you need?"`
See releases page for older changelogs.
Aiden is built and maintained by one person. If it saves you time, consider sponsoring:
| Component | License |
|---|---|
Core (src/, cli/, api/, core/, providers/, dashboard-next/) |
AGPL-3.0-only |
Skills (skills/) |
Apache-2.0 |
Aiden's core is AGPL-3.0. You can self-host, modify, and study it freely. Embedding it in a commercial product or offering it as a hosted service requires either releasing your modifications under AGPL-3.0 or purchasing a commercial license.
Skills in skills/ are Apache-2.0 and can be used in commercial products without copyleft obligations.
For commercial licensing and enterprise deployments: aiden.taracod.com/contact?type=enterprise
Built by Taracod · Built by Shiva Deore · AGPL-3.0


