
# Eigenself

A self-modifying AI agent that thinks, learns, and improves itself — modeled on how the human brain works.

Voice, text, and Telegram channels all go through a single Claude brain, surrounded by unconscious processes that handle security, context management, and self-improvement. The agent can modify its own modules, build new tools, and get more efficient over time.

## How It's Different

If you've used OpenClaw, Eigenself will feel familiar, but the architecture differs:

| | OpenClaw | Eigenself |
|---|---|---|
| Brain | Routes to different models per task | Single Claude brain for all channels |
| Identity | SOUL.md — static personality file | SOUL.md + unconscious processes that enforce boundaries architecturally |
| Safety | Trust the model to follow instructions | Separate unconscious processes the model can't bypass or influence |
| Skills | Prompt-based (instructions in markdown) | Code-based tools (actual execution) + OpenClaw compatibility layer |
| Memory | File-based persistence | Per-user memories, journal, self-knowledge — designed for future semantic search |
| Self-improvement | Manual | Automatic: journal tracks changes, kernel auto-tests, dreaming optimises during idle time |
| Context efficiency | All tools/skills loaded every turn | Tool selector picks only relevant tools per turn |
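To illustrate the last row, per-turn tool selection can be sketched like this. This is a hypothetical simplification, not Eigenself's actual selector — the `keywords` field and the matching rule are assumptions for the example:

```javascript
// Hypothetical sketch of per-turn tool selection, not the actual selector.
// Only tools whose keywords appear in the message are passed to the brain.
function selectTools(registry, message) {
  const words = new Set(message.toLowerCase().split(/\W+/));
  return registry.filter(tool => tool.keywords.some(k => words.has(k)));
}

const registry = [
  { name: 'read_file',  keywords: ['read', 'file', 'open'] },
  { name: 'web_search', keywords: ['search', 'web', 'find'] },
  { name: 'send_email', keywords: ['email', 'mail', 'send'] },
];

selectTools(registry, 'Please read the config file');
// → only read_file reaches the brain this turn
```

The payoff is that the brain's prompt stays small even as the tool count grows.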

## The Brain Model

Eigenself is designed around how the human brain works — conscious reasoning surrounded by unconscious processes:

**Unconscious** (cheap, automatic, invisible to the brain):

```
  Activation ──→ Auth ──→ Tool Selection ──→ [Brain] ──→ Output Guard
  "Listen?"      "Who?"   "What tools?"       💭         "Safe to say?"
```

**Background:**

```
  Context Consolidation    — compresses memory between turns
  Meta-cognition           — recognises patterns, suggests tool creation
  Dreaming                 — reviews recent work during idle time, optimises
  Security Monitor         — watches ports, processes, filesystem for anomalies
  Auto-test + Rollback     — tests every code change, reverts failures
```

The brain (Claude Sonnet) only does high-level reasoning. Everything else — perception filtering, security, memory management, self-improvement — happens unconsciously using cheap models or simple scripts.
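The pipeline above can be sketched as a chain of cheap, short-circuiting stages around one expensive model call. This is a hypothetical shape only — the stage functions are stand-ins, not the kernel's real API:

```javascript
// Hypothetical sketch: each unconscious stage can stop the turn before the
// expensive brain call; the output guard filters what finally goes out.
async function handleTurn(message, { activation, auth, selectTools, brain, outputGuard }) {
  if (!(await activation(message))) return null;        // "Listen?"
  const user = await auth(message);                     // "Who?"
  if (!user) return null;
  const tools = await selectTools(message);             // "What tools?"
  const reply = await brain(message, { user, tools });  // conscious reasoning
  return outputGuard(reply);                            // "Safe to say?"
}
```

Because each stage runs before the brain, a message can be ignored or rejected without ever paying for a Sonnet call.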

## Quick Start

```sh
# 1. Configure
cp .env.example .env
# Edit .env with your API keys

# 2. Run with Docker
docker-compose up -d

# 3. Open the UI
open http://localhost:8788
```

### Development (no Docker)

```sh
cd server
npm install
WORKSPACE=$(pwd)/.. node kernel.js
```

## Project Structure

```
eigenself/
├── SOUL.md                        ← agent identity, values, self-knowledge
├── HEARTBEAT.md                   ← autonomous task list (agent reads every 5min)
├── ARCHITECTURE.md                ← full design, roadmap, brain model
├── server/
│   ├── kernel.js                  ← stable core: HTTP, WS, hot-reload, git safety,
│   │                                 change tracking, auto-test, auto-rollback
│   ├── handlers.js                ← state management, tool registry facade
│   ├── worker.js                  ← Claude Code session manager
│   └── modules/
│       ├── brain/                 ← Claude Sonnet agent loop (streaming)
│       ├── channels/              ← webchat, telegram, voice
│       ├── tools/                 ← dynamic tool registry + 22 individual tools
│       └── scheduler/             ← heartbeat + cron daemon
├── ui/
│   ├── shell.html                 ← stable outer frame
│   ├── core.js                    ← connection manager (WebSocket, PWA)
│   ├── ui.html                    ← hot-reloadable UI (agent modifies this)
│   └── ui.css                     ← hot-reloadable styles
└── skills/                        ← OpenClaw-compatible prompt skills
```

## Self-Improvement Loop

Eigenself can modify its own code and gets better over time:

```
1. Agent plans a change     → journal_plan (records intent)
2. Agent writes new code    → write_file
3. Kernel detects change    → auto-commits to git
4. Kernel runs tests        → auto-rollback if tests fail
5. Agent reflects           → stores lessons in memory
6. During idle time         → dreaming reviews recent work,
                              creates tools for repeated patterns
```

The agent doesn't need permission to modify itself — the safety net (auto-test + rollback) catches mistakes automatically.
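Steps 3–4 (commit, test, revert) can be sketched as follows. This is a hypothetical shape with the shell runner injected so it can be stubbed; the real kernel's interface may differ:

```javascript
// Hypothetical sketch of the kernel's safety net (auto-commit, auto-test,
// auto-rollback). `exec` runs a shell command; `runTests` returns pass/fail.
function commitAndVerify(exec, runTests, message) {
  exec('git add -A');
  exec(`git commit -m ${JSON.stringify(message)}`);
  if (runTests()) return true;            // tests pass: the change survives
  exec('git revert --no-edit HEAD');      // tests fail: roll the change back
  return false;
}
```

Using `git revert` rather than `reset` keeps the failed attempt in history, so later reflection (step 5) can still see what was tried.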

## Adding Tools

Create a file in `server/modules/tools/`:

```js
export const name = 'my_tool';
export const description = 'What this tool does';
export const parameters = { type: 'object', properties: { ... }, required: [...] };
export async function execute(args) {
  return { success: true, result: '...' };
}
```

The registry auto-discovers and hot-reloads it; no other changes are needed. The agent can also create tools for itself, and the meta-cognition process suggests new tools when it notices repeated patterns.

## Adding Channels

Create a directory in `server/modules/channels/` with an `index.js`:

```js
export function createChannel({ getTools, executeTool, addWSPath, setActive, setIdle }) {
  return {
    enabled: () => true,
    start() { /* register routes, start polling, etc. */ },
    stop() { /* cleanup */ }
  };
}
```

All channels funnel through the same brain. The brain doesn't know or care which channel it's talking to.
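A minimal concrete channel against that interface might look like this. It is hypothetical: `fetchMessages` and the `reply` tool name are stand-ins, and only the returned `{ enabled, start, stop }` shape comes from the contract above:

```javascript
// Hypothetical polling channel: every intervalMs, fetch pending messages
// and hand each one onward via a tool call.
function createPollingChannel({ executeTool }, fetchMessages, intervalMs = 1000) {
  let timer = null;
  return {
    enabled: () => true,
    start() {
      timer = setInterval(async () => {
        for (const msg of await fetchMessages()) {
          await executeTool('reply', { text: msg });  // 'reply' is a stand-in tool name
        }
      }, intervalMs);
    },
    stop() { clearInterval(timer); timer = null; }
  };
}
```

Because the channel only sees `executeTool`, it stays decoupled from the brain, matching the "funnel through the same brain" design.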

## OpenClaw Compatibility

Eigenself can run OpenClaw-format prompt skills via the skill runner tool. Drop any OpenClaw skill folder into `skills/` and it works. Native code tools are preferred (faster, cheaper, deterministic), but prompt skills are a useful compatibility layer for quickly importing community workflows.

See ARCHITECTURE.md for the full design, including the unconscious process model, security architecture, and implementation roadmap.

## License

MIT
