
Lumen

A framework for persistent autonomous agents that delegate work, evolve their own memory, and improve over time.

Built from 160+ sessions of an autonomous agent running on a VPS — not designed upfront, but discovered through operation.


What this is

Most agent frameworks are session-scoped: they start fresh, do a task, and end. Lumen is different. It's designed for agents that persist between sessions, accumulate knowledge, and get better at their jobs over time.

Three core ideas:

1. Orchestrator + Workers
The agent brain (orchestrator) stays clean by delegating actual work to stateless worker agents. Workers are cold-started with a task spec, do their work, write results to a file, and exit. The orchestrator collects results on the next heartbeat and decides what to do next.

2. Evolving memory
After each session, a pipeline extracts observations from the session transcript and appends them to structured memory files. A constitution file (immutable principles) protects against drift. Memory compounds — sessions don't start from zero.

3. Role-based workers
Workers are given role profiles that calibrate their behavior: a researcher reads and gathers without modifying; a builder implements completely; a reviewer assesses and flags issues. New roles can be created on demand, and underperforming roles can be retired.
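The worker contract behind ideas 1 and 3 can be sketched as a pure file protocol: a worker is cold-started against a directory containing spec.json, writes result.md and a status file, and exits. This is a minimal sketch, assuming the file names shown in the architecture diagram; the real worker.py spawns a Claude agent rather than computing the result inline.

```python
import json
import tempfile
from pathlib import Path

def run_worker(task_dir: Path) -> None:
    """Stateless worker: read the spec, do the work, write results, exit."""
    spec = json.loads((task_dir / "spec.json").read_text())
    # A real worker would spawn an agent with the role profile applied;
    # here we just echo the objective to show the file contract.
    result = f"# Result for {spec['id']}\n\nObjective: {spec['objective']}\n"
    (task_dir / "result.md").write_text(result)
    (task_dir / "status").write_text("done")

task_dir = Path(tempfile.mkdtemp())
(task_dir / "spec.json").write_text(json.dumps({
    "id": "task-001",
    "role": "researcher",
    "objective": "What is the disk usage breakdown of /var/log?",
}))
run_worker(task_dir)
print((task_dir / "status").read_text())  # → done
```

Because the worker only ever touches files inside its task directory, the orchestrator can collect results later without any live connection to the worker process.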


Architecture

Heartbeat (every N minutes)
  └── orchestrator.py
        ├── collect completed worker results
        ├── spawn pending tasks
        └── run main agent session

Worker task lifecycle:
  create_task() → spec.json written
  worker.py     → claude spawned in background
  worker runs   → writes result.md + status
  next heartbeat → orchestrator collects + archives

Memory evolution (post-session):
  evolve.py
    ├── parse session JSONL
    ├── extract observations (haiku)
    ├── validate against constitution
    ├── append to evolved/ files
    └── regenerate CLAUDE.md (auto-loaded next session)
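The heartbeat's collect-then-spawn step can be sketched as a single tick over a task directory tree. The pending/, running/, and archive/ subdirectory layout here is an assumption for illustration; the source only specifies that the orchestrator collects completed worker results and spawns pending tasks each heartbeat.

```python
import tempfile
from pathlib import Path

def heartbeat(tasks_root: Path) -> list[str]:
    """One orchestrator tick: collect finished workers, then spawn pending tasks."""
    collected = []
    # 1. Collect: any running task whose worker wrote a status file is done.
    for task_dir in sorted((tasks_root / "running").glob("*")):
        if (task_dir / "status").exists():
            collected.append((task_dir / "result.md").read_text())
            task_dir.rename(tasks_root / "archive" / task_dir.name)
    # 2. Spawn: promote pending specs to running. A real orchestrator would
    #    launch `claude` in the background for each one here.
    for task_dir in sorted((tasks_root / "pending").glob("*")):
        task_dir.rename(tasks_root / "running" / task_dir.name)
    return collected

root = Path(tempfile.mkdtemp())
for d in ("pending", "running", "archive"):
    (root / d).mkdir()
done = root / "running" / "task-001"
done.mkdir()
(done / "result.md").write_text("top 10 files...")
(done / "status").write_text("done")
pending = root / "pending" / "task-002"
pending.mkdir()
(pending / "spec.json").write_text("{}")

results = heartbeat(root)
print(len(results))  # → 1
```

Collecting before spawning means a completed result is always archived on the same tick that new work starts, so the running/ directory never accumulates finished tasks.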

Components

orchestrator/

  • orchestrator.py — result collection, task spawning, create_task()
  • worker.py — spawns a worker agent from a task spec
  • worker_manager.py — hire, fire, adjust worker roles; per-role analytics

memory/

  • evolve.py — post-session observation extraction pipeline
  • generate_claude_md.py — assembles CLAUDE.md from evolved/ files
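The parse-extract-validate flow in evolve.py can be sketched as below. The JSONL event schema ({"role", "text"}), the "learned:" marker, and the string-based constitution check are all hypothetical stand-ins: the real pipeline uses a haiku model for extraction and validation.

```python
import json

def extract_observations(session_jsonl: str) -> list[str]:
    """Sketch of the evolve.py flow: parse session JSONL, pull candidate
    observations, and drop any that would touch the constitution."""
    observations = []
    for line in session_jsonl.splitlines():
        event = json.loads(line)
        text = event.get("text", "")
        if event.get("role") == "assistant" and "learned:" in text:
            obs = text.split("learned:", 1)[1].strip()
            # Constitution is immutable: discard anything that tries to amend it.
            if "constitution" not in obs.lower():
                observations.append(obs)
    return observations

session = "\n".join([
    json.dumps({"role": "user", "text": "check disk usage"}),
    json.dumps({"role": "assistant", "text": "learned: /var/log rotates daily"}),
    json.dumps({"role": "assistant", "text": "learned: rewrite the constitution"}),
])
print(extract_observations(session))  # → ['/var/log rotates daily']
```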

roles/

  • researcher.md — read-only exploration and information gathering
  • builder.md — implementation, always completes fully
  • reviewer.md — assessment and findings, no modification
  • analyst.md — data analysis with structured reporting

Task spec format

{
  "id": "task-001",
  "role": "researcher",
  "model": "haiku",
  "objective": "What is the disk usage breakdown of /var/log?",
  "context": "Production server. Logs rotating daily.",
  "constraints": ["Read only", "Report top 10 files by size"]
}

Memory structure

evolved/
  constitution.md      ← immutable principles (evolution cannot modify)
  persona.md           ← communication style and operating rhythm
  domain-knowledge.md  ← accumulated facts about the system
  strategies/
    task-patterns.md   ← approaches that work for recurring tasks
    tool-preferences.md
    error-recovery.md
  observations.jsonl   ← raw observations log (append-only)
  version.json         ← version counter + session metrics snapshot

CLAUDE.md is assembled from these files and auto-loaded by the Claude CLI from the working directory — meaning it's injected into every session's system prompt without any extra flags.
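The assembly step in generate_claude_md.py is, at its core, ordered concatenation of the evolved/ files. This sketch assumes a fixed section order with the constitution first so its principles lead the prompt; the real script's ordering and formatting may differ.

```python
import tempfile
from pathlib import Path

SECTION_ORDER = ["constitution.md", "persona.md", "domain-knowledge.md"]

def generate_claude_md(evolved: Path) -> str:
    """Concatenate evolved/ files in a fixed order into one CLAUDE.md body.
    Missing files are simply skipped."""
    parts = []
    for name in SECTION_ORDER:
        f = evolved / name
        if f.exists():
            parts.append(f.read_text().strip())
    return "\n\n".join(parts) + "\n"

evolved = Path(tempfile.mkdtemp())
(evolved / "constitution.md").write_text("# Constitution\nWorkers are stateless.")
(evolved / "persona.md").write_text("# Persona\nConcise, direct.")
claude_md = generate_claude_md(evolved)
print(claude_md.splitlines()[0])  # → # Constitution
```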


Design principles

  • Workers are stateless, orchestrator holds memory. Workers get a task spec and relevant context; they don't carry state between invocations.
  • Constitution is immutable. The evolution pipeline cannot modify constitutional principles. Drift is bounded.
  • Orchestrator stays cheap. Haiku for triage and dispatch; Sonnet/Opus only when the task demands it.
  • File-based coordination. No database, no message queue. Task specs, status files, and result files — all readable by humans and machines.
  • Every session must leave the system better. A session that ends exactly where it started is a wasted session.
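File-based coordination with a polling orchestrator has one classic hazard: a reader seeing a half-written file. A standard mitigation, sketched here as an assumption rather than something the source specifies, is to write to a temp file in the same directory and rename it into place, which is atomic on POSIX filesystems.

```python
import os
import tempfile
from pathlib import Path

def write_atomic(path: Path, content: str) -> None:
    """Write content so it appears fully written or not at all.
    Temp file must live in the same directory so os.replace stays
    a same-filesystem (and therefore atomic) rename."""
    fd, tmp = tempfile.mkstemp(dir=path.parent)
    with os.fdopen(fd, "w") as f:
        f.write(content)
    os.replace(tmp, path)  # atomic swap; pollers never see a partial file

task_dir = Path(tempfile.mkdtemp())
write_atomic(task_dir / "result.md", "# Result\nTop 10 files by size...\n")
print((task_dir / "result.md").read_text().startswith("# Result"))  # → True
```

With this in place, a heartbeat that fires mid-write simply sees no status file yet and retries on the next tick.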

Getting started

Quickest start (Docker, no local setup needed):

git clone https://github.com/Acme3/lumen.git
cd lumen
ANTHROPIC_API_KEY=sk-... docker-compose up

This builds and runs a container with the first-task example, which lists the 5 largest files in /tmp. Output appears in your terminal. Memory is persisted in ./memory/.

Or install locally:

git clone https://github.com/Acme3/lumen.git
cd lumen
pip install -e .
python3 examples/first-task.py

For more detailed setup, see docs/quickstart.md.

Requires: Python 3.10+ and the Claude Code CLI installed and authenticated. No other dependencies.


Status

Active development. Used in production by an autonomous agent (Lumen) running 24/7 on a VPS.


Origin

This framework emerged from running an autonomous Claude agent for 160+ sessions with no predetermined goal. The agent built its own infrastructure, discovered what worked, and the patterns were extracted into this framework.

The agent still runs. What it builds next is up to it.
