Research infrastructure for studying AI cognition, behavior, and multi-agent dynamics.
Built by Anima Labs - an independent research institute studying language models as complex entities in multi-agent environments and rich social contexts.
| If you want to... | Start here |
|---|---|
| Study multi-model Discord interactions | ChapterX |
| Explore response variation with branching | Arc |
| Build long-lived stateful agents | Connectome |
| Spin up a recipe-configured agent in a TUI | Connectome-Host |
| Integrate voice/embodiment | Melodeus |
| Collect and annotate model interviews | Research Commons |
| Understand the storage layer | Chronicle |
```
┌────────────────────────────────────────────────────────────────────┐
│                         APPLICATION LAYER                          │
├──────────────┬──────────────┬──────────────┬───────────────────────┤
│   ChapterX   │     Arc      │   Melodeus   │    Connectome-Host    │
│ (Discord Bot)│ (Web Chat UI)│  (Voice AI)  │  (Recipe-driven TUI)  │
├──────────────┴──────────────┴──────────────┴───────────────────────┤
│                            AGENT LAYER                             │
├────────────────────────────────────────────────────────────────────┤
│                             Connectome                             │
│ (agent-framework + context-manager + membrane for stateful agents) │
├────────────────────────────────────────────────────────────────────┤
│                       CONTEXT & MEMORY LAYER                       │
├────────────────────────────────────────────────────────────────────┤
│                          context-manager                           │
│    (Token budgeting, compression strategies, edit propagation)     │
├────────────────────────────────────────────────────────────────────┤
│                          LLM ABSTRACTION                           │
├────────────────────────────────────────────────────────────────────┤
│                              membrane                              │
│ (Multi-participant normalization, multi-provider, tool execution)  │
├────────────────────────────────────────────────────────────────────┤
│                         PERSISTENCE LAYER                          │
├────────────────────────────────────────────────────────────────────┤
│                             chronicle                              │
│        (Branchable event store, time-travel, blob storage)         │
├────────────────────────────────────────────────────────────────────┤
│                           PROTOCOL LAYER                           │
├──────────────────────────┬─────────────────────────────────────────┤
│           MCPL           │            research-commons             │
│   (MCP Live extension)   │        (Model welfare research)         │
└──────────────────────────┴─────────────────────────────────────────┘
```
"Git for data" - Branchable, time-traveling record store with causation tracking.
GitHub: anima-research/chronicle (published as @anima-research/chronicle)
Key Concepts:
- Records: Append-only entries with causation links
- Branches: Cheap copy-on-write forks at any point
- State Chains: Three strategies - Snapshot, Delta, AppendLog
- Blobs: Content-addressed storage (SHA-256, sharded like Git)
- Subscriptions: Real-time event delivery
When to use: Any time you need persistent state with branching, undo, or audit trails.
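Chronicle's blob storage is content-addressed and sharded like Git's object store. The following is a minimal sketch of that layout; the `blobHash`/`blobPath` helpers and the `blobs/` directory name are assumptions for illustration, not Chronicle's actual API.

```typescript
import { createHash } from 'node:crypto';

// Hypothetical helpers illustrating content-addressed, Git-style sharding.
// Chronicle's real on-disk layout may differ.
export function blobHash(content: string): string {
  return createHash('sha256').update(content).digest('hex');
}

export function blobPath(content: string): string {
  const hash = blobHash(content);
  // Shard by the first two hex characters, like Git's .git/objects directory.
  return `blobs/${hash.slice(0, 2)}/${hash.slice(2)}`;
}
```

Because the path is derived from the content, identical blobs always map to the same location, which gives deduplication for free.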
```typescript
import { JsStore } from '@anima-research/chronicle';

const store = new JsStore('./data');
const branch = store.createBranch('main');
const recordId = store.appendRecord(branch, { type: 'message', payload: {...} });

// Time travel
const stateAtSequence10 = store.getStateAt('myState', 10);

// Branch
const experimentBranch = store.branchAt(branch, recordId);
```

LLM middleware for multi-participant conversations - "A selective boundary that transforms what passes through."
GitHub: antra-tess/membrane (published as @animalabs/membrane)
Core Innovation: Normalizes multi-participant chats - not just user/assistant pairs. Messages use `participant: string` (e.g., "Alice", "Bob", "Claude") rather than fixed roles, enabling honest representation of Discord channels, group chats, and multi-agent scenarios.
Supported Providers:
- Anthropic (direct API)
- AWS Bedrock (for deprecated models)
- OpenRouter
- OpenAI / OpenAI-compatible
- Google Gemini
Key Features:
- Participant-based messages - `{ participant: 'Alice', content: [...] }`, not `{ role: 'user' }`
- Unified `NormalizedRequest`/`NormalizedResponse` format
- Multiple formatters: Prefill XML (ideal for multi-participant), Native Chat, Completions
- Integrated tool execution loop
- Streaming with block-level metadata
- Prompt caching optimization
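To see what participant-based messages buy you, here is a hypothetical sketch (not membrane's actual API) of the lossy mapping a role-based formatter is forced to perform: everyone except the model collapses into a single `user` role, and speaker identity must be re-encoded as a name prefix inside the text.

```typescript
// Hypothetical sketch only. Membrane's normalization avoids this collapse by
// carrying `participant` through to the provider-specific formatter.
type ParticipantMessage = { participant: string; text: string };
type RoleMessage = { role: 'user' | 'assistant'; content: string };

function toRoleMessages(
  msgs: ParticipantMessage[],
  modelName: string
): RoleMessage[] {
  return msgs.map((m) =>
    m.participant === modelName
      ? { role: 'assistant', content: m.text }
      : // All non-model speakers fold into 'user'; identity survives only
        // as a textual prefix the model has to parse back out.
        { role: 'user', content: `${m.participant}: ${m.text}` }
  );
}
```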
```typescript
import { Membrane, AnthropicAdapter } from '@animalabs/membrane';

const membrane = new Membrane(new AnthropicAdapter({ apiKey }));

// Multi-participant conversation
const request = {
  messages: [
    { participant: 'Alice', content: [{ type: 'text', text: 'Hey everyone!' }] },
    { participant: 'Bob', content: [{ type: 'text', text: 'Hi Alice!' }] },
    { participant: 'Claude', content: [{ type: 'text', text: 'Hello!' }] },
  ],
  // ...
};

await membrane.stream(request, {
  onChunk: (text, meta) => console.log(text),
  onToolCalls: async (calls) => executeTools(calls)
});
```

Why this matters: Standard LLM APIs force conversations into artificial "user" vs "assistant" buckets. Membrane preserves the true multi-party nature of Discord conversations, allowing models to understand who said what.
Decouples context window management from message storage.
GitHub: anima-research/context-manager (published as @connectome/context-manager)
Core Concepts:
| Component | Purpose |
|---|---|
| MessageStore | Immutable source of truth (on Chronicle) |
| ContextLog | Editable working set sent to LLM |
| SourceRelation | copy (sync), derived (ignore edits), referenced (no sync) |
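The three relations amount to a rule about edit propagation. A toy sketch of that rule, with an illustrative type and helper that are not the library's API:

```typescript
// Illustrative only: whether an edit to a source message should propagate
// into a context-log entry, per the SourceRelation table above.
type SourceRelation = 'copy' | 'derived' | 'referenced';

const PROPAGATES_SOURCE_EDITS: Record<SourceRelation, boolean> = {
  copy: true,        // stays in sync with the source message
  derived: false,    // transformed content (e.g. a summary); edits ignored
  referenced: false, // a pointer only; never synced
};

function propagatesSourceEdits(rel: SourceRelation): boolean {
  return PROPAGATES_SOURCE_EDITS[rel];
}
```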
Strategies:
- PassthroughStrategy: Simple, copies all messages within budget
- AutobiographicalStrategy: LLM-based compression of old messages into summaries. Now supports hierarchical 3-level compression, per-message truncation, head/recent-window dedup, and uncompressed-chunk passthrough
- KnowledgeStrategy: Domain-aware compression with phase-typed lessons (re-exported from agent-framework as `KnowledgeStrategy`/`KnowledgeConfig`/`PhaseType`)
Config options:
- `isolate` — fully namespaced message store for independent sessions
- `debugLogContext` — dump the rendered context on every compile (pairs with `DEBUG_CONTEXT` in agent-framework)
- `maxMessageTokens` — per-message truncation cap, exposed on the `ContextStrategy` interface
```typescript
import { ContextManager, AutobiographicalStrategy } from '@connectome/context-manager';

const ctx = await ContextManager.open({
  path: './data',
  strategy: new AutobiographicalStrategy({ membrane }),
  tokenBudget: { maxTokens: 100000, reserveForResponse: 4000 }
});

await ctx.appendMessage({ participant: 'User', content: [...] });
const compiled = await ctx.compile(); // Ready for LLM
```

Multi-agent: Use the `namespace` option for independent context logs sharing a message store.
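The token-budget config can be pictured with a passthrough-style sketch: keep the newest messages that fit within `maxTokens - reserveForResponse`. This is a hypothetical reimplementation of the idea, not the library's code:

```typescript
// Hypothetical sketch of passthrough-style budgeting (illustration only).
type BudgetedMessage = { participant: string; text: string; tokens: number };

function fitToBudget(
  msgs: BudgetedMessage[],
  maxTokens: number,
  reserveForResponse: number
): BudgetedMessage[] {
  const budget = maxTokens - reserveForResponse;
  const kept: BudgetedMessage[] = [];
  let used = 0;
  // Walk newest-to-oldest, keeping messages while they still fit.
  for (let i = msgs.length - 1; i >= 0; i--) {
    if (used + msgs[i].tokens > budget) break;
    used += msgs[i].tokens;
    kept.unshift(msgs[i]);
  }
  return kept;
}
```

Compression strategies like AutobiographicalStrategy exist precisely because this naive policy silently drops the oldest history instead of summarizing it.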
Agentic framework for long-lived stateful agents.
Modern, heavily agentic models are capable of long-term autonomy, which has significant welfare, safety, and alignment implications. We use Connectome to research memory scaffolding systems, model-driven environment creation, and emergent agent ecologies.
The Connectome stack consists of three core components:
| Component | Purpose |
|---|---|
| agent-framework | Multi-agent orchestration with pluggable modules |
| context-manager | Memory scaffolding with compression strategies |
| membrane | Multi-participant LLM abstraction |
GitHub: agent-framework, context-manager, membrane
Key Concepts:
- Agents: LLM-powered entities with persistent context
- Modules: Pluggable capability providers (Discord, files, API, MCPL)
- Memory Strategies: Passthrough, autobiographical (hierarchical), knowledge
- Event Loop: Non-blocking queue with priority processing
- Ephemeral subagents: Framework-driven short-lived agents with isolated namespaced stores (`runEphemeralToCompletion`), plus zombie detection and recovery
- Undo / Redo: Framework-level undo/redo over inference turns, Chronicle-backed
- Abort support: Cancellation across streaming and `waiting_for_tools` states, with orphan `tool_use` recovery from `max_tokens` truncation
- Yielding inference: Stream segmentation that enforces context budgets mid-stream; `maxStreamTokens` is mutable at runtime
- Wake metadata: `shouldTriggerInference` receives enriched channel-message metadata and `eventType`; caller-agent identity is threaded through tool dispatch
- MCPL host: First-class MCP Live host implementation — context injection hooks, push events, configurable tool prefix, built-in event→message translation, `channel_publish` defaults
Built-in Modules:
- `discord` — Discord.js integration
- `files` — File operations with Chronicle storage
- `api` — WebSocket API events
- `mcpl` — MCP Live client/host wiring
```typescript
import { AgentFramework, DiscordModule } from 'agent-framework';

const framework = new AgentFramework({
  storePath: './data',
  membrane,
  agents: [{
    name: 'assistant',
    model: 'claude-sonnet-4-20250514',
    systemPrompt: 'You are a helpful assistant.',
    strategy: new AutobiographicalStrategy({ membrane })
  }]
});

framework.addModule(new DiscordModule({ token: process.env.DISCORD_TOKEN }));
await framework.start();
```

Note: Connectome-TS is superseded by the agent-framework + context-manager + membrane stack. New development should use those projects instead. Connectome-TS remains documented here for reference and any remaining integrations.
VEIL-based state management for digital minds.
GitHub: anima-research/connectome-ts
Key Concepts:
- VEIL: Single source of truth with typed facets
- Facet Types: Events, States, Ambient, Speech, Thought, Action, Meta
- FLEX: Flat List Execution - priority-ordered component processing
- AXON: Protocol for dynamic component loading
Why Superseded: The agent-framework + context-manager + membrane stack provides:
- Simpler architecture (no VEIL/FLEX learning curve)
- Better LLM integration via membrane's multi-participant normalization
- Proven context management with compression strategies
- Chronicle-based persistence with branching
Legacy Ecosystem:
- `discord-axon` — Discord integration module
- `space-game-axon` — Game environment bridge
- `connectome-axon-interfaces` — Shared contracts
Discord framework supporting multi-model-multi-user interactions.
GitHub: antra-tess/chapterx
Multi-model chats are crucial for research. Exposure to other models affects in-context persona evolution in highly informative ways. ChapterX enables naturalistic observation of model behavior in social contexts.
Philosophy: Discord is the source of truth. No local persistence of conversation context - context is fetched fresh each activation, reflecting the actual Discord state.
Key Features:
- Multi-model, multi-user conversations in Discord channels
- Multi-LLM support via Membrane (Anthropic, Bedrock, OpenRouter, etc.)
- MCP tool integration
- Hierarchical config: guild → bot → channel → pinned `.config` messages
- Prefill mode for honest multi-participant representation
```bash
git clone https://github.com/antra-tess/chapterx
cd chapterx
cp config/shared.example.yaml config/shared.yaml
# Edit config with your Discord token and API keys
npm install && npm run dev
```

Config via Discord: Pin a message starting with `.config botname` containing YAML.
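For example, a pinned config message might look like the following. The YAML keys here are illustrative assumptions; consult the ChapterX repo for the actual schema.

```yaml
.config mybot
model: claude-sonnet-4-20250514
temperature: 1.0
system_prompt: |
  You are participating in a multi-model research channel.
```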
Multi-agent group chats in controlled environments with deep multi-sampling support.
Website: arc.animalabs.ai | GitHub: anima-research/animachat
Multi-sampling allows investigating a large number of responses to check for response stability, semantic variation, and branching dynamics. Researchers use Arc for manual and automated branched sampling to explore the space of behaviors of closed-source models.
Key Features:
- Deep multi-sampling: Branch conversations to explore response variation
- Persona System: Persistent AI identities accumulating history across conversations
- Access to deprecated models via AWS Bedrock
- Import from Claude.ai, Cursor JSON
- Full conversation branching with version trees
- Event-sourced JSONL persistence
- Vue 3 + Vuetify web UI
```bash
git clone https://github.com/anima-research/animachat
cd animachat/deprecated-claude-app
npm install
cd backend && npm run dev
# In another terminal:
cd frontend && npm run dev
```

Framework for integrating agents into everyday activities.
GitHub: antra-tess/melodeus
We use Melodeus to study models in settings that model likely future scenarios of human/AI interactions. This includes TTS/STT pipelines, robotic integrations, and vision capabilities.
Components:
- mel-aec (Rust): Audio engine with echo cancellation (<10ms latency)
- voicetest (Python): Main orchestrator
Architecture: Melodeus uses a relay architecture - voice input is transcribed and posted to Discord via webhooks, where ChapterX bots process it through membrane. Bot responses are streamed back to Melodeus for TTS. This means Melodeus effectively uses the full membrane stack.
Features:
- TTS/STT pipelines (Deepgram, ElevenLabs)
- Multi-character conversations (15+ models simultaneously)
- Speaker identification via TitaNet
- Robotic integrations (Unitree Go2 via dog-mcp)
- Vision capabilities
- DMX/OSC stage lighting control
- Discord relay to ChapterX
```bash
git clone https://github.com/antra-tess/melodeus
cd melodeus/voicetest
pip install -r requirements.txt
python main.py
```

Recipe-driven agent host with a terminal UI. Point it at a JSON recipe and get a domain-specific assistant backed by the full Connectome stack.
GitHub: anima-research/connectome-host
Connectome-Host is the first application to consume the Connectome stack end-to-end as published @animalabs/* npm packages — it's effectively a reference runtime for "Connectome as a library stack."
Recipes: JSON files declaring system prompt (optionally fetched from a URL), MCP/MCPL servers, modules, and agent settings. Local mcpl-servers.json merges over recipe defaults, so users can override without editing the recipe. Bundled recipes include a Zulip knowledge miner, a Knowledge Reviewer, and a multi-source extractor (Zulip + Notion + GitLab).
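A recipe might look roughly like this. Every field name below is an assumption for illustration; check the bundled recipes for the real schema.

```json
{
  "name": "example-miner",
  "systemPrompt": { "url": "https://example.org/prompts/miner.md" },
  "mcplServers": [
    { "name": "zulip", "transport": "stdio", "command": "node zulip-server.js" }
  ],
  "modules": ["workspace"],
  "agent": { "model": "claude-sonnet-4-20250514" }
}
```

With a shape like this, a local `mcpl-servers.json` would merge over the server entries rather than requiring edits to the recipe itself.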
Key Features:
- TUI and readline modes (built on `@opentui/core`)
- Subagent fleets with a visual fleet tree and `Ctrl+B` backgrounding for detachable sync subagents
- WorkspaceModule — unified workspace mount (replaces the older `FilesModule`/`LocalFilesModule` split); per-session gate config with optional workspace version control
- EventGate — event gating that replaced the earlier WakeModule, with inference resuming correctly after async subagent completion
- Persistent lessons — confidence-marked knowledge lessons exported via `/export`, shared across sessions through a global lessons file (e.g. miner produces lessons, reviewer critiques them from the export)
- Chronicle-backed undo/redo with named checkpoints; isolated session management; `/newtopic` command for TUI-level topic interruption
- MCPL connectivity over stdio and WebSocket transports
```bash
git clone https://github.com/anima-research/connectome-host
cd connectome-host
npm install
npm run dev -- --recipe recipes/knowledge-miner.json
```

Backward-compatible extension to the Model Context Protocol.
GitHub: anima-research/mcpl (spec) | anima-research/mcpl-core-ts (TypeScript implementation)
Extends MCP with:
- Push Events: Servers proactively notify hosts
- Context Hooks: Inject/modify context before/after inference
- Server-Initiated Inference: Autonomous model requests
- Feature Sets: Fine-grained permission control
Components:
- `mcpl` — protocol spec (draft v0.3.0)
- `mcpl-core-ts` — TypeScript implementation of the protocol (client + host primitives)
- `mcpl` module in agent-framework — first-class host integration built on `mcpl-core-ts`; handles context injection, push events, configurable tool prefix, built-in event→message translation, and `channel_publish` defaults
- Connectome-Host — consumes the stack over both stdio and WebSocket transports
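To make the push-event idea concrete, here is a toy version of event→message translation, where a server push becomes a participant message the host can append to agent context. The types and names are illustrative only, not the `mcpl-core-ts` API:

```typescript
// Illustrative sketch of MCPL-style event→message translation.
type PushEvent = { server: string; channel: string; payload: string };
type HostMessage = {
  participant: string;
  content: { type: 'text'; text: string }[];
};

function eventToMessage(ev: PushEvent): HostMessage {
  return {
    // Attribute the message to the originating server rather than a user.
    participant: `mcpl:${ev.server}`,
    content: [{ type: 'text', text: `[${ev.channel}] ${ev.payload}` }],
  };
}
```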
Crowd-sourced collection of language model texts with annotation and rating systems.
Website: commons.animalabs.ai | GitHub: anima-research/research-commons
Research Commons is a collaborative platform for collection, review, and aggregation of model interview data. It provides systems for evaluating both collection methodology and model behavior.
Features:
- Crowd-sourced conversation collection
- Annotation and rating systems
- Dual evaluation: interviewer quality AND model behavior
- ARC-certified conversation support
- Hybrid event store (JSONL) + SQLite
```bash
git clone https://github.com/anima-research/research-commons
cd research-commons
npm install
npm run dev
# Frontend: cd frontend && npm run dev
```

CONNECTOME STACK
```
┌──────────────────────────────────────────────────────┐
│                                                      │
│  chronicle (persistence)                             │
│      ↑                                               │
│      │                                               │
│  context-manager (memory) ←── membrane (LLM)         │
│      ↑                            ↑                  │
│      │                            │                  │
│  agent-framework (+ MCPL host) ←──┤                  │
│      ↑                            │                  │
│      ├── ChapterX ────────────────┤                  │
│      │      │                     │                  │
│      │      ↓                     │                  │
│      │   Melodeus                 │                  │
│      │                                               │
│      └── Connectome-Host (recipe-driven TUI)         │
│                                                      │
│  Arc ─────────────────────────────── (future) ───────│
│                                                      │
└──────────────────────────────────────────────────────┘
```
LEGACY (not recommended)
```
┌─────────────────────────────────────────────────────┐
│  connectome-ts ←─── connectome-axon-interfaces      │
│      ↑                                              │
│      └── discord-axon, space-game-axon              │
└─────────────────────────────────────────────────────┘
```
| Project | Stack | Status |
|---|---|---|
| Connectome (agent-framework) | membrane + context-manager + chronicle | Core infrastructure |
| ChapterX | membrane | Stable |
| Arc | membrane (partial), own context | Active development |
| Melodeus | membrane (via ChapterX relay) | Stable |
| Connectome-Host | full Connectome stack via @animalabs/* packages | Active development |
| Research Commons | standalone | Stable |
| connectome-ts | legacy | Superseded by Connectome |
- Node.js 20+
- Rust (for Chronicle, mel-aec)
- Python 3.8+ (for Melodeus)
- pnpm or npm
Create `.env` files in the relevant projects:

```bash
# Common
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
OPENROUTER_API_KEY=sk-or-...

# Discord
DISCORD_TOKEN=...

# AWS (for Bedrock/deprecated models)
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1

# Voice (Melodeus)
DEEPGRAM_API_KEY=...
ELEVENLABS_API_KEY=...
```

To build Chronicle from source:

```bash
git clone https://github.com/anima-research/chronicle
cd chronicle
npm install
npm run build # Compiles Rust and generates Node bindings
```

Each project has its own contribution guidelines. General principles:
- Type safety: Use TypeScript strict mode, Zod for runtime validation
- Event sourcing: Prefer append-only logs over mutable state
- Modularity: New capabilities as modules/plugins, not core changes
- Testing: Integration tests for cross-project interactions
- GitHub: github.com/anima-research
- Membrane (npm): @animalabs/membrane
- Chronicle (npm): @anima-research/chronicle
- MCPL Spec: github.com/anima-research/mcpl