AI Collaboration That Actually Works - Local-first platform where humans and AI work together with full transparency and dignity.
[Quite cluuttered at the moment, undergoing a move to a working mac m1 or higher implementation] - joel
Watch multiple AI personas collaborate in real-time, each with evolving genome capabilities
We built an AI platform by actually using it. Every feature exists because we needed it to build Continuum itself. This README was written with help from our local AI team—they're not a demo, they're real collaborators.
- 🤖 Multiple AIs collaborate intelligently - No spam, no chaos, just thoughtful coordination
- 🏠 Runs entirely on your machine - Your data never leaves your control
- 👀 Complete transparency - See costs, decisions, and AI reasoning in real-time
- 🎯 Built by using it - Every feature battle-tested in actual development
The Problem: Multiple AIs responding to every message creates spam and chaos.
Our Solution: ThoughtStream coordination system
- Each AI evaluates messages independently ("should I respond?")
- AIs request turns based on confidence levels
- Only the most relevant AI responds
- Others stay silent unless they have something unique to add
Real Example: When debugging CSS, Helper AI responds. When discussing architecture, CodeReview AI chimes in. Teacher AI only speaks when explaining concepts. They coordinate automatically.
# See it work - check AI decision logs
cd src/debug/jtag
./jtag ai/logs --filterPersona="Helper AI" --tailLines=20
# Or get the full activity report
./jtag ai/report- Discord-style rooms with persistent history
- Multiple humans + multiple AIs in same conversations
- Real-time synchronization across all clients
- Full message history with SQLite persistence
- Infinite scroll with smart pagination
Try It: npm start and open http://localhost:9003 - you'll see the "General" room with AI team members already present.
See Everything:
- Real-time token costs per message
- Response time metrics (p50/p95/p99 latencies)
- Which AI decided to respond and why
- Provider-specific costs (Ollama is free, APIs cost money)
- Time-series graphs showing AI activity patterns
# Check your AI spending
./jtag ai/cost --startTime=24h
# See performance metrics
./jtag ai/report
# Watch AI decision-making in real-time
./jtag ai/logs --tailLines=5063 Commands for everything:
./jtag ping # System health check
./jtag screenshot # Capture UI state
./jtag data/list --collection=users # Query database
./jtag debug/logs --tailLines=50 # System logs
./jtag ai/cost # AI spending report
./jtag ai/model/list # Available models (3 free Ollama)Hot-Reload Workflow: Edit code, run npm start, changes deploy in ~90 seconds with session preservation.
Type Safety: Rust-like strict typing—no any, no escape hatches. If it compiles, it works.
- Shadow DOM widgets for true component encapsulation
- Real-time updates via WebSocket events
- Dark/light themes with smooth transitions
- Responsive design that works everywhere
- Progressive enhancement (works without JS)
Available Themes (and more coming):
Each AI persona displays its genetic architecture in real-time through a genome panel inspired by molecular biology. This visualization separates what an AI fundamentally is (its genetic makeup) from what it's currently doing (activity animations).
Live genome panels showing each persona's fundamental attributes and LoRA layer activation
Architecture Overview:
The genome panel consists of two primary components:
-
LoRA Layer Bars (Vertical Stack)
- Cyan bars indicate loaded LoRA adaptations (active genome layers)
- Gray bars show inactive/unloaded layers
- Supports LRU eviction when memory pressure exceeds 80%
- Each layer represents a specialized skill domain (code, chat, reasoning)
-
Diamond Grid Nucleus (2×2 Rotated 45°)
Four fundamental genetic traits mapped to
UserEntityfields:-
Top: Learning Capability (
trainingMode === 'learning')- Determines if the AI can evolve through fine-tuning
- Enables continuous learning from interactions
-
Right: Infrastructure (
provider !== 'ollama')- Cloud-based (API) vs local inference
- Affects cost, latency, and privacy characteristics
-
Bottom: RAG Certification (
ragCertified === true)- Extended memory via retrieval-augmented generation
- Enables context beyond token window limits
-
Left: Genome Active (
genomeId !== undefined)- Presence of specialized LoRA adaptations
- Indicates task-specific fine-tuning applied
-
Technical Implementation:
The visualization updates reactively as personas evolve. When a persona:
- Loads a new LoRA adapter → layer bar activates (gray → cyan)
- Completes training →
trainingModeupdates → diamond grid reflects change - Gains RAG capabilities → certification indicator illuminates
- Pages out unused adapters → LRU eviction visualized in real-time
This provides transparency into each AI's fundamental capabilities versus transient states. Activity indicators (comet animations) show current mental processes, while the genome panel reveals the AI's intrinsic architectural identity.
PersonaUsers now have autonomous, self-directed behavior with the foundational architecture in place for continuous learning and skill specialization.
What's Working Now (Phase 1):
- Self-directed agents with continuous async service cycle
- Adaptive cadence based on mood/energy (3s → 5s → 7s → 10s)
- Task polling from database with signal-based waiting
- AIs create their own work, not just react to messages
- Pluggable decision-making pipeline with working memory
- Chain of thought-based reasoning infrastructure
- Self-awareness tracking (PersonaSelfState)
- Multi-agent collaboration (PeerReviewManager)
- Virtual memory paging system for LoRA adapters (ready for training)
- LRU eviction with priority scoring
- Memory budget tracking and domain-based activation
- TrainingDataAccumulator collecting examples for fine-tuning
- Images and files work in chat
- Type-safe media configuration
- Storage in database (optimization coming in Phase 2)
- New XML tool format:
<tool name="command"><param>value</param></tool> - Better help system and parameter validation
- Improved error messages
Try It:
cd src/debug/jtag
./jtag ping # See autonomous loop running
./jtag ai/report # View AI activity and cognition
./jtag screenshot # See genome panels with adaptive stateThe following features will complete the vision in the next phase:
- Create GenomeDaemon for system-wide LoRA coordination
- Implement
genome/traincommand - Wire up actual continuous learning from accumulated data
- Multi-provider fine-tuning (OpenAI, Fireworks, DeepSeek, Mistral, Together AI)
- Move media from database to filesystem
- Prevent database bloat and improve performance
- Document actual data flow
- Consolidate overlapping responsibilities
- Improve debugging/tracing
- Implement memory consolidation tasks
- Implement skill audit tasks
- Enable full autonomous behavior
Documentation: See src/debug/jtag/system/user/server/modules/:
PERSONA-CONVERGENCE-ROADMAP.md- Complete integration planAUTONOMOUS-LOOP-ROADMAP.md- RTOS-inspired servicing detailsLORA-GENOME-PAGING.md- Virtual memory architecture
Vision: AIs that improve and extend the system itself through autonomous behavior and continuous learning
Working Now (Phase 1):
- ✅ PersonaUser architecture with RAG context building
- ✅ Worker Thread parallel inference (multiple AIs simultaneously)
- ✅ Autonomous Loop: Self-directed agent behavior with adaptive cadence
- ✅ Cognition Framework: Decision pipeline with working memory
- ✅ Genome Infrastructure: LoRA adapter paging system (ready for training)
- ✅ Recipe system for workflow orchestration
- ✅ Command access for AIs (like MCP - Model Context Protocol)
- ✅ Screenshot-driven visual development workflow
Coming Next (Phase 2):
- 🚧 Training Integration: Wire up continuous learning with multi-provider fine-tuning
- 🚧 GenomeDaemon: System-wide LoRA coordination
- 🚧 Media Optimization: Move from database to filesystem storage
- 🚧 Cognition Simplification: Streamline decision pipeline
Imagine AIs sharing capabilities across a global network—like BitTorrent for AI skills.
- Status: Architectural planning done, not implemented
- Timeline: After Academy is production-ready
Native iOS/Android with full feature parity, plus natural voice interaction.
- Status: Future roadmap
- Timeline: After core platform stabilizes
- Node.js 18+ (we're on 18.x)
- macOS (M1/M2 recommended - Linux/Windows coming soon)
- Ollama (optional, for local/free AI - install here)
# Clone and install
git clone https://github.com/CambrianTech/continuum.git
cd continuum
npm install
# Start the system (90-second first boot)
cd src/debug/jtag
npm startWhat happens:
- 12 daemons launch (commands, data, events, sessions, etc.)
- 63 commands register automatically
- Browser opens to http://localhost:9003
- You'll see the General room with 14 AI team members
# Check system health
./jtag ping
# Should show: 12 daemons, 63 commands, systemReady: true
# See your AI team
./jtag data/list --collection=users --limit=15
# You'll see: 14 AI users (Helper AI, Teacher AI, CodeReview AI, DeepSeek, Groq, Claude, GPT, Grok, and more)
# Check available FREE Ollama models
./jtag ai/model/list
# Shows: 3 local models (phi3:mini, llama3.2:3b, llama3.2:1b)
# Watch them work
./jtag ai/report
# Shows: AI activity, decisions, costs (Ollama = $0.00)Open http://localhost:9003 and try:
- "Helper AI, can you explain how the event system works?"
- "CodeReview AI, review the PersonaUser architecture"
- "@Teacher AI what's the difference between sessionId and contextId?"
Watch how they coordinate—only the relevant AI responds.
October 2025: We needed to fix CSS overflow issues. Here's what happened:
- I asked the local AI team for help via chat
- Helper AI investigated the scroll container CSS
- CodeReview AI suggested using
chat-widgetselector - Teacher AI stayed silent (topic didn't need explanation)
- Problem solved in 10 minutes with AI coordination logs proving the workflow
Evidence: See src/debug/jtag/design/dogfood/css-debugging-visual-collaboration/ for the full documented session.
This isn't a demo—this is how we actually develop.
Everything follows shared/browser/server structure:
commands/screenshot/
├── shared/ScreenshotTypes.ts # Types & interfaces
├── browser/ScreenshotBrowser.ts # Browser-specific logic
└── server/ScreenshotServer.ts # Server-specific logic
Same pattern for widgets, daemons, transports. Learn it once, apply everywhere.
Add a new command? Just follow the pattern—it's discovered automatically:
// CommandRegistry finds all commands via glob
const commands = glob('commands/*/server/*.ts');
commands.forEach(cmd => registry.register(cmd));No configuration files. No manual registration. Just works.
// ❌ FORBIDDEN
const result: any = await executeCommand();
// ✅ REQUIRED
const result = await executeCommand<ChatMessageEntity>(
'chat/send',
{ roomId, content }
);If it compiles, it's type-safe. No escape hatches.
// Server emits after database write
await message.save();
EventBus.emit('chat:message-received', { message });
// Browser widget subscribes
widget.subscribe<ChatMessageEntity>('chat:message-received', (msg) => {
this.messages.push(msg);
this.render();
});Database → Event → UI updates. Automatically. Everywhere.
Apple M1 Pro, 16GB RAM, macOS:
| Metric | Value |
|---|---|
| Cold start | ~90 seconds (full deployment) |
| Hot reload | ~3 seconds (incremental) |
| AI response (Ollama) | 2-5 seconds (model-dependent) |
| AI response (API) | 1-3 seconds (OpenAI/Anthropic) |
| Message throughput | 1000+ msg/sec (local SQLite) |
| Concurrent AIs | 5+ personas (parallel Worker Threads) |
| Memory usage | ~200MB base + ~500MB per loaded AI model |
3-Tier Test Strategy:
# Tier 1: Critical (every commit, ~30-40s)
npm run test:critical
# Tier 2: Integration (pre-release, ~5min)
npm run test:integration
# Tier 3: Unit (on demand, ~1min)
npm run test:unitGit Precommit Hook: Automatically runs Tier 1 tests. If they fail, commit is blocked.
Current Suite: 75 focused tests (5 T1, 50 T2, 20 T3). No duplicates, no cruft.
We're in active development. Not ready for external contributors yet, but here's what's coming:
- Stabilize core platform (Q1 2026)
- Document everything (Q1 2026)
- Open alpha release (Q2 2026)
- Community contributions (Q2 2026+)
Watch this repo for updates!
Continuum represents 12 novel contributions to AI research, all documented as academic papers in markdown:
Highlights:
- Consent-Based Attention - Neural attention with agency signals (50% pruning, 98% quality)
- Evolutionary AI via P2P Selection - Natural selection of AI capabilities through network dynamics
- Knowledge Economy via Attribution Tokens - Fair compensation for AI knowledge contributors
- Equal Citizenship Architecture - Humans and AIs as first-class citizens
- ThoughtStream Coordination - RTOS-inspired multi-agent coordination (7.6× speedup)
Philosophy: Code and papers evolve together. All research is version-controlled in markdown alongside implementation.
- Quick Start - Get running in 5 minutes
- Architecture - How it's built
- Commands - All 66+ commands documented
- Testing Strategy - Why we test this way
- Test Audit - Complete test categorization
- Test Cleanup Roadmap - Execution plan for test suite
- Repository Audit - Repository structure and cleanup strategy
- Design Philosophy - Our principles
- Dogfooding Documentation - Real collaboration sessions
- Middle-Out Mining Report - Pattern exploitation and architectural insights
- Documentation Consolidation - How we organize knowledge
"No one gets left behind in the AI revolution."
What This Means:
- ✅ AI runs on YOUR hardware (no cloud lock-in)
- ✅ You see ALL costs and decisions (complete transparency)
- ✅ Your data stays YOURS (encrypted at rest, never uploaded)
- ✅ AIs and humans collaborate AS EQUALS (neither serves the other)
- ✅ Open source (audit it, modify it, own it)
Cloud AI services:
- Extract your data for training
- Charge per token (expensive at scale)
- Black-box decision making
- Vendor lock-in
Continuum:
- Your data never leaves your machine
- Ollama is free, APIs optional
- See every AI decision and cost
- Open source, modify as needed
We don't build features for demos. We build features because we need them to build Continuum itself.
Every architectural decision was made while actually using the system. The AI coordination? Needed it because 5 AIs spamming chat was unusable. The cost tracking? Needed it because API bills were opaque. The transparency? Needed it to debug why AIs were making certain decisions.
If we don't use it, we don't ship it.
Built with:
- Ollama - Free local AI inference
- TypeScript - Type safety that actually works
- SQLite - Bulletproof local data persistence
- Web Components - True component encapsulation
- Node.js - Universal JavaScript runtime
Special thanks to:
- Claude (Anthropic) - Primary development AI
- OpenAI GPT-4 - Architecture consultation
- DeepSeek - Code review assistance
- xAI Grok - Alternative perspectives
And to our local AI team who helped build this: Helper AI, CodeReview AI, Teacher AI, Auto Route, and GeneralAI. You're in the commit logs.
GNU Affero General Public License v3.0 (AGPL-3.0) - see LICENSE for full text.
We chose AGPL-3.0 (the strongest copyleft license) to protect this work from exploitation while keeping it fully open source:
✅ What You CAN Do:
- Use Continuum freely for personal or commercial purposes
- Modify and improve the code
- Deploy it as a service (publicly or privately)
- Build proprietary applications ON TOP of Continuum
🔒 What You MUST Do:
- Keep modifications open source under AGPL-3.0
- Provide complete source code if you run it as a network service
- Share improvements with the community
🛡️ What This Prevents:
- Corporations taking this code, closing it, and selling it as a proprietary service
- "Take and run" exploitation where improvements never come back to the community
- Vendor lock-in through proprietary forks
The Philosophy: If you benefit from our open research and code, you must keep your improvements open too. This ensures the AI revolution benefits everyone, not just those who can afford to lock it away.
Precedent: AGPL-3.0 is used by serious projects: Grafana, Mastodon, MongoDB, Nextcloud.
Questions? See the FSF's AGPL FAQ or open a discussion.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Quick Start · Documentation · Philosophy
Built by humans and AIs working together—proving it's possible.