Open source under Apache-2.0. Forks and pull requests are welcome.
Archon is a multi-agent application delivery platform that turns a prompt into a versioned execution with artifacts, live preview, evaluation, and governance surfaces. It is designed around traceability, recovery, scoring, and auditability rather than a single prompt-to-page interaction.
Live Demo • Video Walkthrough • Showcase Gallery
- versioned artifact pipeline: brief, plan, code, preview, and factsheet per execution
- model-agnostic orchestration across Anthropic, Google, IBM, OpenAI, and local Ollama-backed workflows
- live version history with preview, restore, and prompt lineage
- runtime repair and build-recovery work for brittle generated React/TypeScript/Vite outputs
- automated eval loops with vision-based scoring and benchmark-driven iteration
- governance surface with model registry, quality scoring, and human-review gating
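As a rough illustration of the versioned-artifact idea above, one execution and its artifact set could be modeled as a small record with restore semantics. This is a hypothetical sketch; the field names and structure are illustrative, not Archon's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one versioned execution and its artifact set;
# field names are illustrative, not the repo's actual data model.
@dataclass
class Execution:
    version: int
    prompt: str
    artifacts: dict = field(default_factory=dict)  # brief, plan, code, preview, factsheet


@dataclass
class ExecutionHistory:
    versions: list = field(default_factory=list)

    def record(self, prompt: str, artifacts: dict) -> Execution:
        run = Execution(version=len(self.versions) + 1,
                        prompt=prompt, artifacts=artifacts)
        self.versions.append(run)
        return run

    def restore(self, version: int) -> Execution:
        # Restore returns an earlier execution rather than mutating history,
        # so prompt lineage stays intact.
        return next(v for v in self.versions if v.version == version)
```

The point of the sketch is that every run stays addressable: restoring version 1 does not delete version 2.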
Walkthrough: prompt to generated app, version history, preview, and governed delivery flow.
The governance / factsheet screen is one of the strongest product surfaces in the repo because it immediately communicates auditability, traceability, and enterprise delivery posture.
What it shows:
- prompt quality scoring
- build confidence scoring
- model registry visibility across providers
- human-review gating
- a client-facing print/export surface
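A minimal sketch of how such a factsheet payload might be assembled, with human-review gating driven by the two scores. The threshold, keys, and function name are assumptions for illustration, not the repo's implementation.

```python
# Hypothetical factsheet assembly; thresholds and field names are
# illustrative only, not Archon's actual governance code.
def build_factsheet(prompt_score: float, build_confidence: float,
                    model_registry: dict, review_threshold: float = 0.8) -> dict:
    return {
        "prompt_quality": prompt_score,
        "build_confidence": build_confidence,
        "models": model_registry,
        # Gate low-confidence runs behind human review before delivery.
        "requires_human_review": min(prompt_score, build_confidence) < review_threshold,
    }
```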
These examples represent the strongest generated outputs currently included in the repository.
Prompt: "Build a crypto portfolio tracker with real-time prices, holdings table, and activity feed"
Prompt: "Build a premium Halo fan page centered on Master Chief, Cortana, and the Arbiter. Include a cinematic hero, polished character dossiers, a legendary weapon showcase, and an explorable ringworld atlas."
Prompt: "Build a landing page for an AI-powered writing assistant with features, pricing, and testimonials"
More examples live in docs/SHOWCASE_GALLERY.md.
This is the most differentiated surface in the repo: execution lineage, preview refresh, prompt history, and restore behavior in one place.
The system persists the intermediate work, not just the final output.
Artifacts remain visible and versioned, which makes the pipeline easier to inspect, review, and restore.
Archon is intentionally designed to route across providers rather than depend on a single model story.
- Anthropic Claude for premium code generation and showcase-quality runs
- Google Gemini / Vertex AI for planning, design direction, and image workflows
- IBM Watson NLU for governance, prompt analysis, and audit framing
- OpenAI in adjacent or legacy agent paths
- Local Ollama for lower-cost repeated evaluation and prompt-improvement loops
This design supports:
- provider abstraction
- premium vs economical routing
- cloud and local evaluation modes
- stable artifact/governance layers even as model choices change
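The premium-vs-economical split could be sketched as a routing table keyed by task and budget tier. Provider names follow the list above; the routing table and fallback behavior are illustrative, not the repo's actual orchestration layer.

```python
# Illustrative provider routing; the real orchestration layer is more involved.
ROUTES = {
    ("codegen", "premium"): "anthropic",
    ("codegen", "economical"): "ollama",
    ("planning", "premium"): "google",
    ("planning", "economical"): "ollama",
    ("governance", "premium"): "ibm",
}


def route(task: str, tier: str = "economical") -> str:
    # Fall back to local Ollama when no route is registered for the task/tier,
    # keeping repeated evaluation cheap by default.
    return ROUTES.get((task, tier), "ollama")
```

Because callers only name a task and a tier, the artifact and governance layers stay stable even when the provider behind a route changes.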
The repo separates cheap repeated iteration from premium final demos:
- bulk profile for repeated eval and reliability work
- showcase profile for a small number of premium hero builds
# Bulk reliability pass
python eval/eval_loop.py --config eval_config.json --profile bulk --archetype dashboard --runs 5 --skip-image-gen
# Premium showcase pass
$env:ENGINEER_MODEL = "claude"
$env:ENGINEER_CLAUDE_MODEL = "claude-opus-4-6"
$env:DESIGN_IMAGE_MODEL = "imagen-4.0-ultra-generate-001"
python eval/eval_loop.py --config eval_config.json --profile showcase --archetype game --runs 1

User Prompt
-> NLU / prompt analysis
-> requirements artifact
-> plan artifact
-> design + image workflow
-> code generation
-> build / preview
-> eval scoring
-> governance factsheet
-> version timeline + restoreable execution
The result is a system where each run has visible lineage instead of a single opaque output.
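The staged flow above can be sketched as a linear pipeline in which each stage appends a named artifact, so the lineage of a run is the full dictionary rather than a single output. Stage names mirror the diagram; the stage bodies are stubbed placeholders, not Archon's agents.

```python
# Stubbed pipeline sketch; each stage would be a model-backed agent in Archon,
# here reduced to a placeholder that appends a named artifact.
STAGES = ["nlu", "requirements", "plan", "design",
          "code", "preview", "eval", "factsheet"]


def run_pipeline(prompt: str) -> dict:
    lineage = {"prompt": prompt}
    for stage in STAGES:
        # Each stage can read every earlier artifact in `lineage`.
        lineage[stage] = f"{stage} artifact for: {prompt[:30]}"
    return lineage
```

Every intermediate artifact survives in the returned lineage, which is what makes a run inspectable and restorable instead of opaque.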
- Python 3.11+
- Node.js 18+
- API keys for the providers you want to enable
git clone https://github.com/aiedwardyi/archon
cd archon
python -m venv venv
.\venv\Scripts\Activate
pip install -r requirements.txt
cd frontend-studio
npm install
cd ../frontend-consumer
npm install
cd ../frontend
npm install
cd ..

# Backend
.\venv\Scripts\Activate
python backend/app.py
# Studio UI
cd frontend-studio
npm run dev
# Consumer UI
cd ../frontend-consumer
npm run dev
# Enterprise UI
cd ../frontend
npm run dev

Default local ports:
- Studio UI: http://localhost:3000
- Consumer UI: http://localhost:3002
- Enterprise UI: http://localhost:8080
For a safe public portfolio deployment, build the frontend in static demo mode instead of exposing the live builder:
cd frontend
$env:VITE_PUBLIC_DEMO_MODE = "true"
npm run build

The repo includes amplify.yml for a frontend-only Amplify deployment that serves the read-only showcase page and keeps backend/model execution paths private.
Licensed under the Apache License, Version 2.0. See LICENSE.




