Same output. Less compute.
A side-by-side demonstration comparing standard LLM token usage against the AIEP incremental reasoning substrate — with live efficiency metrics.
| Column | What it represents |
|---|---|
| Standard | Stateless re-derivation — how many tokens a standard LLM call would consume starting from scratch |
| AIEP | Incremental recall substrate — actual tokens used after pulling from committed prior reasoning |
The efficiency panel (expandable) shows the full P117 parametric unburdening breakdown: reused steps, fresh steps, tokens avoided, source diversity score, and the raw evidence artefacts the substrate drew from.
- Open `index.html` in any browser — no build step required.
- Set your PIEA API endpoint on `<body data-api-base="https://your-piea.workers.dev">`. Without this, it defaults to `http://localhost:8788` (local development).
- Enter a question and click Run.
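The endpoint resolution described above can be sketched as follows. This is a minimal illustration, not the demo's actual code; `resolveApiBase` is a hypothetical helper name, and only the `data-api-base` attribute and the `http://localhost:8788` fallback come from this README.

```javascript
// Sketch of how the demo page could resolve its API base URL.
// The attribute name and the local-development fallback are the ones
// documented in this README; the helper itself is illustrative.
function resolveApiBase(dataApiBase) {
  // Prefer the configured endpoint; fall back to the local default.
  return (dataApiBase && dataApiBase.trim()) || "http://localhost:8788";
}

// In the browser this would be called as:
//   resolveApiBase(document.body.dataset.apiBase)
```

With no `data-api-base` set, the demo talks to a PIEA worker on port 8788, which is why a local worker needs no configuration.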
If you have a PIEA worker running locally on port 8788, the demo works with no configuration. The demo calls:
- `POST /api/ask` — `{ question, session_id }` → `{ answer, efficiency, evidence_rail, usage, llm_usage, source_diversity_score, connection_rail }`
- `GET /api/stats` — `{ tokens_saved_total, … }`
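A minimal client for these two endpoints might look like the sketch below. Only the paths, methods, and field names are taken from this README; the function names and error handling are illustrative assumptions, not the demo's actual implementation.

```javascript
// Hypothetical client for the two documented endpoints.
// Request/response field names come from this README.
async function ask(apiBase, question, sessionId) {
  const res = await fetch(`${apiBase}/api/ask`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question, session_id: sessionId }),
  });
  if (!res.ok) throw new Error(`ask failed: ${res.status}`);
  // Resolves to { answer, efficiency, evidence_rail, usage, ... }
  return res.json();
}

async function stats(apiBase) {
  const res = await fetch(`${apiBase}/api/stats`);
  if (!res.ok) throw new Error(`stats failed: ${res.status}`);
  // Resolves to { tokens_saved_total, ... }
  return res.json();
}
```

The demo's efficiency panel is driven by the `efficiency` and `evidence_rail` fields of the `/api/ask` response.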
- `index.html` — self-contained demo, no dependencies
- `README.md` — this file
- `LICENSE` — Apache 2.0
- aiep.dev — project home
- AIEP Live — real-time operating metrics dashboard
- AIEP Utilities — shared tooling for the AIEP protocol
Apache 2.0 — see LICENSE.