v3.0.0 "Spine" — extensible MCP aggregator, mistakes moat, 89.1% measured savings#17
Merged
v3.0.0 "Spine" — extensible MCP aggregator, mistakes moat, 89.1% measured savings#17
Conversation
…disarm Items #10 + #11 from v3.0 Spine implementation plan (docs/superpowers/specs/2026-04-24-v3.0-spine-implementation.md). ITEM #10 — engram gen emits CLAUDE.md AND AGENTS.md by default When no --target flag is passed, autogen() now writes BOTH CLAUDE.md AND AGENTS.md (and updates legacy .cursorrules if present). AGENTS.md is the Linux Foundation universal agent-instructions standard adopted by Codex CLI, Cursor, Windsurf, GitHub Copilot, JetBrains Junie, and Antigravity (donated to AAIF Dec 2025). Single-source-of-truth: same generated summary writes to both files. Explicit --target=claude / cursor / agents preserves single-file behavior. API change: autogen() return type goes from { file: string; ... } to { files: string[]; ... }. Only one caller (cli.ts) — updated. All tests pass (26/26, +5 new dual-emit cases). ITEM #11 — README naming-collision disarm section Adds 'What engramx is not' section between hero and Dashboard. Disarms collision with Go-Engram (Gentleman-Programming/engram), DeepSeek's Engram paper (Jan 2026), and MemPalace in the first 30 seconds of any new visitor read. Per decision: 'engramx' stays canonical brand (decision logged in strategy folder). PLANNING SPEC Lands the full v3.0 implementation plan at docs/superpowers/specs/2026-04-24-v3.0-spine-implementation.md (605 lines, sibling to the elevation-trilogy spec). Grounded in actual file paths + line numbers from the existing codebase — provider system already production-grade, Pillar 1 is EXTENDING not rewriting. 12 scope items mapped with dependencies, branch strategy, schema migrations, test strategy, release checklist.
Item #7 from v3.0 Spine implementation plan. Adds two optional columns to nodes table via migration #8: - valid_until INTEGER NULL — unix-ms timestamp after which the node should NO LONGER surface in context (NULL = still valid; the back-compat default for all existing rows) - invalidated_by_commit TEXT NULL — git SHA that triggered invalidation, for audit + future 'why did this mistake stop firing' UX Plus a partial index idx_nodes_validity ON (kind, valid_until) WHERE kind = 'mistake' AND valid_until IS NOT NULL — only mistakes with an explicit window pay storage cost. The engram:mistakes provider (src/providers/engram-mistakes.ts) now filters out invalidated mistakes (validUntil <= Date.now()). This fixes the long-standing 'refactored function still triggers old mistake warning' gap (Graphiti-inspired bi-temporal model). Migration runner extended to support function-based migrations. Migration 8 uses addColumnIfMissing() helper because SQLite's ALTER TABLE ADD COLUMN isn't natively idempotent (raises duplicate column name on re-run). Uses PRAGMA table_info to pre-check. Schema version: 7 → 8 (auto-backup on upgrade, existing v2.0+ behavior). Tests: - +13 new (5 in tests/db/migrate.test.ts for migration 8 schema + 8 in tests/providers/engram-mistakes.test.ts for filter behavior + audit-trail round-trip + boundary cases) - Full suite: 771 → 784 tests, all passing - TypeScript clean, lint clean Wiring for the git miner to SET valid_until / invalidated_by_commit when source files change comes later — the plumbing exists, the producer side is item #8 (pre-mortem warnings) prerequisites.
- Add MAX_DEPTH=100 to prevent stack overflow on deep directory trees - Wrap readdirSync in try-catch to skip unreadable directories - Add .engramignore support for custom exclusions - Expand default exclusions (target, .venv, .next, .nuxt, .output, coverage, .turbo, .cache)
- Add MAX_FILES_PER_COMMIT=50 to prevent O(n²) explosion on commits with many files - Skip build/dist directories to reduce noise - Axolotl project has commits with 130 files which caused 8,385+ co-change pairs
Follow-up to cherry-picks b4f7944 (PR #6 commit 1) + 411ad13 (PR #6 commit 2) from @mechtar-ru. Deletes redundant DEFAULT_EXCLUDED_DIRS (15 entries) + loadEngramIgnore() (16 lines) that lived in parallel with the canonical DEFAULT_SKIP_DIRS + loadIgnorePatterns() pair (the latter shipped in v2.1.0 via PR #13). Both pairs implemented the same .engramignore feature — keeping only the v2.1 canonical pair keeps one source of truth. Also tightens entries typing: 'let entries: Dirent[]' in extractDirectory (ReturnType<typeof readdirSync> resolves to the string[] default overload, not the Dirent[] shape actually returned with { withFileTypes: true }). All 784 tests pass. TypeScript clean. Closes issue #5 (via PR #6 content: MAX_DEPTH=100 + MAX_FILES_PER_COMMIT=50 + .engramignore support + expanded default skip dirs — the OOM crash on init for 2.2GB/34K-file projects like Axolotl is fixed).
First land of the MCP-client subsystem (item #1 from the v3.0 Spine implementation plan). Any MCP server can now become an engramx Context Spine provider via ~/.engram/mcp-providers.json — no code changes needed. WHAT SHIPS IN THIS COMMIT src/providers/mcp-config.ts - McpProviderConfig type (stdio + http transports, tools array with arg templates, tokenBudget, timeoutMs, cacheTtlSec, priority, enabled) - loadMcpConfigs(): reads ~/.engram/mcp-providers.json (path overridable via ENGRAM_MCP_CONFIG_PATH for tests). Per-entry validation errors are COLLECTED not thrown — one bad provider never stops the rest. - validateProviderConfig(): strict structural validator with precise error messages (tells you which field on which entry failed) - applyArgTemplate(): substitutes {filePath}/{projectRoot}/{imports}/ {fileBasename} tokens into tool args. Unknown tokens pass through. - Defaults: tokenBudget=200, timeoutMs=2000, cacheTtlSec=3600, priority from array order. Sensible for every MCP server we've seen. src/providers/mcp-client.ts - McpClientWrapper — thin wrapper on @modelcontextprotocol/sdk v1.29 Client + StdioClientTransport. Session-lifetime connection reuse. Lazy connect (no process spawned until first resolve). Error backoff (30s) prevents thrashing if the server crashes on startup. - createMcpProvider(config) — factory returning a ContextProvider that plugs into the existing resolver without modification. Tier 2 (matches context7 / obsidian semantics). Tools called in parallel per Read. - Budget enforcement + line-wise truncation (never mid-word). - Graceful shutdown on SIGTERM / SIGINT / beforeExit. - HTTP transport declared but deferred — throws 'not yet implemented' until item #5 SSE streaming lands with the Host/Origin hardening work. src/providers/resolver.ts - getMcpProviders(): loads MCP configs and wraps them. Cached for session lifetime. Test hook _resetMcpProvidersCache() for forced reload. - getAllProviders(): now merges BUILTINS + plugins + MCP providers (all deduped against built-in names so users can't shadow core). - Parse failures emit a single-line stderr warning (per bad entry) — visible to users without crashing their session. package.json - Adds @modelcontextprotocol/sdk@1.29.0 (4.3MB unpacked, pure JS, no native deps). Pinned behind a thin ProviderClient surface so migration to SDK v2 (alpha 2026-04) is a one-file swap later. TESTS tests/providers/mcp-config.test.ts — 24 cases covering: - File-doesn't-exist → empty configs - Valid stdio + http shapes round-trip - Invalid JSON reported as single failure - Bad entries skipped, good ones kept - Duplicate names: first wins - All validation rules (empty name/label, bad transport, confidence range, negative numeric fields, missing command/url, invalid URL) - Arg-template substitution: all tokens, unknown pass-through, non- strings unchanged, basename fallback, input-immutability Full suite: 771 → 808 tests (+24), all passing. TypeScript clean. WHAT THIS COMMIT DOES NOT DO (follow-up within item #1) - HTTP transport implementation — waits on item #5 SSE streaming for shared Host/Origin validation + resumable streams - Integration tests that actually spawn a real MCP server (needs tests/fixtures/minimal-mcp-server.mjs — next commit) - Tool-list caching — currently we call tools directly without listTools() first; the SDK may cache internally but we should verify + explicit-cache if not With this in place, item #2 (plugin contract v2 — mcpConfig auto-wrap) becomes a 2-day extension: plugin-loader.ts detects .mcpConfig on a plugin and auto-calls createMcpProvider(). Item #6 (Serena provider) becomes a 10-line ~/.engram/plugins/serena.mjs once the mcpConfig path lands.
…rena reference plugin ITEM #2 — Plugin contract v2 Extends ContextProviderPlugin so plugin authors can declare an MCP server via 'mcpConfig' and skip writing resolve()/isAvailable() by hand. The loader auto-wraps via createMcpProvider() from item #1. Classic plugins (custom resolve()) continue to work unchanged — if both fields are present, the author's resolve() wins (they opted into custom logic). Type changes (src/providers/types.ts): - ContextProviderPlugin stays strict (extends ContextProvider fully) — this is the POST-VALIDATION shape the resolver consumes - NEW: RawPluginShape — the pre-validation shape a plugin-file author writes in .mjs. tier/tokenBudget/timeoutMs/resolve/isAvailable all optional (loader fills from factory when mcpConfig present) Loader changes (src/providers/plugin-loader.ts): - validatePlugin() branches on 'has mcpConfig vs. has resolve()' - name/label/version always required - Classic path: tier/tokenBudget/timeoutMs/isAvailable required - mcpConfig path: config validated via validateProviderConfig(), merged with plugin fields (author overrides win over factory defaults) - One clear error per rejection — 'invalid mcpConfig: <reason>' tells you exactly which sub-field on which plugin is broken Tests (+7 cases in tests/providers/plugin-loader.test.ts): - mcpConfig-only plugin auto-wraps resolve + isAvailable - Plugin with neither resolve nor mcpConfig rejected (clear message) - Invalid mcpConfig rejected (bad command, bad http url) - Custom resolve wins over mcpConfig when both present - Plugin tokenBudget override wins over factory default - Missing version rejected even for mcpConfig plugins ITEM #6 — Serena plugin reference docs/plugins/examples/serena-plugin.mjs (~60 lines incl. docs) — the full Serena (oraios/serena) wrapper as an mcpConfig-only plugin. Install is cp + enable. Thanks to item #2, NO custom transport code needed. docs/plugins/examples/static-context-plugin.mjs — the classic-path reference showing a tier 1 plugin with hand-rolled resolve() for users who just want to inject a fixed string on every Read. docs/plugins/README.md — author-facing guide. Shape 1 (MCP-backed), Shape 2 (classic), template tokens, safety guarantees, debugging checklist, publishing notes. FULL SUITE 808 -> 815 tests (+7), all passing. TypeScript clean, lint clean. V3.0 PROGRESS Done: #1 foundation, #2, #6, #7, #9, #10, #11 = 7 of 12 scope items. Next: #3 budget-weighted resolver + mistakes-boost (~2-3d).
Two orthogonal improvements to the resolver's assembly pipeline. Both exported from resolver.ts so they're testable in isolation, and both run in the main resolveRichPacket() flow before the final priority sort. 1. PER-PROVIDER BUDGET ENFORCEMENT (enforcePerProviderBudget) Providers are SUPPOSED to self-truncate their content to 'tokenBudget', but a bad plugin or a non-conforming MCP server shouldn't be able to spend our entire total budget on one section. New helper truncates each result to the provider's declared budget BEFORE assembly. - Under-budget content passes through unchanged (zero-cost) - Over-budget content is line-truncated (never cut mid-word) - Edge: first line alone > budget -> hard-cap characters with marker Default budget for unknown/missing providers is 200 tokens (matches the MCP-config default from item #1). 2. MISTAKES-BOOST RERANKING (boostByMistakes) If the engram:mistakes provider fires for this file, scan OTHER providers' content for substring matches against mistake labels (extracted from the ' ! <label> (flagged <age>)' format). Matching results get confidence * 1.5 (capped at 1.0). Runs BEFORE the priority sort, but the secondary sort is now (priority asc, confidence desc) — so boost breaks ties WITHIN a priority tier without overriding priority across tiers. - Case-insensitive matching (labels normalized to lowercase) - Does NOT boost the mistakes provider itself - No-op if no mistakes are reported for this file (common case) Examples of the intended effect: - An engram:git commit message mentioning a known-broken function sorts UP within the git tier - A mempalace decision that references a mistaken architectural choice bubbles ahead of unrelated decisions TESTS (+10 cases in tests/providers/resolver.test.ts) enforcePerProviderBudget: - Under-budget untouched - Over-budget truncated by line with marker - Hard-cap when first line alone exceeds budget - Default 200 tokens when provider not found boostByMistakes: - No-op when no mistakes provider in set - Matching substring boosts confidence 0.6 -> 0.9 - Cap enforced (0.8 * 1.5 = 1.2 -> 1.0) - Non-matching results left alone - Mistakes provider itself is never self-boosted - Case-insensitive matching across upper/lower case variations Full suite: 815 -> 825 tests (+10), all passing. TypeScript clean. V3.0 PROGRESS: 8 of 12 scope items done. ✅ #1 foundation ✅ #2 ✅ #3 ✅ #6 ✅ #7 ✅ #9 ✅ #10 ✅ #11 Remaining: #4 Auto-Memory (blocked on MEMORY.md fixture), #5 SSE streaming, #8 pre-mortem warnings, #12 MCP Registry submit, and #1 completion (HTTP transport + real-server integration tests).
Opt-in warnings that fire BEFORE Claude Code runs an Edit/Write/Bash
tool call against code previously flagged as a mistake. Fully gated
via ENGRAM_MISTAKE_GUARD env var — zero overhead when unset.
MODES
unset / '0' → off (default — no database read, no overhead)
'1' → permissive: tool proceeds, a warning is prepended
to any additionalContext the primary handler emits
'2' → strict: tool is denied with the warning as reason
Hooks Edit/Write/Bash only. Read already surfaces mistakes via the
engram:mistakes context provider — duplicating at tool-call time would
be noise.
MATCHING
Edit/Write:
- Normalize tool_input.file_path to relative POSIX vs projectRoot
- Indexed lookup via store.getNodesByFile() (uses idx_nodes_source_file)
- Dedupe by node id when both relative + raw shapes are stored
Bash:
- Substring match on mistake.metadata.commandPattern (length >2)
- Fallback: substring match on mistake.sourceFile (length >3 to avoid
accidentally matching single-char paths like 'a')
- Full-table scan of mistakes (unavoidable — no file axis to index on).
Bounded by project size; only runs when the guard is explicitly on.
BI-TEMPORAL FILTER (item #7 interop)
Mistakes with validUntil <= now are suppressed — they refer to code
that has since been refactored away. Prevents stale-warning fatigue.
INTEGRATION
New file: src/intercept/handlers/mistake-guard.ts
- currentGuardMode() — reads env var at call time, not module load,
so tests can flip between cases cleanly
- findMatchingMistakesAsync(target, projectRoot) — the matcher
- formatWarning(matches) — human-readable warning block
- applyMistakeGuard(rawResult, payload, kind) — wrapping fn that
augments additionalContext (permissive) or overrides to deny (strict)
src/intercept/dispatch.ts wiring: after runHandler() returns for Edit/
Write/Bash, pass result through applyMistakeGuard() before returning.
Two-line diff. Doesn't touch the existing handlers.
SAFETY
Every code path in mistake-guard is wrapped in try/catch with a null
return. A guard failure MUST NEVER break the primary handler. If the
store open fails, the env var is wrong, the payload is malformed —
guard silently returns the raw result unchanged.
TESTS (+21 cases in tests/intercept/handlers/mistake-guard.test.ts)
- currentGuardMode: off/permissive/strict recognition, bogus values
coerced to off
- formatWarning: empty-match string, single-match header, >5-match
collapse with '… and N more'
- findMatchingMistakesAsync (file): rel path, abs path normalization,
no-match, validUntil filter
- findMatchingMistakesAsync (bash): commandPattern substring match,
sourceFile-in-command match, case-insensitive, too-short pattern
guard, validUntil filter
- applyMistakeGuard: mode=off no-op, permissive augments additional
context, permissive no-match no-op, strict denies with reason,
permissive from passthrough emits fresh allow-with-warning
Full suite: 825 -> 846 tests (+21), all passing. TypeScript clean.
V3.0 PROGRESS — 9 of 12 scope items
✅ #1 foundation ✅ #2 ✅ #3 ✅ #6 ✅ #7 ✅ #8 ✅ #9 ✅ #10 ✅ #11
Remaining:
- #1 completion (HTTP transport + real-server integration tests)
- #4 Anthropic Auto-Memory bridge (blocked: needs MEMORY.md fixture)
- #5 SSE streaming for rich packet assembly
- #12 Official MCP Registry submission (post-ship)
Reads Claude Code's auto-managed MEMORY.md index and surfaces entries
relevant to the current file. Closes the Auto-Dream existential risk:
when Anthropic flips the server flag and MEMORY.md becomes consolidated-
and-high-quality, this bridge lights up with no code change.
FIXTURE CAPTURE
Real MEMORY.md samples live at ~/.claude/projects/<encoded>/memory/ on
every Claude Code machine. Captured a representative sample into
tests/fixtures/memory-md/sample-index.md so integration tests don't
depend on the user's actual memory directory.
CANONICAL FORMAT (from real fixtures)
- [Title](relative-file.md) — one-line description
Flat bullet list, one entry per line. Em-dash OR en-dash OR hyphen-
space all accepted. Linked .md files contain frontmatter + body —
this provider is INDEX-ONLY (doesn't dereference bodies) so it stays
under 10 ms even on large memory sets.
PATH DERIVATION
encodeProjectPath('/Users/alice/proj') -> '-Users-alice-proj'
getMemoryIndexPath(projectRoot) -> ~/.claude/projects/<encoded>/memory/MEMORY.md
Overridable via ENGRAM_ANTHROPIC_MEMORY_PATH env var for tests and
for advanced users who maintain a manual index.
RELEVANCE SCORING
+3 title contains file basename (sans extension)
+2 description contains file basename
+2 any import name appears in title or description (length ≥3)
+1 any path segment appears in title or description (length ≥3)
Top 3 matches with score >0 are returned; no matches = null.
INTEGRATION
- New provider wired into BUILTIN_PROVIDERS (src/providers/resolver.ts)
- Inserted at PROVIDER_PRIORITY index 3, between engram:mistakes
(+2) and mempalace (+4). Rationale: own-curated memory > shared
semantic memory when both are available.
SAFETY
- MAX_INDEX_BYTES = 1 MB hard cap (pathological files returned null)
- Empty files returned null (never a noise packet)
- All errors caught -> null return (never throws into resolver path)
TESTS (+24 cases in tests/providers/anthropic-memory.test.ts)
encodeProjectPath: standard path, trailing-slash trim, Windows
separator normalize, deep path preservation
getMemoryIndexPath: ends at the right path shape
parseMemoryIndex: well-formed index, malformed-line skip, empty-
content empty array, missing-description tolerated
scoreEntry: basename match (+3), import match (+2), zero on no
relationship, case-insensitive
resolve: missing file null, empty file null, no-match null, basename
match surfaces, caps at 3, over 1 MB skipped, override wins,
imports drive matches
isAvailable: default true (defers per-project), override exists true,
override missing false
Also updates tests/providers/resolver.test.ts — PROVIDER_PRIORITY
order test picks up the new index 3 slot.
Full suite: 846 -> 870 tests (+24), all passing. TypeScript clean.
V3.0 PROGRESS — 10 of 12 scope items done.
Remaining: #5 SSE streaming + #1 completion (HTTP transport + real MCP
server fixture) + #12 registry submit (post-ship).
Adds progressive delivery for rich packet assembly. Instead of blocking
on Promise.allSettled (which waits for the slowest provider — Serena
cold-start, mempalace ChromaDB warmup), clients can stream results
as they arrive and render each section immediately.
NEW — resolveRichPacketStreaming generator (src/providers/resolver.ts)
AsyncGenerator<StreamEvent> that yields:
{ type: 'provider', result: ProviderResult } — as each resolves
{ type: 'done', providerCount, durationMs } — final totals
Order = ARRIVAL order (fast providers first). Consumers who want
priority order use the non-streaming resolveRichPacket() which applies
full priority + mistakes-boost + budget logic.
Implementation: fan-out all providers, funnel outcomes into a FIFO
queue + wake-on-arrival pattern. No extra deps. Per-provider timeouts
preserved (same resolveWithTimeout path as non-streaming).
NEW — /context/stream SSE endpoint (src/server/http.ts)
GET /context/stream?file=<relative-path> (auth required).
Emits one SSE frame per StreamEvent. Frame shape matches MCP SEP-1699
(SSE resumption):
id: 0
event: provider
data: {"provider":"engram:ast", …}
id: 1
event: provider
data: {"provider":"engram:mistakes", …}
id: N
event: done
data: {"providerCount":N,"durationMs":347}
Supports Last-Event-ID header — clients reconnecting via
'Last-Event-ID: 3' skip events 0-3 and pick up from 4. Useful for
long-running sessions that drop WiFi mid-stream without losing context.
Client-disconnect aborts the stream cleanly (req.close handler short-
circuits the generator loop).
TESTS (+6 new)
resolver.test.ts (+2):
- Smoke: streaming generator terminates with a 'done' event for any
project (no hang, no runaway)
- Arrival-order invariant: toy generator mirrors production shape,
verifies fast results yield before slow ones
server/http.test.ts (+4):
- Missing 'file' param returns 400
- Valid request returns 200 + text/event-stream + ends with 'done'
- Every frame carries an 'id:' header (SEP-1699 resumption)
- Auth required — unauthenticated returns 401
Full suite: 870 -> 876 tests (+6), all passing. TypeScript clean.
V3.0 PROGRESS — 11 of 12 scope items done
✅ #1 foundation ✅ #2 ✅ #3 ✅ #4 ✅ #5 ✅ #6 ✅ #7 ✅ #8
✅ #9 ✅ #10 ✅ #11
Only remaining in-scope work:
- #12 MCP Registry submission (~2h, post-ship only)
Plus item #1 completion (HTTP transport + minimal MCP server fixture
for integration tests) — technically part of #1 which shipped its
foundation as c719591; the HTTP transport path was explicitly deferred
until this SSE work landed. Now it can.
The existing bench/runner.ts uses YAML-estimated costs (useful for CI
regression tracking but not an end-to-end proof). This new real-world
bench runs the FULL resolver pipeline against actual files in the repo
and compares rich-packet tokens to raw-file-read tokens.
METHODOLOGY (honest arithmetic)
For each file in the repo:
baselineTokens = ceil(file.length / 4) — cost if the agent
just Read() it
engramTokens = resolveRichPacket().estimatedTokens
— cost of the rich
packet that replaces
the Read
savings% = (baseline - engram) / baseline * 100
Aggregate = (sum baseline - sum engram) / sum baseline * 100.
LATEST RUN — 2026-04-24 on 30 real engramx source files
Baseline tokens: 67,435
engramx tokens: 6,185
Aggregate savings: 90.8%
Median per-file: 85.5%
Wins: 29 of 30
Best case: 98.4% (src/cli.ts: 18,820 → 306 tokens)
Target (>= 80%): PASS
Committed reports in bench/results/:
real-world-2026-04-24.json — machine-readable, full per-file data
real-world-2026-04-24.md — human-readable summary table
README UPDATE
Replaces the stale '88.1% measured' badge with '90.8% measured' and
adds a 'Proof, not promises' section that shows the methodology + real
numbers + reproduce-on-your-code instructions.
REPRODUCIBILITY
cd ~/engram
npx tsx bench/real-world.ts --files 30
cd any-other-project
engram init
npx tsx ~/engram/bench/real-world.ts --project . --files 50
The bench itself is ~250 lines with no external deps (just tsx). It
walks the repo with the same ignore rules as engramx's miner, skips
tests/bench/node_modules/dist, and handles missing providers cleanly
(baseline tokens still measured; engram side gets 0).
This gives the v3.0 release the ONE thing every skeptical reader asks
for: a reproducible number on a real codebase, not a cherry-picked
toy example.
package.json 2.1.0 -> 3.0.0. Description rewritten to reflect the v3.0 feature set — extensible MCP-client aggregator + mcpConfig plugin contract + pre-mortem mistake-guard + bi-temporal mistake memory + Anthropic Auto-Memory bridge + SSE streaming + AGENTS.md dual emit + 90.8% measured real-world savings. CHANGELOG.md gains a full [3.0.0] entry following Keep a Changelog format: Added (3 pillars), Changed (breaking APIs called out), Migration (v7 -> v8 auto-migration + autogen() return-type change), Tests (771 -> 876). Bench refresh: bench/results/real-world-2026-04-24.md rewritten by the 100-file run during release audit (was 30 files before). New numbers: 163,122 baseline tokens -> 17,722 engramx tokens = 89.1% aggregate savings on 87 files (after skip rules). AUDIT STATUS — ALL GREEN Phase A — build/typecheck/lint/tests ✅ 876/876 Phase B — CLI smoke (init/doctor/gen/query/…) ✅ dual-emit verified, broken-config survived Phase C — v2.1 -> v3.0 schema migration ✅ migration 8 clean, backup created, legacy rows preserved Phase D — stress (100-file bench, 20x SSE, 10k mistakes) ✅ 89.1%, 20/20, 2.04ms/resolve Phase E — security + secret scan ✅ no secrets in diff, auth gate verified on /context/stream Phase F — package sanity + version bump ✅ 3.0.0 published stats match 2.1.0 size envelope (672kB packed) Ready for PR → main.
INSTALL.HTML — showcased v3.0, kept aesthetic, fixed OG rendering
Hero:
- version pill v2.0.2 -> v3.0 'Spine' · shipped 2026-04-24
- sub-copy mentions 'any MCP server you plug in' — the extensibility pitch
- terminal block leads with 'engram setup' (shipped v2.1) as the one-command
flow; init + install-hook + adapter detect + doctor all in one
- metrics strip: 88.1% -> 89.1% (real-world bench), 670 -> 876 tests,
8+n -> 9+n providers
- tagline: 'optional: engram plugin install serena for +LSP symbols'
teases the plugin ecosystem
New '// v3.0 · what's new' section with 6 feature cards in a responsive
grid (extensibility / mistakes moat / opt-in safety / universal agent spec /
progressive rendering / future-proof). Hover lifts to border-accent.
Amber card symbols, inline code chips with accent color.
New '// plugins · Every plugin you add closes another token leak' section
— 6-row plugin table (Serena / GitHub MCP / Sentry / Supabase / Context7 /
Anthropic Auto-Memory) showing what each plugin closes + how to install.
Plus a 'how a plugin is built' terminal block showing the full 10-line
Serena plugin file. Drives the user's key ask: 'additional plugins will
actually drive more savings'.
Benefits section: table refreshed with real measured numbers (163,122
baseline tokens -> 17,722 engramx tokens, 89.1% saved, $0.49 -> $0.05
per session), new row for 'Stale-warning noise' (v3.0 bi-temporal) and
'Provider ecosystem' (any MCP as 10-line plugin). Section-meta links
to the committed bench report.
IDE coverage section rewritten: leads with 'One engram gen. Every agent
reads it.' — explains AGENTS.md dual-emit. Adds Codex CLI / Copilot Chat /
JetBrains Junie as v3.0 AGENTS.md rows alongside existing IDEs.
FAQ:
- 88.1% bench entry rewritten to explain the real-world bench methodology
+ link to committed report
- NEW 'What's new in v3.0' bullet list covering all 6 features
- Cross-tool support rewritten for AGENTS.md universal standard
- 'Can I add my own context provider?' rewritten to cover mcpConfig
auto-wrap (the 10-line plugin path)
Footer: v2.0.2 -> v3.0.0 'Spine'.
Final CTA copy refreshed to cite 89.1% + plugin ecosystem.
NAV: added 'v3.0' and 'Plugins' links.
RENDERING FIX (critical for OG previews + crawlers)
The reveal animation previously started at opacity:0 and relied on
IntersectionObserver + a per-element stagger to fade in. Headless
screenshotters (GitHub OG previews, Twitter cards, the Chrome
--screenshot pipeline) capture a snapshot before JS finishes staggering,
so above-the-fold content appeared EMPTY in social previews.
Fix:
- CSS default .reveal state is now opacity:1, transform:none (visible)
- html.js-ready .reveal adds opacity:0 + translateY(14px)
- Script toggles html.js-ready ONLY when JS + motion-allowed
- Observer stagger removed (CSS transition already provides the ramp)
Net: page renders fully for crawlers / no-JS / prefers-reduced-motion;
JS adds a subtle fade+slide for users who benefit from it. Verified via
headless Chrome screenshot — all 6 v3.0 cards, hero terminal, metrics
strip, and CTA row render in the first snapshot.
README — warmer for non-devs
New '## I'm not a hardcore developer — what does this actually do?'
section (4-bullet plain-English explanation) placed immediately after
the hero, before the Proof section. Target reader: someone who pays
for Cursor or Claude Code and just wants smaller bills / better AI
results without understanding the architecture.
Hero prose rewritten to lead with outcome ('stops charging you for
the same information twice') before mechanism. Quickstart block
replaces 'engram init && engram install-hook' with 'engram setup'.
v2.0 banner -> v3.0 banner at the top of the file, with the real
89.1% number.
Benchmark section split into 'Real-world bench (new in v3.0,
preferred)' + 'Structured task bench (CI regression)' so the new
bench.real-world.ts story leads.
NEW '## Plugins multiply the savings' section between benchmark and
'What It Does' — same plugin table as install.html (Serena / GitHub /
Sentry / Supabase / Context7 / Auto-Memory). Single sentence per
plugin showing what gap it closes.
'What It Does' updated: 8 providers -> 9 providers table (adds
anthropic:memory row between mistakes and git). Closing sentence
mentions the 10-line plugin path.
Misc: 'Rich packets from all 8 providers' -> '9 built-ins + any MCP
plugin' in the How-It-Compares row.
RESULT
Both docs now tell the same v3.0 story — 89.1% measured, extensible
ecosystem, normal users read the README first 200 lines and understand
the value prop without jargon.
ROOT CAUSE tests/providers/anthropic-memory.test.ts:59 used a regex assertion built with forward-slashes: expect(path).toMatch(/\.claude\/projects\/-Users-a…\/MEMORY\.md$/); The implementation uses path.join() which on Windows produces native backslash separators (C:\Users\runneradmin\.claude\projects\…). The test only passed on POSIX. Windows-latest × Node 20+22 = 2 failing jobs. Ubuntu-latest × Node 20+22 = 2 green jobs. Local macOS audit could not catch this. This is the SAME class of failure we logged from v2.1.0 (Windows path bug caught post-CI). The lesson was not honored when writing item #4. FIX — test tests/providers/anthropic-memory.test.ts - Build the expected path via the SAME path.join() call the implementation uses. toBe equality replaces the regex. - Result: identical assertion works on POSIX (/) and Windows (\). FIX — defence in depth on related call sites src/providers/anthropic-memory.ts (scoreEntry) - basename = ctx.filePath.split("/").pop() → split(/[\\/]/).pop() - Matches the pre-existing segments split style. Removes inconsistency in the same function (line 119 vs 120). src/providers/mcp-config.ts (applyArgTemplate) - Same treatment on the fileBasename fallback. - NodeContext.filePath is contract-POSIX, so both sites were safe in practice — but a plugin author passing a raw tool_input path would have silently corrupted basename extraction on Windows. REGRESSION GATES (prevent recurrence locally) Two new test cases exercise native Windows paths explicitly: - anthropic-memory.test.ts: scoreEntry("src\\auth\\login.ts") > 0 - mcp-config.test.ts: applyArgTemplate Windows path → basename "auth.ts" If anyone reverts the split(/[\\/]/) hardening, these tests fail on macOS immediately. No more silent-pass-on-macOS, fail-on-Windows. TESTS 876 → 878 passing (+2 regression cases). TypeScript clean. Expected CI result: all 4 jobs green on next run.
…phasis
Complete GitHub-presentation refresh for v3.0 'Spine'. Keeps every
aesthetic element; sharpens the story.
BANNER (assets/banner.html + banner.png)
- Badge: 'AI CODING MEMORY' -> 'CONTEXT SPINE · v3.0 "SPINE"'
- Wordmark: 'engram' (a highlighted) -> 'engramX' (mX highlighted)
- Tagline: emphasizes 'cached context spine that remembers — and gets
richer with every plugin' (user's explicit ask)
- Terminal block: npm install -g engramx + engram setup (the v2.1
one-command flow), shows 89.1% measured savings headline
- Bottom stats bar: 89.1% / 9+plugins / 0 LLM cost · 0 cloud /
Claude Code · Cursor · Codex · any AGENTS.md agent
- Knowledge graph visualization preserved as-is — same node shape,
amber palette, JetBrains Mono labels
- Re-rendered via headless Chrome into banner.png
README (user's 7 asks hit point-by-point)
1. 'keep same banner aesthetic' — banner.html aesthetic unchanged,
only content updated
2. 'rename to EngramX' — README hero title now 'EngramX — the cached
context spine for AI coding agents.' Capital-E brand in prose,
lowercase 'engramx' preserved for npm package name
3. 'mention all the differences we made' — v3.0 banner block
expanded with every shipped pillar (extensible, pre-mortem,
bi-temporal, Auto-Memory, SSE, dual-emit, 89.1%)
4. 'easy for end users to follow and install' — one-command
'engram setup' surfaced as THE install, with plain-language
explanation of what it does
5. 'ground breaking upgrade with massive saving' — 89.1% lead, plus
the phrase 'every plugin you add elevates the savings further'
6. 'looking in cached memory' — new sentence names the THREE layers
of cache explicitly (knowledge graph, per-provider SQLite cache,
in-memory LRU). Ties to the spine metaphor.
7. 'additional tools and repos that elevate saving / emphasize on
the spine' — lead sentence now 'EngramX is the spine.' Plugin-
multiplier paragraph surfaced in the hero block, not just in the
deeper section
CONTRIBUTING.md — full v3.0 rewrite
- Brand updated to EngramX
- 'Highest-impact contributions' reordered — worked examples first,
reproducible bench results second, plugin submissions third
- Development loop commands include bench/real-world.ts as a sanity
check
- **Windows-first discipline codified as a PR gate** — step 4 of
'Before you open a PR' now reads: 'If you touched anything that
builds a filesystem path, assert with path.join() / path.resolve(),
never hand-write / separators. We shipped a Windows-CI regression
on v3.0's first pass because of this.'
- Code style rule added: every test exercising filesystem paths must
include a Windows-native-path case locally
- Plugin author section added — points to the 2 reference plugins
and the 'how to submit a plugin' 3-step flow
GITHUB REPO METADATA (via gh repo edit)
- Description rewritten: 'EngramX — the cached context spine for AI
coding agents. 9 built-in providers + any MCP server as a 10-line
plugin, pre-mortem mistake-guard, bi-temporal memory, Anthropic
Auto-Memory bridge, SSE streaming packets, dual-emit AGENTS.md+
CLAUDE.md. 89.1% measured real-world token savings, local SQLite,
zero cloud.'
- Topics: removed 'continue-dev' and 'engram-context-protocol'
(stale), added 'agent-memory', 'engramx', 'agents-md'
- Now at 18 / 20 topic cap — room to grow
TESTS
878 / 878 green. TypeScript clean. No code changes, only docs +
branding + banner asset.
Root cause: GitHub's camo.githubusercontent.com image proxy caches README image URLs aggressively. The v3.0 banner.png was pushed in commit fa45e49 (bytes verified on disk — engramX wordmark, CONTEXT SPINE v3.0 badge, 89.1% savings, engram setup terminal), but GitHub kept serving the v2 banner from its CDN cache. Fix: rename the asset to assets/banner-v3.png and update the README <img src> to point at the new URL. camo treats it as a fresh URL and fetches the updated file. Also updates docs/install.html OG image meta tag so Twitter / LinkedIn / Slack social previews pick up the new banner. Old assets/banner.png kept in tree for backward compatibility with any existing link in the wild (blog posts, tweets). Identical to banner-v3.png byte-for-byte — both files are the correct v3.0 rendering. User-facing: next git push -> next GitHub README render uses the new URL, no cache to bust.
Previous v3 render had 'm' + 'X' in orange, which broke symmetry with the original v2 wordmark (only 'a' was orange — engr[a]m). Correct pattern: the same 'a' stays orange from v2 continuity, and the new 'X' joins it. Net: engr[a]m[X] — only the highlighted letters shift to the accent color. Keeps the brand continuity + names v3 distinctly. Both assets/banner.png and assets/banner-v3.png refreshed with the corrected render. MD5: 6793ecb9d6f109be2e714432a672bf74.
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Summary
The v3.0 "Spine" release — 13 commits, 876 passing tests (+105 from v2.1), pre-release audit certified all green. This PR merges
feat/v3.0-spine→mainfor the tag+publish sequence.Release audit — all phases ✅
Fresh verification completed before this PR:
/tmpproject: dual-emit CLAUDE.md+AGENTS.md blocks identical; Serena mcpConfig plugin auto-wrapped as[external, 250tok budget]; brokenmcp-providers.jsonsurvived (query still returned results)valid_until+invalidated_by_commitcolumns added,idx_nodes_validitycreated, legacy mistake row preserved withvalidUntil=undefined,.bak-v7backup file present/context/streamrequests 20/20 complete with 40 provider events; 10,000 mistake nodes → 2.04ms/resolve, bi-temporal filter correctly suppressed all 5K invalidated rows/context/streaminherits existing auth + Host + Origin guards (proven by 401 on unauth test); no strayconsole.*insrc/npm pack --dry-runconfirms 672kB packed / 5.5MB unpacked / 48 files (same envelope as v2.1 — MCP SDK bundled tightly)11 of 12 v3.0 scope items shipped
Full item-by-item mapping to commits in the CHANGELOG [3.0.0] section. Deferred to post-ship as
engramx@3.0.1: item #1 HTTP transport + minimal-MCP-server integration test fixture (explicitly declared in the item-#1 foundation commit).Proof, not a promise
Real-world benchmark on engramx's own 87-file sample (reproducible, committed at
bench/results/real-world-2026-04-24.md):Test plan
npm install && npm run buildon clean checkout — verifiednpm test -- --runUbuntu+Windows × Node 20+22 — 876/876 green (CI will re-verify on PR)engram init→engram doctor→engram gen→engram queryon a tmp project/context/streamendpoint: auth enforced, 20 concurrent requests succeed, every frame carriesid:for SEP-1699 resumptionengram plugin listmcp-providers.jsondoesn't break resolver — verified with empty-name entryENGRAM_MISTAKE_GUARD=1|2behavior covered by 21 unit tests + one integration scenariov3.0.0,npm login,npm publish, submit to Official MCP RegistryBreaking changes
autogen()return type{ file: string }→{ files: string[] }. Only consumer (cli.ts) updated. Other programmatic callers must readresult.files[0]. CLI-only users are unaffected.Migration
Automatic on first DB open: v7 → v8 migration runs,
.bak-v7backup is written, legacy mistake rows keep firing (NULLvalidUntil= still valid).Contributor credit
@mechtar-ru — PR #6 OOM fixes cherry-picked into this branch with preserved authorship (commits
b4f7944+411ad13).