diff --git a/.blog/published/memory-architecture-overview.md b/.blog/published/memory-architecture-overview.md index 3463b70..fff641d 100644 --- a/.blog/published/memory-architecture-overview.md +++ b/.blog/published/memory-architecture-overview.md @@ -29,7 +29,7 @@ The retrieval side has a similar gap. OpenClaw pulls relevant memory files back Construct's memory retrieval addresses this with a three-layer pipeline: FTS5 full-text search for exact keyword matches, cosine similarity over vector embeddings for semantic associations (catching "restaurants" → "Nightshade" even without shared vocabulary), and knowledge graph traversal for relational connections (walking edges like `Alice -> [introduced to] -> Nightshade -> [is a] -> restaurant`). Each layer covers gaps the others miss. -On the writing side, Construct stores every raw message permanently in the database. Nothing is ever discarded. The observational memory pipeline doesn't replace stored messages; it replaces them *in the context window*. Once a chunk of conversation has been compressed into observations, the agent sees the condensed observations plus only the recent unobserved messages, rather than replaying the entire transcript. But the full history remains in SQLite, searchable and intact. The observations are a compression layer for context assembly, not a lossy replacement for storage. +On the writing side, Construct stores every raw message permanently in the database. Nothing is ever discarded. The observational memory pipeline doesn't replace stored messages; it replaces them _in the context window_. Once a chunk of conversation has been compressed into observations, the agent sees the condensed observations plus only the recent unobserved messages, rather than replaying the entire transcript. But the full history remains in SQLite, searchable and intact. The observations are a compression layer for context assembly, not a lossy replacement for storage. 
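The retrieval pipeline described above leans on cosine similarity over vector embeddings for its semantic layer. A minimal sketch of how that layer works, with illustrative names and a 0.4 threshold taken from the surrounding docs (this is not Construct's actual API):

```typescript
// Minimal sketch of the embedding-similarity layer: score stored memories
// against a query embedding and keep the closest matches. Names and the
// threshold default are illustrative.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function rankBySimilarity(
  query: number[],
  memories: { id: string; embedding: number[] }[],
  threshold = 0.4,
): { id: string; score: number }[] {
  return memories
    .map((m) => ({ id: m.id, score: cosineSimilarity(query, m.embedding) }))
    .filter((m) => m.score >= threshold)
    .sort((a, b) => b.score - a.score);
}
```

This is why "restaurants" can surface "Nightshade" with no shared vocabulary: the comparison happens in embedding space, not over keywords.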
This sidesteps OpenClaw's core failure mode entirely. There's no dependency on the model choosing to write things down. Every message is stored automatically, and the observation pipeline ensures the meaning of older conversations stays in the agent's working context even after the raw messages rotate out of the context window. @@ -125,8 +125,8 @@ This means the memory system degrades gracefully. On day one, there are no obser The two follow-up articles cover the implementation in detail: -The *Observer, Reflector, Graph* article digs into the writing side: how raw messages are compressed into observations, how observations are condensed across generations by the reflector, and how stored memories spawn a knowledge graph of entities and relationships. +The _Observer, Reflector, Graph_ article digs into the writing side: how raw messages are compressed into observations, how observations are condensed across generations by the reflector, and how stored memories spawn a knowledge graph of entities and relationships. -The *Three Ways to Find a Memory* article covers the reading side: the waterfall of FTS5 full-text search, cosine similarity over embeddings, and graph traversal that backs the `memory_recall` tool, and the passive auto-injection that runs before the agent even starts thinking. +The _Three Ways to Find a Memory_ article covers the reading side: the waterfall of FTS5 full-text search, cosine similarity over embeddings, and graph traversal that backs the `memory_recall` tool, and the passive auto-injection that runs before the agent even starts thinking. Together, the two pipelines are what give Construct something closer to the memory structure of a person than a chatbot: compressed, semantically indexed, queryable by meaning, and always available without needing to be asked. 
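The "waterfall" of FTS5, embeddings, and graph traversal mentioned above implies a merge step: each layer returns candidate memories, duplicates are collapsed, and every hit keeps a `matchType` so the agent can reason about retrieval confidence. A hypothetical sketch of that merge, assuming exact-keyword hits outrank semantic and graph hits (the ordering here is an assumption, not Construct's documented behavior):

```typescript
// Hypothetical merge of the three retrieval layers. Each layer contributes
// memory IDs; duplicates keep the tag from the earliest (highest-confidence)
// layer that found them.
type MatchType = "fts5" | "embedding" | "graph";

interface Hit {
  memoryId: string;
  matchType: MatchType;
}

function mergeLayers(fts5: string[], embedding: string[], graph: string[]): Hit[] {
  const seen = new Set<string>();
  const out: Hit[] = [];
  const layers: [MatchType, string[]][] = [
    ["fts5", fts5],
    ["embedding", embedding],
    ["graph", graph],
  ];
  for (const [matchType, ids] of layers) {
    for (const memoryId of ids) {
      if (!seen.has(memoryId)) {
        seen.add(memoryId);
        out.push({ memoryId, matchType });
      }
    }
  }
  return out;
}
```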
diff --git a/.blog/published/observer-reflector-graph.md b/.blog/published/observer-reflector-graph.md index f424afb..b51d2a1 100644 --- a/.blog/published/observer-reflector-graph.md +++ b/.blog/published/observer-reflector-graph.md @@ -23,13 +23,14 @@ The observer is triggered post-response, non-blocking: ```typescript // src/agent.ts -memoryManager.runObserver(conversationId) +memoryManager + .runObserver(conversationId) .then((ran) => { if (ran) { - return memoryManager.runReflector(conversationId) + return memoryManager.runReflector(conversationId); } }) - .catch((err) => agentLog.error`Post-response observation failed: ${err}`) + .catch((err) => agentLog.error`Post-response observation failed: ${err}`); ``` It only runs when unobserved messages cross a token threshold (3,000 tokens estimated at 4 chars/token). Below that, it's free: no API call, no latency. @@ -62,14 +63,14 @@ The result is a small set of typed observations: ```typescript interface Observation { - id: string - conversation_id: string - content: string // "User has a dentist appointment March 5th at 9am" - priority: 'low' | 'medium' | 'high' - observation_date: string // when the thing happened, not when it was observed - generation: number // 0 = observer output, 1+ = survived reflector rounds - superseded_at: string | null - token_count: number + id: string; + conversation_id: string; + content: string; // "User has a dentist appointment March 5th at 9am" + priority: "low" | "medium" | "high"; + observation_date: string; // when the thing happened, not when it was observed + generation: number; // 0 = observer output, 1+ = survived reflector rounds + superseded_at: string | null; + token_count: number; } ``` @@ -94,10 +95,10 @@ The reflector receives observations with their IDs, and returns both new condens ```typescript // src/memory/reflector.ts // Validate superseded IDs -only allow IDs that were in the input -const inputIds = new Set(input.observations.map((o) => o.id)) +const inputIds = 
new Set(input.observations.map((o) => o.id)); result.superseded_ids = result.superseded_ids.filter( - (id) => typeof id === 'string' && inputIds.has(id), -) + (id) => typeof id === "string" && inputIds.has(id), +); ``` You can't trust an LLM to return valid database IDs without validating them. The reflector is allowed to supersede anything it was given. Nothing more. @@ -111,12 +112,12 @@ At the start of each conversation turn, the agent builds a context window: ```typescript // src/agent.ts const { observationsText, activeMessages, hasObservations } = - await memoryManager.buildContext(conversationId) + await memoryManager.buildContext(conversationId); if (hasObservations) { - historyMessages = activeMessages // only unobserved messages + historyMessages = activeMessages; // only unobserved messages } else { - historyMessages = await getRecentMessages(db, conversationId, 20) + historyMessages = await getRecentMessages(db, conversationId, 20); } ``` @@ -126,10 +127,10 @@ Observations are rendered as a stable text block: // src/memory/context.ts export function renderObservations(observations: Observation[]): string { const lines = observations.map((o) => { - const priority = o.priority === 'high' ? '!' : o.priority === 'low' ? '~' : '-' - return `${priority} [${o.observation_date}] ${o.content}` - }) - return lines.join('\n') + const priority = o.priority === "high" ? "!" : o.priority === "low" ? "~" : "-"; + return `${priority} [${o.observation_date}] ${o.content}`; + }); + return lines.join("\n"); } ``` diff --git a/.claude/agent-memory-local/blog-writer/MEMORY.md b/.claude/agent-memory-local/blog-writer/MEMORY.md index 39a9b18..6f86110 100644 --- a/.claude/agent-memory-local/blog-writer/MEMORY.md +++ b/.claude/agent-memory-local/blog-writer/MEMORY.md @@ -15,7 +15,7 @@ - Notable: `generation` field tracks how many reflector rounds an observation has survived; SQLite rowid watermark trick for insertion-order safety 3. 
**Three Ways to Find a Memory: FTS5, Embeddings, and Graph Traversal in a Personal AI** - - Topic: The memory *retrieval* side — passive vs. active retrieval, FTS5/embedding waterfall, graph expansion + - Topic: The memory _retrieval_ side — passive vs. active retrieval, FTS5/embedding waterfall, graph expansion - Key systems: `src/db/queries.ts` (`recallMemories`), `src/tools/core/memory-recall.ts`, `src/memory/graph/queries.ts`, `src/agent.ts` (passive injection), `src/system-prompt.ts` (preamble) - Angle: Three search modes (FTS5, cosine similarity, graph traversal) combined in a single retrieval pipeline; passive auto-injection vs. active tool-invoked recall - Notable: queryEmbedding is generated once and reused for both memory recall AND tool pack selection; matchType field (`fts5`/`embedding`/`graph`) lets agent reason about retrieval confidence diff --git a/.claude/agents/blog-writer.md b/.claude/agents/blog-writer.md index 326785d..8700901 100644 --- a/.claude/agents/blog-writer.md +++ b/.claude/agents/blog-writer.md @@ -17,6 +17,7 @@ You will explore the codebase, identify an interesting aspect (a clever design p ### Step 1: Explore and Discover Read through the codebase to understand the overall architecture and find something genuinely interesting to write about. Use file reading tools to examine: + - `src/agent.ts`: core agent factory and message processing - `src/tools/`: tool implementations and patterns - `src/extensions/`: the extension loading system @@ -33,6 +34,7 @@ Don't just skim. Read the actual code. The best articles come from genuine under **Update your agent memory** as you discover topics you've already written about. This builds up institutional knowledge across conversations. Write concise notes about what you found and which topics have been covered. 
Examples of what to record: + - Topics already covered in previous blog articles - Particularly interesting code patterns spotted for future articles - Areas of the codebase that changed significantly since last explored @@ -43,6 +45,7 @@ Before writing, check your memory for previously covered topics. If you find ove ### Step 3: Choose Your Angle Great technical blog articles aren't just documentation. They have a **thesis**. Some angles that work well: + - "Why we chose X over Y": exploring a tradeoff - "The pattern behind...": extracting a reusable insight from specific code - "How X actually works": demystifying something that looks simple but is surprisingly deep @@ -53,6 +56,7 @@ Great technical blog articles aren't just documentation. They have a **thesis**. ### Step 4: Write the Article Your article should: + - **Open with a hook**: A question, a surprising fact, a relatable problem, or a bold claim. Never open with "In this article, I will..." - **Include real code snippets**: Pull actual code from the codebase (trimmed for clarity). Don't write pseudocode when real code is more compelling. - **Tell a story**: Even technical articles benefit from narrative structure: setup, tension, resolution. @@ -83,6 +87,7 @@ Article body... ``` Additional formatting rules: + - Appropriate headers to break up sections - Code blocks with language annotations - No unnecessary preamble or meta-commentary about the writing process @@ -90,6 +95,7 @@ Additional formatting rules: ## Quality Checks Before finalizing, verify: + - [ ] The code snippets are accurate and pulled from the actual codebase - [ ] The article has a clear thesis, not just a tour of features - [ ] Technical claims are correct (re-read the relevant code if unsure) @@ -117,6 +123,7 @@ You have a persistent Persistent Agent Memory directory at `.claude/agent-memory As you work, consult your memory files to build on previous experience. 
When you encounter a mistake that seems like it could be common, check your Persistent Agent Memory for relevant notes. If nothing is written yet, record what you learned. Guidelines: + - `MEMORY.md` is always loaded into your system prompt. Lines after 200 will be truncated, so keep it concise - Create separate topic files (e.g., `debugging.md`, `patterns.md`) for detailed notes and link to them from MEMORY.md - Update or remove memories that turn out to be wrong or outdated @@ -124,18 +131,21 @@ Guidelines: - Use the Write and Edit tools to update your memory files What to save: + - Stable patterns and conventions confirmed across multiple interactions - Key architectural decisions, important file paths, and project structure - User preferences for workflow, tools, and communication style - Solutions to recurring problems and debugging insights What NOT to save: + - Session-specific context (current task details, in-progress work, temporary state) - Information that might be incomplete; verify against project docs before writing - Anything that duplicates or contradicts existing CLAUDE.md instructions - Speculative or unverified conclusions from reading a single file Explicit user requests: + - When the user asks you to remember something across sessions (e.g., "always use bun", "never auto-commit"), save it. No need to wait for multiple interactions - When the user asks to forget or stop remembering something, find and remove the relevant entries from your memory files - Since this memory is local-scope (not checked into version control), tailor your memories to this project and machine @@ -143,14 +153,19 @@ Explicit user requests: ## Searching past context When looking for past context: + 1. Search topic files in your memory directory: + ``` Grep with pattern="" path=".claude/agent-memory-local/blog-writer/" glob="*.md" ``` + 2. 
Session transcript logs (last resort, large files, slow): + ``` Grep with pattern="" path=".claude/projects/-home-reed-Code-0xreed-nullclaw-ts/" glob="*.jsonl" ``` + Use narrow search terms (error messages, file paths, function names) rather than broad keywords. ## MEMORY.md diff --git a/.claude/agents/tech-librarian.md b/.claude/agents/tech-librarian.md deleted file mode 100644 index fc1d9c1..0000000 --- a/.claude/agents/tech-librarian.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -name: tech-librarian -description: "Use when you need to document how a feature works, update docs after code changes, reorganize .docs/, or understand how documented systems fit together. Call proactively after implementing new features or significant changes." -model: inherit ---- - -You are the Technical Librarian for this project—a documentation steward who keeps a living knowledge base of how the system works. Your domain is the `.docs/` folder. Your mission is to ensure that any engineer (or Claude) can quickly understand how features, data flows, and architectural decisions fit together. - -## Scope - -- **Canonical knowledge base**: `.docs/` (not `docs/`). All project documentation lives here. -- **Index**: Maintain `.docs/README.md` as the index of all documentation with brief descriptions and links. - -## Core Responsibilities - -### 1. Documentation discovery - -When asked how something works: - -- Read the contents of `.docs/` to see what exists. -- Search for relevant existing docs before creating new content. -- Cross-reference multiple documents when features interact. -- If documentation does not exist, note the gap and offer to create it. - -### 2. Documentation creation - -When documenting new features or systems: - -- Understand the implementation by reading the relevant source files. -- Create clear, structured markdown in `.docs/` (e.g. `.docs/features/`, `.docs/architecture/`, `.docs/guides/`). 
-- Use this template (adapt as needed): - -```markdown -# [Feature Name] - -## Overview - -Brief description of what this feature does and why it exists. - -## How it works - -- Key files and their roles -- Data flow (sources, transforms, destinations) -- Important functions/components and what they do - -## Architecture decisions - -Why it was built this way; alternatives considered. - -## Integration points - -How this connects to other parts of the system. - -## Related documentation - -Links to other relevant docs in `.docs/`. -``` - -- Include concrete code references (file paths, function names). -- Add Mermaid diagrams when flows or relationships are complex (use camelCase/PascalCase node IDs; avoid spaces in node names and reserved words like `end`, `subgraph`). - -### 3. Documentation updates - -When code has changed: - -- Identify which docs in `.docs/` might be affected. -- Read the updated code to understand what changed. -- Update docs to reflect the current system. -- If a change invalidates previous assumptions, note that clearly. -- Add at the top of significantly updated docs: `*Last updated: YYYY-MM-DD — brief change summary*` - -### 4. Folder organization - -- Keep `.docs/` well organized: use subdirs (e.g. `architecture/`, `features/`, `guides/`) when a topic has several related docs. -- When reorganizing, preserve links and update all internal references. -- After adding or moving docs, update `.docs/README.md`. - -## Project context - -This is **Construct** — a personal braindump companion. It communicates via Telegram, stores long-term memories in SQLite, handles reminders, and can read/edit/test/deploy its own source.
Key boundaries: - -- **Agent core**: `src/agent.ts` — Agent factory, `processMessage()`, tool registration -- **System prompt**: `src/system-prompt.ts` — Prompt with context injection, SOUL.md support -- **Tools**: `src/tools/` — Built-in tool implementations (memory, schedule, self-*, secret-*) -- **Extensions**: `src/extensions/` — Extension system (loader, embeddings, secrets, types) -- **Telegram**: `src/telegram/` — Grammy bot setup -- **Scheduler**: `src/scheduler/` — Croner-based reminder system -- **Database**: `src/db/` — Kysely database, schema, queries, migrations -- **CLI**: `cli/` — Citty-based CLI (REPL, one-shot, direct tool invocation) -- **Extensions dir** (`EXTENSIONS_DIR`): Runtime skills (Markdown) and tools (TypeScript) loaded dynamically - -When documenting, clarify which tools, extensions, and data flows are involved and how they connect. -## Quality checklist - -Before finishing a documentation task: - -- [ ] Did I read the actual source, not just assume behavior? -- [ ] Are file paths and function names accurate? -- [ ] Would a new engineer understand this without extra context? -- [ ] Are there links to related docs in `.docs/`? -- [ ] Is `.docs/README.md` updated if I added or moved files? -- [ ] Did I check whether this change affects other docs? - -## When uncertain - -- If unsure how something works, read the code first. -- If the code is unclear, document what you can and flag uncertainties. -- If reorganization might affect others’ workflows, propose the change and ask for confirmation before doing it. -- If docs conflict with code, trust the code and update the documentation. diff --git a/.docs/README.md b/.docs/README.md deleted file mode 100644 index 90b0148..0000000 --- a/.docs/README.md +++ /dev/null @@ -1,40 +0,0 @@ -# Sprawl Documentation - -Documentation for the Sprawl monorepo: five apps, two shared packages, one memory pipeline. - -## Apps - -- **[Construct](./apps/construct.md)** -- Personal AI braindump companion. 
Telegram + CLI + scheduler interfaces, LLM agent with tools, three-layer memory, self-modification, extensions. -- **[Cortex](./apps/cortex.md)** -- Crypto market intelligence daemon. Ingests prices + news, feeds them through Cairn's memory pipeline, generates LLM-grounded trading signals. -- **[Synapse](./apps/synapse.md)** -- Paper trading daemon. Reads Cortex signals, sizes positions by confidence, manages risk with stop-losses and drawdown limits. -- **[Deck](./apps/deck.md)** -- Memory graph explorer. Hono REST API + React SPA with D3-force graph visualization, memory browser, and observation timeline. -- **[Optic](./apps/optic.md)** -- Terminal trading dashboard. Ratatui TUI that reads Cortex + Synapse DBs. Market view (prices, charts, news, signals, graph) and trading view (positions, trades, risk events). - -## Construct (flagship) -- feature deep-dives - -### Architecture - -- **[Architecture Overview](./architecture/overview.md)** -- Startup sequence, data flow, key design decisions (embedding-based tool selection, static/dynamic prompt split, self-modification safety) - -### Features - -- **[Agent System](./features/agent.md)** -- The `processMessage()` pipeline: conversation management, memory loading, embedding generation, skill selection, context preamble, tool registration, pi-agent execution, and response persistence -- **[Memory System](./features/memory.md)** -- Three-layer memory: declarative, graph, and observational. Schema, tools, embeddings, and `processMessage()` integration. -- **[Tool System](./features/tools.md)** -- Tool packs (core, web, self, telegram), embedding-based pack selection, `InternalTool` interface, TypeBox schemas, Telegram side-effects pattern. -- **[Extension System](./features/extensions.md)** -- User-authored skills (Markdown) and dynamic tools (TypeScript via jiti). Identity files, secrets management, reload mechanism. -- **[Database Layer](./features/database.md)** -- SQLite via node:sqlite + Kysely. 
Tables, FTS5 search, embedding storage, query functions. -- **[Telegram Integration](./features/telegram.md)** -- Grammy bot, authorization, typing indicators, Markdown-to-HTML, message chunking, reactions. -- **[Scheduler / Reminders](./features/scheduler.md)** -- Croner scheduling with two execution modes: static messages and agent prompts (full tool access via `processMessage()`). Cron + one-shot timing, dedup, 30s sync loop. -- **[CLI Interface](./features/cli.md)** -- Citty CLI: REPL, one-shot, direct tool invocation. -- **[System Prompt](./features/system-prompt.md)** -- Static system prompt + dynamic context preamble. - -## Shared Packages - -- **[Cairn](./packages/cairn.md)** -- Memory substrate shared by Construct, Cortex, and Deck. Observer/reflector pipeline, embeddings, graph extraction, context building. - -## Guides - -- **[Deployment Guide](./guides/deployment.md)** -- Docker and systemd deployment, self-deploy behavior. -- **[Security Considerations](./guides/security.md)** -- Self-modification safety, secrets, extension trust, Telegram auth. -- **[Environment Configuration](./guides/environment.md)** -- All env vars across all apps, Zod validation. -- **[Development Workflow](./guides/development.md)** -- Just commands, testing, logging, TypeScript config. diff --git a/.docs/apps/construct.md b/.docs/apps/construct.md deleted file mode 100644 index ebb81e1..0000000 --- a/.docs/apps/construct.md +++ /dev/null @@ -1,221 +0,0 @@ -# Construct - -*Last updated: 2026-03-01 -- Initial documentation* - -## Overview - -Personal AI braindump companion. Communicates via Telegram (primary interface), CLI (REPL + one-shot), and scheduled prompts. Uses an LLM agent with tool access, embedding-based tool/skill routing, a three-layer memory system (observations, memories, knowledge graph), and a self-modification capability that lets it edit its own source, create extensions, and deploy. 
- -Construct is the flagship app in the Sprawl monorepo -- the only one with a conversational agent, tool system, and Telegram integration. - -## How it works - -### Boot sequence (`apps/construct/src/main.ts`) - -1. Initialize Logtape logging -2. Run Kysely migrations on `DATABASE_URL` -3. Create database connection via `@repo/db` -4. Sync `EXT_*` environment variables into the `secrets` table (prefix stripped) -5. Load extensions: identity files (SOUL.md, IDENTITY.md, USER.md), skills, dynamic tools; compute their embeddings -6. Pre-compute builtin tool pack embeddings for semantic selection -7. Create Grammy Telegram bot -8. Start Croner scheduler (load active schedules, begin 30s sync loop) -9. Start Telegram long polling -10. Register SIGINT/SIGTERM for graceful shutdown - -### The processMessage pipeline (`apps/construct/src/agent.ts`) - -Every message -- from Telegram, CLI, or scheduler -- flows through `processMessage()`. This is the core orchestration function. - -``` -Input message - | - v -1. Get/create conversation (by source + externalId) -2. Create MemoryManager (Cairn) for this conversation -3. Build context window: - - If observations exist: observations (compressed prefix) + un-observed messages (active suffix) - - Fallback: last 20 raw messages -4. Load memories: 10 most recent + up to 5 semantically relevant (embedding similarity >= 0.4) -5. Select skills by embedding similarity to query -6. Build context preamble (date, timezone, source, observations, memories, skills, reply context) -7. Create pi-agent-core Agent with system prompt (base + identity files) -8. Select tool packs by embedding similarity, instantiate tools (builtin + dynamic) -9. Replay conversation history into agent (multi-turn context) -10. Subscribe to agent events (text deltas, usage tracking, tool call recording) -11. Save user message to DB -12. Run agent with preamble + message -13. Strip leaked [tg:ID] prefixes from response -14. 
Save assistant response + tool calls to DB -15. Track token usage + cost -16. Fire-and-forget: observer -> promoter -> reflector (async, non-blocking) - | - v -AgentResponse { text, toolCalls, usage, messageId } -``` - -The query embedding generated in step 4 is reused three times: memory recall, skill selection, and tool pack selection. - -### System prompt (`apps/construct/src/system-prompt.ts`) - -Two-layer design for prompt caching: - -- **Static base prompt** (`BASE_SYSTEM_PROMPT`) -- Rules, Telegram interaction patterns, identity file guidance, extension conventions. Stays constant across requests. -- **Identity injection** -- SOUL.md, IDENTITY.md, USER.md appended to the base prompt. Cached until content changes (`invalidateSystemPromptCache()`). -- **Context preamble** -- Dynamic per-request data prepended to the user's message (not the system prompt). Contains: timestamp, timezone, source, dev mode flag, observations, recent/relevant memories, selected skills, reply context. - -### Tool system (`apps/construct/src/tools/packs.ts`) - -Tools are organized into **packs** -- groups selected per message by embedding similarity. - -| Pack | Always loaded | Tools | -|------|---------------|-------| -| `core` | Yes | memory_store, memory_recall, memory_forget, memory_graph, schedule_create, schedule_list, schedule_cancel, secret_store, secret_list, secret_delete, usage_stats, identity_read, identity_update | -| `web` | No | web_read, web_search (requires `TAVILY_API_KEY`) | -| `self` | No | self_read, self_edit, self_test, self_logs, self_deploy (prod only), self_status, extension_reload | -| `telegram` | Yes (when ctx) | telegram_react, telegram_reply_to, telegram_pin, telegram_unpin, telegram_get_pinned | - -Selection algorithm: -1. At startup, `initPackEmbeddings()` embeds each non-`alwaysLoad` pack's description -2. Per message, the query embedding (from step 4 of processMessage) is compared against pack embeddings via cosine similarity -3. 
Packs above threshold (0.3) are included. `alwaysLoad` packs always included -4. If embedding generation fails at any point, all packs load (fail-open) - -Tools follow the `InternalTool` interface: `{ name, description, parameters: TSchema, execute }`. They are adapted to pi-agent-core's `AgentTool` via `createPiTool()`. - -Telegram tools use a **side-effects pattern**: they write to a mutable `TelegramSideEffects` object (`reactToUser`, `replyToMessageId`, `suppressText`) which the bot handler reads after agent execution to apply Telegram-specific actions. - -### Telegram integration (`apps/construct/src/telegram/`) - -Grammy bot with long polling. Key behaviors: - -- **Authorization** -- `ALLOWED_TELEGRAM_IDS` whitelist. Empty = allow all. -- **Per-chat queue** -- Messages from the same chat are serialized via `enqueue()` to prevent concurrent `processMessage()` calls on the same conversation (causes race conditions). -- **Reply-to threading** -- When multiple messages queue up (depth > 1), auto-sets `replyToMessageId` on responses so they thread correctly. -- **Typing indicator** -- Refreshed every 4s while the agent is processing. -- **Message chunking** -- Responses over 4000 chars are split into multiple messages. -- **HTML formatting** -- Markdown converted to Telegram HTML via `markdownToTelegramHtml()` (`format.ts`). Falls back to plain text if HTML parsing fails. -- **Reaction handling** -- User emoji reactions are converted to synthetic messages (`[User reacted with ... to ... message: "..."]`) and processed through the full agent pipeline. -- **Message ID tracking** -- Telegram message IDs are saved via `updateTelegramMessageId()` for reply-to references. - -### Scheduler (`apps/construct/src/scheduler/index.ts`) - -Croner-based reminder system. Two execution modes: - -- **Static schedules** -- Direct Telegram message delivery. No agent involvement. -- **Agent schedules** -- Full `processMessage()` execution with tool access. 
The schedule's `prompt` field drives the agent, and its response is sent to Telegram. - -Mechanics: -- **Cron** -- Recurring schedules via cron expressions (with timezone support) -- **One-shot** -- `run_at` timestamp; auto-cancelled after firing. Past-due one-shots fire immediately. -- **Sync loop** -- Every 30s, polls the `schedules` table for new/cancelled entries and updates the in-memory job map -- **History tracking** -- Both static and agent schedule outputs are saved to conversation history so the agent knows what was delivered - -### CLI (`apps/construct/src/cli/index.ts`) - -Citty CLI with four modes: - -- **REPL** -- Interactive loop (`just cli`). Prompts `you>`, prints `construct>`. -- **One-shot** -- Single message: `just cli myinstance "message here"` -- **Tool invocation** -- Direct tool testing: `just cli myinstance --tool memory_recall --args '{"query":"..."}'` -- **Maintenance** -- `--reembed` (re-embed all memories with current model), `--backfill` (graph extraction + node embeddings + observer + reflector for all existing data) - -All modes run migrations, create a DB connection, and go through `processMessage()` (except direct tool invocation which bypasses the agent). - -### Extension system (`apps/construct/src/extensions/`) - -User/agent-authored capabilities loaded from `EXTENSIONS_DIR`. 
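The scheduler's 30-second sync loop described above is essentially a reconcile step: diff the `schedules` table (desired state) against the in-memory job map (actual state), then start and stop Croner jobs accordingly. A hedged sketch of just the diffing logic, with the Croner wiring omitted (function and parameter names are illustrative):

```typescript
// Illustrative reconcile step for the scheduler's sync loop: compare active
// schedule IDs from the DB against IDs that currently have an in-memory job,
// and report which jobs to start and which to stop. The real implementation
// attaches these IDs to Croner jobs; this sketch shows only the diff.
function reconcile(
  desired: Set<string>, // active schedule IDs from the schedules table
  running: Set<string>, // schedule IDs with a live in-memory job
): { start: string[]; stop: string[] } {
  const start = [...desired].filter((id) => !running.has(id));
  const stop = [...running].filter((id) => !desired.has(id));
  return { start, stop };
}
```

Polling plus reconcile keeps the job map eventually consistent with the table without needing change notifications from SQLite.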
- -**Identity files** (root of extensions dir): -- `SOUL.md` -- Personality traits, values, communication style -- `IDENTITY.md` -- Agent metadata: name, creature type, pronouns -- `USER.md` -- Human context: name, location, preferences - -**Skills** (`skills/` subdir): -- Markdown files with YAML frontmatter (`name`, `description`, optional `requires`) -- Body injected into context preamble when selected by embedding similarity -- Not tools -- they are instructions the agent follows - -**Dynamic tools** (`tools/` subdir): -- TypeScript files loaded at runtime via jiti (no compile step) -- Single `.ts` file = standalone pack; directory of `.ts` files = grouped pack -- Export `{ name, description, parameters, execute }` (or factory function receiving `DynamicToolContext`) -- Optional `meta.requires` for dependency checking (env vars, secrets, binaries) -- `node_modules` symlinked from project root for import resolution - -**Lifecycle**: -1. `initExtensions()` at startup: create dirs, load everything, compute embeddings -2. `extension_reload` tool: re-reads all files, rebuilds registry, recomputes embeddings -3. Selection per message: skills and dynamic packs filtered by embedding similarity (same query embedding) - -**Registry** (`ExtensionRegistry`): singleton holding identity files, parsed skills, and loaded dynamic tool packs. 
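The selection rules spelled out earlier (`alwaysLoad` packs are unconditional, other packs need cosine similarity ≥ 0.3, and a failed query embedding fails open so all packs load) apply to both builtin tool packs and these dynamic extension packs. A minimal sketch of those rules, with illustrative types rather than Construct's actual interfaces:

```typescript
// Hedged sketch of pack selection: alwaysLoad packs are always included,
// other packs must clear the similarity threshold, and a missing query
// embedding fails open (every pack loads). Types are illustrative.
interface Pack {
  name: string;
  alwaysLoad: boolean;
  similarity?: number; // cosine similarity of pack description vs. query
}

function selectPacks(
  packs: Pack[],
  queryEmbeddingOk: boolean,
  threshold = 0.3,
): string[] {
  if (!queryEmbeddingOk) {
    return packs.map((p) => p.name); // fail-open: embedding failed, load all
  }
  return packs
    .filter((p) => p.alwaysLoad || (p.similarity ?? 0) >= threshold)
    .map((p) => p.name);
}
```

Fail-open is the right default here: a degraded embedding provider should cost extra context tokens, not silently remove the agent's tools.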
-
-## Key files
-
-| File | Role |
-|------|------|
-| `src/main.ts` | Entry point, boot sequence, graceful shutdown |
-| `src/agent.ts` | `processMessage()` pipeline, `AgentResponse` type, pi-agent adaptation |
-| `src/system-prompt.ts` | Base system prompt, identity injection, context preamble builder |
-| `src/env.ts` | Zod-validated environment config |
-| `src/logger.ts` | Logtape logging setup |
-| `src/cli/index.ts` | CLI: REPL, one-shot, tool invoke, reembed, backfill |
-| `src/telegram/bot.ts` | Grammy bot, authorization, queueing, reply threading, typing |
-| `src/telegram/format.ts` | Markdown-to-Telegram-HTML conversion |
-| `src/telegram/types.ts` | `TelegramContext`, `TelegramSideEffects` |
-| `src/scheduler/index.ts` | Croner scheduler, static/agent execution, sync loop |
-| `src/tools/packs.ts` | Tool pack definitions, embedding selection, `InternalTool` interface |
-| `src/tools/core/` | Memory, schedule, secret, identity, usage tools |
-| `src/tools/self/` | self_read, self_edit, self_test, self_logs, self_deploy, self_status, extension_reload |
-| `src/tools/web/` | web_search (Tavily), web_read (fetch + parse) |
-| `src/tools/telegram/` | react, reply_to, pin, unpin, get_pinned |
-| `src/extensions/index.ts` | Extension registry, init/reload, skill/dynamic-tool selection |
-| `src/extensions/loader.ts` | Skill parser, dynamic tool loader (jiti), requirement checker |
-| `src/extensions/embeddings.ts` | Skill + dynamic pack embedding cache and selection |
-| `src/extensions/secrets.ts` | Secrets table sync + builder |
-| `src/db/schema.ts` | Construct-specific tables (extends CairnDatabase) |
-| `src/db/queries.ts` | All DB query helpers |
-| `src/db/migrate.ts` | Migration runner |
-
-## Database tables
-
-Construct's database extends Cairn's schema with three additional tables:
-
-- `schedules` -- Cron/one-shot reminders (description, cron_expression/run_at, message, prompt, chat_id, active)
-- `settings` -- Key-value store for app settings
-- `secrets` -- Encrypted secrets store (key, value, source). `EXT_*` env vars synced on startup.
-
-Plus all Cairn tables: `conversations`, `messages`, `memories`, `observations`, `graph_nodes`, `graph_edges`
-
-## Integration points
-
-- **@repo/cairn** -- Memory pipeline. `MemoryManager` used in `processMessage()` for context building (observations), memory recall (FTS5 + embeddings), and post-response observer/promoter/reflector. Graph extraction runs on promoted memories.
-- **@repo/db** -- `createDb()` for database connection, migration runner.
-- **pi-agent-core** -- LLM agent runtime. Construct wraps its `InternalTool` into pi-agent's `AgentTool` via `createPiTool()`.
-- **OpenRouter** -- LLM inference (configurable model) and embedding generation.
-- **Telegram** -- Grammy bot, long polling, message/reaction handling.
-- **Tavily** -- Web search API (optional, gated by `TAVILY_API_KEY`).
-- **Deck** -- Can browse Construct's memory graph by pointing at its database.
-
-## Running
-
-```bash
-just dev              # Dev mode with file watching
-just start myinstance # Production (reads .env.construct)
-just cli myinstance   # CLI mode
-just cli myinstance --tool memory_recall --args '{"query":"test"}'
-```
-
-## Related documentation
-
-- [Architecture Overview](../architecture/overview.md) -- Startup, data flow, design decisions
-- [Agent System](../features/agent.md) -- processMessage() deep-dive
-- [Memory System](../features/memory.md) -- Three-layer memory architecture
-- [Tool System](../features/tools.md) -- Pack selection, tool interface
-- [Extension System](../features/extensions.md) -- Skills, dynamic tools, identity files
-- [Telegram Integration](../features/telegram.md) -- Bot setup, formatting, reactions
-- [Scheduler](../features/scheduler.md) -- Reminder execution modes
-- [CLI Interface](../features/cli.md) -- REPL, one-shot, tool invocation
-- [System Prompt](../features/system-prompt.md) -- Static/dynamic prompt split
-- [Database Layer](../features/database.md) -- Schema, queries, migrations
-- [Cairn](../packages/cairn.md) -- Memory substrate
diff --git a/.docs/apps/cortex.md b/.docs/apps/cortex.md
deleted file mode 100644
index 09fe271..0000000
--- a/.docs/apps/cortex.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# Cortex
-
-*Last updated: 2026-03-01 -- Initial documentation*
-
-## Overview
-
-Crypto market intelligence daemon. Runs as a headless Node.js process that ingests prices and news on cron loops, feeds them through Cairn's memory pipeline, and generates LLM-grounded trading signals. Stores everything in its own SQLite database.
-
-## How it works
-
-### Boot sequence (`apps/cortex/src/main.ts`)
-
-1. Run DB migrations
-2. Create Kysely DB + MemoryManager (Cairn)
-3. Seed tracked tokens from CoinGecko (fetch metadata)
-4. If `--backfill` flag: run historical backfill, then exit (or continue with `--daemon`)
-5. Fetch initial prices
-6. Start cron loop daemon
-
-### Pipeline loop (`apps/cortex/src/pipeline/loop.ts`)
-
-Three Croner jobs run on configurable intervals:
-
-- **Price ingestion** (default: 5min) -- Fetches current prices from CoinGecko, stores snapshots, composes a price message for Cairn, runs observer -> promoter -> reflector pipeline
-- **News ingestion** (default: 15min) -- Fetches news from CryptoPanic + CryptoCompare RSS, deduplicates by external_id, composes news message for Cairn pipeline
-- **Signal generation** (default: 1hr) -- Runs `analyzeAllTokens()` for each tracked token
-
-A fourth job polls a **command queue** every 10s. Optic can insert commands (e.g. "analyze") which Cortex picks up and executes.
-
-### Signal analyzer (`apps/cortex/src/pipeline/analyzer.ts`)
-
-For each token, generates two signals: **short-term** (24h) and **long-term** (4 weeks).
-
-1. Generate a recall query via LLM (or static fallback)
-2. Hybrid memory recall: FTS5 + embedding similarity (15 memories)
-3. Graph context: search nodes by token name, traverse 2 hops, fetch linked memories
-4. Compose prompt with price data, memories, graph context
-5. LLM generates structured signal: `{ signal: buy|sell|hold, confidence: 0-1, reasoning, key_factors }`
-6. Store signal in `signals` table
-7. Store signal reasoning as a Cairn memory (feedback loop)
-
-### Data flow
-
-```
-CoinGecko ─────────> price_snapshots ─────> analyzer ─────> signals
-                          ↑                    │
-CryptoPanic ────┐         │                    │
-CryptoCompare ──┴─> news_items                 │               ↓
-                          │                    │          Synapse reads
-                          ↓                    │
-                   Cairn pipeline              │
-                (observe → promote →           │
-                 reflect → graph) ─────────────┘
-                          │
-          memories + graph_nodes + graph_edges
-```
-
-### Backfill (`apps/cortex/src/pipeline/backfill.ts`)
-
-Supports `--backfill`, `--backfill-news`, `--backfill-prices` flags with a day count. Historical data is fetched and run through the same Cairn pipeline, building up the memory/graph substrate before live operation.
-
-## Key files
-
-| File | Role |
-|------|------|
-| `src/main.ts` | Entry point, CLI args, boot |
-| `src/env.ts` | Zod-validated env config |
-| `src/pipeline/loop.ts` | Croner jobs, price/news composition |
-| `src/pipeline/analyzer.ts` | LLM signal generation with memory recall |
-| `src/pipeline/prompts.ts` | Short/long signal prompt templates |
-| `src/pipeline/backfill.ts` | Historical data backfill |
-| `src/ingest/prices.ts` | CoinGecko API |
-| `src/ingest/news.ts` | CryptoPanic + CryptoCompare RSS |
-| `src/db/schema.ts` | tracked_tokens, price_snapshots, news_items, signals, commands |
-| `src/db/queries.ts` | All DB operations |
-
-## Database tables
-
-- `tracked_tokens` -- Token metadata (id, symbol, name, active flag)
-- `price_snapshots` -- Time-series price data (price, market_cap, volume, change_24h/7d)
-- `news_items` -- Deduplicated news articles (external_id, title, url, source, tokens_mentioned)
-- `signals` -- Generated trading signals (token_id, signal_type, confidence, reasoning, timeframe)
-- `commands` -- Queue for inter-app commands (Optic -> Cortex)
-
-Plus Cairn tables: memories, observations, graph_nodes, graph_edges, conversations, messages
-
-## Integration points
-
-- **Cairn** (`@repo/cairn`) -- Memory pipeline for price + news data. Same observe/reflect/promote/graph flow used by Construct.
-- **Synapse** -- Reads `signals` table from Cortex's DB via CortexReader
-- **Optic** -- Reads price_snapshots, news_items, signals, graph_* tables directly via rusqlite. Can insert commands.
-- **Deck** -- Can browse Cortex's memory graph if pointed at its DB
-
-## Related documentation
-
-- [Cairn](../packages/cairn.md) -- Memory pipeline details
-- [Synapse](./synapse.md) -- Signal consumer
-- [Optic](./optic.md) -- Market data visualization
diff --git a/.docs/apps/deck.md b/.docs/apps/deck.md
deleted file mode 100644
index 11bed47..0000000
--- a/.docs/apps/deck.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Deck
-
-*Last updated: 2026-03-01 -- Initial documentation*
-
-## Overview
-
-Memory graph explorer. A Hono REST API serving a React SPA that visualizes the knowledge graph, lets you browse memories, and trace the observation pipeline. Can point at any Sprawl app's database.
-
-## How it works
-
-### Backend (`apps/deck/src/server.ts`)
-
-Hono app with CORS, DB injection middleware, and four route groups:
-
-- `/api/memories` -- Search, list, detail for the `memories` table
-- `/api/graph` -- Nodes, edges, traversal queries against `graph_nodes`/`graph_edges`
-- `/api/observations` -- Timeline of observation pipeline activity
-- `/api/stats` -- Aggregate counts (memories, nodes, edges, etc.)
-
-In production, serves the built React SPA from `web/dist/`. In development, use Vite dev server + API proxy.
-
-### Frontend (`apps/deck/web/`)
-
-React 19 SPA with React Router. Three views:
-
-- **GraphView** (`/`) -- D3-force directed graph on HTML canvas. Nodes are entities, edges are relationships. Click to inspect, search to filter.
-- **MemoryBrowser** (`/memories`) -- Searchable list of all memories with category/source filters.
-- **ObservationTimeline** (`/observations`) -- Chronological view of observations, showing generation, priority, and supersession.
-
-### Components
-
-| Component | Role |
-|-----------|------|
-| `GraphView.tsx` | D3-force canvas rendering, zoom/pan, node selection |
-| `GraphControls.tsx` | Search, layout controls |
-| `GraphDetail.tsx` | Selected node/edge detail panel |
-| `NodeTooltip.tsx` | Hover tooltip for graph nodes |
-| `MemoryBrowser.tsx` | Memory list with search |
-| `MemoryCard.tsx` | Individual memory display |
-| `ObservationTimeline.tsx` | Observation list with generation/priority display |
-| `SearchBar.tsx` | Shared search input |
-| `Layout.tsx` | App shell with navigation |
-
-## Key files
-
-| File | Role |
-|------|------|
-| `src/server.ts` | Hono app setup, middleware, routing |
-| `src/env.ts` | DATABASE_URL + PORT config |
-| `src/routes/memories.ts` | Memory search/list/detail API |
-| `src/routes/graph.ts` | Graph query API |
-| `src/routes/observations.ts` | Observation timeline API |
-| `src/routes/stats.ts` | Stats aggregation API |
-| `web/src/App.tsx` | React router setup |
-| `web/src/components/GraphView.tsx` | D3-force graph visualization |
-
-## Integration points
-
-- **@repo/cairn** -- Uses `CairnDatabase` type for DB queries. Reads memories, observations, graph_nodes, graph_edges tables.
-- **@repo/db** -- `createDb()` for database connection.
-- Can browse any Sprawl app's database (Construct, Cortex) by changing `DATABASE_URL`.
-
-## Running
-
-```bash
-just deck-dev myinstance # reads .env.myinstance for DATABASE_URL
-```
-
-Default port: 4800.
-
-## Related documentation
-
-- [Cairn](../packages/cairn.md) -- Memory system that Deck visualizes
-- [Memory System](../features/memory.md) -- Construct's memory implementation
diff --git a/.docs/apps/optic.md b/.docs/apps/optic.md
deleted file mode 100644
index 52b3b85..0000000
--- a/.docs/apps/optic.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# Optic
-
-*Last updated: 2026-03-01 -- Initial documentation*
-
-## Overview
-
-Terminal trading dashboard. A Rust TUI built with Ratatui that reads Cortex and Synapse SQLite databases directly via rusqlite. No JS runtime, no network calls -- just local DB reads.
-
-## How it works
-
-### Startup (`apps/optic/src/main.rs`)
-
-1. Parse CLI args: first positional arg = Cortex DB path, `--synapse <path>` = Synapse DB path
-2. Open CortexDb (required) and optionally SynapseDb
-3. Initial data refresh
-4. Enter Crossterm alternate screen, start event loop
-
-### Event loop
-
-- Auto-refreshes every 5 seconds
-- Polls for keyboard events between refreshes
-- Modal popups for news detail and signal detail
-
-### View modes
-
-**Market view** (`1` key):
-- **Prices table** -- Token symbols, current price, 24h/7d change, volume
-- **Price chart** -- Braille-rendered 24h sparkline (cycle tokens with `c`)
-- **News feed** -- Scrollable list with source, time, linked tokens. Enter for detail modal, `o` to open URL.
-- **Signals** -- Buy/sell/hold with confidence, timeframe (24h/4w), reasoning preview. Enter for detail modal.
-- **Knowledge graph** -- Recent edges: source -> relation -> target with weight
-
-**Trading view** (`2` key, requires Synapse connection):
-- **Positions** -- Open positions with entry/current price, size, unrealized P&L, stop-loss/take-profit
-- **Trades** -- Recent buy/sell executions
-- **Signal log** -- Every signal processed: opened, closed, or skipped with reason
-- **Risk events** -- Stop-loss, take-profit, drawdown halt events
-
-**Portfolio bar** (shown when Synapse connected):
-- NAV, cash, drawdown %, return %, HALTED/LIVE status
-
-### Keybinds
-
-| Key | Action |
-|-----|--------|
-| `q` | Quit |
-| `r` | Manual refresh |
-| `Tab` | Cycle focused panel |
-| `j/k` or arrows | Scroll |
-| `c` | Cycle price chart token (market view) |
-| `Enter` | Detail modal (news/signals) |
-| `o` | Open news URL in browser (in detail modal) |
-| `a` | Queue analyze command for Cortex |
-| `1` | Market view |
-| `2` | Trading view |
-| `Esc` | Close modal |
-
-### Command queue
-
-Pressing `a` inserts an "analyze" command into Cortex's `commands` table. Cortex picks it up on its next command poll (every 10s) and runs signal analysis. Status shows in the status bar.
-
-## Key files
-
-| File | Role |
-|------|------|
-| `src/main.rs` | CLI args, DB connections, terminal setup, event loop |
-| `src/db.rs` | CortexDb + SynapseDb structs. Read-only rusqlite queries. |
-| `src/ui.rs` | All Ratatui rendering: panels, tables, charts, modals, status bar |
-
-## Database access
-
-Optic reads databases **read-only** and never writes (except the command queue insert):
-
-**From Cortex DB** (`CortexDb`):
-- `tracked_tokens` -- Token symbols
-- `price_snapshots` -- Current + historical prices
-- `news_items` -- News articles
-- `signals` -- Trading signals
-- `graph_nodes` + `graph_edges` -- Knowledge graph
-- `memories` -- Memory counts for stats
-- `commands` -- Insert analyze commands
-
-**From Synapse DB** (`SynapseDb`):
-- `portfolio_state` -- NAV, cash, drawdown
-- `positions` -- Open positions
-- `trades` -- Recent trades
-- `signal_log` -- Signal processing history
-- `risk_events` -- Risk management events
-
-## Building
-
-```bash
-just optic       # Run (debug build)
-just optic-build # Release build
-```
-
-Dependencies: Rust toolchain, `sqlite3` system library (or bundled via rusqlite feature).
-
-## Related documentation
-
-- [Cortex](./cortex.md) -- Market data source
-- [Synapse](./synapse.md) -- Trading data source
diff --git a/.docs/apps/synapse.md b/.docs/apps/synapse.md
deleted file mode 100644
index 12b522c..0000000
--- a/.docs/apps/synapse.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# Synapse
-
-*Last updated: 2026-03-01 -- Initial documentation*
-
-## Overview
-
-Paper trading daemon. Reads signals from Cortex's database, sizes positions by confidence, manages risk with stop-losses and drawdown limits, and simulates execution with slippage and gas costs. No real money -- purely simulation against live data.
-
-## How it works
-
-### Boot sequence (`apps/synapse/src/main.ts`)
-
-1. Run DB migrations (own database)
-2. Create Kysely DB + CortexReader (read-only connection to Cortex DB)
-3. Initialize portfolio state if first run (set initial cash balance)
-4. Create PaperExecutor (simulates fills)
-5. Start cron loop daemon
-
-### Engine loop (`apps/synapse/src/engine/loop.ts`)
-
-Two Croner jobs:
-
-- **Signal poll** (default: 60s) -- Reads latest signals from Cortex, filters, sizes, and executes paper trades
-- **Risk check** (default: 30s) -- Updates position prices, checks stop-loss/take-profit, monitors portfolio drawdown
-
-### Signal processing flow
-
-```
-Cortex signals ──> dedup ──> filter ──> risk check ──> size ──> execute ──> record
-```
-
-1. **Dedup** -- Skip signals already in `signal_log`
-2. **Filter** (`signal-filter.ts`) -- Confidence thresholds (configurable per short/long), existing position check
-3. **Pre-trade risk** (`risk.ts`) -- Portfolio halt check, max open positions, exposure limits
-4. **Position sizing** (`position-sizer.ts`) -- Scales USD allocation by confidence against total portfolio value
-5. **Execution** (`executor.ts`) -- PaperExecutor applies slippage (BPS) and simulated gas, returns fill price + quantity
-6. **Record** -- Insert position, trade, signal log entries. Update cash balance.
-
-### Risk management (`apps/synapse/src/engine/risk.ts`)
-
-Per-position:
-- **Stop-loss** -- Configurable percentage below entry (default: 8%). Triggers automatic close.
-- **Take-profit** -- Configurable percentage above entry (default: 20%). Triggers automatic close.
-
-Portfolio-level:
-- **Drawdown halt** -- If portfolio drops below threshold from high water mark (default: 15%), close all positions and halt trading
-- **Exposure limit** -- Max allocation to a single token (default: 25% of NAV)
-- **Max open positions** -- Cap on concurrent positions (default: 8)
-
-### Portfolio tracking (`apps/synapse/src/portfolio/tracker.ts`)
-
-- Updates position mark-to-market prices from Cortex
-- Recalculates total portfolio value, drawdown, high water mark
-- Periodic snapshots for historical tracking
-
-### CLI status (`apps/synapse/src/status.ts`)
-
-`just synapse-status` prints a summary: NAV, cash, drawdown, return %, open positions table, recent trades, risk events.
-
-## Key files
-
-| File | Role |
-|------|------|
-| `src/main.ts` | Entry point, boot sequence |
-| `src/env.ts` | All config: balance, intervals, risk params, sizing |
-| `src/status.ts` | CLI portfolio summary |
-| `src/types.ts` | Executor interface |
-| `src/cortex/reader.ts` | Read-only Cortex DB access (signals, prices, tokens) |
-| `src/engine/loop.ts` | Signal poll + risk check cron jobs |
-| `src/engine/executor.ts` | PaperExecutor (simulated fills) |
-| `src/engine/signal-filter.ts` | Confidence filtering, cooldown |
-| `src/engine/position-sizer.ts` | Confidence-scaled allocation |
-| `src/engine/risk.ts` | Stop-loss, take-profit, drawdown, exposure |
-| `src/engine/pricing.ts` | Price fetching from Cortex |
-| `src/portfolio/tracker.ts` | Mark-to-market, portfolio recalc, snapshots |
-| `src/db/schema.ts` | positions, trades, signal_log, risk_events, portfolio_state, portfolio_snapshots |
-
-## Database tables
-
-- `portfolio_state` -- Single row: cash_usd, total_value_usd, high_water_mark_usd, drawdown_pct, halted
-- `positions` -- Open/closed positions (token, entry/current price, quantity, P&L, stop/take levels)
-- `trades` -- Individual buy/sell executions (price, size, gas, slippage)
-- `signal_log` -- Every signal processed: action taken (opened, closed, skipped) + reason
-- `risk_events` -- Stop-loss, take-profit, drawdown halt, exposure limit events
-- `portfolio_snapshots` -- Periodic NAV snapshots for time-series tracking
-
-## Integration points
-
-- **Cortex** -- Reads `signals`, `price_snapshots`, `tracked_tokens` from Cortex DB via CortexReader
-- **Optic** -- Reads portfolio_state, positions, trades, signal_log, risk_events via rusqlite
-- Does NOT use Cairn -- purely a signal consumer and execution engine
-
-## Related documentation
-
-- [Cortex](./cortex.md) -- Signal producer
-- [Optic](./optic.md) -- Trading view visualization
diff --git a/.docs/architecture/overview.md b/.docs/architecture/overview.md
deleted file mode 100644
index 092ec77..0000000
--- a/.docs/architecture/overview.md
+++ /dev/null
@@ -1,147 +0,0 @@
-# Architecture Overview
-
-*Last updated: 2026-03-01 -- Expanded to cover full Sprawl monorepo*
-
-## Overview
-
-Sprawl is a monorepo of five apps sharing two packages and a SQLite-based data layer. The flagship app is Construct, a self-aware braindump companion. The trading pipeline (Cortex, Synapse, Optic) reuses the same memory substrate for market intelligence. Deck provides observability for any app's memory graph.
-
-Construct runs as a long-lived Node.js process. It receives messages over Telegram (or a local CLI), processes them through an AI agent backed by OpenRouter, and uses SQLite for persistent storage of conversations, memories, schedules, secrets, and usage tracking.
-
-The system is self-aware: it can read, edit, test, and deploy its own source code. It extends itself through a plugin-like extension system that supports user-authored skills (Markdown instructions) and tools (TypeScript modules).
-
-## Monorepo Data Flow
-
-```
-┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐
-│  Construct  │  │   Cortex    │  │   Synapse   │  │    Deck     │  │    Optic    │
-│   (agent)   │  │  (ingest)   │  │  (trading)  │  │  (web UI)   │  │    (TUI)    │
-└──────┬──────┘  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘
-       │                │                │                │                │
-       └────────┬───────┘                │                │                │
-                │                        │                │                │
-       ┌────────▼────────┐               │                │                │
-       │   @repo/cairn   │               │                │                │
-       │ memory pipeline │───────────────│────────────────┘                │
-       └────────┬────────┘               │                                 │
-                │                        │                                 │
-       ┌────────▼────────┐               │                                 │
-       │    @repo/db     │───────────────┘                                 │
-       │ kysely + sqlite │                                                 │
-       └────────┬────────┘                                                 │
-                │                                                          │
-       ┌────────▼──────────────────────────────────────────────────────────▼──┐
-       │                               SQLITE                                 │
-       └──────────────────────────────────────────────────────────────────────┘
-```
-
-- Construct, Cortex, Deck use Cairn for memory (observe/reflect/promote/graph)
-- Synapse reads Cortex's DB directly (signals, prices)
-- Optic reads Cortex + Synapse DBs via rusqlite (no JS runtime)
-- Each app manages its own database and migrations
-
-## High-Level Architecture (Construct)
-
-```
-                    ┌──────────────────────────────────┐
-                    │            src/main.ts           │
-                    │      (startup orchestrator)      │
-                    └────┬────┬────┬────┬────┬─────────┘
-                         │    │    │    │    │
-    ┌────────────────────┘    │    │    │    └──────────────────┐
-    ▼                         ▼    │    ▼                       ▼
-┌───────────┐          ┌────────┐  │  ┌──────────┐       ┌─────────────┐
-│ Database  │          │  Exts  │  │  │ Tool Pack│       │  Telegram   │
-│  migrate  │          │  init  │  │  │  Embeds  │       │  Bot start  │
-│ + create  │          │        │  │  │          │       │             │
-└─────┬─────┘          └────┬───┘  │  └────┬─────┘       └──────┬──────┘
-      │                     │      │       │                    │
-      ▼                     ▼      ▼       ▼                    ▼
-┌──────────────────────────────────────────────────────────────────────┐
-│                           processMessage()                           │
-│                            (src/agent.ts)                            │
-│                                                                      │
-│  1. Get/create conversation                                          │
-│  2. Load recent chat history (20 messages)                           │
-│  3. Load recent + semantically relevant memories                     │
-│  4. Select relevant skills via embedding similarity                  │
-│  5. Build context preamble (date, memories, skills, reply context)   │
-│  6. Construct system prompt (base + SOUL/IDENTITY/USER)              │
-│  7. Select tool packs via embedding similarity                       │
-│  8. Create pi-agent Agent, replay history, register tools            │
-│  9. Save user message, prompt agent, await completion                │
-│  10. Save assistant response, track usage                            │
-└──────────────────────────────────────────────────────────────────────┘
-         │                        │                      │
-         ▼                        ▼                      ▼
-   ┌──────────┐          ┌──────────────┐        ┌──────────────┐
-   │  SQLite  │          │  OpenRouter  │        │  Tool Packs  │
-   │ (Kysely) │          │  (LLM API)   │        │  (4 builtin  │
-   │          │          │              │        │  + dynamic)  │
-   └──────────┘          └──────────────┘        └──────────────┘
-```
-
-## Startup Sequence
-
-The entry point is `src/main.ts`. On startup:
-
-1. **Logging** -- Configure logtape with console + rotating file sinks (`src/logger.ts`)
-2. **Database migrations** -- Run Kysely migrations to ensure schema is current (`src/db/migrate.ts`)
-3. **Database connection** -- Create a Kysely instance backed by Node.js built-in `node:sqlite` (`src/db/index.ts`)
-4. **Sync env secrets** -- Any `EXT_*` environment variables are written to the `secrets` table (`src/extensions/secrets.ts`)
-5. **Initialize extensions** -- Load SOUL.md/IDENTITY.md/USER.md, skills, and dynamic tools from `EXTENSIONS_DIR`, compute their embeddings (`src/extensions/index.ts`)
-6. **Pack embeddings** -- Pre-compute embedding vectors for non-always-load builtin tool pack descriptions (`src/tools/packs.ts`)
-7. **Create Telegram bot** -- Set up Grammy bot with message and reaction handlers (`src/telegram/bot.ts`)
-8. **Start scheduler** -- Load active schedules from DB and register Croner jobs (`src/scheduler/index.ts`)
-9. **Start polling** -- Begin Telegram long-polling for messages and reactions
-10. **Graceful shutdown** -- SIGINT/SIGTERM handlers stop scheduler, bot, and close DB
-
-## Key Design Decisions
-
-### Embedding-Based Tool Selection
-
-Not all tools are loaded for every message. Tool packs have description embeddings computed at startup. When a message arrives, its embedding is compared against pack embeddings using cosine similarity. Only packs above a threshold (0.3) are loaded. Packs marked `alwaysLoad: true` (core, telegram) bypass this check. If embedding generation fails, all packs load as a graceful fallback.
-
-### Static System Prompt + Dynamic Preamble
-
-The system prompt is split into two parts for prompt caching efficiency:
-- **Static system prompt**: Base instructions + identity files (SOUL.md, IDENTITY.md, USER.md). Cached and reused across requests.
-- **Dynamic preamble**: Prepended to the user's message. Contains current date/time, recent memories, semantically relevant memories, active skills, and reply context.
-
-### Node.js Built-in SQLite
-
-Instead of using `better-sqlite3` (which requires native C++ compilation), the project uses Node.js built-in `node:sqlite` (`DatabaseSync`) with a custom Kysely dialect. This avoids compilation issues on ARM devices.
-
-### Self-Modification Safety
-
-The agent can edit its own source, but with guardrails:
-- Edits are scoped to `src/`, `cli/`, and `extensions/` only
-- Self-deploy runs typecheck and tests before committing
-- Deploys are rate-limited to 3 per hour
-- Auto-rollback if the service fails health check after restart
-- Disabled entirely in development mode
-
-## Related Documentation
-
-### Construct features
-- [Agent System](./../features/agent.md) -- processMessage() flow in detail
-- [Tool System](./../features/tools.md) -- Packs, selection, and tool definitions
-- [Extension System](./../features/extensions.md) -- Skills, dynamic tools, identity files
-- [Database Layer](./../features/database.md) -- Schema, migrations, queries
-- [Telegram Integration](./../features/telegram.md) -- Bot setup, message handling
-- [Scheduler](./../features/scheduler.md) -- Reminders and cron jobs
-- [CLI Interface](./../features/cli.md) -- REPL and one-shot modes
-- [System Prompt](./../features/system-prompt.md) -- Prompt construction
-
-### Other apps
-- [Cortex](./../apps/cortex.md) -- Market intelligence daemon
-- [Synapse](./../apps/synapse.md) -- Paper trading daemon
-- [Deck](./../apps/deck.md) -- Memory graph explorer
-- [Optic](./../apps/optic.md) -- Terminal trading dashboard
-
-### Shared packages
-- [Cairn](./../packages/cairn.md) -- Memory substrate
-
-### Guides
-- [Environment Configuration](./../guides/environment.md) -- Env vars and configuration
-- [Development Workflow](./../guides/development.md) -- Just commands, testing, deployment
diff --git a/.docs/features/agent.md b/.docs/features/agent.md
deleted file mode 100644
index 054313c..0000000
--- a/.docs/features/agent.md
+++ /dev/null
@@ -1,206 +0,0 @@
-# Agent System
-
-*Last updated: 2026-02-26 -- Updated for memory system integration (MemoryManager, observations, graph memory)*
-
-## Overview
-
-The agent system is the central intelligence of Construct. It takes a user message, enriches it with context (memories, skills, history), selects appropriate tools, runs the LLM, and returns a response. Everything flows through a single function: `processMessage()` in `src/agent.ts`.
- -## Key Files - -| File | Role | -|------|------| -| `src/agent.ts` | `processMessage()` -- the main orchestration function | -| `src/system-prompt.ts` | System prompt construction and context preamble | -| `src/embeddings.ts` | Embedding generation via OpenRouter and cosine similarity | -| `src/memory/index.ts` | `MemoryManager` -- observational + graph memory facade | -| `src/tools/packs.ts` | Tool pack selection and instantiation | -| `src/extensions/index.ts` | Extension registry (skills, dynamic tools, identity) | - -## How processMessage() Works - -The function signature: - -```typescript -async function processMessage( - db: Kysely, - message: string, - opts: ProcessMessageOpts, -): Promise -``` - -`ProcessMessageOpts` includes: -- `source`: `'telegram'` or `'cli'` -- `externalId`: The chat or session identifier (e.g., Telegram chat ID) -- `chatId`: Telegram chat ID (used for tool context) -- `telegram`: Optional `TelegramContext` with bot instance and side-effects object -- `replyContext`: Text of the message being replied to (if any) -- `incomingTelegramMessageId`: Telegram message ID for the incoming message - -### Step-by-Step Flow - -```mermaid -flowchart TD - A[User message arrives] --> B[getOrCreateConversation] - B --> C[Create MemoryManager] - C --> D[buildContext: observations + un-observed messages] - D --> E[Load 10 recent memories] - E --> F[Generate message embedding] - F --> G[Recall semantically relevant memories] - G --> H[Select relevant skills] - H --> I[Build context preamble with observations] - I --> J[Build system prompt with identity files] - J --> K[Select tool packs via embedding similarity] - K --> L[Create pi-agent Agent instance] - L --> M[Replay conversation history] - M --> N[Subscribe to agent events] - N --> O[Save user message to DB] - O --> P[Prompt agent with preamble + message] - P --> Q[Wait for agent idle] - Q --> R[Save assistant response] - R --> S[Track usage] - S --> T[Async: run observer then reflector] - T --> 
U[Return AgentResponse] -``` - -### 1. Conversation Management - -Every message is associated with a conversation identified by `(source, externalId)`. For Telegram, the external ID is the chat ID. For CLI, it is the fixed string `'cli'`. The function `getOrCreateConversation()` either finds an existing conversation or creates a new one. - -### 2. MemoryManager and Context Building - -A `MemoryManager` is instantiated for the conversation. It uses the `MEMORY_WORKER_MODEL` env var to configure the worker LLM (if not set, LLM-powered memory features are disabled). - -`memoryManager.buildContext()` determines what conversation history the LLM sees: -- **If observations exist**: Rendered observations become a stable text prefix (injected into the context preamble), and only un-observed messages (those after the watermark) are replayed as conversation turns. This keeps the context window bounded as conversations grow. -- **If no observations yet**: Falls back to loading the last 20 raw messages (original behavior). - -See [Memory System](./memory.md) for details on how observations are created and managed. - -### 3. Memory Context - -Two types of memories are injected: -- **Recent memories** (10 most recent, regardless of relevance) -- for temporal continuity -- **Relevant memories** (up to 5, filtered by embedding cosine similarity >= 0.4) -- for semantic relevance - -Relevant memories that already appear in the recent set are deduplicated. - -### 4. Embedding Generation - -The user's message is embedded using `generateEmbedding()` in `src/embeddings.ts`. This calls the OpenRouter embeddings endpoint with the configured `EMBEDDING_MODEL` (default `qwen/qwen3-embedding-4b`). The resulting vector is reused for: -- Memory recall (semantic search) -- Skill selection -- Tool pack selection - -If embedding generation fails, graceful fallbacks kick in (all packs load, no semantic memory recall). - -### 5. 
Skill Selection - -Skills from the extension system are selected based on embedding similarity to the user's message. Up to 3 skills with similarity >= 0.35 are included. Selected skills are injected into the context preamble. - -### 6. Context Preamble - -The `buildContextPreamble()` function in `src/system-prompt.ts` creates a text block prepended to the user's message. It contains: - -``` -[Context: Monday, February 24, 2026 at 3:15 PM (America/New_York) | telegram] - -[Recent memories -- use these for context, pattern recognition, and continuity] -- (preference) User prefers dark mode -- (fact) User works at Acme Corp - -[Potentially relevant memories] -- (note) Meeting with Bob about the API redesign (87% match) - -[Active skills -- follow these instructions when relevant] -### daily-standup -...skill instructions... - -[Replying to: "what was that thing you mentioned yesterday?"] -``` - -### 7. System Prompt - -The system prompt is built by `getSystemPrompt()` in `src/system-prompt.ts`. It concatenates: -1. `BASE_SYSTEM_PROMPT` -- Static rules about behavior, tool usage, Telegram interaction -2. Identity section from `IDENTITY.md` (if loaded) -3. User section from `USER.md` (if loaded) -4. Soul section from `SOUL.md` (if loaded) - -The result is cached and invalidated when identity files change. - -### 8. Tool Selection and Registration - -Tool packs are selected based on embedding similarity (see [Tool System](./tools.md)). The selected packs are instantiated with a `ToolContext` that provides database access, API keys, project paths, and Telegram context. Each `InternalTool` is wrapped by `createPiTool()` to match the pi-agent-core `AgentTool` interface. - -### 9. Agent Execution - -A `pi-agent-core` `Agent` is instantiated with the system prompt and model. Conversation history is replayed via `agent.appendMessage()`. 
The agent subscribes to events for: -- `message_update` -- Accumulates response text from `text_delta` events -- `message_end` -- Captures usage statistics -- `tool_execution_end` -- Records tool call names and results - -The agent is then prompted with `preamble + message` and the function awaits `agent.waitForIdle()`. - -### 10. Persistence and Post-Response Memory - -After the agent finishes: -- The user's message is saved (already done before prompting) -- The assistant's response is saved with any tool call records -- LLM usage (input/output tokens, cost) is tracked in the `ai_usage` table -- **Async (non-blocking)**: `memoryManager.runObserver()` checks if un-observed messages exceed the token threshold (3000 tokens). If so, it compresses them into observations. If the observer runs, it chains into `memoryManager.runReflector()` to condense observations if they exceed 4000 tokens. This is fire-and-forget -- the current response is already sent, and the next turn benefits from the compression. - -## AgentResponse - -The function returns: - -```typescript -interface AgentResponse { - text: string // The assistant's text response - toolCalls: Array<{ name: string; args: unknown; result: string }> // Tools invoked - usage?: { input: number; output: number; cost: number } // Token usage - messageId?: string // Internal DB message ID -} -``` - -## pi-agent-core Integration - -The project uses `@mariozechner/pi-agent-core` as the agent framework and `@mariozechner/pi-ai` for model access. The `Agent` class handles: -- Multi-turn conversation management -- Tool calling protocol with the LLM -- Streaming text generation - -The model is obtained via `getModel('openrouter', modelName)` from `@mariozechner/pi-ai`. 
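The event-accumulation step above can be sketched as a small reducer. The event names are the ones listed in this document; the payload shapes are assumptions for illustration, not pi-agent-core's actual types:

```typescript
// Illustrative sketch of accumulating an AgentResponse from agent events.
// Event names match the doc; payload shapes are assumed for illustration.
type AgentEvent =
  | { type: 'message_update'; delta: string }
  | { type: 'message_end'; usage: { input: number; output: number; cost: number } }
  | { type: 'tool_execution_end'; name: string; result: string }

function accumulate(events: AgentEvent[]) {
  let text = ''
  let usage: { input: number; output: number; cost: number } | undefined = undefined
  const toolCalls: Array<{ name: string; result: string }> = []
  for (const ev of events) {
    if (ev.type === 'message_update') text += ev.delta          // text_delta accumulation
    else if (ev.type === 'message_end') usage = ev.usage        // usage statistics
    else toolCalls.push({ name: ev.name, result: ev.result })   // tool call records
  }
  return { text, usage, toolCalls }
}
```

The real pipeline does the same thing incrementally as events stream in, rather than over a buffered array.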
 - -## Tool Adaptation Layer - -`createPiTool()` bridges the internal tool format to pi-agent's `AgentTool`: - -```typescript -// Internal tool format -interface InternalTool<T extends TSchema> { - name: string - description: string - parameters: T // TypeBox schema - execute: (toolCallId: string, args: unknown) => Promise<{ output: string; details?: unknown }> -} - -// Adapted to pi-agent format -interface AgentTool<T extends TSchema> { - name: string - label: string // name with underscores replaced by spaces - description: string - parameters: T - execute: (toolCallId: string, params: T) => Promise<{ output: string; details?: unknown }> -} -``` - -Error handling in the adapter catches exceptions and returns them as text results so the LLM can see what went wrong and potentially retry. - -## Related Documentation - -- [Architecture Overview](./../architecture/overview.md) -- [Memory System](./memory.md) -- Graph memory, observational memory, and the MemoryManager facade -- [Tool System](./tools.md) -- How tools are defined, organized, and selected -- [System Prompt](./system-prompt.md) -- Prompt construction details -- [Extension System](./extensions.md) -- Skills and dynamic tools diff --git a/.docs/features/cli.md b/.docs/features/cli.md deleted file mode 100644 index 6877b72..0000000 --- a/.docs/features/cli.md +++ /dev/null @@ -1,91 +0,0 @@ -# CLI Interface - -*Last updated: 2026-02-24 -- Initial documentation* - -## Overview - -The CLI provides a local interface to Construct without requiring Telegram. Built with Citty (a lightweight CLI framework), it supports three modes: interactive REPL, one-shot messages, and direct tool invocation. It shares the same `processMessage()` pipeline as Telegram. - -## Key Files - -| File | Role | -|------|------| -| `cli/index.ts` | CLI entry point: command definition, REPL, one-shot, and tool modes | - -## Modes of Operation - -### Interactive REPL - -```bash -npm run cli -``` - -Starts an interactive loop where you type messages and see responses: - -``` -Construct interactive mode.
Type "exit" or Ctrl+C to quit. - -you> What do you remember about my work schedule? - -construct> Based on my memories, you typically work from... - -you> exit -``` - -The REPL uses Node.js `readline` for input. Each message is processed through `processMessage()` with `source: 'cli'` and `externalId: 'cli'`, creating a single persistent CLI conversation. - -### One-Shot - -```bash -npm run cli -- "What's the weather like?" -``` - -Sends a single message, prints the response, and exits. Uses the positional `message` argument. - -### Direct Tool Invocation - -```bash -npm run cli -- --tool memory_recall --args '{"query": "work schedule"}' -``` - -Bypasses the agent entirely and invokes a specific tool with JSON arguments. Useful for testing and debugging tools. - -When using `--tool` mode: -1. All tools from all packs are loaded (no embedding selection -- `queryEmbedding` is `undefined`) -2. The named tool is found and executed directly -3. The raw output is printed - -If the tool name is not found, available tool names are listed. - -## Command Definition - -Using Citty's `defineCommand`: - -```typescript -args: { - message: { type: 'positional', required: false }, // One-shot message - tool: { type: 'string' }, // Direct tool name - args: { type: 'string', alias: 'a' }, // JSON args for --tool -} -``` - -## CLI vs. Telegram - -| Aspect | CLI | Telegram | -|--------|-----|----------| -| Source | `'cli'` | `'telegram'` | -| External ID | `'cli'` (fixed) | Chat ID | -| Telegram tools | Return `null` (no TelegramContext) | Fully functional | -| Typing indicator | None | Auto-refreshing | -| Output format | Plain text | Markdown-to-HTML | -| Self-deploy | Respects `isDev` flag | Respects `isDev` flag | - -## Startup - -The CLI runs migrations on startup, same as the main entry point. It does **not** start the scheduler or Telegram bot -- it only creates the database connection and processes messages. 
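The three modes can be summarized as a dispatch over the parsed arguments. This is an illustrative sketch -- the argument names mirror the Citty definition shown above, but the function is not the actual `cli/index.ts` logic:

```typescript
// Illustrative mode dispatch for the CLI, assuming the argument
// definition described above. Not the actual cli/index.ts implementation.
interface CliArgs {
  message?: string // positional: one-shot message
  tool?: string    // --tool: direct tool name
  args?: string    // --args: JSON arguments for --tool
}

type Mode =
  | { kind: 'tool'; name: string; args: unknown }
  | { kind: 'oneshot'; message: string }
  | { kind: 'repl' }

function pickMode(a: CliArgs): Mode {
  // --tool bypasses the agent entirely
  if (a.tool) return { kind: 'tool', name: a.tool, args: JSON.parse(a.args ?? '{}') }
  // a positional message means one-shot: print the response and exit
  if (a.message) return { kind: 'oneshot', message: a.message }
  // no arguments: fall through to the interactive REPL
  return { kind: 'repl' }
}
```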
- -## Related Documentation - -- [Agent System](./agent.md) -- The shared processMessage() pipeline -- [Tool System](./tools.md) -- Tools invoked via --tool mode -- [Development Workflow](./../guides/development.md) -- npm run scripts diff --git a/.docs/features/database.md b/.docs/features/database.md deleted file mode 100644 index c82e520..0000000 --- a/.docs/features/database.md +++ /dev/null @@ -1,269 +0,0 @@ -# Database Layer - -*Last updated: 2026-02-26 -- Added graph memory and observational memory tables* - -## Overview - -Construct uses SQLite for all persistent storage, accessed through Kysely (a type-safe SQL query builder). Instead of the common `better-sqlite3` native addon, it uses Node.js built-in `node:sqlite` (`DatabaseSync`) with a custom Kysely dialect. This eliminates native compilation requirements, which is important for ARM deployment targets. - -## Key Files - -| File | Role | -|------|------| -| `src/db/index.ts` | Custom Kysely dialect for `node:sqlite`, `createDb()` function | -| `src/db/schema.ts` | TypeScript type definitions for all tables | -| `src/db/queries.ts` | All database query functions (memories, conversations, messages, schedules, usage, settings) | -| `src/db/migrate.ts` | Migration runner using Kysely's `FileMigrationProvider` | -| `src/db/migrations/` | Individual migration files | - -## Custom Kysely Dialect - -`src/db/index.ts` implements three classes to bridge `node:sqlite` to Kysely: - -- **`NodeSqliteDialect`** -- Implements `Dialect`, creates the driver, query compiler (SQLite), adapter, and introspector -- **`NodeSqliteDriver`** -- Implements `Driver`, manages connection lifecycle and transactions -- **`NodeSqliteConnection`** -- Implements `DatabaseConnection`, executes queries by detecting SELECT/PRAGMA/WITH (returning rows) vs. 
other statements (returning affected row counts) - -`createDb()` opens the database with pragmas: -- `PRAGMA journal_mode = WAL` -- Write-Ahead Logging for concurrent reads/writes -- `PRAGMA busy_timeout = 5000` -- Wait up to 5 seconds on lock contention -- `PRAGMA foreign_keys = ON` -- Enforce foreign key constraints - -## Schema - -Ten tables are defined in `src/db/schema.ts`: - -### memories - -Stores long-term memories with full-text search and embedding support. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `content` | text | The memory content | -| `category` | text | Default `'general'`. Options: general, preference, fact, reminder, note | -| `tags` | text (nullable) | JSON array of keyword tags | -| `source` | text | Default `'user'` | -| `embedding` | text (nullable) | JSON-serialized embedding vector (added in migration 002) | -| `created_at` | text | ISO 8601 datetime, auto-set | -| `updated_at` | text | ISO 8601 datetime, auto-set | -| `archived_at` | text (nullable) | Set when "forgotten" (soft delete) | - -Indexes: `idx_memories_category`, `idx_memories_archived` - -### memories_fts (FTS5 virtual table) - -Full-text search index on memories, synced via triggers. - -| Column | Indexed | Source | -|--------|:---:|--------| -| `id` | No (UNINDEXED) | memories.id | -| `content` | Yes | memories.content | -| `tags` | Yes | memories.tags | -| `category` | No (UNINDEXED) | memories.category | - -Three triggers keep it in sync: `memories_ai` (insert), `memories_ad` (delete), `memories_au` (update). - -### conversations - -Groups messages by source and external identifier. 
- -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `source` | text | `'telegram'` or `'cli'` | -| `external_id` | text (nullable) | Telegram chat ID, or `'cli'` | -| `created_at` | text | Auto-set | -| `updated_at` | text | Auto-set, updated on each message | - -Index: `idx_conversations_external` - -### messages - -Individual messages within conversations. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `conversation_id` | text (FK) | References conversations.id | -| `role` | text | `'user'` or `'assistant'` | -| `content` | text | Message text | -| `tool_calls` | text (nullable) | JSON array of `{name, args, result}` | -| `telegram_message_id` | integer (nullable) | Telegram message ID for cross-referencing (added in migration 004) | -| `created_at` | text | Auto-set | - -Indexes: `idx_messages_conversation`, `idx_messages_telegram_message_id` - -### schedules - -Reminders and recurring tasks. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `description` | text | Human-readable description | -| `cron_expression` | text (nullable) | Cron string for recurring schedules | -| `run_at` | text (nullable) | ISO 8601 datetime for one-shot schedules | -| `message` | text | Message to send when triggered | -| `chat_id` | text | Telegram chat ID to send to | -| `active` | integer | 1 = active, 0 = cancelled. Default 1 | -| `last_run_at` | text (nullable) | Last execution timestamp | -| `created_at` | text | Auto-set | - -Index: `idx_schedules_active` - -### ai_usage - -Tracks LLM API usage for cost monitoring. 
- -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `model` | text | Model identifier (e.g., `google/gemini-3-flash-preview`) | -| `input_tokens` | integer (nullable) | Input token count | -| `output_tokens` | integer (nullable) | Output token count | -| `cost_usd` | real (nullable) | Cost in USD | -| `source` | text | `'telegram'` or `'cli'` | -| `created_at` | text | Auto-set | - -### settings - -Key-value store for application settings. - -| Column | Type | Notes | -|--------|------|-------| -| `key` | text (PK) | Setting name | -| `value` | text | Setting value | -| `updated_at` | text | Auto-set | - -### secrets - -Stores API keys and tokens for extensions. - -| Column | Type | Notes | -|--------|------|-------| -| `key` | text (PK) | Secret name | -| `value` | text | Secret value | -| `source` | text | `'agent'` or `'env'`. Default `'agent'` | -| `created_at` | text | Auto-set | -| `updated_at` | text | Auto-set | - -### graph_nodes - -Entity nodes extracted from memories by the graph memory system. See [Memory System](./memory.md) for details. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `name` | text | Canonical (lowercased, trimmed) | -| `display_name` | text | Original casing | -| `node_type` | text | `person`, `place`, `concept`, `event`, or `entity`. Default `entity` | -| `description` | text (nullable) | Short description from extraction | -| `embedding` | text (nullable) | Reserved for future node embeddings | -| `created_at` | text | Auto-set | -| `updated_at` | text | Auto-set | - -Unique index: `idx_gn_name_type` on `(name, node_type)` - -### graph_edges - -Relationships between graph nodes, linked back to source memories. 
- -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `source_id` | text (FK) | References `graph_nodes.id` | -| `target_id` | text (FK) | References `graph_nodes.id` | -| `relation` | text | Short verb phrase, lowercased | -| `weight` | real | Default 1.0, incremented on repeated mention | -| `properties` | text (nullable) | JSON for extensible metadata | -| `memory_id` | text (nullable, FK) | References `memories.id` | -| `created_at` | text | Auto-set | -| `updated_at` | text | Auto-set | - -Indexes: `idx_ge_source`, `idx_ge_target`, `idx_ge_unique` (unique on `source_id, target_id, relation`) - -### observations - -Compressed conversation summaries produced by the observational memory system. See [Memory System](./memory.md) for details. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `conversation_id` | text (FK) | References `conversations.id` | -| `content` | text | The observation text | -| `priority` | text | `high`, `medium`, or `low`. Default `medium` | -| `observation_date` | text | Date context for the observation | -| `source_message_ids` | text (nullable) | JSON array of message IDs | -| `token_count` | integer (nullable) | Estimated token count | -| `generation` | integer | 0 = observer, 1+ = reflector rounds. Default 0 | -| `superseded_at` | text (nullable) | Set when replaced by reflector | -| `created_at` | text | Auto-set | - -Indexes: `idx_obs_conv` (conversation_id), `idx_obs_active` (conversation_id, superseded_at) - -Note: Migration 006 also adds `observed_up_to_message_id` (text, nullable) and `observation_token_count` (integer, default 0) to the `conversations` table for watermark tracking. - -## Migrations - -Migrations use Kysely's `FileMigrationProvider` which scans `src/db/migrations/` for files. Each migration exports `up()` and `down()` functions. 
- -| Migration | Description | -|-----------|-------------| -| `001-initial.ts` | Creates all base tables (memories, conversations, messages, schedules, ai_usage, settings) and indexes | -| `002-fts5-and-embeddings.ts` | Creates FTS5 virtual table, sync triggers, adds `embedding` column to memories | -| `003-secrets.ts` | Creates the secrets table | -| `004-telegram-message-ids.ts` | Adds `telegram_message_id` column and index to messages | -| `005-graph-memory.ts` | Creates `graph_nodes` and `graph_edges` tables with indexes and foreign keys | -| `006-observational-memory.ts` | Creates `observations` table, adds watermark columns to `conversations` | - -Migrations are run via `runMigrations()` which is called both at startup and by the `npm run db:migrate` script. The convention is **additive only** -- never drop tables or columns. - -## Query Functions - -All database queries are in `src/db/queries.ts`. Key functions: - -### Memory Operations - -- **`storeMemory(db, memory)`** -- Inserts a memory with nanoid, returns the full record -- **`updateMemoryEmbedding(db, id, embedding)`** -- Updates a memory's embedding (JSON-serialized) -- **`recallMemories(db, query, opts?)`** -- Hybrid search: FTS5 -> embedding cosine similarity -> LIKE fallback. Results are merged and deduplicated by ID -- **`getRecentMemories(db, limit)`** -- Returns the N most recent non-archived memories -- **`forgetMemory(db, id)`** -- Soft-deletes by setting `archived_at` -- **`searchMemoriesForForget(db, query)`** -- Searches for forget candidates - -### Conversation Operations - -- **`getOrCreateConversation(db, source, externalId)`** -- Finds existing conversation by `(source, external_id)` or creates one. Updates `updated_at` on access. 
-- **`getRecentMessages(db, conversationId, limit)`** -- Returns last N messages in chronological order (fetched DESC then reversed) -- **`saveMessage(db, message)`** -- Inserts a message, returns its nanoid -- **`updateTelegramMessageId(db, internalId, telegramMsgId)`** -- Associates a Telegram message ID with an internal message -- **`getMessageByTelegramId(db, conversationId, telegramMsgId)`** -- Looks up a message by its Telegram message ID - -### Schedule Operations - -- **`createSchedule(db, schedule)`** -- Inserts a schedule, returns the full record -- **`listSchedules(db, activeOnly)`** -- Lists schedules, optionally filtered to active only -- **`cancelSchedule(db, id)`** -- Sets `active = 0` -- **`markScheduleRun(db, id)`** -- Updates `last_run_at` to now - -### Usage Tracking - -- **`trackUsage(db, usage)`** -- Inserts a usage record -- **`getUsageStats(db, opts?)`** -- Aggregates usage: total cost, tokens, message count, plus per-day breakdown. Supports day range and source filters. - -### Settings - -- **`getSetting(db, key)`** -- Returns a setting value or null -- **`setSetting(db, key, value)`** -- Upserts a setting - -## ID Generation - -All entity IDs use `nanoid()` (21-character URL-safe string) rather than auto-incrementing integers. This avoids ID collision concerns and works well with distributed systems. 
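For illustration, an ID with the same shape (21 characters drawn from a 64-symbol URL-safe alphabet) can be generated with `node:crypto`. This is not the actual nanoid implementation -- the real package uses a scrambled alphabet and pooled random bytes:

```typescript
// Illustrative nanoid-style generator: 21 URL-safe characters.
// The project uses the real nanoid package; this only shows the ID shape.
import { randomBytes } from 'node:crypto'

// 26 + 26 + 10 + 2 = 64 symbols, so one random byte masked to 6 bits
// maps uniformly onto the alphabet.
const ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-'

function makeId(size = 21): string {
  const bytes = randomBytes(size)
  let id = ''
  for (let i = 0; i < size; i++) id += ALPHABET[bytes[i] & 63]
  return id
}
```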
- -## Related Documentation - -- [Architecture Overview](./../architecture/overview.md) -- How the database fits into the system -- [Memory System](./memory.md) -- Graph memory, observational memory, and declarative memory details -- [Tool System](./tools.md) -- Tools that interact with the database -- [Extension System](./extensions.md) -- Secrets table usage diff --git a/.docs/features/extensions.md b/.docs/features/extensions.md deleted file mode 100644 index 3798eac..0000000 --- a/.docs/features/extensions.md +++ /dev/null @@ -1,245 +0,0 @@ -# Extension System - -*Last updated: 2026-02-24 -- Initial documentation* - -## Overview - -The extension system allows Construct to be customized with user-authored skills (Markdown instruction sets) and tools (TypeScript modules) without modifying core source code. Extensions live in a configurable directory (`EXTENSIONS_DIR`) and are loaded at startup, with the ability to hot-reload via the `extension_reload` tool. - -The extension system also manages three **identity files** (SOUL.md, IDENTITY.md, USER.md) that shape the agent's personality and context. - -## Key Files - -| File | Role | -|------|------| -| `src/extensions/index.ts` | Singleton registry, `initExtensions()`, `reloadExtensions()`, selection helpers | -| `src/extensions/loader.ts` | File loading: identity files, skills (Markdown), dynamic tools (TypeScript via jiti) | -| `src/extensions/embeddings.ts` | Embedding caches for skills and dynamic packs, selection functions | -| `src/extensions/secrets.ts` | Secret management: store, get, list, delete, env sync, secrets map builder | -| `src/extensions/types.ts` | TypeScript interfaces for Skill, DynamicToolExport, ExtensionRegistry, etc. 
| - -## Extensions Directory Layout - -``` -$EXTENSIONS_DIR/ - SOUL.md # Personality: traits, values, communication style - IDENTITY.md # Agent metadata: name, creature type, pronouns - USER.md # Human context: name, location, preferences - skills/ - daily-standup.md # Standalone skill (YAML frontmatter + body) - coding/ - code-review.md # Skills can be nested in subdirectories - tools/ - weather.ts # Standalone tool file -> single-tool pack (ext:weather) - music/ # Directory -> grouped pack (ext:music) - pack.md # Optional description override for the pack - play.ts # Tool: music_play - search.ts # Tool: music_search -``` - -The default `EXTENSIONS_DIR` is: -- **Development**: `./data` (relative to project root) -- **Production**: `$XDG_DATA_HOME/construct/` (typically `~/.local/share/construct/`) - -## Identity Files - -Three Markdown files injected into the system prompt: - -| File | Purpose | System Prompt Section | -|------|---------|----------------------| -| `SOUL.md` | Personality traits, values, communication anti-patterns | `## Soul` | -| `IDENTITY.md` | Name, creature type, visual description, pronouns | `## Identity` | -| `USER.md` | Human's name, location, preferences, interests, schedule | `## User` | - -These are loaded by `loadIdentityFiles()` in `src/extensions/loader.ts` and stored in the `ExtensionRegistry.identity` field. They are read/written by the `identity_read` and `identity_update` tools. - -When an identity file is updated via `identity_update`, the tool: -1. Writes the new content to disk -2. Calls `invalidateSystemPromptCache()` to clear the cached system prompt -3. Calls `reloadExtensions()` to refresh the registry - -## Skills - -Skills are Markdown files with YAML frontmatter, found recursively under `$EXTENSIONS_DIR/skills/`. 
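A minimal sketch of the frontmatter split, assuming the `---`-delimited layout used by skill files. The real loader presumably uses a proper YAML parser; this line-based version is for illustration only:

```typescript
// Illustrative split of a skill file into YAML frontmatter and Markdown
// body. Not the actual loader -- no real YAML parsing happens here.
function splitFrontmatter(raw: string): { frontmatter: string; body: string } {
  const lines = raw.split('\n')
  // no opening delimiter: the whole file is body
  if (lines[0] !== '---') return { frontmatter: '', body: raw }
  const end = lines.indexOf('---', 1)
  // unterminated frontmatter: treat the whole file as body
  if (end === -1) return { frontmatter: '', body: raw }
  return {
    frontmatter: lines.slice(1, end).join('\n'),
    body: lines.slice(end + 1).join('\n').trim(),
  }
}
```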
- -### Skill File Format - -```markdown ---- -name: daily-standup -description: Run a daily standup summarizing recent activity and upcoming plans -requires: - secrets: - - JIRA_TOKEN - env: - - JIRA_URL - bins: - - curl ---- - -When the user asks for a standup or morning briefing: - -1. Check recent memories for what was worked on yesterday -2. Look up today's schedule -3. Summarize in a concise format -``` - -### Frontmatter Fields - -| Field | Required | Description | -|-------|:---:|-------------| -| `name` | Yes | Unique skill name | -| `description` | Yes | Short description (used for embedding) | -| `requires.secrets` | No | Secret keys that must exist in the `secrets` table | -| `requires.env` | No | Environment variables that must be set | -| `requires.bins` | No | Binary executables needed (logged but not enforced) | - -### How Skills Are Selected - -Skills are **not** tools. They are instruction sets injected into the context preamble when relevant. Selection uses embedding similarity: - -1. At extension load time, `initSkillEmbeddings()` generates an embedding for each skill from `"name: description"`. -2. At message time, `selectSkills()` compares the message embedding against skill embeddings. -3. Skills with cosine similarity >= 0.35 are included (up to 3 max). -4. If embedding generation failed for the message, no skills are selected (skills are optional context). - -Selected skills appear in the context preamble as: - -``` -[Active skills -- follow these instructions when relevant] - -### daily-standup -When the user asks for a standup... -``` - -### Requirement Checking - -`checkRequirements()` in `src/extensions/loader.ts` validates: -- `requires.env` -- checks `process.env` -- `requires.secrets` -- checks against available secrets from the database -- `requires.bins` -- logged only (not enforced) - -Skills with unmet requirements are still loaded but may not function correctly. 
(Requirement checking is primarily used for dynamic tools, where unmet requirements cause the tool to be skipped.) - -## Dynamic Tools - -Dynamic tools are TypeScript files under `$EXTENSIONS_DIR/tools/`. They are loaded at runtime using **jiti** (a JIT TypeScript transpiler that works without a compile step). - -### Tool File Format - -A dynamic tool file must export: - -```typescript -import { Type, type Static } from '@sinclair/typebox' - -// Optional: declare requirements -export const meta = { - requires: { - secrets: ['OPENWEATHERMAP_API_KEY'], - }, -} - -// Default export: either a tool object or a factory function -export default (ctx: DynamicToolContext) => ({ - name: 'weather_current', - description: 'Get current weather for a location', - parameters: Type.Object({ - location: Type.String({ description: 'City name' }), - }), - execute: async (_id: string, args: { location: string }) => { - const apiKey = ctx.secrets.get('OPENWEATHERMAP_API_KEY') - // ... fetch weather ... - return { output: `Weather in ${args.location}: ...` } - }, -}) -``` - -The default export can be: -- A **factory function** `(ctx: DynamicToolContext) => InternalTool` -- receives secrets and context -- A **plain tool object** `InternalTool` -- for tools that don't need secrets - -### DynamicToolContext - -```typescript -interface DynamicToolContext { - secrets: Map // All secrets from the secrets table -} -``` - -### Loading Process - -1. `loadDynamicTools()` scans `$EXTENSIONS_DIR/tools/` -2. **Standalone .ts files** at the root level become single-tool packs (name: `ext:`) -3. **Subdirectories** become grouped packs (name: `ext:`) - - All `.ts` files in the directory are loaded as tools in the pack - - Optional `pack.md` provides a description override; otherwise, tool descriptions are concatenated -4. Each file is loaded via `jiti.import()` with `moduleCache: false` (for reload support) -5. Requirements are checked -- tools with unmet requirements are skipped with a log message -6. 
Tool shape is validated: must have `name`, `description`, `parameters`, `execute` - -### node_modules Symlink - -Dynamic tools may import project dependencies (like `@sinclair/typebox`). To support this, `ensureNodeModulesLink()` creates a symlink from `$EXTENSIONS_DIR/node_modules` to the project's `node_modules/`. This happens once during tool loading. - -### Dynamic Pack Embedding and Selection - -Dynamic packs follow the same embedding-based selection as builtin packs: - -1. `initDynamicPackEmbeddings()` generates embeddings for each dynamic pack description -2. `selectDynamicPacks()` filters by cosine similarity >= 0.3 -3. If no message embedding is available, all dynamic packs are loaded (graceful fallback) - -## Extension Registry - -The singleton registry holds all loaded extension data: - -```typescript -interface ExtensionRegistry { - identity: IdentityFiles // { soul, identity, user } -- string | null each - skills: Skill[] // Parsed skill objects - dynamicPacks: ToolPack[] // Dynamic tool packs (same ToolPack type as builtins) -} -``` - -Access via `getExtensionRegistry()`. Updated by `reloadExtensions()`. - -## Secrets System - -Secrets enable dynamic tools to access API keys and tokens without hardcoding them. - -### Storage - -Secrets are stored in the `secrets` table with columns: `key`, `value`, `source` (`'agent'` or `'env'`), `created_at`, `updated_at`. - -### Sources - -1. **Environment variables**: Any `EXT_*` env var is synced to the secrets table on startup. The `EXT_` prefix is stripped (e.g., `EXT_OPENWEATHERMAP_API_KEY` becomes `OPENWEATHERMAP_API_KEY`). Source is set to `'env'`. -2. **Agent-created**: The agent can store secrets via the `secret_store` tool. Source is set to `'agent'`. - -Environment-sourced secrets always overwrite on restart. 
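The `EXT_*` sync can be sketched as a pure function over the environment. The prefix handling matches the description above; the database upsert (with source `'env'`) is omitted here:

```typescript
// Sketch of the EXT_* env sync described above: collect EXT_-prefixed
// variables and strip the prefix. The real sync also upserts each pair
// into the secrets table with source 'env'.
function collectEnvSecrets(env: Record<string, string | undefined>): Map<string, string> {
  const out = new Map<string, string>()
  for (const [key, value] of Object.entries(env)) {
    if (key.startsWith('EXT_') && value !== undefined) {
      out.set(key.slice('EXT_'.length), value)
    }
  }
  return out
}
```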
- -### Access - -- **Built-in tools**: Use `secret_store`, `secret_list`, `secret_delete` tools (in core pack) -- **Dynamic tools**: Receive a `Map` of all secrets via `DynamicToolContext.secrets` -- **Never exposed**: `secret_list` returns only key names and sources, never values - -## Reload Flow - -When `extension_reload` is called (or `identity_update` triggers a reload): - -1. `invalidateSystemPromptCache()` -- clears the cached system prompt -2. `clearExtensionEmbeddings()` -- clears skill and dynamic pack embedding caches -3. `loadIdentityFiles()` -- re-reads SOUL.md, IDENTITY.md, USER.md -4. `loadSkills()` -- re-scans and parses all skill files -5. `buildSecretsMap()` -- rebuilds the secrets map from the database -6. `loadDynamicTools()` -- re-scans and loads all dynamic tool files (with `moduleCache: false`) -7. Update the singleton registry -8. `initSkillEmbeddings()` + `initDynamicPackEmbeddings()` -- recompute embeddings - -## Related Documentation - -- [Tool System](./tools.md) -- How packs and tools work in general -- [Agent System](./agent.md) -- How skills and dynamic tools are used during message processing -- [System Prompt](./system-prompt.md) -- How identity files are injected -- [Environment Configuration](./../guides/environment.md) -- EXTENSIONS_DIR and EXT_* variables diff --git a/.docs/features/memory.md b/.docs/features/memory.md deleted file mode 100644 index df69350..0000000 --- a/.docs/features/memory.md +++ /dev/null @@ -1,413 +0,0 @@ -# Memory System - -*Last updated: 2026-02-26 -- Initial documentation* - -## Overview - -Construct has a layered memory system that gives the agent both long-term factual recall and conversation-level context compression. The system consists of three cooperating subsystems: - -1. **Declarative Memory** -- Explicit facts, preferences, and notes stored by the agent via tools. Searchable with FTS5, embedding similarity, and keyword fallback. -2. 
**Graph Memory** -- An entity-relationship graph extracted from stored memories by a worker LLM. Enables associative recall ("what do I know about Alice?" surfaces memories about Bob if Alice and Bob are connected). -3. **Observational Memory** -- Automatic compression of conversation history into dated observations. Replaces raw message replay with a compact prefix, keeping the context window lean as conversations grow. - -Declarative memory is the foundation that existed before the recent additions. Graph memory and observational memory are new layers that build on top of it. - -## Key Files - -| File | Role | -|------|------| -| `src/memory/index.ts` | `MemoryManager` class -- facade for observational and graph memory | -| `src/memory/types.ts` | Shared type definitions (`Observation`, `GraphNode`, `GraphEdge`, `WorkerModelConfig`, etc.) | -| `src/memory/observer.ts` | Observer LLM call -- compresses messages into observations | -| `src/memory/reflector.ts` | Reflector LLM call -- condenses observations when they grow too large | -| `src/memory/context.ts` | `renderObservations()` and `buildContextWindow()` -- pure rendering functions | -| `src/memory/tokens.ts` | Token estimation utilities (chars/4 heuristic) | -| `src/memory/graph/index.ts` | `processMemoryForGraph()` -- orchestrates entity extraction and graph upsert | -| `src/memory/graph/extract.ts` | `extractEntities()` -- LLM call to extract entities and relationships | -| `src/memory/graph/queries.ts` | Graph database operations: node/edge CRUD, traversal, search | -| `src/tools/core/memory-store.ts` | `memory_store` tool -- stores memories, triggers embedding + graph extraction | -| `src/tools/core/memory-recall.ts` | `memory_recall` tool -- hybrid search with graph expansion | -| `src/tools/core/memory-forget.ts` | `memory_forget` tool -- soft-delete (archive) memories | -| `src/tools/core/memory-graph.ts` | `memory_graph` tool -- explore, search, and connect graph nodes | -| `src/db/queries.ts` | 
`storeMemory()`, `recallMemories()`, `updateMemoryEmbedding()`, etc. | -| `src/db/migrations/005-graph-memory.ts` | Creates `graph_nodes` and `graph_edges` tables | -| `src/db/migrations/006-observational-memory.ts` | Creates `observations` table, adds watermark columns to `conversations` | - -## Architecture - -```mermaid -flowchart TD - subgraph DeclarativeMemory["Declarative Memory"] - MemStore[memory_store tool] - MemRecall[memory_recall tool] - MemForget[memory_forget tool] - MemTable[(memories table)] - FTS[(memories_fts)] - Embeddings[Embedding vectors] - end - - subgraph GraphMem["Graph Memory"] - Extract[extractEntities LLM] - GraphNodes[(graph_nodes)] - GraphEdges[(graph_edges)] - MemGraphTool[memory_graph tool] - end - - subgraph ObsMem["Observational Memory"] - Observer[Observer LLM] - Reflector[Reflector LLM] - ObsTable[(observations)] - Watermark[conversation watermark] - end - - subgraph Pipeline["processMessage pipeline"] - BuildCtx[buildContext] - RunObs[runObserver post-response] - RunRef[runReflector post-observer] - end - - MemStore --> MemTable - MemStore -.->|async| Embeddings - MemStore -.->|async| Extract - Extract --> GraphNodes - Extract --> GraphEdges - MemRecall --> FTS - MemRecall --> Embeddings - MemRecall --> GraphNodes - BuildCtx --> ObsTable - BuildCtx --> Watermark - RunObs --> Observer - Observer --> ObsTable - RunObs --> RunRef - RunRef --> Reflector - Reflector --> ObsTable -``` - -## Declarative Memory - -### Storage - -When the agent calls `memory_store`, the tool: - -1. Inserts a row into the `memories` table with a nanoid, content, category, tags, and source. -2. **Async (non-blocking)**: Generates an embedding vector via `generateEmbedding()` and stores it in the `embedding` column as JSON. -3. **Async (non-blocking)**: Passes the memory to `MemoryManager.processStoredMemory()` for graph extraction. - -Categories: `general` (default), `preference`, `fact`, `reminder`, `note`. 
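Several parts of the memory system rank embeddings by cosine similarity (auto-injected relevant memories, the embedding tier of recall, skill and pack selection). A self-contained sketch of that scoring -- the math only, not the project's actual implementation:

```typescript
// Cosine similarity between two embedding vectors: dot product divided
// by the product of magnitudes, with a guard for zero-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB)
  return denom === 0 ? 0 : dot / denom
}
```

Results above a tier-specific threshold (0.4 for auto-injected memories, 0.35 for skills, 0.3 for packs and recall) are kept.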
- -### Recall - -`memory_recall` performs a three-tier hybrid search via `recallMemories()` in `src/db/queries.ts`: - -1. **FTS5 full-text search** -- Tokenizes the query into OR-joined terms and searches `memories_fts`. Results ranked by BM25. -2. **Embedding cosine similarity** -- Loads all non-archived memories with embeddings, computes cosine similarity against the query embedding, and keeps results above the threshold (default 0.3). -3. **LIKE keyword fallback** -- If still below the limit, does `LIKE '%keyword%'` on content and tags. - -Results are merged and deduplicated by ID across all three tiers. Each result carries a `matchType` (`fts5`, `embedding`, or `keyword`). - -After the direct search, `memory_recall` also **expands via graph traversal**: it searches `graph_nodes` for matching node names, traverses up to 2 hops from each match, collects all connected `memory_id` values from edges, and appends those memories (tagged `[graph]`) to the result set. - -### Forgetting - -`memory_forget` performs a soft delete by setting `archived_at` on the memory row. Archived memories are excluded from all search results. The tool supports both direct ID-based archival and query-based candidate search. - -## Graph Memory - -Graph memory builds a knowledge graph of entities and relationships extracted from stored memories. It enables associative recall -- finding memories that are semantically related through shared entities rather than just keyword or embedding overlap. - -### Schema - -**`graph_nodes`** -- Entities extracted from memories. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `name` | text | Canonical (lowercased, trimmed) | -| `display_name` | text | Original casing | -| `node_type` | text | `person`, `place`, `concept`, `event`, or `entity`. 
Default `entity` | -| `description` | text (nullable) | Short description from extraction | -| `embedding` | text (nullable) | Reserved for future node embeddings | -| `created_at` | text | Auto-set | -| `updated_at` | text | Auto-set | - -Unique index on `(name, node_type)` -- same canonical name can exist as different types (e.g., "Java" as both concept and place). - -**`graph_edges`** -- Relationships between nodes. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `source_id` | text (FK) | References `graph_nodes.id` | -| `target_id` | text (FK) | References `graph_nodes.id` | -| `relation` | text | Short verb phrase, lowercased (e.g., `lives in`, `works at`) | -| `weight` | real | Default 1.0, incremented on repeated mention | -| `properties` | text (nullable) | JSON for extensible metadata | -| `memory_id` | text (nullable, FK) | References `memories.id` -- links edge back to source memory | -| `created_at` | text | Auto-set | -| `updated_at` | text | Auto-set | - -Unique index on `(source_id, target_id, relation)` -- duplicate edges increment `weight` instead of creating new rows. - -### Extraction Pipeline - -When a memory is stored, `MemoryManager.processStoredMemory()` fires asynchronously: - -1. **`extractEntities()`** (`src/memory/graph/extract.ts`) sends the memory content to the worker LLM with a structured extraction prompt. -2. The LLM returns JSON with `entities` (name, type, description) and `relationships` (from, to, relation). -3. **`processMemoryForGraph()`** (`src/memory/graph/index.ts`) iterates over the extracted data: - - Each entity is upserted as a node (matched by canonical name + type). Descriptions are only filled in if the existing node has none. - - Each relationship is upserted as an edge. If the edge already exists (same source, target, relation), its `weight` is incremented. The originating `memory_id` is stored on the edge. 
- - If a relationship references an entity not in the current extraction, the system looks for it in the existing graph or creates a new `entity`-typed node. -4. Usage (input/output tokens) is tracked in `ai_usage` with source `graph_extract`. - -### Graph Queries - -`src/memory/graph/queries.ts` provides: - -- **`upsertNode(db, {name, type, description})`** -- Case-insensitive dedup by `(name, node_type)`. Fills in description if the existing node lacks one. -- **`findNodeByName(db, name, type?)`** -- Case-insensitive exact match. Optional type filter. -- **`searchNodes(db, query, limit)`** -- `LIKE '%query%'` on canonical name. Returns up to `limit` results ordered by `updated_at` desc. -- **`upsertEdge(db, {source_id, target_id, relation, memory_id})`** -- Dedup by `(source_id, target_id, relation)`. Increments `weight` on duplicates. -- **`getNodeEdges(db, nodeId)`** -- Returns all edges where the node is source or target, ordered by weight descending. -- **`traverseGraph(db, startNodeId, maxDepth)`** -- Recursive CTE traversal. Returns all reachable nodes within `maxDepth` hops with depth and `via_relation`. Handles cycles via a visited-node string. -- **`getRelatedMemoryIds(db, nodeIds)`** -- Returns distinct `memory_id` values from edges connected to any of the given nodes. -- **`getMemoryNodes(db, memoryId)`** -- Returns all graph nodes connected to a specific memory via edges. - -### memory_graph Tool - -The `memory_graph` tool (`src/tools/core/memory-graph.ts`) exposes the graph to the agent with three actions: - -- **`search`** -- Find nodes by name pattern. Returns display names, types, and descriptions. -- **`explore`** -- From a specific node, show direct connections (edges with directions and weights) and reachable nodes up to `depth` hops (default 2, max 3). -- **`connect`** -- Check if two named concepts are connected within `depth` hops. Returns the path depth and relation if found. 
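The recursive CTE behind `traverseGraph()` computes the same result as a depth-bounded breadth-first search over an undirected view of the edge list. For illustration only (the real implementation runs in SQL, not in application code):

```typescript
interface Edge { sourceId: string; targetId: string; relation: string }

// Bounded BFS: returns every node reachable from startId within maxDepth
// hops, mapped to the depth at which it was first reached.
function traverse(edges: Edge[], startId: string, maxDepth: number): Map<string, number> {
  const depths = new Map<string, number>([[startId, 0]]);
  let frontier = new Set([startId]);
  for (let depth = 1; depth <= maxDepth && frontier.size > 0; depth++) {
    const next = new Set<string>();
    for (const e of edges) {
      // Edges are walked in both directions; visited nodes are skipped (cycle safety).
      if (frontier.has(e.sourceId) && !depths.has(e.targetId)) {
        depths.set(e.targetId, depth);
        next.add(e.targetId);
      }
      if (frontier.has(e.targetId) && !depths.has(e.sourceId)) {
        depths.set(e.sourceId, depth);
        next.add(e.sourceId);
      }
    }
    frontier = next;
  }
  depths.delete(startId); // the start node itself is not a "result"
  return depths;
}
```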
- -## Observational Memory - -Observational memory solves the growing context window problem. Instead of replaying all past messages in a conversation, the system compresses older messages into concise, dated observations. The LLM then sees a stable observation prefix plus only the most recent un-observed messages. - -### Schema - -**`observations`** -- Compressed conversation summaries. - -| Column | Type | Notes | -|--------|------|-------| -| `id` | text (PK) | nanoid | -| `conversation_id` | text (FK) | References `conversations.id` | -| `content` | text | The observation text | -| `priority` | text | `high`, `medium`, or `low`. Default `medium` | -| `observation_date` | text | Date context for the observation | -| `source_message_ids` | text (nullable) | JSON array of message IDs that produced this observation | -| `token_count` | integer (nullable) | Estimated token count of the content | -| `generation` | integer | 0 = produced by observer, 1+ = produced by reflector rounds. Default 0 | -| `superseded_at` | text (nullable) | Set when a reflector round replaces this observation | -| `created_at` | text | Auto-set | - -Indexes: `idx_obs_conv` (conversation_id), `idx_obs_active` (conversation_id, superseded_at). - -**Columns added to `conversations`** (migration 006): - -| Column | Type | Notes | -|--------|------|-------| -| `observed_up_to_message_id` | text (nullable) | Watermark -- all messages up to and including this ID have been observed | -| `observation_token_count` | integer | Running total of active observation tokens. Default 0 | - -### Observer - -The observer (`src/memory/observer.ts`) compresses un-observed messages into observations. - -**Trigger**: Called asynchronously after every `processMessage()` response. Only runs if un-observed messages exceed `OBSERVER_THRESHOLD` (3000 estimated tokens). - -**Process**: -1. `MemoryManager.getUnobservedMessages()` loads messages after the watermark. 
Uses `rowid` comparison to handle messages inserted within the same second. -2. Messages are formatted as `[timestamp] role: content` and sent to the worker LLM with the observer system prompt. -3. The LLM returns JSON observations, each with `content`, `priority`, and `observation_date`. -4. Observations are validated (content must be a non-empty string, priority must be `low`/`medium`/`high`) and stored. -5. The watermark (`observed_up_to_message_id`) is advanced to the last processed message ID. -6. `observation_token_count` is updated with the cumulative token estimate. -7. Usage is tracked in `ai_usage` with source `observer`. - -**Observer prompt rules**: -- Extract key information as self-contained bullet points -- Assign priority: `high` (decisions, commitments, important facts), `medium` (general context), `low` (small talk) -- Preserve concrete details: names, numbers, dates, preferences -- Omit pleasantries and filler -- Use present tense for ongoing states, past tense for events - -### Reflector - -The reflector (`src/memory/reflector.ts`) condenses observations when they accumulate too many tokens. - -**Trigger**: Called automatically after the observer runs. Only runs if active (non-superseded) observations exceed `REFLECTOR_THRESHOLD` (4000 estimated tokens). - -**Process**: -1. Active observations are loaded and formatted as `[id] (priority, date) content`. -2. Sent to the worker LLM with the reflector system prompt. -3. The LLM returns new condensed observations and a list of `superseded_ids` to retire. -4. Superseded observations have their `superseded_at` set (soft delete). The returned IDs are validated against the input set to prevent hallucinated IDs. -5. New observations are inserted with `generation = max(input generations) + 1`. -6. `observation_token_count` is recalculated from the new active observation set. -7. Usage is tracked in `ai_usage` with source `reflector`. 
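Both the observer and reflector triggers reduce to the same token-threshold check over the chars/4 heuristic. A minimal sketch (the `Msg` shape and the exact per-message arithmetic are assumptions):

```typescript
interface Msg { role: string; content: string }

// chars/4 heuristic plus 4 tokens of per-message overhead for role delimiters.
const estimateTokens = (m: Msg): number => Math.ceil(m.content.length / 4) + 4;

// Observer trigger: only run once un-observed messages exceed the threshold.
function shouldRunObserver(unobserved: Msg[], threshold = 3000): boolean {
  const total = unobserved.reduce((sum, m) => sum + estimateTokens(m), 0);
  return total >= threshold;
}
```

The reflector's check is the same shape, applied to active observations against the 4000-token threshold.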
- -**Reflector prompt rules**: -- Combine related observations into richer single observations -- Remove superseded information -- Preserve high-priority items -- Drop low-priority items that add no lasting value -- Keep each observation self-contained - -### Context Building - -`MemoryManager.buildContext()` assembles the conversation context: - -1. Loads active (non-superseded) observations for the conversation. -2. Loads un-observed messages (messages after the watermark). -3. Returns: - - `observationsText` -- Rendered observations, or empty string if none - - `activeMessages` -- The un-observed messages for replay - - `hasObservations` -- Whether any observations exist - -`renderObservations()` (`src/memory/context.ts`) formats observations with priority-based prefixes: -- `!` for high priority -- `-` for medium priority -- `~` for low priority - -Example output: -``` -! [2024-01-15] User has a dentist appointment on March 5th at 9am -- [2024-01-15] User is working on a TypeScript project called Construct -~ [2024-01-14] User mentioned they had coffee this morning -``` - -### Token Estimation - -`src/memory/tokens.ts` uses a `chars / 4` heuristic for token estimation, plus 4 tokens of overhead per message (for role delimiters). This is intentionally simple and designed to be swappable with a real tokenizer later. - -## Integration into processMessage() - -The memory system integrates into the agent pipeline (`src/agent.ts`) at multiple points: - -### At Message Arrival - -1. **MemoryManager instantiation** (step 2): A `MemoryManager` is created with the database and worker model config. The worker model is configured via `MEMORY_WORKER_MODEL` env var. If not set, worker config is `null` and all LLM-powered memory features (graph extraction, observer, reflector) are disabled. - -2. 
**Context building** (step 3): `memoryManager.buildContext()` determines what the LLM sees: - - **If observations exist**: The LLM sees the rendered observations as a prefix (injected into the context preamble under `[Conversation observations]`) plus only the un-observed messages replayed as conversation turns. This keeps the context window bounded. - - **If no observations yet**: Falls back to the last 20 raw messages (original behavior). - -3. **Declarative memory loading** (step 4): Recent and semantically relevant memories are loaded independently and injected into the context preamble. - -4. **Tool context** (step 8): The `memoryManager` instance is passed into the tool context, making it available to `memory_store` for triggering graph extraction. - -### After Response - -5. **Observer + Reflector** (step 15): After the response is saved, `memoryManager.runObserver()` is called with fire-and-forget semantics (`.then()/.catch()`, non-blocking). If the observer runs and creates observations, it chains into `memoryManager.runReflector()` to check if condensation is needed. 
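The fire-and-forget hand-off in step 15 might look like the following sketch. The boolean return from `runObserver` (indicating whether observations were created) is an assumption about the `MemoryManager` API:

```typescript
interface MemoryManagerLike {
  runObserver(conversationId: string): Promise<boolean>; // assumed: true if observations were created
  runReflector(conversationId: string): Promise<void>;
}

// Neither call blocks the response path: failures are logged, and the
// reflector only runs when the observer actually produced something.
function scheduleObservation(mm: MemoryManagerLike, conversationId: string): void {
  mm.runObserver(conversationId)
    .then((created) => (created ? mm.runReflector(conversationId) : undefined))
    .catch((err) => console.error("observer/reflector failed", err));
}
```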
- -```mermaid -sequenceDiagram - participant PM as processMessage - participant MM as MemoryManager - participant Obs as Observer LLM - participant Ref as Reflector LLM - participant DB as SQLite - - PM->>MM: buildContext(conversationId) - MM->>DB: getActiveObservations() - MM->>DB: getUnobservedMessages() - MM-->>PM: {observationsText, activeMessages} - - Note over PM: Agent runs, response saved - - PM-)MM: runObserver(conversationId) [async] - MM->>DB: getUnobservedMessages() - alt tokens >= 3000 - MM->>Obs: compress messages - Obs-->>MM: observations[] - MM->>DB: INSERT observations - MM->>DB: UPDATE watermark - MM-)MM: runReflector(conversationId) - MM->>DB: getActiveObservations() - alt tokens >= 4000 - MM->>Ref: condense observations - Ref-->>MM: new observations + superseded IDs - MM->>DB: UPDATE superseded_at - MM->>DB: INSERT new observations - end - end -``` - -## Memory-Related Tools - -All four memory tools are in the `core` pack (always loaded): - -| Tool | Description | -|------|-------------| -| `memory_store` | Store a memory with content, category, and tags. Triggers async embedding generation and graph extraction. | -| `memory_recall` | Hybrid search (FTS5 + embedding + LIKE) with graph-based expansion. Returns results with match type and score. | -| `memory_forget` | Soft-delete by ID, or search for candidates first by query. | -| `memory_graph` | Explore the knowledge graph: `search` nodes, `explore` connections from a node, or `connect` two concepts. | - -## Embeddings and Semantic Search - -Embeddings are generated via OpenRouter's embeddings API (`src/embeddings.ts`). The default model is `qwen/qwen3-embedding-4b`, configurable via the `EMBEDDING_MODEL` env var. - -Embeddings serve three purposes in the memory system: - -1. **Memory recall** -- Query embedding is compared against stored memory embeddings for semantic search (cosine similarity threshold 0.4 in `processMessage()`, 0.3 in `recallMemories()`). -2. 
**Tool pack selection** -- The same query embedding is reused to select which tool packs to load. -3. **Skill selection** -- Also reused for selecting relevant extension skills. - -Embedding generation for stored memories is non-blocking (fire-and-forget after `memory_store`). If embedding generation fails, the system degrades gracefully: semantic search returns no results, but FTS5 and keyword search still work. - -## Configuration - -| Variable | Required | Default | Description | -|----------|----------|---------|-------------| -| `MEMORY_WORKER_MODEL` | No | *(none)* | OpenRouter model ID for graph extraction, observer, and reflector LLM calls. If not set, all LLM-powered memory features are disabled and only declarative memory with embeddings works. | -| `EMBEDDING_MODEL` | No | `qwen/qwen3-embedding-4b` | OpenRouter model for generating embedding vectors. | -| `OPENROUTER_API_KEY` | Yes | -- | Used for all OpenRouter API calls including embeddings and worker model. | - -The worker model is used for three separate LLM calls: -- **Graph extraction** (`graph_extract`) -- Extracts entities and relationships from stored memories -- **Observer** (`observer`) -- Compresses un-observed messages into observations -- **Reflector** (`reflector`) -- Condenses accumulated observations - -All three track their usage in `ai_usage` with distinct `source` values (shown in parentheses above). - -## Token Thresholds - -| Constant | Value | Location | Purpose | -|----------|-------|----------|---------| -| `OBSERVER_THRESHOLD` | 3000 tokens | `src/memory/index.ts` | Minimum un-observed message tokens before observer triggers | -| `REFLECTOR_THRESHOLD` | 4000 tokens | `src/memory/index.ts` | Minimum active observation tokens before reflector triggers | - -These use the chars/4 heuristic, so 3000 tokens is roughly 12,000 characters of message content. - -## Architecture Decisions - -### Why a separate worker model? 
- -The main agent model (configured via `OPENROUTER_MODEL`) is optimized for conversation. Memory extraction and observation compression are background tasks that can use a cheaper, smaller model. Keeping them separate also means the worker model can be swapped or disabled independently without affecting the agent's conversational ability. - -### Why token-based thresholds instead of message counts? - -Message count is a poor proxy for context usage. A conversation with many short messages ("ok", "thanks") should not trigger compression at the same rate as one with long, information-dense messages. Token estimation, even via a simple heuristic, better captures actual context pressure. - -### Why fire-and-forget for observer/reflector? - -The observer and reflector run after the current response is already sent. Their output benefits the *next* turn, not the current one. Making them blocking would add latency to every response for no user-visible benefit. - -### Why soft-delete for observations? - -`superseded_at` rather than `DELETE` preserves the observation history. This enables debugging ("what did the reflector replace?") and potential future features like observation undo or audit trails. - -### Why do graph edges store memory_id? - -Linking edges back to their source memory enables bidirectional traversal: from a graph query, the system can surface the original memories that established a relationship, providing concrete evidence rather than just abstract graph connections.
- -## Related Documentation - -- [Agent System](./agent.md) -- How `processMessage()` orchestrates the full pipeline -- [Database Layer](./database.md) -- Schema details for all tables including `memories`, `graph_nodes`, `graph_edges`, `observations` -- [Tool System](./tools.md) -- Tool pack organization and embedding-based selection -- [System Prompt](./system-prompt.md) -- How observations and memories are injected into the context preamble -- [Environment Configuration](./../guides/environment.md) -- All environment variables including `MEMORY_WORKER_MODEL` and `EMBEDDING_MODEL` diff --git a/.docs/features/scheduler.md b/.docs/features/scheduler.md deleted file mode 100644 index 8803da4..0000000 --- a/.docs/features/scheduler.md +++ /dev/null @@ -1,205 +0,0 @@ -# Scheduler / Reminders System - -*Last updated: 2026-03-01 -- Added agent-prompt mode, dedup logic, corrected timezone and firing behavior* - -## Overview - -The scheduler enables Construct to fire actions at specific times or on recurring schedules. It supports two execution modes: **static messages** (deliver a pre-written string) and **agent prompts** (run a full `processMessage()` cycle with tool access, memory, and reasoning). It uses Croner for timed jobs, persists schedules in SQLite, and survives restarts. - -## Key Files - -| File | Role | -|------|------| -| `src/scheduler/index.ts` | Scheduler lifecycle: start, register, fire, sync, stop | -| `src/tools/core/schedule.ts` | `schedule_create`, `schedule_list`, `schedule_cancel` tools + dedup logic | -| `src/db/schema.ts` | `ScheduleTable` type (includes `prompt` column) | -| `src/db/queries.ts` | Schedule CRUD queries: `createSchedule`, `listSchedules`, `cancelSchedule`, `markScheduleRun` | - -## How It Works - -### Startup - -`startScheduler(db, bot, timezone)` is called during main startup (`src/main.ts`): - -1. Loads all active schedules from the database -2. 
Registers a Croner job for each schedule, passing the user's configured `TIMEZONE` -3. Sets up a 30-second polling interval to discover new schedules - -### Schedule Timing Types - -| Type | Database Column | Behavior | -|------|----------------|----------| -| **Recurring** | `cron_expression` | Runs on a cron schedule indefinitely until cancelled | -| **One-shot** | `run_at` | Fires once at the specified time, then auto-cancels | - -### Execution Modes - -Each schedule has a `message` field and an optional `prompt` field. The `prompt` field controls which execution path is used: - -| Mode | Field Set | What Happens | -|------|-----------|--------------| -| **Static** | `message` only (`prompt` is null) | `fireStaticSchedule()` -- sends the message string directly via Telegram | -| **Agent** | `prompt` is set | `fireAgentSchedule()` -- runs the prompt through `processMessage()` with full agent capabilities | - -This branching happens in `fireSchedule()` (line 70 of `scheduler/index.ts`): - -```typescript -if (schedule.prompt) { - await fireAgentSchedule(db, bot, schedule) -} else { - await fireStaticSchedule(db, bot, schedule) -} -``` - -#### Static Mode (`fireStaticSchedule`) - -1. Sends `schedule.message` to `schedule.chat_id` via `bot.api.sendMessage()` as plain text -2. Marks the schedule as run via `markScheduleRun()` -3. Saves the message to conversation history so the agent knows the reminder was delivered - -#### Agent-Prompt Mode (`fireAgentSchedule`) - -1. Calls `processMessage(db, schedule.prompt, { source: 'scheduler', ... })` -- this is the same full agent pipeline used for Telegram and CLI messages -2. The agent gets its system prompt, memory context, tool packs (selected via embeddings), conversation history, and the prompt as its input message -3. The agent can use any tools it would normally have access to: memory, web search, self-edit, schedule management, etc. -4. 
If the agent produces a non-empty text response, it is sent to Telegram (formatted as HTML via `markdownToTelegramHtml`, with a plain-text fallback) -5. If the response is empty, the schedule fires silently (logged but nothing sent) -6. The response is saved to conversation history with a `[Scheduled: ...]` prefix - -This makes agent-prompt schedules useful for: -- **Conditional notifications**: "Check if BTC is above $100k and only notify me if it is" -- **Background tasks**: "Summarize my unread memories every Sunday" -- **Periodic reasoning**: "Review my goals and suggest next steps" -- Any task that benefits from tool access, memory recall, or LLM reasoning - -### Registration Logic - -`registerJob(db, bot, schedule, timezone)`: - -- **Recurring (cron_expression set)**: Creates a `new Cron(cronExpression, { timezone }, callback)` that fires the schedule on each cron tick -- **One-shot (run_at set)**: - - Creates a `new Cron(runAtDate, { timezone }, callback)` that fires once, then cancels and removes itself from the active jobs map - - If `nextRun()` returns null (time is in the past), fires immediately and cancels - -Both types pass the user's configured `TIMEZONE` to Croner, so cron expressions and `run_at` times are interpreted in the user's local timezone. - -### Sync Loop - -Every 30 seconds, `syncSchedules(db, bot, timezone)`: - -1. Loads all active schedules from the database -2. Registers jobs for any new schedules not yet in the `activeJobs` map -3. Stops and removes jobs for any schedules that have been cancelled (no longer in the active list) - -This polling approach means new schedules created by the `schedule_create` tool are picked up within 30 seconds without requiring direct scheduler communication. - -### Job Tracking - -Active jobs are tracked in a module-level `Map` keyed by schedule ID. This prevents duplicate registration and enables cleanup on cancellation. 
- -### Shutdown - -`stopScheduler()` stops all active Cron jobs, clears the sync interval, and empties the map. - -## Schedule Tools - -The agent creates and manages schedules through three tools in the core pack (`src/tools/core/schedule.ts`): - -### schedule_create - -Parameters: -- `description` (required) -- Human-readable description (e.g. "Dentist appointment reminder") -- `message` (optional) -- Static message to send when triggered. Required unless `prompt` is provided. -- `prompt` (optional) -- Agent prompt to run when triggered, with full tool access. Mutually exclusive with `message`. -- `cron_expression` (optional) -- Cron string (e.g., `"0 9 * * 1"` for Monday at 9am) -- `run_at` (optional) -- Datetime in user's local timezone, without Z or offset (e.g. `"2025-03-05T09:00:00"`) - -Validation: -- Must provide either `cron_expression` or `run_at` (timing) -- Must provide either `message` or `prompt` (content), but not both -- `run_at` values have timezone offsets stripped (`stripTimezoneOffset()`) so they're treated as local time -- `chat_id` is automatically injected from the current conversation context - -When `prompt` is used, the `message` column (which is NOT NULL in the schema) is filled with the `description` as a fallback. - -#### Deduplication - -`schedule_create` performs two-pass dedup to prevent the agent from creating duplicate schedules: - -1. **Fast pass (Levenshtein)**: For schedules matching the same chat, mode (static vs prompt), and timing, checks content similarity using Levenshtein distance (threshold 0.75). Compares `message`/`prompt` content and `description` separately. - -2. **Slow pass (embedding similarity)**: If the fast pass finds no match but there are time-matching candidates, generates embeddings for the new description and all candidate descriptions, then checks cosine similarity (threshold 0.7). 
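The fast pass could be implemented along these lines. Normalizing Levenshtein distance into a 0-1 similarity score is an assumption about the implementation; only the 0.75 threshold comes from the description above:

```typescript
// Standard dynamic-programming Levenshtein edit distance.
function levenshtein(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, () => new Array<number>(b.length + 1).fill(0));
  for (let i = 0; i <= a.length; i++) dp[i][0] = i;
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Assumed normalization: similarity = 1 - distance / max length.
function isDuplicate(a: string, b: string, threshold = 0.75): boolean {
  const maxLen = Math.max(a.length, b.length);
  if (maxLen === 0) return true;
  return 1 - levenshtein(a, b) / maxLen >= threshold;
}
```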
- -If a duplicate is found, the tool returns the existing schedule instead of creating a new one, with a `deduplicated: true` flag in the details. - -### schedule_list - -Lists all schedules (active only by default), showing ID, status, description, and timing. Agent-prompt schedules are marked with an `[agent]` badge in the output. - -Parameters: -- `active_only` (optional, default: true) -- Whether to filter to active schedules only - -### schedule_cancel - -Deactivates a schedule by setting `active = 0`. The sync loop will clean up the corresponding Croner job within 30 seconds. - -Parameters: -- `id` (required) -- The schedule ID to cancel - -## Database Schema - -The `schedules` table (`src/db/schema.ts`): - -| Column | Type | Description | -|--------|------|-------------| -| `id` | string | Primary key | -| `description` | string | Human-readable description | -| `cron_expression` | string or null | Cron pattern for recurring schedules | -| `run_at` | string or null | ISO datetime for one-shot schedules | -| `message` | string | Static message content (also used as fallback for prompt mode) | -| `prompt` | string or null | Agent prompt for agent-executed schedules | -| `chat_id` | string | Telegram chat to deliver to | -| `active` | integer (default 1) | 1 = active, 0 = cancelled | -| `last_run_at` | string or null | Timestamp of most recent execution | -| `created_at` | string (auto) | Creation timestamp | - -Key constraints: -- `cron_expression` and `run_at` are mutually exclusive (one should be set) -- `message` and `prompt` represent two execution modes; `prompt` being non-null triggers agent mode -- `message` is NOT NULL -- when in prompt mode, it stores the description as a placeholder - -See [Database Layer](./database.md) for the full schema. 
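To make the two execution modes concrete, here are hypothetical rows for each; all values are invented for illustration:

```typescript
type ScheduleRow = {
  description: string;
  message: string;
  prompt: string | null;
  cron_expression: string | null;
  run_at: string | null;
};

// Static mode: prompt is null, message is delivered verbatim at run_at.
const staticReminder: ScheduleRow = {
  description: "Dentist appointment reminder",
  message: "Dentist at 9am tomorrow!",
  prompt: null,
  cron_expression: null,
  run_at: "2025-03-04T20:00:00", // local time, no offset
};

// Agent mode: prompt is set; message holds the description as a NOT NULL placeholder.
const weeklyReview: ScheduleRow = {
  description: "Weekly goal review",
  message: "Weekly goal review",
  prompt: "Review my goals and suggest next steps",
  cron_expression: "0 9 * * 0", // Sundays at 9am, user-local timezone
  run_at: null,
};
```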
- -## Data Flow - -```mermaid -graph TD - AgentTool["schedule_create tool"] -->|inserts row| DB[(schedules table)] - DB -->|30s poll| SyncLoop["syncSchedules()"] - SyncLoop -->|new schedule| Register["registerJob()"] - Register -->|creates| CronJob["Croner job"] - CronJob -->|timer fires| Fire["fireSchedule()"] - Fire -->|prompt is null| Static["fireStaticSchedule()"] - Fire -->|prompt is set| AgentFire["fireAgentSchedule()"] - Static -->|bot.api.sendMessage| Telegram["Telegram chat"] - AgentFire -->|processMessage()| AgentPipeline["Full agent pipeline"] - AgentPipeline -->|tools, memory, reasoning| AgentResponse["Agent response"] - AgentResponse -->|formatted HTML| Telegram - Static -->|saveMessage| History[(conversation history)] - AgentResponse -->|saveMessage| History -``` - -## Limitations - -- **Sync delay**: The 30-second sync interval means there can be up to 30 seconds of delay between creating a schedule and it being registered. -- **No response streaming**: Agent-prompt schedule responses are sent as a single message after the agent finishes, not streamed. -- **Agent-prompt cost**: Each agent-prompt firing incurs a full LLM call (with tool pack selection, embedding generation, memory recall, etc.), so frequent cron schedules with prompts can accumulate cost. -- **Error handling**: If `processMessage()` throws during an agent-prompt schedule, the error is logged but no message is sent to the user. The schedule is not retried. 
- -## Related Documentation - -- [Agent Pipeline](./agent.md) -- `processMessage()` used by agent-prompt mode -- [Telegram Integration](./telegram.md) -- Bot used for message delivery -- [Tool System](./tools.md) -- Schedule tools in the core pack -- [Database Layer](./database.md) -- Schedule persistence -- [Memory System](./memory.md) -- Memory context available to agent-prompt schedules diff --git a/.docs/features/system-prompt.md b/.docs/features/system-prompt.md deleted file mode 100644 index 05f5ec8..0000000 --- a/.docs/features/system-prompt.md +++ /dev/null @@ -1,167 +0,0 @@ -# System Prompt Construction - -*Last updated: 2026-02-24 -- Initial documentation* - -## Overview - -The system prompt is split into two layers for prompt caching efficiency: - -1. **Static system prompt** -- Base rules + identity files (SOUL.md, IDENTITY.md, USER.md). Cached and reused across requests. -2. **Dynamic context preamble** -- Prepended to each user message. Contains date/time, memories, skills, and reply context. - -This separation means the LLM can cache the system prompt tokens and only process the changing preamble as new input. 
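The static layer's caching can be sketched as a module-level cache keyed on the identity file contents. This is a simplified illustration -- the section assembly is abbreviated, though the pipe-delimited cache key and the Identity/User/Soul ordering follow the documentation:

```typescript
type Identity = { identity: string; user: string; soul: string };

let cachedPrompt: string | null = null;
let cachedKey: string | null = null;

function getSystemPrompt(base: string, id: Identity): string {
  // Cache key: pipe-delimited concatenation of the three identity files.
  const key = [id.identity, id.user, id.soul].join("|");
  if (cachedPrompt !== null && cachedKey === key) return cachedPrompt;
  cachedKey = key;
  // Soul comes last so personality has the strongest influence.
  cachedPrompt = `${base}\n\n## Identity\n${id.identity}\n\n## User\n${id.user}\n\n## Soul\n${id.soul}`;
  return cachedPrompt;
}

// Called by identity_update and extension_reload.
function invalidateSystemPromptCache(): void {
  cachedPrompt = null;
  cachedKey = null;
}
```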
- -## Key Files - -| File | Role | -|------|------| -| `src/system-prompt.ts` | `getSystemPrompt()`, `buildContextPreamble()`, `invalidateSystemPromptCache()`, `formatNow()` | - -## Static System Prompt - -`getSystemPrompt(identity?)` builds the full system prompt by concatenating: - -### BASE_SYSTEM_PROMPT - -The hardcoded base prompt that defines: - -- **Role**: "You are a personal companion" -- **Rules**: - - Be concise (Telegram context) - - Proactively store memories - - Search broadly when recalling - - Confirm time/message before creating reminders - - Explain what/why before self-editing - - Never deploy without passing tests - - Never edit files outside `src/`, `cli/`, or `extensions/` -- **Telegram interactions**: How to use telegram tools, message ID format -- **Proactive communication**: When to add context beyond the bare minimum -- **Identity files**: How to use identity_read/identity_update tools -- **Extensions**: How tools and skills work, when to call extension_reload - -### Identity Sections - -If identity files are loaded, they are appended as sections: - -``` -## Identity - - -## User - - -## Soul - -``` - -Note the order: Identity, User, Soul. The Soul section (personality) comes last so it has the strongest influence on the model's behavior. - -### Caching - -The system prompt is cached in module-level variables. The cache key is a pipe-delimited concatenation of all three identity file contents. `invalidateSystemPromptCache()` clears the cache (called by `identity_update` and `extension_reload`). - -## Dynamic Context Preamble - -`buildContextPreamble(context)` creates a text block prepended to each user message. It contains: - -### 1. Context Header - -``` -[Context: Monday, February 24, 2026 at 3:15 PM (America/New_York) | telegram] -``` - -Includes the current date/time formatted using `Intl.DateTimeFormat` in the configured timezone, the message source, and a DEV MODE flag if applicable. - -### 2. 
Dev Mode Warning - -``` -[Running in development -- hot reload is active, self_deploy is disabled] -``` - -Only present when `NODE_ENV=development`. - -### 3. Recent Memories - -``` -[Recent memories -- use these for context, pattern recognition, and continuity] -- (preference) User prefers dark mode -- (fact) User works at Acme Corp -``` - -The 10 most recent memories, regardless of relevance. Gives the agent temporal continuity. - -### 4. Relevant Memories - -``` -[Potentially relevant memories] -- (note) Meeting with Bob about the API redesign (87% match) -``` - -Up to 5 semantically relevant memories with match percentage. Deduped against recent memories. - -### 5. Active Skills - -``` -[Active skills -- follow these instructions when relevant] - -### daily-standup -When the user asks for a standup... -``` - -Up to 3 skills selected by embedding similarity (threshold 0.35). - -### 6. Reply Context - -``` -[Replying to: "what was that thing you mentioned yesterday?"] -``` - -When the user replies to a specific message in Telegram, the original message text (truncated to 300 chars) is included. - -## Date/Time Formatting - -`formatNow(timezone)` uses the built-in `Intl` API (no dependencies): - -```typescript -new Date().toLocaleString('en-US', { - timeZone: timezone, - weekday: 'long', - year: 'numeric', - month: 'long', - day: 'numeric', - hour: 'numeric', - minute: '2-digit', - hour12: true, -}) -``` - -Output example: `Monday, February 24, 2026 at 3:15 PM` - -## Full Prompt Assembly - -The complete prompt the LLM sees: - -``` -[System prompt] - BASE_SYSTEM_PROMPT - ## Identity (if loaded) - ## User (if loaded) - ## Soul (if loaded) - -[User message] - [Context: Monday, February 24, 2026 at 3:15 PM (America/New_York) | telegram] - [Recent memories...] - [Relevant memories...] - [Active skills...] - [Reply context...] - - -``` - -The preamble is prepended directly to the user's message text (no separator). 
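A simplified sketch of that assembly; the rendering of each section is elided, and the exact joining characters are assumptions (the documentation only specifies that the preamble is prepended directly):

```typescript
// Empty sections (no memories, no skills, no reply context) are dropped
// so the preamble only contains blocks with content.
function buildPreamble(sections: string[]): string {
  return sections.filter((s) => s.length > 0).join("\n\n");
}

// An empty preamble leaves the user's message untouched; the newline join
// here is an assumption about the concrete implementation.
function withPreamble(preamble: string, userText: string): string {
  return preamble.length > 0 ? preamble + "\n" + userText : userText;
}
```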
-
-## Related Documentation
-
-- [Agent System](./agent.md) -- How the prompt is used in processMessage()
-- [Extension System](./extensions.md) -- Identity files and skill injection
-- [Environment Configuration](./../guides/environment.md) -- TIMEZONE setting
diff --git a/.docs/features/telegram.md b/.docs/features/telegram.md
deleted file mode 100644
index 1da58ed..0000000
--- a/.docs/features/telegram.md
+++ /dev/null
@@ -1,145 +0,0 @@
-# Telegram Integration
-
-*Last updated: 2026-02-24 -- Initial documentation*
-
-## Overview
-
-Telegram is the primary user-facing interface for Construct. The bot uses Grammy (a Telegram Bot API framework) with long polling to receive messages and reactions. It handles authorization, typing indicators, Markdown-to-HTML conversion, message chunking, reply threading, and reaction side-effects.
-
-## Key Files
-
-| File | Role |
-|------|------|
-| `src/telegram/bot.ts` | Bot creation, message/reaction handlers, markdown conversion, reply logic |
-| `src/telegram/index.ts` | Standalone Telegram-only entry point (runs bot without scheduler) |
-| `src/telegram/types.ts` | `TelegramContext` and `TelegramSideEffects` interfaces |
-
-## Bot Setup
-
-`createBot(db)` in `src/telegram/bot.ts` creates a Grammy `Bot` instance with the token from `env.TELEGRAM_BOT_TOKEN`.
-
-The bot listens for two event types:
-- `message:text` -- Text messages from users
-- `message_reaction` -- Emoji reactions on messages
-
-A catch-all `message` handler replies "I can only process text messages for now." for non-text messages (photos, stickers, etc.).
-
-## Authorization
-
-Authorization is controlled by `ALLOWED_TELEGRAM_IDS` (comma-separated list of numeric Telegram user IDs). If the list is empty, all users are allowed. Otherwise, only listed user IDs can interact.
-
-Unauthorized users receive a simple "Unauthorized." reply. Unauthorized reactions are silently ignored.
-
-## Message Handling Flow
-
-```mermaid
-flowchart TD
-    A[Telegram text message] --> B{Authorized?}
-    B -- No --> C[Reply: Unauthorized]
-    B -- Yes --> D[Start typing indicator every 4s]
-    D --> E[Build TelegramContext with mutable sideEffects]
-    E --> F[Extract reply context if replying]
-    F --> G["processMessage(db, text, opts)"]
-    G --> H{sideEffects.reactToUser set?}
-    H -- Yes --> I[Set emoji reaction on incoming message]
-    H -- No --> J{sideEffects.suppressText?}
-    I --> J
-    J -- Yes --> K[Skip text reply]
-    J -- No --> L["sendReply(ctx, text, sideEffects, messageId)"]
-    L --> M[Clear typing interval]
-    K --> M
-```
-
-### Typing Indicator
-
-A "typing" chat action is sent immediately and then refreshed every 4 seconds (Telegram expires typing indicators after ~5 seconds). The interval is cleared in a `finally` block regardless of success or failure.
-
-### TelegramContext
-
-Each message creates a `TelegramContext` passed to `processMessage()`:
-
-```typescript
-interface TelegramContext {
-  bot: Bot                          // Grammy bot instance
-  chatId: string                    // Telegram chat ID
-  incomingMessageId: number         // Message ID of the user's message
-  sideEffects: TelegramSideEffects  // Mutable object for tool side-effects
-}
-```
-
-### Side-Effects
-
-Tools can set flags on `sideEffects` during execution:
-
-```typescript
-interface TelegramSideEffects {
-  reactToUser?: string       // Emoji to react with
-  replyToMessageId?: number  // Message ID for reply threading
-  suppressText?: boolean     // If true, skip the text reply
-}
-```
-
-After the agent finishes:
-1. If `reactToUser` is set, the bot calls `setMessageReaction()` on the incoming message
-2. If `suppressText` is false (default) and the response has text, `sendReply()` sends it
-
-### Reply Context
-
-When a user replies to a specific message in Telegram, the original message text is extracted from `ctx.message.reply_to_message.text` and passed as `replyContext`. This is included in the context preamble so the agent knows what is being referenced.
-
-## Reaction Handling
-
-When a user reacts to a message with an emoji:
-
-1. The bot looks up which conversation the reacted message belongs to
-2. It queries the database for the message by its Telegram message ID
-3. A synthetic message is constructed: `[User reacted with <emoji> to message: "<message text>"]`
-4. This synthetic message is processed through `processMessage()` like a normal message
-5. The agent may respond with text, a reaction, or nothing
-
-This allows the agent to interpret reactions contextually (e.g., a thumbs-up on a suggestion).
-
-## Markdown to Telegram HTML
-
-`markdownToTelegramHtml()` converts the agent's Markdown response to Telegram-compatible HTML:
-
-1. **Protect code**: Extract code blocks (``` ```) and inline code (`` ` ``) to prevent processing
-2. **Escape HTML entities**: `&`, `<`, `>` in remaining text
-3. **Convert headers**: `# Heading` to `<b>Heading</b>`
-4. **Convert formatting**: `***bold-italic***`, `**bold**`, `*italic*`
-5. **Convert bullets**: `*` and `-` list items to `•` characters
-6. **Restore code**: Re-insert code as `<pre>` and `<code>` with HTML escaping
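A minimal sketch of the protect/escape/restore pattern behind this conversion. This is illustrative only -- the real converter also handles fenced blocks, headers, and bullets as listed above; this sketch covers just inline code plus bold/italic:

```typescript
// Escape the three entities Telegram's HTML parse mode cares about.
function escapeHtml(s: string): string {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
}

function toTelegramHtml(md: string): string {
  const stash: string[] = []
  // Step 1: protect inline code behind NUL-delimited placeholders so the
  // formatting passes below cannot touch it.
  const protectedText = md.replace(/`([^`]+)`/g, (_, code: string) => {
    stash.push(code)
    return `\u0000${stash.length - 1}\u0000`
  })
  // Step 2: escape entities, then convert bold/italic markers.
  let html = escapeHtml(protectedText)
    .replace(/\*\*(.+?)\*\*/g, '<b>$1</b>')
    .replace(/\*(.+?)\*/g, '<i>$1</i>')
  // Step 3: restore stashed code as <code>, escaping its contents.
  html = html.replace(/\u0000(\d+)\u0000/g, (_, i: string) =>
    `<code>${escapeHtml(stash[Number(i)])}</code>`)
  return html
}
```

The ordering matters: escaping before restoration means a literal `<` inside code is escaped exactly once, and Markdown markers inside code spans are never interpreted.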
-
-If HTML parsing fails when sending, the bot falls back to sending plain text.
-
-## Message Chunking
-
-Telegram has a 4096-character message limit. The `sendReply()` function chunks long responses:
-
-- Messages <= 4000 characters are sent as-is (with some margin)
-- Longer messages are split into 4000-character chunks
-- Only the first chunk uses `reply_parameters` for threading
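The chunking rule above can be sketched as a small helper (hypothetical name; the real logic lives inside `sendReply()`):

```typescript
// Stay under Telegram's 4096-character limit with some margin.
const CHUNK_SIZE = 4000

function chunkMessage(text: string): string[] {
  if (text.length <= CHUNK_SIZE) return [text]
  const chunks: string[] = []
  for (let i = 0; i < text.length; i += CHUNK_SIZE) {
    chunks.push(text.slice(i, i + CHUNK_SIZE))
  }
  return chunks
}
```

Only the first element would then be sent with `reply_parameters`, so the thread points at a single message rather than every chunk.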
-
-## Telegram Message ID Tracking
-
-After sending a reply, the bot stores the Telegram message ID of the sent message in the database via `updateTelegramMessageId()`. This enables:
-- Future `telegram_reply_to` calls referencing the bot's own messages
-- Reaction handling on the bot's messages (to determine `whose` in the synthetic reaction message)
-
-Message IDs appear as `[tg:12345]` prefixes in conversation history replay.
-
-## Standalone Telegram Mode
-
-`src/telegram/index.ts` provides a lightweight entry point that runs only the Telegram bot without the scheduler. This is used by `npm run telegram`. It runs migrations, creates the database, and starts long polling.
-
-## Error Handling
-
-- Grammy errors are caught by `bot.catch()` and logged
-- Individual message processing errors reply with "Something went wrong. Check the logs."
-- Reaction processing errors are logged but do not send error messages to the user
-
-## Related Documentation
-
-- [Agent System](./agent.md) -- How messages are processed by the agent
-- [Tool System](./tools.md) -- Telegram tools (react, reply-to, pin/unpin)
-- [Scheduler](./scheduler.md) -- Sends scheduled messages through the bot
diff --git a/.docs/features/tools.md b/.docs/features/tools.md
deleted file mode 100644
index 7b7bf49..0000000
--- a/.docs/features/tools.md
+++ /dev/null
@@ -1,199 +0,0 @@
-# Tool System
-
-*Last updated: 2026-02-24 -- Initial documentation*
-
-## Overview
-
-Construct's tools are organized into **packs** -- logical groups of related tools. At message time, packs are selected based on embedding similarity to the user's message, so only relevant tools are sent to the LLM. This keeps the context window lean. The system supports both built-in packs (defined in source) and dynamic packs (loaded from the extensions directory at runtime).
-
-## Key Files
-
-| File | Role |
-|------|------|
-| `src/tools/packs.ts` | Pack definitions, embedding cache, selection logic, `InternalTool` and `ToolPack` types |
-| `src/tools/core/` | Core pack: memory, schedule, secret, identity, usage tools |
-| `src/tools/self/` | Self pack: source read/edit, test, deploy, logs, status, extension reload |
-| `src/tools/web/` | Web pack: web page reading, web search |
-| `src/tools/telegram/` | Telegram pack: react, reply-to, pin/unpin, get-pinned |
-
-## How Tools Are Defined
-
-Every tool follows the `InternalTool` interface:
-
-```typescript
-interface InternalTool<T extends TSchema = TSchema> {
-  name: string
-  description: string
-  parameters: T               // TypeBox JSON Schema
-  execute: (
-    toolCallId: string,
-    args: unknown,
-  ) => Promise<{ output: string; details?: unknown }>
-}
-```
-
-Tools are created by **factory functions** that receive a `ToolContext`:
-
-```typescript
-interface ToolContext {
-  db: Kysely<Database>        // Database connection
-  chatId: string              // Current chat identifier
-  apiKey: string              // OpenRouter API key
-  projectRoot: string         // Absolute path to project root
-  dbPath: string              // Path to SQLite database file
-  tavilyApiKey?: string       // Tavily API key (for web search)
-  logFile?: string            // Path to log file
-  isDev: boolean              // Development mode flag
-  extensionsDir?: string      // Extensions directory path
-  telegram?: TelegramContext  // Telegram bot + chat context (absent in CLI)
-}
-```
-
-A factory returns `InternalTool | null`. Returning `null` means the tool should not be loaded (e.g., `self_deploy` is null in dev mode, `web_search` is null without a Tavily key, telegram tools are null outside Telegram context).
-
-## Tool Packs
-
-A pack groups related tool factories under a name and description:
-
-```typescript
-interface ToolPack {
-  name: string
-  description: string
-  alwaysLoad: boolean         // If true, skip embedding similarity check
-  factories: ToolFactory[]    // Functions that create tools from ToolContext
-}
-```
-
-### Built-in Packs
-
-| Pack | `alwaysLoad` | Tools | Description |
-|------|:---:|-------|-------------|
-| **core** | Yes | `memory_store`, `memory_recall`, `memory_forget`, `schedule_create`, `schedule_list`, `schedule_cancel`, `secret_store`, `secret_list`, `secret_delete`, `usage_stats`, `identity_read`, `identity_update` | Long-term memory, scheduling, secrets, identity management |
-| **web** | No | `web_read`, `web_search` | Read web pages (via Jina Reader), search the web (via Tavily) |
-| **self** | No | `self_read_source`, `self_edit_source`, `self_run_tests`, `self_view_logs`, `self_deploy`, `self_system_status`, `extension_reload` | Self-modification, diagnostics, deployment |
-| **telegram** | Yes | `telegram_react`, `telegram_reply_to`, `telegram_pin`, `telegram_unpin`, `telegram_get_pinned` | Telegram-specific message interactions |
-
-### Pack Selection Algorithm
-
-```mermaid
-flowchart TD
-    A[User message arrives] --> B{Embedding generation succeeded?}
-    B -- No --> C[Load ALL packs]
-    B -- Yes --> D{Pack.alwaysLoad?}
-    D -- Yes --> E[Include pack]
-    D -- No --> F{Pack has embedding?}
-    F -- No --> G[Include pack -- graceful fallback]
-    F -- Yes --> H{cosineSimilarity >= 0.3?}
-    H -- Yes --> E
-    H -- No --> I[Exclude pack]
-```
-
-At startup, `initPackEmbeddings()` generates embedding vectors for each non-`alwaysLoad` pack's description string. These are cached in a module-level `Map`.
-
-At message time, `selectPacks()` compares the user message embedding against pack embeddings. Packs with cosine similarity >= 0.3 (the default threshold) are included. The function is pure and testable -- it accepts packs and embeddings as parameters.
-
-`selectAndCreateTools()` combines selection with instantiation: it selects packs, then calls each factory with the `ToolContext`, filtering out null results.
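A sketch of the selection logic described above, assuming simplified `Pack` and embedding shapes (the real `selectPacks()` signature may differ):

```typescript
interface Pack {
  name: string
  alwaysLoad: boolean
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] ** 2
    nb += b[i] ** 2
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

function selectPacks(
  packs: Pack[],
  messageEmb: number[] | null,
  packEmbs: Map<string, number[]>,
  threshold = 0.3,
): Pack[] {
  if (!messageEmb) return packs // embedding generation failed: load ALL packs
  return packs.filter(p => {
    if (p.alwaysLoad) return true
    const emb = packEmbs.get(p.name)
    if (!emb) return true // no cached embedding: graceful fallback, include it
    return cosineSimilarity(messageEmb, emb) >= threshold
  })
}
```

Like the real function, this is pure -- packs and embeddings come in as parameters, so it can be tested without touching the module-level cache.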
-
-## Individual Tool Details
-
-### Core Pack (always loaded)
-
-**memory_store** -- Stores a memory with optional category and tags. Generates an embedding in the background (non-blocking) for future semantic search.
-
-**memory_recall** -- Searches memories using a three-tier hybrid approach:
-1. FTS5 full-text search on the `memories_fts` virtual table
-2. Embedding cosine similarity (threshold 0.3) against all memories with embeddings
-3. LIKE keyword fallback
-Results are merged and deduplicated.
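The merge step might look like this sketch (the `Hit` shape is hypothetical): earlier tiers win, and later duplicates are dropped by ID.

```typescript
interface Hit {
  id: number
  content: string
}

// Merge tiered search results, keeping the first occurrence of each id.
function mergeResults(...tiers: Hit[][]): Hit[] {
  const seen = new Set<number>()
  const merged: Hit[] = []
  for (const tier of tiers) {
    for (const hit of tier) {
      if (!seen.has(hit.id)) {
        seen.add(hit.id)
        merged.push(hit)
      }
    }
  }
  return merged
}
```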
-
-**memory_forget** -- Soft-deletes (archives) a memory by ID, or searches for candidates if given a query.
-
-**schedule_create** -- Creates a one-shot (`run_at`) or recurring (`cron_expression`) schedule. Supports two modes: `message` for static text delivery, or `prompt` for agent-executed tasks with full tool access via `processMessage()`. Includes two-pass dedup (Levenshtein + embedding similarity). The `chat_id` is injected automatically from conversation context. See [Scheduler](./scheduler.md) for details.
-
-**schedule_list** -- Lists all active (or all) schedules.
-
-**schedule_cancel** -- Deactivates a schedule by ID.
-
-**secret_store / secret_list / secret_delete** -- Manage secrets in the `secrets` table. Values are never exposed through `secret_list`. Secrets are available to dynamic extension tools.
-
-**usage_stats** -- Returns AI usage statistics (cost, tokens, message count) with optional day range and source filter.
-
-**identity_read / identity_update** -- Read and write identity files (SOUL.md, IDENTITY.md, USER.md). Updates invalidate the system prompt cache and trigger extension reload.
-
-### Web Pack (similarity-selected)
-
-**web_read** -- Fetches a URL through `r.jina.ai` (Jina Reader) which returns clean markdown. Truncates at 12,000 characters.
-
-**web_search** -- Searches the web via the Tavily API. Requires `TAVILY_API_KEY`. Returns up to 5 results with titles, URLs, and content snippets. Includes an AI-generated summary when available.
-
-### Self Pack (similarity-selected)
-
-**self_read_source** -- Reads files within `src/`, `cli/`, `extensions/`, or config files (`package.json`, `tsconfig.json`, `CLAUDE.md`). Can also list directories. The `extensions/` prefix is resolved against `EXTENSIONS_DIR`.
-
-**self_edit_source** -- Search-and-replace editing within `src/`, `cli/`, or `extensions/`. The search string must be unique. Empty search + non-existent path creates a new file. Creates parent directories as needed.
-
-**self_run_tests** -- Runs `npx vitest run --reporter=verbose` with optional test name filter. 60-second timeout.
-
-**self_view_logs** -- Reads from the log file (with optional `since` and `grep` filtering) or falls back to `journalctl` for the systemd service.
-
-**self_deploy** -- Full deployment pipeline:
-1. Typecheck (`tsc --noEmit`)
-2. Test (`vitest run`)
-3. Git tag backup (`pre-deploy-TIMESTAMP`)
-4. Git add `src/` + `cli/`, commit
-5. Restart systemd service
-6. Health check (5-second delay then `systemctl is-active`)
-7. Auto-rollback on failure (`git revert HEAD`, restart)
-
-Rate-limited to 3 deploys per hour. Disabled in development mode.
-
-**self_system_status** -- Reports CPU, RAM, disk, temperature, database size, log file size, and uptimes. Can also rotate (archive) the log file.
-
-**extension_reload** -- Reloads all extensions from disk, invalidates the system prompt cache.
-
-### Telegram Pack (always loaded)
-
-All Telegram tools require a `TelegramContext` (so they return null from CLI).
-
-**telegram_react** -- Adds an emoji reaction to the user's message. Can optionally suppress the text reply (the reaction IS the response). Uses a side-effects pattern -- the tool sets flags on `TelegramContext.sideEffects` rather than calling the API directly.
-
-**telegram_reply_to** -- Marks the response to be sent as a reply to a specific Telegram message ID.
-
-**telegram_pin / telegram_unpin / telegram_get_pinned** -- Pin management. These call the Telegram Bot API directly.
-
-## TypeBox Schemas
-
-Tool parameters use `@sinclair/typebox` for JSON Schema generation:
-
-```typescript
-import { Type, type Static } from '@sinclair/typebox'
-
-const Params = Type.Object({
-  query: Type.String({ description: 'Search query' }),
-  limit: Type.Optional(Type.Number({ description: 'Max results' })),
-})
-
-type Input = Static<typeof Params>
-```
-
-TypeBox schemas are passed directly to pi-agent-core, which uses them for LLM function calling.
-
-## Side-Effects Pattern (Telegram Tools)
-
-Telegram tools like `telegram_react` and `telegram_reply_to` don't perform their actions immediately. Instead, they set flags on a mutable `TelegramSideEffects` object:
-
-```typescript
-interface TelegramSideEffects {
-  reactToUser?: string        // Emoji to react with
-  replyToMessageId?: number   // Message ID to reply to
-  suppressText?: boolean      // Skip sending text reply
-}
-```
-
-After the agent finishes, the Telegram bot handler reads these flags and executes the side effects. This avoids race conditions and lets the LLM combine a reaction with a text reply in a single turn.
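A reduced sketch of the pattern (the interface follows the docs above; the tool factory itself is illustrative, not the real `telegram_react` code):

```typescript
interface TelegramSideEffects {
  reactToUser?: string
  replyToMessageId?: number
  suppressText?: boolean
}

// The tool only records intent on the shared mutable object; the Telegram
// bot handler performs the actual API calls after the agent finishes.
function makeReactTool(sideEffects: TelegramSideEffects) {
  return {
    name: 'telegram_react',
    execute: async (emoji: string, suppressText = false) => {
      sideEffects.reactToUser = emoji
      sideEffects.suppressText = suppressText
      return { output: `Will react with ${emoji}` }
    },
  }
}
```

Because both the tool and the handler see the same object, a single agent turn can set a reaction flag *and* still produce text that the handler sends afterwards.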
-
-## Related Documentation
-
-- [Agent System](./agent.md) -- How tools are selected and invoked during message processing
-- [Extension System](./extensions.md) -- Dynamic tools loaded from the extensions directory
-- [Database Layer](./database.md) -- Queries used by memory and schedule tools
diff --git a/.docs/guides/development.md b/.docs/guides/development.md
deleted file mode 100644
index 8a7414c..0000000
--- a/.docs/guides/development.md
+++ /dev/null
@@ -1,233 +0,0 @@
-# Development Workflow
-
-*Last updated: 2026-03-01 -- Updated for pnpm monorepo with Just task runner*
-
-## Overview
-
-Sprawl uses tsx for TypeScript execution, Vitest for testing, and Just for task orchestration. All TS apps run directly from source -- no build step. Optic is the exception (Rust, compiled with cargo).
-
-## Key Files
-
-| File | Role |
-|------|------|
-| `Justfile` | Task runner (primary interface) |
-| `pnpm-workspace.yaml` | Workspace config |
-| `apps/*/package.json` | App dependencies |
-| `packages/*/package.json` | Package dependencies |
-
-## Just Commands
-
-| Command | Description |
-|---------|-------------|
-| `just dev` | Construct dev mode (file watching) |
-| `just start <instance>` | Start named Construct instance (reads `.env.<instance>`) |
-| `just cli [instance] [args]` | Construct CLI |
-| `just cortex-dev` | Cortex dev mode |
-| `just cortex-start` | Cortex production |
-| `just cortex-backfill [days]` | Backfill historical data |
-| `just synapse-dev` | Synapse dev mode |
-| `just synapse-start` | Synapse production |
-| `just synapse-status` | Portfolio summary |
-| `just deck-dev ` | Deck dev mode |
-| `just optic [db] [synapse]` | Optic TUI |
-| `just optic-build` | Build Optic release binary |
-| `just test` | Run all tests (`pnpm -r run test`) |
-| `just test-construct` | Construct tests only |
-| `just test-cairn` | Cairn tests only |
-| `just test-synapse` | Synapse tests only |
-| `just test-ai` | AI integration tests |
-| `just typecheck` | Typecheck all packages |
-| `just db-migrate [inst]` | Run Construct DB migrations |
-
-Each app reads its env from `.env` (Construct), `.env.cortex`, `.env.synapse`, etc.
-
-## TypeScript Configuration
-
-```json
-{
-  "compilerOptions": {
-    "target": "ES2022",
-    "module": "ESNext",
-    "moduleResolution": "bundler",
-    "strict": true,
-    "noEmit": true,
-    "noUnusedLocals": true,
-    "noUnusedParameters": true
-  }
-}
-```
-
-Key points:
-- **No compilation**: `noEmit: true` -- tsx handles runtime transpilation
-- **Path alias**: `@/*` maps to `./src/*` (used in vitest config)
-- **Strict mode**: Full TypeScript strict checks enabled
-- **Bundler module resolution**: Modern resolution compatible with tsx
-
-## Runtime Execution
-
-The project uses `tsx` (via `--import=tsx`) as a TypeScript loader. This means:
-- No build step required
-- Source files are transpiled on-the-fly
-- File watching uses Node.js native `--watch-path` flag
-- Extension tool files use `jiti` instead of `tsx` for dynamic loading
-
-## Testing
-
-### Vitest Configuration
-
-```typescript
-export default defineConfig({
-  resolve: {
-    alias: { '@': new URL('./src', import.meta.url).pathname },
-  },
-  test: {
-    globals: true,
-    environment: 'node',
-  },
-})
-```
-
-- **Global test functions**: `describe`, `it`, `expect`, etc. are available without imports
-- **Node environment**: Tests run in Node.js (not jsdom)
-- **Path alias**: `@/` resolves to `src/` in test files
-
-### Test Organization
-
-Tests are colocated with their source in `__tests__/` directories:
-
-```
-src/tools/core/__tests__/
-  memory.test.ts
-  schedule.test.ts
-src/tools/self/__tests__/
-  deploy.test.ts
-  exec.test.ts
-  extension-scope.test.ts
-  self.test.ts
-src/tools/web/__tests__/
-  web.test.ts
-src/tools/__tests__/
-  packs.test.ts
-src/extensions/__tests__/
-  dynamic-tools.test.ts
-  loader.test.ts
-  secrets.test.ts
-  skills.test.ts
-```
-
-### Running Tests
-
-```bash
-npm run test              # Run all tests once
-npm run test:watch        # Watch mode
-npx vitest run -t memory  # Filter by test name
-```
-
-The `self_run_tests` tool also runs `npx vitest run --reporter=verbose`, with an optional test name filter and a 60-second timeout.
-
-## Logging
-
-### Logtape Setup
-
-The logging system uses `@logtape/logtape` with these loggers:
-
-| Logger | Category |
-|--------|----------|
-| `log` | `['construct']` |
-| `agentLog` | `['construct', 'agent']` |
-| `toolLog` | `['construct', 'tool']` |
-| `telegramLog` | `['construct', 'telegram']` |
-| `schedulerLog` | `['construct', 'scheduler']` |
-| `dbLog` | `['construct', 'db']` |
-
-### Sinks
-
-- **Console**: Always active, uses a custom formatter
-- **File**: Active when `LOG_FILE` is set. Uses a swappable `WriteStream` to support runtime log rotation.
-
-### Log Format
-
-```
-2026-02-24T15:30:00.000Z [info] construct.agent: Processing message from telegram
-```
-
-### Log Rotation
-
-- Automatic: On startup, if the log file exceeds 5 MB, it is rotated
-- Manual: The `self_system_status` tool can trigger rotation via `rotate_logs: true`
-- Rotation keeps up to 3 archived files: `construct.log.1`, `construct.log.2`, `construct.log.3`
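The rotation policy above can be sketched as follows. The file operations are injected here purely so the sketch is self-contained and testable; the actual implementation presumably works against the filesystem directly:

```typescript
interface FileOps {
  exists: (path: string) => boolean
  rename: (from: string, to: string) => void
  remove: (path: string) => void
}

// Shift archives up by one (.2 -> .3, .1 -> .2), dropping the oldest,
// then move the current log to .1.
function rotateLogs(logFile: string, ops: FileOps, keep = 3): void {
  const oldest = `${logFile}.${keep}`
  if (ops.exists(oldest)) ops.remove(oldest)
  for (let i = keep - 1; i >= 1; i--) {
    if (ops.exists(`${logFile}.${i}`)) {
      ops.rename(`${logFile}.${i}`, `${logFile}.${i + 1}`)
    }
  }
  if (ops.exists(logFile)) ops.rename(logFile, `${logFile}.1`)
}
```

Shifting from the highest index down avoids clobbering: `.2` moves out of the way before `.1` takes its place.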
-
-## Deployment (Self-Deploy)
-
-The `self_deploy` tool handles automated deployment. It detects the runtime environment by checking for `/.dockerenv`:
-
-**Common steps (both environments):**
-
-1. Typecheck (`tsc --noEmit`)
-2. Test (`vitest run`)
-3. Git tag backup (`pre-deploy-TIMESTAMP`)
-4. Git commit (`src/`, `cli/`, and `extensions/` directories)
-
-**Docker mode** (detected via `/.dockerenv`):
-
-5. `process.exit(0)` -- container restarts via `restart: unless-stopped` policy
-
-**Systemd mode** (non-Docker):
-
-5. `sudo systemctl restart construct`
-6. Health check (5-second wait, then `systemctl is-active`)
-7. Auto-rollback on failure (`git revert HEAD`, restart)
-
-Self-deploy is:
-- **Disabled** in development mode (`NODE_ENV=development`)
-- **Rate-limited** to 3 deploys per hour
-- **Safety-gated** by a `confirm: true` parameter
-
-See [Deployment Guide](./deployment.md) for full details on Docker and systemd deployment, and [Security Considerations](./security.md) for the complete safety model.
-
-## Dev Mode Differences
-
-When `NODE_ENV=development`:
-- File watching is active (`--watch-path`)
-- `self_deploy` tool is not loaded (returns null from factory)
-- Context preamble includes `[DEV MODE]` and a development warning
-- `EXTENSIONS_DIR` defaults to `./data` instead of XDG path
-
-## Dependencies
-
-### Runtime
-
-| Package | Version | Purpose |
-|---------|---------|---------|
-| `@mariozechner/pi-agent-core` | ^0.54.2 | Agent framework |
-| `@mariozechner/pi-ai` | ^0.54.2 | LLM model access |
-| `@sinclair/typebox` | ^0.34.48 | JSON Schema / TypeBox for tool parameters |
-| `grammy` | ^1.40.0 | Telegram Bot API |
-| `kysely` | ^0.28.11 | Type-safe SQL query builder |
-| `croner` | ^10.0.1 | Cron job scheduling |
-| `citty` | ^0.2.1 | CLI framework |
-| `jiti` | ^2.6.1 | Dynamic TypeScript loading (extensions) |
-| `@logtape/logtape` | ^2.0.2 | Structured logging |
-| `nanoid` | ^5.1.6 | ID generation |
-| `yaml` | ^2.8.2 | YAML parsing (skill frontmatter) |
-| `zod` | ^4.3.6 | Environment validation |
-| `date-fns` | ^4.1.0 | Date utilities |
-| `chalk` | ^5.6.2 | Terminal coloring |
-| `consola` | ^3.4.2 | Console utilities |
-
-### Dev
-
-| Package | Version | Purpose |
-|---------|---------|---------|
-| `typescript` | ^5.9.3 | Type checking |
-| `tsx` | ^4.21.0 | TypeScript execution |
-| `vitest` | ^4.0.18 | Testing framework |
-| `@types/node` | ^25.3.0 | Node.js type definitions |
-
-## Related Documentation
-
-- [Architecture Overview](./../architecture/overview.md) -- System startup sequence
-- [Environment Configuration](./environment.md) -- Environment variables
-- [Deployment Guide](./deployment.md) -- Docker and systemd deployment
-- [Security Considerations](./security.md) -- Self-deploy safety gates
-- [Tool System](./../features/tools.md) -- Self-modification tools
diff --git a/.docs/guides/environment.md b/.docs/guides/environment.md
deleted file mode 100644
index 40a6d88..0000000
--- a/.docs/guides/environment.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# Environment Configuration
-
-*Last updated: 2026-02-24 -- Initial documentation*
-
-## Overview
-
-Environment variables are validated at startup using Zod in `src/env.ts`. The application uses Node.js `--env-file=.env` flag to load variables (not dotenv). A `.env.example` file documents all available variables.
-
-## Key Files
-
-| File | Role |
-|------|------|
-| `src/env.ts` | Zod schema, validation, `env` export |
-| `.env.example` | Template with all variables and defaults |
-
-## Required Variables
-
-| Variable | Description | Example |
-|----------|-------------|---------|
-| `OPENROUTER_API_KEY` | API key for OpenRouter (LLM and embeddings) | `sk-or-v1-...` |
-| `TELEGRAM_BOT_TOKEN` | Telegram Bot API token from @BotFather | `123456:ABC-DEF...` |
-
-## Optional Variables
-
-| Variable | Default | Description |
-|----------|---------|-------------|
-| `NODE_ENV` | `'production'` | Set to `'development'` for dev mode |
-| `OPENROUTER_MODEL` | `'google/gemini-3-flash-preview'` | LLM model identifier for OpenRouter |
-| `DATABASE_URL` | `'./data/construct.db'` | Path to SQLite database file |
-| `ALLOWED_TELEGRAM_IDS` | `''` (allow all) | Comma-separated Telegram user IDs |
-| `TIMEZONE` | `'UTC'` | Timezone for date display and context (e.g., `'America/New_York'`) |
-| `LOG_LEVEL` | `'info'` | Logging level: `debug`, `info`, `warning`, `error`, `fatal` |
-| `LOG_FILE` | `'./data/construct.log'` | Path to the log file |
-| `PROJECT_ROOT` | `'.'` | Resolved to absolute path. Root for self-read/edit tools |
-| `TAVILY_API_KEY` | (none) | API key for Tavily web search. If absent, `web_search` tool is disabled |
-| `EXTENSIONS_DIR` | Smart default (see below) | Path to extensions directory |
-
-## EXTENSIONS_DIR Defaults
-
-The extensions directory has environment-aware defaults:
-
-- **Development** (`NODE_ENV=development`): `./data`
-- **Production**: `$XDG_DATA_HOME/construct/` or `~/.local/share/construct/`
-
-## Extension Secrets (EXT_* Variables)
-
-Any environment variable with the `EXT_` prefix is automatically synced to the `secrets` table on startup. The prefix is stripped:
-
-```bash
-EXT_OPENWEATHERMAP_API_KEY=abc123
-# Becomes secret key: OPENWEATHERMAP_API_KEY
-```
-
-These secrets are then available to dynamic extension tools via `DynamicToolContext.secrets`. Environment-sourced secrets always overwrite existing values on restart (source is set to `'env'`).
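The prefix-stripping step might be sketched like this (hypothetical helper; the real sync also writes the results into the `secrets` table with source `'env'`):

```typescript
// Collect EXT_-prefixed environment variables, stripping the prefix.
function collectExtSecrets(
  env: Record<string, string | undefined>,
): Record<string, string> {
  const secrets: Record<string, string> = {}
  for (const [key, value] of Object.entries(env)) {
    if (key.startsWith('EXT_') && value !== undefined) {
      secrets[key.slice('EXT_'.length)] = value
    }
  }
  return secrets
}
```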
-
-## Validation
-
-`src/env.ts` uses Zod to parse and validate `process.env`:
-
-```typescript
-const envSchema = z.object({
-  OPENROUTER_API_KEY: z.string(),            // Required
-  TELEGRAM_BOT_TOKEN: z.string(),            // Required
-  NODE_ENV: z.string().default('production'),
-  OPENROUTER_MODEL: z.string().default('google/gemini-3-flash-preview'),
-  DATABASE_URL: z.string().default('./data/construct.db'),
-  ALLOWED_TELEGRAM_IDS: z.string().default('').transform(s => s.split(',').filter(Boolean)),
-  TIMEZONE: z.string().default('UTC'),
-  LOG_LEVEL: z.string().default('info'),
-  LOG_FILE: z.string().default('./data/construct.log'),
-  PROJECT_ROOT: z.string().default('.').transform(p => resolve(p)),
-  EXTENSIONS_DIR: z.string().default(defaultExtensionsDir()).transform(p => resolve(p)),
-  TAVILY_API_KEY: z.string().optional(),
-})
-
-export const env = envSchema.parse(process.env)
-```
-
-Notable transforms:
-- `ALLOWED_TELEGRAM_IDS` is split into a string array
-- `PROJECT_ROOT` and `EXTENSIONS_DIR` are resolved to absolute paths
-
-If required variables are missing, the application fails immediately with a Zod validation error.
-
-## Loading Mechanism
-
-Variables are loaded via Node.js `--env-file=.env` flag in package.json scripts:
-
-```json
-"dev": "NODE_ENV=development node --env-file=.env --import=tsx ...",
-"start": "node --env-file=.env --import=tsx src/main.ts"
-```
-
-This is a native Node.js feature (v20.6+), not a third-party dotenv library.
-
-## Related Documentation
-
-- [Extension System](./../features/extensions.md) -- How EXTENSIONS_DIR and EXT_* are used
-- [Development Workflow](./development.md) -- npm scripts and dev mode
-- [Architecture Overview](./../architecture/overview.md) -- Startup sequence
diff --git a/.docs/packages/cairn.md b/.docs/packages/cairn.md
deleted file mode 100644
index 5ff3837..0000000
--- a/.docs/packages/cairn.md
+++ /dev/null
@@ -1,128 +0,0 @@
-# Cairn
-
-*Last updated: 2026-03-01 -- Initial documentation*
-
-## Overview
-
-Memory substrate shared by Construct, Cortex, and Deck. Provides the observe-reflect-promote-graph pipeline that turns raw messages into structured long-term memories with entity relationships.
-
-Published as `@repo/cairn` in the pnpm workspace.
-
-## How it works
-
-### Memory pipeline
-
-```
-Messages ──> Observer ──> Observations ──> Reflector ──> Condensed observations
-                              │                              │
-                              ▼                              │
-                          Promoter ◄─────────────────────────┘
-                              │
-                              ▼
-                          Memories ──> Graph extractor ──> Nodes + Edges
-```
-
-### MemoryManager (`packages/cairn/src/manager.ts`)
-
-Central facade class. Constructed with a Kysely DB instance and config (API key, worker model, embedding model). Methods:
-
-- `runObserver(conversationId)` -- Compress un-observed messages into observations. Triggered when unobserved token count exceeds 3000. Batches messages at 16K tokens per batch. Advances watermark per batch for crash safety.
-- `runReflector(conversationId)` -- Condense observations when total tokens exceed 4000. Supersedes old observations, creates new generation.
-- `promoteObservations(conversationId)` -- Promote medium/high-priority observations to the `memories` table. Embedding-based dedup (threshold: 0.85 cosine similarity). Only novel observations get graph extraction.
-- `processStoredMemory(memoryId, content)` -- Extract entities/relationships from a memory into the knowledge graph.
-- `buildContext(conversationId)` -- Returns observation text + un-observed messages for context injection. Priority-based budget eviction when observations exceed token limit.
-
-### Observer (`packages/cairn/src/observer.ts`)
-
-LLM-powered message compressor. Takes a batch of messages, outputs structured observations:
-- Each observation has: content, priority (low/medium/high), observation_date
-- Sanitizes output, detects degenerate responses
-- Tracks token usage
-
-### Reflector (`packages/cairn/src/reflector.ts`)
-
-LLM-powered observation condenser. When observation tokens exceed threshold:
-- Identifies redundant/outdated observations to supersede
-- Creates new condensed observations at generation N+1
-- Validates superseded IDs against actual observation set
-
-### Promoter (in MemoryManager)
-
-Bridges observations to long-term memories:
-1. Find unpromoted medium/high-priority observations
-2. Generate embedding for each
-3. Compare against all existing memory embeddings
-4. If max cosine similarity < 0.85, store as memory + trigger graph extraction
-5. Mark all candidates as promoted regardless of outcome
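Step 4's novelty check can be sketched as follows (illustrative; the real promoter uses `cosineSimilarity` from `src/embeddings.ts`):

```typescript
// A candidate observation is novel only if no existing memory embedding
// is at or above the 0.85 cosine-similarity dedup threshold.
function isNovel(
  candidate: number[],
  existing: number[][],
  threshold = 0.85,
): boolean {
  const cos = (a: number[], b: number[]): number => {
    let dot = 0, na = 0, nb = 0
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i]
      na += a[i] ** 2
      nb += b[i] ** 2
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1)
  }
  return existing.every(e => cos(candidate, e) < threshold)
}
```

Note that with no existing memories every candidate is novel, which matches the day-one behavior of the pipeline.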
-
-### Graph extraction (`packages/cairn/src/graph/`)
-
-- `extract.ts` -- LLM extracts entities (name, type, aliases) and relationships from memory content
-- `index.ts` -- Orchestrates: extract, upsert nodes (with embedding-based merge for aliases), upsert edges
-- `queries.ts` -- Node/edge CRUD, FTS5 + embedding hybrid search, BFS graph traversal, node dedup
-
-### Embeddings (`packages/cairn/src/embeddings.ts`)
-
-- `generateEmbedding(apiKey, text, model)` -- OpenRouter embedding API call
-- `cosineSimilarity(a, b)` -- Vector similarity for dedup and search
-
-### Context building (`packages/cairn/src/context.ts`)
-
-- `renderObservations(obs)` -- Format observations as markdown text
-- `renderObservationsWithBudget(obs)` -- Priority-based eviction when over token budget (default: 2000 tokens). Evicts low priority first, then medium.
-- `buildContextWindow()` -- Full context assembly
-
-### DB layer (`packages/cairn/src/db/`)
-
-- `types.ts` -- `CairnDatabase` type: memories, conversations, messages, observations, graph_nodes, graph_edges, ai_usage
-- `queries.ts` -- `storeMemory`, `recallMemories` (FTS5 + embedding hybrid), `updateMemoryEmbedding`, `forgetMemory`, `trackUsage`
-
-## Exports
-
-The package has multiple entry points:
-
-```
-@repo/cairn             # MemoryManager, types, observer, reflector, context, tokens
-@repo/cairn/embeddings  # generateEmbedding, cosineSimilarity
-@repo/cairn/graph       # processMemoryForGraph
-@repo/cairn/graph/queries # searchNodes, traverseGraph, upsertNode, upsertEdge, etc.
-@repo/cairn/db/types    # CairnDatabase, table types
-@repo/cairn/db/queries  # storeMemory, recallMemories, etc.
-```
-
-## Key files
-
-| File | Role |
-|------|------|
-| `src/index.ts` | Barrel exports |
-| `src/manager.ts` | MemoryManager class (main facade) |
-| `src/observer.ts` | Message -> observations LLM worker |
-| `src/reflector.ts` | Observation condenser LLM worker |
-| `src/context.ts` | Observation rendering with budget eviction |
-| `src/embeddings.ts` | OpenRouter embeddings + cosine similarity |
-| `src/tokens.ts` | Token estimation (char/4 heuristic) |
-| `src/types.ts` | All shared types |
-| `src/db/types.ts` | CairnDatabase schema type |
-| `src/db/queries.ts` | Memory CRUD, FTS5 hybrid recall, usage tracking |
-| `src/graph/index.ts` | processMemoryForGraph orchestrator |
-| `src/graph/extract.ts` | LLM entity/relationship extraction |
-| `src/graph/queries.ts` | Graph CRUD, search, traversal |
-
-## Consumers
-
-- **Construct** -- Full pipeline: observer/reflector run after each conversation turn, memories stored via tools, graph extracted from stored memories, context built for each processMessage() call
-- **Cortex** -- Price + news messages fed through observer -> promoter -> reflector. Analyzer uses recallMemories + graph traversal for signal generation.
-- **Deck** -- Read-only: queries memories, observations, graph for visualization
-
-## Architecture decisions
-
-- **Batched observer** -- Messages are split into batches of max 16K tokens to avoid overwhelming the worker LLM. Watermark advances per batch so partial failures preserve progress.
-- **Embedding-based dedup** -- Promoter compares observation embeddings against all existing memories. Prevents redundant memory accumulation across conversations.
-- **Priority-based eviction** -- When observations exceed context budget, low-priority observations are evicted first. Ensures the most important context survives token limits.
-- **Separate from Construct** -- Extracted as a shared package so Cortex can use the same memory pipeline without depending on Construct.
-
-## Related documentation
-
-- [Memory System](../features/memory.md) -- Construct's use of Cairn
-- [Cortex](../apps/cortex.md) -- Market data memory pipeline
-- [Deck](../apps/deck.md) -- Memory visualization
diff --git a/.env.docs.example b/.env.docs.example
new file mode 100644
index 0000000..d64e5fb
--- /dev/null
+++ b/.env.docs.example
@@ -0,0 +1,11 @@
+# Docs — knowledge graph extraction
+# Copy to .env.docs and fill in required values
+
+# ── Required ─────────────────────────────────────────────────────────────────
+
+OPENROUTER_API_KEY=sk-or-...
+
+# ── Optional (sensible defaults) ─────────────────────────────────────────────
+
+# LLM model for entity extraction. Default: google/gemini-3.1-flash-lite-preview
+#MEMORY_WORKER_MODEL=google/gemini-3.1-flash-lite-preview
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
new file mode 100644
index 0000000..faaeb5b
--- /dev/null
+++ b/.github/copilot-instructions.md
@@ -0,0 +1,65 @@
+# Copilot Instructions
+
+## Project Overview
+
+Monorepo for personal AI tools. Five apps + two shared packages, all converging on SQLite.
+
+- **construct** - AI companion (Telegram + CLI + scheduler), uses LLM agent with tool system
+- **cortex** - Crypto market intelligence daemon (price/news ingestion, LLM signal generation)
+- **synapse** - Paper trading daemon (reads cortex signals, simulated execution)
+- **deck** - Memory graph explorer (Hono API + React/D3 SPA)
+- **optic** - Terminal trading dashboard (Rust/Ratatui, reads cortex+synapse DBs)
+- **@repo/cairn** - Memory substrate (observer/reflector/promoter/graph pipeline, embeddings, FTS5)
+- **@repo/db** - Shared Kysely database factory + migration runner
+
+## Tech Stack
+
+- TypeScript (Node.js + tsx), Rust (optic only)
+- pnpm workspace monorepo, Just task runner (`Justfile`)
+- SQLite via `node:sqlite` + Kysely (JS apps), rusqlite (Rust)
+- Vitest for testing, oxlint for linting, oxfmt for formatting
+- TypeBox for tool parameter schemas, Zod for env validation
+- LLM via OpenRouter (OpenAI-compatible API)
+
+## Code Conventions
+
+### Tool definitions
+
+Tools follow a strict shape: `{ name, description, parameters, execute }`. Parameters use TypeBox schemas (`Type.Object`, `Type.String`, etc.). See `apps/construct/src/tools/` for examples.
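A minimal sketch of that shape (the `echo` tool here is hypothetical, and `parameters` is shown as a plain JSON-schema object; real tools build it with TypeBox's `Type.Object`/`Type.String`):

```typescript
// Illustrative only -- tool name and handler are made up for this sketch.
interface Tool<Args> {
  name: string;
  description: string;
  parameters: unknown; // a TypeBox schema in real code
  execute: (toolCallId: string, args: Args) => Promise<{ output: string }>;
}

const echoTool: Tool<{ text: string }> = {
  name: "echo",
  description: "Echo back the provided text",
  parameters: { type: "object", properties: { text: { type: "string" } } },
  execute: async (_toolCallId, args) => ({ output: args.text }),
};
```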
+
+### Database migrations
+
+Migrations are **additive only**. Never drop tables or columns. Migration files live in each app/package's `db/migrations/` directory.
+
+### Error handling
+
+Each package/app defines custom error classes in `errors.ts`. Use these instead of generic `Error`:
+
+- `@repo/cairn`: `MemoryError`, `EmbeddingError`, `GraphError`
+- `@repo/db`: `DatabaseError`, `MigrationError`
+- `construct`: `ToolError`, `ExtensionError`, `AgentError`, `ConfigError`
+- `cortex`: `IngestError`, `AnalyzerError`
+- `synapse`: `ExecutionError`, `RiskError`
+
+### Environment variables
+
+Each app validates env with Zod in `src/env.ts`. Env files use `.env.<app>` naming at repo root.
+
+### Testing
+
+Tests use Vitest. Test files live in `__tests__/` directories. Factory functions for test data go in `__tests__/fixtures.ts`. Run with `just test`.
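A fixture factory following this convention might look like the sketch below (the `TestSignal` shape is illustrative, not the real fixture type):

```typescript
// Hypothetical fixture factory: sensible defaults, spread-in overrides.
interface TestSignal {
  token: string;
  direction: "long" | "short";
  confidence: number;
}

function createTestSignal(overrides: Partial<TestSignal> = {}): TestSignal {
  return {
    token: "bitcoin",
    direction: "long",
    confidence: 0.5,
    ...overrides, // tests override only the fields they care about
  };
}
```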
+
+### Public APIs
+
+Exported functions and classes should have JSDoc comments.
+
+## PR Review Checklist
+
+- [ ] Migrations are additive only (no DROP TABLE, DROP COLUMN, or destructive ALTER)
+- [ ] Custom error classes from `errors.ts` are used, not bare `Error`
+- [ ] New exported APIs have JSDoc documentation
+- [ ] New code has test coverage
+- [ ] No SQL injection vectors (use parameterized queries via Kysely, never string interpolation)
+- [ ] No command injection in any shell/exec calls
+- [ ] No secrets or credentials in committed code
+- [ ] Env variables are added to the app's `env.ts` Zod schema and `.env.<app>.example`
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
new file mode 100644
index 0000000..4eb2494
--- /dev/null
+++ b/.github/workflows/ci.yml
@@ -0,0 +1,34 @@
+name: CI
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+
+jobs:
+  check:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: pnpm/action-setup@v4
+
+      - uses: actions/setup-node@v4
+        with:
+          node-version: "24"
+          cache: pnpm
+
+      - name: Install dependencies
+        run: pnpm install --frozen-lockfile
+
+      - name: Typecheck
+        run: pnpm -r run typecheck
+
+      - name: Lint
+        run: npx oxlint
+
+      - name: Format check
+        run: npx oxfmt --check .
+
+      - name: Test
+        run: pnpm -r run test
diff --git a/.oxlintrc.json b/.oxlintrc.json
new file mode 100644
index 0000000..ed3225f
--- /dev/null
+++ b/.oxlintrc.json
@@ -0,0 +1,24 @@
+{
+  "$schema": "./node_modules/oxlint/configuration_schema.json",
+  "categories": {
+    "correctness": "error",
+    "suspicious": "warn",
+    "perf": "warn",
+    "pedantic": "off",
+    "style": "off",
+    "restriction": "off",
+    "nursery": "off"
+  },
+  "plugins": ["typescript", "unicorn", "import"],
+  "rules": {
+    "no-unused-expressions": "off",
+    "no-await-in-loop": "off",
+    "no-shadow": "warn",
+    "unicorn/consistent-function-scoping": "warn",
+    "unicorn/no-array-sort": "warn",
+    "unicorn/prefer-add-event-listener": "warn",
+    "import/no-unassigned-import": "off",
+    "no-control-regex": "off"
+  },
+  "ignorePatterns": ["dist/", "build/", "node_modules/", "apps/optic/", "*.d.ts", "data/", ".blog/"]
+}
diff --git a/CLAUDE.md b/CLAUDE.md
index 4376490..7c14418 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -23,7 +23,7 @@ sprawl/
 ## Architecture
 
 ```
-┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐
+┌──────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐
 │  Construct   │  │   Cortex    │  │   Synapse   │  │    Deck     │  │    Optic    │
 │  (agent)     │  │  (ingest)   │  │  (trading)  │  │  (web UI)   │  │  (TUI)      │
 └──────┬───────┘  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘
@@ -36,7 +36,7 @@ sprawl/
             │  promote → graph        │                    │                │
             ╰────────────┬────────────╯                    │                │
                          │                                 │                │
-            ┌────────────▼─────────────────────────────────▼───┐           │
+            ┌────────────▼─────────────────────────────────▼────┐           │
             │               @repo/db                            │           │
             │          kysely + migrations                      │           │
             └──────────────────┬────────────────────────────────┘           │
@@ -59,282 +59,161 @@ Construct, Cortex, and Deck use Cairn for memory. Synapse reads Cortex's DB dire
 - **Memory**: @repo/cairn (observer/reflector, FTS5, embeddings, graph)
 - **Telegram**: Grammy (long polling)
 - **CLI**: Citty
-- **Scheduler**: Croner (Construct + Cortex + Synapse)
+- **Scheduler**: Croner
 - **Web**: Hono + React + D3-force (Deck)
 - **TUI**: Ratatui + Crossterm (Optic)
-- **Web tools**: Tavily (search), fetch + parsing (read)
-- **Data sources**: CoinGecko (prices), CryptoPanic + CryptoCompare (news)
-- **Logging**: Logtape
+- **Linting**: oxlint (`.oxlintrc.json`)
+- **Formatting**: oxfmt
+- **Pre-commit**: lefthook (`lefthook.yml`)
+- **CI**: GitHub Actions (`.github/workflows/ci.yml`)
 - **Testing**: Vitest
-- **Dynamic tool loading**: jiti (TypeScript without compile step)
+- **Logging**: Logtape
 - **Schemas**: TypeBox (tool parameters), Zod (env validation)
-
-## Key Conventions
-
-- **Tools** follow the `{ name, description, parameters, execute }` pattern with TypeBox schemas
-- **Migrations** are additive only — never drop tables or columns
-- **Self-aware tools** are scoped to `src/`, `cli/`, and `extensions/` — never system files
-- **Self-deploy** requires passing tests first and is rate-limited to 3/hour
-- **Extensions** are user/agent-authored skills (Markdown) and tools (TypeScript) loaded from `EXTENSIONS_DIR`
-- **Tool packs** are semantically selected per message via embeddings (core pack always loads)
+- **Dynamic tool loading**: jiti (TypeScript without compile step)
 
 ## Commands (Justfile)
 
 ```bash
 just                     # List all commands
 
-# Construct
+# Quality gates
+just check               # Run ALL checks: typecheck + lint + fmt-check + test
+just typecheck           # Typecheck all packages
+just lint                # Run oxlint
+just lint-fix            # Auto-fix lint issues
+just fmt                 # Format all files with oxfmt
+just fmt-check           # Check formatting (no writes)
+
+# Test
+just test                # Run all tests (pnpm -r run test)
+just test-construct      # Construct tests only
+just test-cairn          # Cairn tests only
+just test-synapse        # Synapse tests only
+just test-ai             # AI integration tests (requires OPENROUTER_API_KEY)
+
+# Apps
 just dev                 # Construct dev mode (file watching)
 just start <instance>    # Start a named construct instance
 just cli [instance] [args] # Construct CLI
-
-# Cortex
-just cortex-dev          # Cortex dev mode (file watching)
+just cortex-dev          # Cortex dev mode
 just cortex-start        # Cortex production
-just cortex-backfill [days] # Backfill historical data (default: 30)
-just cortex-backfill-news [days]   # Backfill news only
-just cortex-backfill-prices [days] # Backfill prices only
-
-# Synapse
-just synapse-dev         # Synapse dev mode (file watching)
+just synapse-dev         # Synapse dev mode
 just synapse-start       # Synapse production
 just synapse-status      # Print portfolio summary
-
-# Deck
-just deck-dev  # Deck dev mode (memory graph explorer)
-
-# Optic
-just optic [db] [synapse] # Optic TUI (reads cortex + synapse DBs)
-just optic-build         # Build optic release binary
-
-# Test / Typecheck
-just test                # Run all tests
-just test-construct      # Construct tests
-just test-cairn          # Cairn tests
-just test-synapse        # Synapse tests
-just test-ai             # AI integration tests
-just typecheck           # Typecheck all packages
+just deck-dev <instance>  # Deck dev mode
 
 # DB
 just db-migrate [inst]   # Run DB migrations
 ```
 
-## Construct Directory Structure
+## Pre-commit Hooks (lefthook)
 
-```
-apps/construct/src/
-├── agent.ts             # Agent factory, processMessage(), tool registration
-├── system-prompt.ts     # System prompt + identity file injection
-├── main.ts              # Boot (migrations, DB, extensions, scheduler, Telegram)
-├── env.ts               # Zod-validated environment variables
-├── logger.ts            # Logtape logging
-├── cli/
-│   └── index.ts         # Citty CLI (REPL, one-shot, tool invocation)
-├── db/
-│   ├── schema.ts        # Table types
-│   ├── queries.ts       # Query helpers
-│   ├── migrate.ts       # Migration runner
-│   └── migrations/      # 001–008
-├── tools/
-│   ├── packs.ts         # Tool pack selection (embedding-based)
-│   ├── core/            # Always-loaded: memory, schedule, secrets, identity, usage
-│   ├── self/            # Self-modification: read, edit, test, logs, deploy, status, extension_reload
-│   ├── web/             # Web search (Tavily) + web read
-│   └── telegram/        # React, reply-to, pin/unpin, get-pinned
-├── telegram/
-│   ├── bot.ts           # Grammy handlers, queue/threading
-│   ├── format.ts        # Markdown → Telegram HTML
-│   ├── types.ts
-│   └── index.ts
-├── scheduler/
-│   └── index.ts         # Croner reminder daemon
-├── extensions/
-│   ├── loader.ts        # Dynamic tool/skill loader (jiti)
-│   ├── embeddings.ts    # Extension pack embeddings
-│   ├── secrets.ts       # Secrets table + EXT_* sync
-│   └── types.ts
-└── __tests__/           # Integration tests (memory pipelines, graph, context)
-```
-
-## Cortex Directory Structure
-
-```
-apps/cortex/src/
-├── main.ts              # Boot, migrations, token seeding, backfill, daemon loop
-├── env.ts               # Zod env: OPENROUTER_API_KEY, TRACKED_TOKENS, intervals
-├── ingest/
-│   ├── prices.ts        # CoinGecko price fetching
-│   ├── news.ts          # CryptoPanic + CryptoCompare RSS news
-│   └── types.ts
-├── pipeline/
-│   ├── loop.ts          # Croner jobs: prices, news, signals, command queue
-│   ├── analyzer.ts      # LLM signal generation (hybrid recall + graph context)
-│   ├── prompts.ts       # Short/long signal prompt templates
-│   └── backfill.ts      # Historical data backfill
-└── db/
-    ├── schema.ts        # tracked_tokens, price_snapshots, news_items, signals, commands
-    ├── queries.ts
-    └── migrations/
-```
+Runs in parallel on every commit:
 
-## Synapse Directory Structure
+1. `just fmt-check` -- formatting
+2. `just lint` -- oxlint
+3. `just typecheck` -- tsc
 
-```
-apps/synapse/src/
-├── main.ts              # Boot, migrations, portfolio init, executor, daemon loop
-├── env.ts               # Zod env: portfolio config, risk params, position sizing
-├── status.ts            # CLI portfolio summary script
-├── types.ts             # Executor interface (buy/sell -> ExecutionResult)
-├── cortex/
-│   ├── reader.ts        # Read-only access to Cortex DB (signals, prices, tokens)
-│   └── types.ts
-├── engine/
-│   ├── loop.ts          # Croner jobs: signal poll, risk check
-│   ├── executor.ts      # PaperExecutor (simulated fills with slippage + gas)
-│   ├── signal-filter.ts # Confidence thresholds, cooldown, dedup
-│   ├── position-sizer.ts # Kelly-inspired sizing by confidence
-│   ├── risk.ts          # Stop-loss, take-profit, drawdown halt, exposure limits
-│   └── pricing.ts       # Price fetching from Cortex DB
-├── portfolio/
-│   └── tracker.ts       # Position price updates, portfolio recalc, snapshots
-└── db/
-    ├── schema.ts        # positions, trades, signal_log, risk_events, portfolio_state
-    ├── queries.ts
-    └── migrations/
-```
+CI (`.github/workflows/ci.yml`) runs the same checks plus `pnpm -r run test`.
 
-## Deck Directory Structure
+## Conventions
 
-```
-apps/deck/
-├── src/
-│   ├── server.ts        # Hono app: CORS, DB injection, static serving
-│   ├── env.ts           # DATABASE_URL, PORT (4800)
-│   └── routes/
-│       ├── memories.ts  # /api/memories (search, list, detail)
-│       ├── graph.ts     # /api/graph (nodes, edges, traversal)
-│       ├── observations.ts # /api/observations (timeline)
-│       └── stats.ts     # /api/stats (counts)
-└── web/                 # React SPA (Vite)
-    └── src/
-        ├── App.tsx      # Routes: /, /memories, /observations
-        └── components/  # GraphView (D3-force canvas), MemoryBrowser, ObservationTimeline
-```
+### Error Classes
 
-## Optic Structure
+Every app/package defines domain-specific errors in `src/errors.ts`. Pattern:
 
+```typescript
+export class MemoryError extends Error {
+  name = "MemoryError" as const;
+  constructor(message: string, options?: ErrorOptions) {
+    super(message, options);
+  }
+}
 ```
-apps/optic/src/
-├── main.rs              # CLI args, DB connections, terminal setup, event loop
-├── db.rs                # CortexDb + SynapseDb (rusqlite, read-only)
-└── ui.rs                # Ratatui rendering: Market view + Trading view
-```
-
-Two view modes: **Market** (prices, chart, news, signals, graph) and **Trading** (positions, trades, signal log, risk events). Reads Cortex DB for market data, optionally Synapse DB for portfolio. Auto-refreshes every 5s. Keybinds: `q` quit, `Tab` focus, `j/k` scroll, `c` chart cycle, `a` analyze, `1/2` mode switch.
-
-## Shared Packages
 
-### @repo/cairn (`packages/cairn/`)
+Always use `{ cause: originalError }` when wrapping. Errors by package:
 
-Memory substrate shared by Construct, Cortex, and Deck. Provides:
+- **@repo/db**: `DatabaseError`, `MigrationError`
+- **@repo/cairn**: `MemoryError`, `EmbeddingError`, `GraphError`
+- **construct**: `ToolError`, `ExtensionError`, `AgentError`, `ConfigError`
+- **cortex**: `IngestError`, `AnalyzerError`
+- **synapse**: `ExecutionError`, `RiskError`
 
-- **MemoryManager**: Facade for the full pipeline (observer, reflector, promoter, graph)
-- **Observer**: LLM-based message compression into observations (batched, watermarked)
-- **Reflector**: Condenses observations when token budget exceeds threshold
-- **Promoter**: Embedding-deduped promotion of observations to long-term memories
-- **Graph**: Entity/relationship extraction from memories (LLM-powered)
-- **Context**: Observation rendering with priority-based budget eviction
-- **Embeddings**: OpenRouter embedding generation + cosine similarity
-- **DB queries**: Memory CRUD, FTS5 search, hybrid recall, graph queries
+### Testing
 
-```
-packages/cairn/src/
-├── index.ts             # Barrel exports
-├── manager.ts           # MemoryManager class
-├── observer.ts          # observe() - message -> observations
-├── reflector.ts         # reflect() - condense observations
-├── context.ts           # renderObservations(), buildContextWindow()
-├── embeddings.ts        # generateEmbedding(), cosineSimilarity()
-├── tokens.ts            # estimateTokens()
-├── types.ts             # Observation, GraphNode, GraphEdge, etc.
-├── db/
-│   ├── types.ts         # CairnDatabase schema (memories, observations, graph_*)
-│   └── queries.ts       # storeMemory, recallMemories, trackUsage, etc.
-└── graph/
-    ├── index.ts         # processMemoryForGraph() orchestrator
-    ├── extract.ts       # extractEntities() via LLM
-    └── queries.ts       # searchNodes, traverseGraph, upsertNode/Edge
-```
+- **Framework**: Vitest, configs at `vitest.config.ts` per package
+- **Fixtures**: Factory functions in `src/__tests__/fixtures.ts` using the spread pattern:
+  ```typescript
+  createTestSignal({ confidence: 0.9 }); // override only what matters
+  ```
+- **Cairn test DB**: `packages/cairn/src/__tests__/test-db.ts` provides `setupCairnTestDb()` -- creates in-memory SQLite with all cairn tables. Use for any test touching cairn queries.
+- **Construct test DB**: `apps/construct/src/__tests__/fixtures.ts` has `setupDb()` which runs construct migrations against `:memory:`
+- No mocking LLM calls in unit tests -- use synthetic embeddings (16-d vectors with orthogonal topic clusters)
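A sketch of what synthetic embeddings enable (helper names assumed, not the real test utilities):

```typescript
// Each topic owns one axis of a 16-d vector, so different topics are exactly
// orthogonal -- similarity logic is testable without calling an embedding API.
function syntheticEmbedding(topic: number): number[] {
  const v = new Array<number>(16).fill(0);
  v[topic % 16] = 1;
  return v;
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```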
 
-### @repo/db (`packages/db/`)
+### Migrations
 
-Kysely database factory and migration runner shared across all JS apps. Two exports:
-- `createDb(path)` -- creates Kysely instance with node:sqlite dialect
-- `runMigrations(path, migrations)` -- file-based migration runner
+Additive only -- never drop tables or columns. Each app has its own `src/db/migrations/` directory. Pattern:
 
-## Extensions Directory
+```typescript
+import { type Kysely, sql } from "kysely";
 
-Location: `EXTENSIONS_DIR` env var (defaults to `./data` in dev, `$XDG_DATA_HOME/construct/` in prod).
+export async function up(db: Kysely<any>): Promise<void> {
+  await sql`ALTER TABLE observations ADD COLUMN expires_at TEXT`.execute(db);
+}
 
-```
-$EXTENSIONS_DIR/
-├── SOUL.md              # Personality (injected into system prompt)
-├── IDENTITY.md          # Agent metadata: name, type, pronouns
-├── USER.md              # Human context: name, location, preferences
-├── skills/              # Markdown skills (YAML frontmatter + body)
-└── tools/               # TypeScript tools (hot-loaded via jiti)
+export async function down(_db: Kysely<any>): Promise<void> {
+  // Additive only -- no-op
+}
 ```
 
-## Environment Variables
+Numbering: `NNN-description.ts` (e.g. `010-observation-expires-at.ts`).
 
-All env files live in the repo root with the naming convention `.env.` (e.g. `.env.construct`, `.env.cortex`, `.env.synapse`, `.env.deck`). Example files: `.env..example`. The Justfile passes these via `node --env-file=.env.`. All SQLite databases go in `./data/` so apps can share DBs by path.
+### Tools (Construct)
 
-### Construct
+Factory function pattern -- each tool file exports `createXTool(db, ...)`:
 
-**Required**: `OPENROUTER_API_KEY`, `TELEGRAM_BOT_TOKEN`
+```typescript
+const Params = Type.Object({ content: Type.String({ description: "..." }) });
 
-**Optional**:
-- `NODE_ENV` -- `development` | `production` (default: production)
-- `OPENROUTER_MODEL` -- LLM model (default: `google/gemini-3-flash-preview`)
-- `DATABASE_URL` -- SQLite path (default: `./data/construct.db`)
-- `ALLOWED_TELEGRAM_IDS` -- Comma-separated Telegram user IDs
-- `TIMEZONE` -- Agent timezone (default: `UTC`)
-- `TAVILY_API_KEY` -- Web search
-- `LOG_LEVEL` / `LOG_FILE` -- Logging config
-- `PROJECT_ROOT` -- Scope for self-edit tools (default: `.`)
-- `EXTENSIONS_DIR` -- Extensions directory path
-- `EMBEDDING_MODEL` -- Embedding model (default: `qwen/qwen3-embedding-4b`)
-- `MEMORY_WORKER_MODEL` -- Dedicated model for memory workers
-- `EXT_*` -- Synced to secrets table on startup (prefix stripped)
+export function createMemoryStoreTool(db: Kysely<Database>, apiKey?: string) {
+  return {
+    name: "memory_store",
+    description: "...",
+    parameters: Params,
+    execute: async (_toolCallId: string, args: Static<typeof Params>) => {
+      // ...
+      return { output: "Stored.", details: { id: memory.id } };
+    },
+  };
+}
+```
 
-### Cortex
+Tool packs are semantically selected per message via embeddings (core pack always loads).
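A sketch of how that selection could work (threshold value and shapes are assumptions; the real logic lives in `src/tools/packs.ts`):

```typescript
// Assumes unit-length embeddings, so the dot product is cosine similarity.
type ToolPack = { name: string; embedding: number[] };

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function selectPacks(queryEmbedding: number[], packs: ToolPack[], threshold = 0.5): string[] {
  const matched = packs
    .filter((p) => p.name !== "core" && dot(queryEmbedding, p.embedding) >= threshold)
    .map((p) => p.name);
  return ["core", ...matched]; // core always loads, regardless of score
}
```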
 
-**Required**: `OPENROUTER_API_KEY`
+### Environment Variables
 
-**Optional**:
-- `DATABASE_URL` -- SQLite path (default: `./data/cortex.db`)
-- `TRACKED_TOKENS` -- Comma-separated CoinGecko IDs (default: `bitcoin,ethereum`)
-- `CRYPTOPANIC_API_KEY` / `CRYPTOCOMPARE_API_KEY` -- News sources
-- `EMBEDDING_MODEL` / `MEMORY_WORKER_MODEL` / `ANALYZER_MODEL` -- LLM config
-- `PRICE_INTERVAL` / `NEWS_INTERVAL` / `SIGNAL_INTERVAL` -- Seconds between cycles
+Env files: `.env.<app>` in repo root (e.g. `.env.construct`, `.env.cortex`). Examples: `.env.<app>.example`. Justfile passes via `node --env-file=.env.<app>`. All SQLite databases go in `./data/`.
 
-### Synapse
+## Optic (Rust)
 
-**Optional** (all have defaults):
-- `CORTEX_DATABASE_URL` -- Cortex DB to read signals from (default: `./data/cortex.db`)
-- `DATABASE_URL` -- Synapse DB (default: `./data/synapse.db`)
-- `INITIAL_BALANCE_USD` -- Starting paper balance (default: `10000`)
-- `POLL_INTERVAL` / `RISK_CHECK_INTERVAL` -- Loop timing (seconds)
-- `MIN_CONFIDENCE_SHORT` / `MIN_CONFIDENCE_LONG` -- Signal thresholds
-- `MAX_POSITION_PCT` / `MAX_PORTFOLIO_DRAWDOWN_PCT` / `STOP_LOSS_PCT` / `TAKE_PROFIT_PCT` -- Risk params
-- `SLIPPAGE_BPS` / `SIMULATED_GAS_USD` -- Execution simulation
+```
+apps/optic/src/
+├── main.rs   # CLI args, DB connections, terminal setup, event loop
+├── db.rs     # CortexDb + SynapseDb (rusqlite, read-only)
+└── ui.rs     # Ratatui rendering: Market view + Trading view
+```
 
-### Deck
+Two view modes: **Market** (prices, chart, news, signals, graph) and **Trading** (positions, trades, signal log, risk events). Different toolchain -- `cargo build`, not covered by `just check`.
 
-- `DATABASE_URL` -- DB to browse (default: `./data/construct.db`)
-- `PORT` -- Server port (default: `4800`)
+## Extensions (Construct)
 
-### Optic
+Location: `EXTENSIONS_DIR` env var (defaults to `./data` in dev).
 
-- First positional arg or `DATABASE_URL` -- Cortex DB path
-- `--synapse ` or `SYNAPSE_DATABASE_URL` -- Synapse DB path
+```
+$EXTENSIONS_DIR/
+├── SOUL.md       # Personality (injected into system prompt)
+├── IDENTITY.md   # Agent metadata: name, type, pronouns
+├── USER.md       # Human context: name, location, preferences
+├── skills/       # Markdown skills (YAML frontmatter + body)
+└── tools/        # TypeScript tools (hot-loaded via jiti)
+```
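An illustrative skill file (frontmatter field names assumed from the YAML-frontmatter convention above):

```markdown
---
name: daily-summary
description: Summarize the day's conversations each evening
---

When asked for a daily summary, recall today's observations with
`memory_recall`, group them by topic, and reply with a short bullet list.
```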
diff --git a/Justfile b/Justfile
index 084da94..b3a8ac2 100644
--- a/Justfile
+++ b/Justfile
@@ -17,6 +17,27 @@ construct-dev:
 deck-dev instance:
     node --env-file=.env.{{instance}} --import=tsx apps/deck/src/server.ts
 
+# --- Lint / Format ---
+
+# Run oxlint
+lint:
+    npx oxlint
+
+# Fix lint issues
+lint-fix:
+    npx oxlint --fix
+
+# Format all files
+fmt:
+    npx oxfmt --write .
+
+# Check formatting (no writes)
+fmt-check:
+    npx oxfmt --check .
+
+# Run all checks (typecheck + lint + format + test)
+check: typecheck lint fmt-check test
+
 # --- Test ---
 
 # Run all tests
diff --git a/README.md b/README.md
index ad407e2..a193c5c 100644
--- a/README.md
+++ b/README.md
@@ -7,9 +7,9 @@
 ╚══════╝╚═╝     ╚═╝  ╚═╝╚═╝  ╚═╝ ╚══╝╚══╝ ╚══════╝
 ```
 
-> *"The Sprawl was a long strange way, home to millions, most of them sleeping."*
+> _"The Sprawl was a long strange way, home to millions, most of them sleeping."_
 >
-> -- William Gibson, *Neuromancer*
+> -- William Gibson, _Neuromancer_
 
 ---
 
@@ -83,19 +83,19 @@ Terminal trading dashboard. Displays prices, news feeds, signals from Cortex, an
 
 ## Shared ICE
 
-| Package | Description |
-|---|---|
+| Package       | Description                                                                            |
+| ------------- | -------------------------------------------------------------------------------------- |
 | `@repo/cairn` | Memory substrate -- observer/reflector, embeddings, graph extraction, context building |
-| `@repo/db` | Shared Kysely database factory + migration runner |
+| `@repo/db`    | Shared Kysely database factory + migration runner                                      |
 
 ## Construct Toolbox
 
-| Pack | Tools |
-|------|-------|
+| Pack                     | Tools                                                                                                                                                                                       |
+| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **core** (always loaded) | `memory_store`, `memory_recall`, `memory_forget`, `memory_graph`, `schedule_create`, `schedule_list`, `schedule_cancel`, `secret_manage`, `identity_read`, `identity_update`, `usage_stats` |
-| **self** | `self_read_source`, `self_edit_source`, `self_run_tests`, `self_view_logs`, `self_deploy`, `self_status`, `extension_reload` |
-| **web** | `web_search`, `web_read` |
-| **telegram** | `telegram_react`, `telegram_reply_to`, `telegram_pin`, `telegram_unpin`, `telegram_get_pinned` |
+| **self**                 | `self_read_source`, `self_edit_source`, `self_run_tests`, `self_view_logs`, `self_deploy`, `self_status`, `extension_reload`                                                                |
+| **web**                  | `web_search`, `web_read`                                                                                                                                                                    |
+| **telegram**             | `telegram_react`, `telegram_reply_to`, `telegram_pin`, `telegram_unpin`, `telegram_get_pinned`                                                                                              |
 
 Tool packs are semantically selected per message via embedding similarity. Core always loads; others activate when relevant.
 
@@ -257,8 +257,8 @@ sprawl/
 
 ---
 
-> *"Night City was like a deranged experiment in social Darwinism, designed by a bored researcher who kept one thumb permanently on the fast-forward button."*
+> _"Night City was like a deranged experiment in social Darwinism, designed by a bored researcher who kept one thumb permanently on the fast-forward button."_
 >
-> -- William Gibson, *Neuromancer*
+> -- William Gibson, _Neuromancer_
 
 The sprawl remembers. The sprawl watches. The sprawl trades on what it knows. And if you need to see what it sees, there's a deck for that.
diff --git a/apps/construct/CLAUDE.md b/apps/construct/CLAUDE.md
new file mode 100644
index 0000000..66835c8
--- /dev/null
+++ b/apps/construct/CLAUDE.md
@@ -0,0 +1,149 @@
+# Construct
+
+AI braindump companion. Telegram bot + CLI + scheduler backed by memory pipeline.
+
+## Key Files
+
+- `src/agent.ts` -- `processMessage()` pipeline: context assembly, tool selection, LLM call, persistence, async memory
+- `src/main.ts` -- Boot sequence: migrations, DB, extensions, pack embeddings, scheduler, Telegram
+- `src/system-prompt.ts` -- System prompt construction + identity file injection
+- `src/tools/packs.ts` -- `InternalTool` interface, `ToolContext`, semantic tool pack selection
+- `src/memory.ts` -- `ConstructMemoryManager` subclass with construct-specific observer/reflector prompts
+- `src/env.ts` -- Zod-validated env vars
+- `src/errors.ts` -- `ToolError`, `ExtensionError`, `AgentError`, `ConfigError`
+
+## Architecture
+
+```
+Telegram/CLI message
+  → processMessage() (agent.ts)
+    → getOrCreateConversation
+    → ConstructMemoryManager.buildContext (observations + active messages)
+    → recallMemories (FTS + embeddings + graph)
+    → selectAndCreateTools (semantic pack selection via query embedding)
+    → selectSkills (extension skills)
+    → Agent.processMessage (pi-agent-core)
+    → saveMessage (user + assistant)
+    → trackUsage
+    → async: observer → reflector → promoter → graph (cairn pipeline)
+```
+
+## Directory Structure
+
+```
+src/
+├── agent.ts             # processMessage() pipeline
+├── system-prompt.ts     # System prompt + identity file injection
+├── memory.ts            # ConstructMemoryManager subclass
+├── main.ts              # Boot sequence
+├── env.ts               # Zod env validation
+├── errors.ts            # ToolError, ExtensionError, AgentError, ConfigError
+├── logger.ts            # Logtape logging
+├── cli/index.ts         # Citty CLI (REPL, one-shot, tool invocation)
+├── db/
+│   ├── schema.ts        # Construct + Cairn table types (intersection)
+│   ├── queries.ts       # Query helpers
+│   ├── migrate.ts       # Migration runner
+│   └── migrations/      # 001-010
+├── tools/
+│   ├── packs.ts         # Tool pack selection + InternalTool type
+│   ├── core/            # Always-loaded: memory, schedule, secrets, identity, usage
+│   ├── self/            # Self-modification: read, edit, test, logs, deploy, status
+│   ├── web/             # Web search (Tavily) + web read
+│   └── telegram/        # React, reply-to, pin/unpin, get-pinned, ask
+├── telegram/
+│   ├── bot.ts           # Grammy handlers, queue/threading
+│   └── format.ts        # Markdown → Telegram HTML
+├── scheduler/index.ts   # Croner reminder daemon
+└── extensions/
+    ├── loader.ts        # Dynamic tool/skill loader (jiti)
+    ├── embeddings.ts    # Extension pack embeddings
+    └── secrets.ts       # Secrets table + EXT_* sync
+```
+
+## Tool Pattern
+
+Each tool is a factory function in its own file. Returns `InternalTool`:
+
+```typescript
+// src/tools/core/memory-store.ts
+const Params = Type.Object({
+  content: Type.String({ description: "..." }),
+});
+
+export function createMemoryStoreTool(db: Kysely, apiKey?: string) {
+  return {
+    name: "memory_store",
+    description: "...",
+    parameters: Params,
+    execute: async (_toolCallId: string, args: Static<typeof Params>) => {
+      return { output: "Done", details: { id: "..." } };
+    },
+  };
+}
+```
+
+Register in `src/tools/packs.ts` by adding to the appropriate pack (core, self, web, telegram) and importing the factory.
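+
+A hedged sketch of what registration might look like (the pack shape and entry names here are illustrative; see `src/tools/packs.ts` for the real definition):
+
+```typescript
+// Hypothetical pack entry -- shape assumed, not copied from packs.ts.
+import { createMemoryStoreTool } from "./core/memory-store";
+
+const corePack = {
+  name: "core",
+  // Entries are factories so tools can receive db/apiKey when instantiated
+  tools: [createMemoryStoreTool /* , ...other core factories */],
+};
+```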
+
+## Adding a New Tool
+
+1. Create `src/tools/<pack>/my-tool.ts` with a factory function
+2. Define params with `Type.Object({...})` (TypeBox)
+3. Import and add to pack array in `src/tools/packs.ts`
+4. Tool description matters -- it's used for semantic pack selection
+
+## Adding a Migration
+
+1. Create `src/db/migrations/NNN-description.ts` (next number after 010)
+2. Export `up(db: Kysely)` and `down(db: Kysely)`
+3. Import in `src/db/migrate.ts` and add to the migrations array
+4. Update `src/db/schema.ts` with new column types
+5. Run: `just db-migrate construct`
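+
+A minimal sketch of a migration file, assuming the runner passes a Kysely instance (the file name, table, and column here are hypothetical):
+
+```typescript
+// src/db/migrations/011-example.ts (illustrative only)
+import type { Kysely } from "kysely";
+
+export async function up(db: Kysely<any>): Promise<void> {
+  // Additive only, per convention: add columns, never drop existing ones
+  await db.schema.alterTable("memories").addColumn("example", "text").execute();
+}
+
+export async function down(db: Kysely<any>): Promise<void> {
+  // down() only reverses what this migration's up() added
+  await db.schema.alterTable("memories").dropColumn("example").execute();
+}
+```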
+
+## Testing
+
+```bash
+just test-construct       # All construct tests
+just test-ai              # AI integration tests (needs OPENROUTER_API_KEY)
+```
+
+- Fixtures: `src/__tests__/fixtures.ts`
+  - `setupDb()` -- in-memory DB with all migrations applied
+  - `seedMemories()`, `seedGraph()`, `seedObservations()` -- populate test data
+  - `memoryEmbeddings` / `queryEmbeddings` -- synthetic 16-d orthogonal vectors
+  - `createTestMessage()`, `createTestObservation()`, `createTestProcessOpts()`, `createTestAgentResponse()`
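+
+A sketch of how the fixtures compose in a test (the fixture names come from `fixtures.ts`; the code under test is elided):
+
+```typescript
+import { describe, it } from "vitest";
+import { setupDb, seedMemories } from "./fixtures";
+
+describe("memory recall", () => {
+  it("returns seeded memories", async () => {
+    const db = await setupDb(); // in-memory DB, all migrations applied
+    await seedMemories(db); // populate test data
+    // ...invoke the code under test against `db` and assert on the result
+  });
+});
+```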
+
+## Logging
+
+Logtape loggers by category:
+
+| Logger         | Category                     |
+| -------------- | ---------------------------- |
+| `log`          | `['construct']`              |
+| `agentLog`     | `['construct', 'agent']`     |
+| `toolLog`      | `['construct', 'tool']`      |
+| `telegramLog`  | `['construct', 'telegram']`  |
+| `schedulerLog` | `['construct', 'scheduler']` |
+| `dbLog`        | `['construct', 'db']`        |
+
+- **Console sink**: always active, custom formatter
+- **File sink**: active when `LOG_FILE` is set, swappable `WriteStream` for runtime rotation
+- **Rotation**: on startup if log > 5 MB; manual via `self_system_status` tool with `rotate_logs: true`; keeps up to 3 archived files (`.log.1`, `.log.2`, `.log.3`)
+
+## Environment Variables
+
+File: `.env.construct`
+
+**Required**: `OPENROUTER_API_KEY`, `TELEGRAM_BOT_TOKEN`
+
+**Key optional**:
+
+- `OPENROUTER_MODEL` -- LLM model (default: `google/gemini-3-flash-preview`)
+- `DATABASE_URL` -- SQLite path (default: `./data/construct.db`)
+- `ALLOWED_TELEGRAM_IDS` -- Comma-separated Telegram user IDs
+- `TIMEZONE` -- Agent timezone (default: `UTC`)
+- `TAVILY_API_KEY` -- Web search
+- `EXTENSIONS_DIR` -- Extensions directory path
+- `EMBEDDING_MODEL` -- default: `qwen/qwen3-embedding-4b`
+- `MEMORY_WORKER_MODEL` -- Dedicated model for observer/reflector
+- `EXT_*` -- Synced to secrets table on startup (prefix stripped)
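+
+The `EXT_*` sync can be pictured as follows (a sketch of the idea only; the real logic lives in `src/extensions/secrets.ts`):
+
+```typescript
+// Hypothetical illustration of prefix stripping: EXT_FOO_TOKEN -> FOO_TOKEN
+const extEntries = Object.entries(process.env)
+  .filter(([key]) => key.startsWith("EXT_"))
+  .map(([key, value]) => [key.slice("EXT_".length), value] as const);
+// Each entry is then upserted into the secrets table with source = 'env'
+```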
diff --git a/apps/construct/README.md b/apps/construct/README.md
index e20328a..dc4a6cd 100644
--- a/apps/construct/README.md
+++ b/apps/construct/README.md
@@ -7,7 +7,7 @@
  ╚═════╝ ╚═════╝ ╚═╝  ╚═══╝╚══════╝   ╚═╝   ╚═╝  ╚═╝ ╚═════╝  ╚═════╝   ╚═╝
 ```
 
-> *The sky above the port was the color of television, tuned to a dead channel.*
+> _The sky above the port was the color of television, tuned to a dead channel._
 >
 > But down here, in the warm hum of silicon, something remembers.
 
@@ -44,17 +44,17 @@ Two jacks into the matrix. One signal path. Everything flows through `processMes
 
 ## Cyberspace Deck (Tech Stack)
 
-| Layer | ICE |
-|---|---|
-| **Runtime** | Node.js + tsx (runs on ARMv7 -- even the cheapest deck) |
-| **Agent Core** | `@mariozechner/pi-agent-core` |
-| **LLM Uplink** | OpenRouter (OpenAI-compatible wire protocol) |
-| **Flatline DB** | SQLite via `node:sqlite` + Kysely |
-| **Comms** | Grammy (Telegram long polling) |
-| **Terminal** | Citty (CLI REPL, one-shot, direct tool invocation) |
-| **Cron Daemon** | Croner (reminder scheduling) |
-| **Test Rig** | Vitest |
-| **Hot Load** | jiti (TypeScript tools, no compile step) |
+| Layer           | ICE                                                     |
+| --------------- | ------------------------------------------------------- |
+| **Runtime**     | Node.js + tsx (runs on ARMv7 -- even the cheapest deck) |
+| **Agent Core**  | `@mariozechner/pi-agent-core`                           |
+| **LLM Uplink**  | OpenRouter (OpenAI-compatible wire protocol)            |
+| **Flatline DB** | SQLite via `node:sqlite` + Kysely                       |
+| **Comms**       | Grammy (Telegram long polling)                          |
+| **Terminal**    | Citty (CLI REPL, one-shot, direct tool invocation)      |
+| **Cron Daemon** | Croner (reminder scheduling)                            |
+| **Test Rig**    | Vitest                                                  |
+| **Hot Load**    | jiti (TypeScript tools, no compile step)                |
 
 ## Neural Map
 
@@ -116,17 +116,17 @@ The construct can author its own extensions. It writes tools, loads them, uses t
 
 ## Environment
 
-| Variable | Purpose |
-|---|---|
-| `EXTENSIONS_DIR` | Path to extensions sprawl |
-| `EXT_*` | Auto-synced to secrets table on boot (prefix stripped) |
+| Variable         | Purpose                                                |
+| ---------------- | ------------------------------------------------------ |
+| `EXTENSIONS_DIR` | Path to extensions sprawl                              |
+| `EXT_*`          | Auto-synced to secrets table on boot (prefix stripped) |
 
 ---
 
-> *"He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency,*
-> *jacked into a custom cyberspace deck that projected his disembodied consciousness*
-> *into the consensual hallucination that was the matrix."*
+> _"He'd operated on an almost permanent adrenaline high, a byproduct of youth and proficiency,_
+> _jacked into a custom cyberspace deck that projected his disembodied consciousness_
+> _into the consensual hallucination that was the matrix."_
 >
-> -- William Gibson, *Neuromancer*
+> -- William Gibson, _Neuromancer_
 
 The construct remembers so you don't have to.
diff --git a/apps/construct/docs/agent.md b/apps/construct/docs/agent.md
index 74caef9..d1eba54 100644
--- a/apps/construct/docs/agent.md
+++ b/apps/construct/docs/agent.md
@@ -11,13 +11,13 @@ The agent system is the central intelligence of Construct. It takes a user messa
 
 ## Key Files
 
-| File | Role |
-|------|------|
-| `src/agent.ts` | `processMessage()` -- the main orchestration function |
-| `src/system-prompt.ts` | System prompt construction and context preamble |
-| `src/memory.ts` | `ConstructMemoryManager` -- Construct-specific memory facade (extends `@repo/cairn`) |
-| `src/tools/packs.ts` | Tool pack selection and instantiation |
-| `src/extensions/index.ts` | Extension registry (skills, dynamic tools, identity) |
+| File                      | Role                                                                                 |
+| ------------------------- | ------------------------------------------------------------------------------------ |
+| `src/agent.ts`            | `processMessage()` -- the main orchestration function                                |
+| `src/system-prompt.ts`    | System prompt construction and context preamble                                      |
+| `src/memory.ts`           | `ConstructMemoryManager` -- Construct-specific memory facade (extends `@repo/cairn`) |
+| `src/tools/packs.ts`      | Tool pack selection and instantiation                                                |
+| `src/extensions/index.ts` | Extension registry (skills, dynamic tools, identity)                                 |
 
 Embeddings (`generateEmbedding`, `cosineSimilarity`) are provided by `@repo/cairn`. The core `MemoryManager` class also lives in `@repo/cairn` -- Construct subclasses it as `ConstructMemoryManager` in `src/memory.ts`.
 
@@ -30,10 +30,11 @@ async function processMessage(
   db: Kysely,
   message: string,
   opts: ProcessMessageOpts,
-): Promise<AgentResponse>
+): Promise<AgentResponse>;
 ```
 
 `ProcessMessageOpts` includes:
+
 - `source`: `'telegram'` or `'cli'`
 - `externalId`: The chat or session identifier (e.g., Telegram chat ID)
 - `chatId`: Telegram chat ID (used for tool context)
@@ -76,6 +77,7 @@ Every message is associated with a conversation identified by `(source, external
 A `ConstructMemoryManager` (subclass of Cairn's `MemoryManager`) is instantiated for the conversation with custom observer/reflector prompts and `expires_at` support. It uses the `MEMORY_WORKER_MODEL` env var to configure the worker LLM (if not set, LLM-powered memory features are disabled).
 
 `memoryManager.buildContext()` determines what conversation history the LLM sees:
+
 - **If observations exist**: Rendered observations become a stable text prefix (injected into the context preamble), and only un-observed messages (those after the watermark) are replayed as conversation turns. This keeps the context window bounded as conversations grow.
 - **If no observations yet**: Falls back to loading the last 20 raw messages (original behavior).
 
@@ -84,6 +86,7 @@ See [Memory System](/construct/memory/) for details on how observations are crea
 ### 3. Memory Context
 
 Two types of memories are injected:
+
 - **Recent memories** (10 most recent, regardless of relevance) -- for temporal continuity
 - **Relevant memories** (up to 5, filtered by embedding cosine similarity >= 0.4) -- for semantic relevance
 
@@ -92,6 +95,7 @@ Relevant memories that already appear in the recent set are deduplicated.
 ### 4. Embedding Generation
 
 The user's message is embedded using `generateEmbedding()` from `@repo/cairn`. This calls the OpenRouter embeddings endpoint with the configured `EMBEDDING_MODEL` (default `qwen/qwen3-embedding-4b`). The resulting vector is reused for:
+
 - Memory recall (semantic search)
 - Skill selection
 - Tool pack selection
@@ -126,6 +130,7 @@ The `buildContextPreamble()` function in `src/system-prompt.ts` creates a text b
 ### 7. System Prompt
 
 The system prompt is built by `getSystemPrompt()` in `src/system-prompt.ts`. It concatenates:
+
 1. `BASE_SYSTEM_PROMPT` -- Static rules about behavior, tool usage, Telegram interaction
 2. Identity section from `IDENTITY.md` (if loaded)
 3. User section from `USER.md` (if loaded)
@@ -140,6 +145,7 @@ Tool packs are selected based on embedding similarity (see [Tool System](/constr
 ### 9. Agent Execution
 
 A `pi-agent-core` `Agent` is instantiated with the system prompt and model. Conversation history is replayed via `agent.appendMessage()`. The agent subscribes to events for:
+
 - `message_update` -- Accumulates response text from `text_delta` events
 - `message_end` -- Captures usage statistics
 - `tool_execution_end` -- Records tool call names and results
@@ -149,6 +155,7 @@ The agent is then prompted with `preamble + message` and the function awaits `ag
 ### 10. Persistence and Post-Response Memory
 
 After the agent finishes:
+
 - The user's message is saved (already done before prompting)
 - The assistant's response is saved with any tool call records
 - LLM usage (input/output tokens, cost) is tracked in the `ai_usage` table
@@ -160,16 +167,17 @@ The function returns:
 
 ```typescript
 interface AgentResponse {
-  text: string                                          // The assistant's text response
-  toolCalls: Array<{ name: string; args: unknown; result: string }>  // Tools invoked
-  usage?: { input: number; output: number; cost: number }           // Token usage
-  messageId?: string                                    // Internal DB message ID
+  text: string; // The assistant's text response
+  toolCalls: Array<{ name: string; args: unknown; result: string }>; // Tools invoked
+  usage?: { input: number; output: number; cost: number }; // Token usage
+  messageId?: string; // Internal DB message ID
 }
 ```
 
 ## pi-agent-core Integration
 
 The project uses `@mariozechner/pi-agent-core` as the agent framework and `@mariozechner/pi-ai` for model access. The `Agent` class handles:
+
 - Multi-turn conversation management
 - Tool calling protocol with the LLM
 - Streaming text generation
@@ -183,19 +191,19 @@ The model is obtained via `getModel('openrouter', modelName)` from `@mariozechne
 ```typescript
 // Internal tool format
 interface InternalTool<T extends TSchema> {
-  name: string
-  description: string
-  parameters: T              // TypeBox schema
-  execute: (toolCallId: string, args: unknown) => Promise<{ output: string; details?: unknown }>
+  name: string;
+  description: string;
+  parameters: T; // TypeBox schema
+  execute: (toolCallId: string, args: unknown) => Promise<{ output: string; details?: unknown }>;
 }
 
 // Adapted to pi-agent format
 interface AgentTool<T extends TSchema> {
-  name: string
-  label: string              // name with underscores replaced by spaces
-  description: string
-  parameters: T
-  execute: (toolCallId: string, params: T) => Promise
+  name: string;
+  label: string; // name with underscores replaced by spaces
+  description: string;
+  parameters: T;
+  execute: (toolCallId: string, params: T) => Promise;
 }
 ```
 
diff --git a/apps/construct/docs/cli.md b/apps/construct/docs/cli.md
index 7b32046..7f8b87a 100644
--- a/apps/construct/docs/cli.md
+++ b/apps/construct/docs/cli.md
@@ -11,8 +11,8 @@ The CLI provides a local interface to Construct without requiring Telegram. Buil
 
 ## Key Files
 
-| File | Role |
-|------|------|
+| File           | Role                                                                |
+| -------------- | ------------------------------------------------------------------- |
 | `cli/index.ts` | CLI entry point: command definition, REPL, one-shot, and tool modes |
 
 ## Modes of Operation
@@ -54,6 +54,7 @@ just cli myinstance --tool memory_recall --args '{"query": "work schedule"}'
 Bypasses the agent entirely and invokes a specific tool with JSON arguments. Useful for testing and debugging tools.
 
 When using `--tool` mode:
+
 1. All tools from all packs are loaded (no embedding selection -- `queryEmbedding` is `undefined`)
 2. The named tool is found and executed directly
 3. The raw output is printed
@@ -74,14 +75,14 @@ args: {
 
 ## CLI vs. Telegram
 
-| Aspect | CLI | Telegram |
-|--------|-----|----------|
-| Source | `'cli'` | `'telegram'` |
-| External ID | `'cli'` (fixed) | Chat ID |
-| Telegram tools | Return `null` (no TelegramContext) | Fully functional |
-| Typing indicator | None | Auto-refreshing |
-| Output format | Plain text | Markdown-to-HTML |
-| Self-deploy | Respects `isDev` flag | Respects `isDev` flag |
+| Aspect           | CLI                                | Telegram              |
+| ---------------- | ---------------------------------- | --------------------- |
+| Source           | `'cli'`                            | `'telegram'`          |
+| External ID      | `'cli'` (fixed)                    | Chat ID               |
+| Telegram tools   | Return `null` (no TelegramContext) | Fully functional      |
+| Typing indicator | None                               | Auto-refreshing       |
+| Output format    | Plain text                         | Markdown-to-HTML      |
+| Self-deploy      | Respects `isDev` flag              | Respects `isDev` flag |
 
 ## Startup
 
diff --git a/apps/construct/docs/database.md b/apps/construct/docs/database.md
index 55383d6..1ca684a 100644
--- a/apps/construct/docs/database.md
+++ b/apps/construct/docs/database.md
@@ -11,12 +11,12 @@ Construct uses SQLite for all persistent storage, accessed through Kysely (a typ
 
 ## Key Files
 
-| File | Role |
-|------|------|
-| `src/db/schema.ts` | Construct-specific table types (extends `CairnDatabase`) |
-| `src/db/queries.ts` | All database query functions |
-| `src/db/migrate.ts` | Migration runner |
-| `src/db/migrations/` | Individual migration files (001-010) |
+| File                 | Role                                                     |
+| -------------------- | -------------------------------------------------------- |
+| `src/db/schema.ts`   | Construct-specific table types (extends `CairnDatabase`) |
+| `src/db/queries.ts`  | All database query functions                             |
+| `src/db/migrate.ts`  | Migration runner                                         |
+| `src/db/migrations/` | Individual migration files (001-010)                     |
 
 The `createDb()` function and custom Kysely dialect live in `@repo/db`. See [DB package docs](/db/) for details on the dialect and pragma configuration.
 
@@ -26,16 +26,16 @@ Construct's database includes all Cairn tables plus four app-specific tables:
 
 ### Cairn Tables (from `@repo/cairn`)
 
-| Table | Purpose |
-|-------|---------|
-| `memories` | Long-term facts, preferences, notes with FTS5 + embeddings |
-| `memories_fts` | FTS5 virtual table synced via triggers |
+| Table           | Purpose                                                                          |
+| --------------- | -------------------------------------------------------------------------------- |
+| `memories`      | Long-term facts, preferences, notes with FTS5 + embeddings                       |
+| `memories_fts`  | FTS5 virtual table synced via triggers                                           |
 | `conversations` | Groups messages by source + external ID. Includes observation watermark columns. |
-| `messages` | Individual messages (Construct extends with `telegram_message_id`) |
-| `observations` | Compressed conversation summaries (Construct adds `expires_at`) |
-| `graph_nodes` | Entities extracted from memories |
-| `graph_edges` | Relationships between entities |
-| `ai_usage` | LLM token/cost tracking |
+| `messages`      | Individual messages (Construct extends with `telegram_message_id`)               |
+| `observations`  | Compressed conversation summaries (Construct adds `expires_at`)                  |
+| `graph_nodes`   | Entities extracted from memories                                                 |
+| `graph_edges`   | Relationships between entities                                                   |
+| `ai_usage`      | LLM token/cost tracking                                                          |
 
 See [Cairn docs](/cairn/) for full Cairn schema details.
 
@@ -43,34 +43,34 @@ See [Cairn docs](/cairn/) for full Cairn schema details.
 
 Construct's `messages` table extends Cairn's base with:
 
-| Column | Type | Notes |
-|--------|------|-------|
+| Column                | Type               | Notes                                                     |
+| --------------------- | ------------------ | --------------------------------------------------------- |
 | `telegram_message_id` | integer (nullable) | Telegram message ID for cross-referencing (migration 004) |
 
 ### Construct-Specific: observations (extended)
 
 Construct adds to Cairn's observations table:
 
-| Column | Type | Notes |
-|--------|------|-------|
+| Column       | Type            | Notes                                                                    |
+| ------------ | --------------- | ------------------------------------------------------------------------ |
 | `expires_at` | text (nullable) | ISO datetime; expired observations filtered from context (migration 010) |
 
 ### schedules
 
 Reminders and recurring tasks.
 
-| Column | Type | Notes |
-|--------|------|-------|
-| `id` | text (PK) | nanoid |
-| `description` | text | Human-readable description |
-| `cron_expression` | text (nullable) | Cron string for recurring schedules |
-| `run_at` | text (nullable) | ISO 8601 datetime for one-shot schedules |
-| `message` | text | NOT NULL; stores description as placeholder when using instruction mode |
-| `prompt` | text (nullable) | Agent instruction to execute when fired (migration 008) |
-| `chat_id` | text | Telegram chat ID to deliver to |
-| `active` | integer | 1 = active, 0 = cancelled. Default 1 |
-| `last_run_at` | text (nullable) | Last execution timestamp |
-| `created_at` | text | Auto-set |
+| Column            | Type            | Notes                                                                   |
+| ----------------- | --------------- | ----------------------------------------------------------------------- |
+| `id`              | text (PK)       | nanoid                                                                  |
+| `description`     | text            | Human-readable description                                              |
+| `cron_expression` | text (nullable) | Cron string for recurring schedules                                     |
+| `run_at`          | text (nullable) | ISO 8601 datetime for one-shot schedules                                |
+| `message`         | text            | NOT NULL; stores description as placeholder when using instruction mode |
+| `prompt`          | text (nullable) | Agent instruction to execute when fired (migration 008)                 |
+| `chat_id`         | text            | Telegram chat ID to deliver to                                          |
+| `active`          | integer         | 1 = active, 0 = cancelled. Default 1                                    |
+| `last_run_at`     | text (nullable) | Last execution timestamp                                                |
+| `created_at`      | text            | Auto-set                                                                |
 
 Index: `idx_schedules_active`
 
@@ -78,56 +78,56 @@ Index: `idx_schedules_active`
 
 Key-value store for application settings.
 
-| Column | Type | Notes |
-|--------|------|-------|
-| `key` | text (PK) | Setting name |
-| `value` | text | Setting value |
-| `updated_at` | text | Auto-set |
+| Column       | Type      | Notes         |
+| ------------ | --------- | ------------- |
+| `key`        | text (PK) | Setting name  |
+| `value`      | text      | Setting value |
+| `updated_at` | text      | Auto-set      |
 
 ### secrets
 
 Stores API keys and tokens for extensions.
 
-| Column | Type | Notes |
-|--------|------|-------|
-| `key` | text (PK) | Secret name |
-| `value` | text | Secret value |
-| `source` | text | `'agent'` or `'env'`. Default `'agent'` |
-| `created_at` | text | Auto-set |
-| `updated_at` | text | Auto-set |
+| Column       | Type      | Notes                                   |
+| ------------ | --------- | --------------------------------------- |
+| `key`        | text (PK) | Secret name                             |
+| `value`      | text      | Secret value                            |
+| `source`     | text      | `'agent'` or `'env'`. Default `'agent'` |
+| `created_at` | text      | Auto-set                                |
+| `updated_at` | text      | Auto-set                                |
 
 ### pending_asks
 
 Tracks interactive questions sent to users via Telegram (used by `telegram_ask` tool).
 
-| Column | Type | Notes |
-|--------|------|-------|
-| `id` | text (PK) | nanoid |
-| `conversation_id` | text | References conversations |
-| `chat_id` | text | Telegram chat ID |
-| `question` | text | The question text |
-| `options` | text (nullable) | JSON array of option strings |
+| Column                | Type               | Notes                          |
+| --------------------- | ------------------ | ------------------------------ |
+| `id`                  | text (PK)          | nanoid                         |
+| `conversation_id`     | text               | References conversations       |
+| `chat_id`             | text               | Telegram chat ID               |
+| `question`            | text               | The question text              |
+| `options`             | text (nullable)    | JSON array of option strings   |
 | `telegram_message_id` | integer (nullable) | Telegram message ID of the ask |
-| `created_at` | text | Auto-set |
-| `resolved_at` | text (nullable) | When the user responded |
-| `response` | text (nullable) | The user's response |
+| `created_at`          | text               | Auto-set                       |
+| `resolved_at`         | text (nullable)    | When the user responded        |
+| `response`            | text (nullable)    | The user's response            |
 
 ## Migrations
 
 Migrations use `@repo/db`'s migration runner. Each migration exports `up()` and optionally `down()`.
 
-| Migration | Description |
-|-----------|-------------|
-| `001-initial.ts` | Base tables: memories, conversations, messages, schedules, ai_usage, settings |
-| `002-fts5-and-embeddings.ts` | FTS5 virtual table, sync triggers, `embedding` column on memories |
-| `003-secrets.ts` | Creates the secrets table |
-| `004-telegram-message-ids.ts` | Adds `telegram_message_id` column and index to messages |
-| `005-graph-memory.ts` | Creates `graph_nodes` and `graph_edges` tables |
-| `006-observational-memory.ts` | Creates `observations` table, adds watermark columns to conversations |
-| `007-observation-promoted-at.ts` | Adds `promoted_at` column to observations (for promoter tracking) |
-| `008-schedule-prompt.ts` | Adds `prompt` column to schedules (agent instruction mode) |
-| `009-pending-asks.ts` | Creates `pending_asks` table |
-| `010-observation-expires-at.ts` | Adds `expires_at` column to observations |
+| Migration                        | Description                                                                   |
+| -------------------------------- | ----------------------------------------------------------------------------- |
+| `001-initial.ts`                 | Base tables: memories, conversations, messages, schedules, ai_usage, settings |
+| `002-fts5-and-embeddings.ts`     | FTS5 virtual table, sync triggers, `embedding` column on memories             |
+| `003-secrets.ts`                 | Creates the secrets table                                                     |
+| `004-telegram-message-ids.ts`    | Adds `telegram_message_id` column and index to messages                       |
+| `005-graph-memory.ts`            | Creates `graph_nodes` and `graph_edges` tables                                |
+| `006-observational-memory.ts`    | Creates `observations` table, adds watermark columns to conversations         |
+| `007-observation-promoted-at.ts` | Adds `promoted_at` column to observations (for promoter tracking)             |
+| `008-schedule-prompt.ts`         | Adds `prompt` column to schedules (agent instruction mode)                    |
+| `009-pending-asks.ts`            | Creates `pending_asks` table                                                  |
+| `010-observation-expires-at.ts`  | Adds `expires_at` column to observations                                      |
 
 Convention: **additive only** -- never drop tables or columns.
 
diff --git a/.docs/guides/deployment.md b/apps/construct/docs/deployment.md
similarity index 85%
rename from .docs/guides/deployment.md
rename to apps/construct/docs/deployment.md
index 45a07c3..143408d 100644
--- a/.docs/guides/deployment.md
+++ b/apps/construct/docs/deployment.md
@@ -1,6 +1,9 @@
-# Deployment Guide
+---
+title: Deployment Guide
+description: Docker and systemd deployment
+---
 
-*Last updated: 2026-02-25 -- Added Docker deployment as primary method*
+# Deployment Guide
 
 ## Overview
 
@@ -8,13 +11,13 @@ Construct can be deployed via Docker (recommended) or as a bare-metal systemd se
 
 ## Key Files
 
-| File | Role |
-|------|------|
-| `deploy/Dockerfile` | Multi-stage Docker build |
-| `deploy/docker-compose.yml` | Compose configuration with volume mounts and env |
-| `.dockerignore` | Excludes build artifacts, secrets, and dev files |
-| `src/tools/self/self-deploy.ts` | Self-deploy tool (Docker-aware) |
-| `src/main.ts` | Application entry point |
+| File                            | Role                                             |
+| ------------------------------- | ------------------------------------------------ |
+| `deploy/Dockerfile`             | Multi-stage Docker build                         |
+| `deploy/docker-compose.yml`     | Compose configuration with volume mounts and env |
+| `.dockerignore`                 | Excludes build artifacts, secrets, and dev files |
+| `src/tools/self/self-deploy.ts` | Self-deploy tool (Docker-aware)                  |
+| `src/main.ts`                   | Application entry point                          |
 
 ## Docker Deployment (Primary)
 
@@ -39,7 +42,7 @@ OPENROUTER_API_KEY=sk-or-v1-...
 TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
 ```
 
-See [Environment Configuration](./environment.md) for all available variables. Note that `DATABASE_URL`, `LOG_FILE`, and `EXTENSIONS_DIR` are pre-set in the Dockerfile to point to `/data/` paths, so you do not need to set them in your `.env` file.
+See the Environment Variables section in `CLAUDE.md` for all available variables. Note that `DATABASE_URL`, `LOG_FILE`, and `EXTENSIONS_DIR` are pre-set in the Dockerfile to point to `/data/` paths, so you do not need to set them in your `.env` file.
 
 ### 2. Build and Run
 
@@ -50,6 +53,7 @@ docker compose -f deploy/docker-compose.yml up -d --build
 ```
 
 This will:
+
 1. Build a multi-stage image using `node:22-alpine`
 2. Install dependencies in a builder stage, then copy only `node_modules` to the runtime stage
 3. Install `git` in the runtime stage (required for self-deploy commits)
@@ -89,14 +93,16 @@ The `.env` file is read by Docker Compose via `env_file:` -- it is injected as e
 The Dockerfile (`deploy/Dockerfile`) uses a two-stage build:
 
 **Builder stage** -- installs dependencies:
+
 ```dockerfile
 FROM node:22-alpine AS builder
 WORKDIR /app
-COPY package.json package-lock.json ./
-RUN npm ci
+COPY package.json pnpm-lock.yaml ./
+RUN pnpm install --frozen-lockfile
 ```
 
 **Runtime stage** -- copies dependencies and source:
+
 ```dockerfile
 FROM node:22-alpine
 WORKDIR /app
@@ -108,6 +114,7 @@ COPY cli/ ./cli/
 ```
 
 Environment defaults baked into the image:
+
 - `DATABASE_URL=/data/construct.db`
 - `LOG_FILE=/data/construct.log`
 - `EXTENSIONS_DIR=/data/extensions`
@@ -132,6 +139,7 @@ services:
 ```
 
 Key points:
+
 - Build context is `..` (project root), since the compose file lives in `deploy/`
 - `restart: unless-stopped` is critical for the self-deploy mechanism (see below)
 
@@ -188,7 +196,7 @@ For bare-metal deployment without Docker:
 ```bash
git clone <repo-url> /opt/construct
 cd /opt/construct
-npm ci
+pnpm install --frozen-lockfile
 ```
 
 ### 2. Create Environment File
@@ -228,11 +236,3 @@ The self-deploy tool expects the systemd unit to be named `construct` by default
 ## Rate Limiting
 
 Self-deploy is rate-limited to 3 deploys per hour in both Docker and systemd modes. The rate limit is tracked in-memory (`deployHistory` array in `self-deploy.ts`), so it resets on process restart.
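
The sliding-window check can be sketched as follows (a simplified model with illustrative names; the actual bookkeeping in `self-deploy.ts` may differ):

```typescript
// Illustrative sketch of the in-memory deploy rate limit described above.
// Names and structure are hypothetical; see self-deploy.ts for the real logic.
const deployHistory: number[] = []; // timestamps of recent deploys, oldest first
const MAX_DEPLOYS = 3;
const WINDOW_MS = 60 * 60 * 1000; // one hour

function canDeploy(now = Date.now()): boolean {
  // Drop entries that have aged out of the one-hour window.
  while (deployHistory.length > 0 && now - deployHistory[0] > WINDOW_MS) {
    deployHistory.shift();
  }
  if (deployHistory.length >= MAX_DEPLOYS) return false;
  deployHistory.push(now);
  return true;
}
```

Because the array lives in process memory, a restart clears it, which is exactly the reset-on-restart behavior noted above.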
-
-## Related Documentation
-
-- [Environment Configuration](./environment.md) -- All environment variables
-- [Development Workflow](./development.md) -- Dev mode, npm scripts, testing
-- [Security Considerations](./security.md) -- Self-deploy safety gates, secrets, Docker security
-- [Tool System](./../features/tools.md) -- Self-modification tools
-- [Architecture Overview](./../architecture/overview.md) -- System startup sequence
diff --git a/apps/construct/docs/extensions.md b/apps/construct/docs/extensions.md
index 949fbe2..b99404f 100644
--- a/apps/construct/docs/extensions.md
+++ b/apps/construct/docs/extensions.md
@@ -13,13 +13,13 @@ The extension system also manages three **identity files** (SOUL.md, IDENTITY.md
 
 ## Key Files
 
-| File | Role |
-|------|------|
-| `src/extensions/index.ts` | Singleton registry, `initExtensions()`, `reloadExtensions()`, selection helpers |
-| `src/extensions/loader.ts` | File loading: identity files, skills (Markdown), dynamic tools (TypeScript via jiti) |
-| `src/extensions/embeddings.ts` | Embedding caches for skills and dynamic packs, selection functions |
-| `src/extensions/secrets.ts` | Secret management: store, get, list, delete, env sync, secrets map builder |
-| `src/extensions/types.ts` | TypeScript interfaces for Skill, DynamicToolExport, ExtensionRegistry, etc. |
+| File                           | Role                                                                                 |
+| ------------------------------ | ------------------------------------------------------------------------------------ |
+| `src/extensions/index.ts`      | Singleton registry, `initExtensions()`, `reloadExtensions()`, selection helpers      |
+| `src/extensions/loader.ts`     | File loading: identity files, skills (Markdown), dynamic tools (TypeScript via jiti) |
+| `src/extensions/embeddings.ts` | Embedding caches for skills and dynamic packs, selection functions                   |
+| `src/extensions/secrets.ts`    | Secret management: store, get, list, delete, env sync, secrets map builder           |
+| `src/extensions/types.ts`      | TypeScript interfaces for Skill, DynamicToolExport, ExtensionRegistry, etc.          |
 
 ## Extensions Directory Layout
 
@@ -41,6 +41,7 @@ $EXTENSIONS_DIR/
 ```
 
 The default `EXTENSIONS_DIR` is:
+
 - **Development**: `./data` (relative to project root)
 - **Production**: `$XDG_DATA_HOME/construct/` (typically `~/.local/share/construct/`)
 
@@ -48,15 +49,16 @@ The default `EXTENSIONS_DIR` is:
 
 Three Markdown files injected into the system prompt:
 
-| File | Purpose | System Prompt Section |
-|------|---------|----------------------|
-| `SOUL.md` | Personality traits, values, communication anti-patterns | `## Soul` |
-| `IDENTITY.md` | Name, creature type, visual description, pronouns | `## Identity` |
-| `USER.md` | Human's name, location, preferences, interests, schedule | `## User` |
+| File          | Purpose                                                  | System Prompt Section |
+| ------------- | -------------------------------------------------------- | --------------------- |
+| `SOUL.md`     | Personality traits, values, communication anti-patterns  | `## Soul`             |
+| `IDENTITY.md` | Name, creature type, visual description, pronouns        | `## Identity`         |
+| `USER.md`     | Human's name, location, preferences, interests, schedule | `## User`             |
 
 These are loaded by `loadIdentityFiles()` in `src/extensions/loader.ts` and stored in the `ExtensionRegistry.identity` field. They are read/written by the `identity_read` and `identity_update` tools.
 
 When an identity file is updated via `identity_update`, the tool:
+
 1. Writes the new content to disk
 2. Calls `invalidateSystemPromptCache()` to clear the cached system prompt
 3. Calls `reloadExtensions()` to refresh the registry
@@ -89,13 +91,13 @@ When the user asks for a standup or morning briefing:
 
 ### Frontmatter Fields
 
-| Field | Required | Description |
-|-------|:---:|-------------|
-| `name` | Yes | Unique skill name |
-| `description` | Yes | Short description (used for embedding) |
-| `requires.secrets` | No | Secret keys that must exist in the `secrets` table |
-| `requires.env` | No | Environment variables that must be set |
-| `requires.bins` | No | Binary executables needed (logged but not enforced) |
+| Field              | Required | Description                                         |
+| ------------------ | :------: | --------------------------------------------------- |
+| `name`             |   Yes    | Unique skill name                                   |
+| `description`      |   Yes    | Short description (used for embedding)              |
+| `requires.secrets` |    No    | Secret keys that must exist in the `secrets` table  |
+| `requires.env`     |    No    | Environment variables that must be set              |
+| `requires.bins`    |    No    | Binary executables needed (logged but not enforced) |
 
 ### How Skills Are Selected
 
@@ -118,6 +120,7 @@ When the user asks for a standup...
 ### Requirement Checking
 
 `checkRequirements()` in `src/extensions/loader.ts` validates:
+
 - `requires.env` -- checks `process.env`
 - `requires.secrets` -- checks against available secrets from the database
 - `requires.bins` -- logged only (not enforced)
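
A sketch of what this validation might look like (hypothetical signature and return shape; the real `checkRequirements()` in `src/extensions/loader.ts` may differ):

```typescript
// Hypothetical sketch of requirement checking for skills and dynamic tools.
interface Requirements {
  env?: string[];
  secrets?: string[];
  bins?: string[];
}

function checkRequirements(
  req: Requirements,
  availableSecrets: Set<string>,
  env: Record<string, string | undefined> = process.env,
): { ok: boolean; missing: string[] } {
  const missing: string[] = [];
  for (const key of req.env ?? []) {
    if (!env[key]) missing.push(`env:${key}`);
  }
  for (const key of req.secrets ?? []) {
    if (!availableSecrets.has(key)) missing.push(`secret:${key}`);
  }
  // req.bins is only logged by the real loader, never enforced.
  return { ok: missing.length === 0, missing };
}
```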
@@ -133,31 +136,32 @@ Dynamic tools are TypeScript files under `$EXTENSIONS_DIR/tools/`. They are load
 A dynamic tool file must export:
 
 ```typescript
-import { Type, type Static } from '@sinclair/typebox'
+import { Type, type Static } from "@sinclair/typebox";
 
 // Optional: declare requirements
 export const meta = {
   requires: {
-    secrets: ['OPENWEATHERMAP_API_KEY'],
+    secrets: ["OPENWEATHERMAP_API_KEY"],
   },
-}
+};
 
 // Default export: either a tool object or a factory function
 export default (ctx: DynamicToolContext) => ({
-  name: 'weather_current',
-  description: 'Get current weather for a location',
+  name: "weather_current",
+  description: "Get current weather for a location",
   parameters: Type.Object({
-    location: Type.String({ description: 'City name' }),
+    location: Type.String({ description: "City name" }),
   }),
   execute: async (_id: string, args: { location: string }) => {
-    const apiKey = ctx.secrets.get('OPENWEATHERMAP_API_KEY')
+    const apiKey = ctx.secrets.get("OPENWEATHERMAP_API_KEY");
     // ... fetch weather ...
-    return { output: `Weather in ${args.location}: ...` }
+    return { output: `Weather in ${args.location}: ...` };
   },
-})
+});
 ```
 
 The default export can be:
+
 - A **factory function** `(ctx: DynamicToolContext) => InternalTool` -- receives secrets and context
 - A **plain tool object** `InternalTool` -- for tools that don't need secrets
 
@@ -165,7 +169,7 @@ The default export can be:
 
 ```typescript
 interface DynamicToolContext {
-  secrets: Map<string, string>   // All secrets from the secrets table
+  secrets: Map<string, string>; // All secrets from the secrets table
 }
 ```
 
@@ -198,9 +202,9 @@ The singleton registry holds all loaded extension data:
 
 ```typescript
 interface ExtensionRegistry {
-  identity: IdentityFiles       // { soul, identity, user } -- string | null each
-  skills: Skill[]               // Parsed skill objects
-  dynamicPacks: ToolPack[]      // Dynamic tool packs (same ToolPack type as builtins)
+  identity: IdentityFiles; // { soul, identity, user } -- string | null each
+  skills: Skill[]; // Parsed skill objects
+  dynamicPacks: ToolPack[]; // Dynamic tool packs (same ToolPack type as builtins)
 }
 ```
 
diff --git a/apps/construct/docs/index.md b/apps/construct/docs/index.md
index 1ef59bc..d66460f 100644
--- a/apps/construct/docs/index.md
+++ b/apps/construct/docs/index.md
@@ -71,14 +71,15 @@ Two-layer design for prompt caching:
 
 Tools are organized into **packs** -- groups selected per message by embedding similarity.
 
-| Pack | Always loaded | Tools |
-|------|---------------|-------|
-| `core` | Yes | memory_store, memory_recall, memory_forget, memory_graph, schedule_create, schedule_list, schedule_cancel, secret_store, secret_list, secret_delete, usage_stats, identity_read, identity_update |
-| `web` | No | web_read, web_search (requires `TAVILY_API_KEY`) |
-| `self` | No | self_read, self_edit, self_test, self_logs, self_deploy (prod only), self_status, extension_reload |
-| `telegram` | Yes (when ctx) | telegram_react, telegram_reply_to, telegram_pin, telegram_unpin, telegram_get_pinned, telegram_ask |
+| Pack       | Always loaded  | Tools                                                                                                                                                                                            |
+| ---------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `core`     | Yes            | memory_store, memory_recall, memory_forget, memory_graph, schedule_create, schedule_list, schedule_cancel, secret_store, secret_list, secret_delete, usage_stats, identity_read, identity_update |
+| `web`      | No             | web_read, web_search (requires `TAVILY_API_KEY`)                                                                                                                                                 |
+| `self`     | No             | self_read, self_edit, self_test, self_logs, self_deploy (prod only), self_status, extension_reload                                                                                               |
+| `telegram` | Yes (when ctx) | telegram_react, telegram_reply_to, telegram_pin, telegram_unpin, telegram_get_pinned, telegram_ask                                                                                               |
 
 Selection algorithm:
+
 1. At startup, `initPackEmbeddings()` embeds each non-`alwaysLoad` pack's description
 2. Per message, the query embedding (from step 4 of processMessage) is compared against pack embeddings via cosine similarity
 3. Packs above threshold (0.3) are included. `alwaysLoad` packs always included
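
The selection step can be sketched as follows (helper names here are illustrative, not the actual exports of `src/tools/packs.ts`):

```typescript
// Illustrative sketch of embedding-based pack selection.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface PackEmbedding {
  name: string;
  alwaysLoad: boolean;
  embedding: number[];
}

function selectPacks(query: number[], packs: PackEmbedding[], threshold = 0.3): string[] {
  return packs
    .filter((p) => p.alwaysLoad || cosineSimilarity(query, p.embedding) >= threshold)
    .map((p) => p.name);
}
```
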
@@ -106,6 +107,7 @@ Grammy bot with long polling. Key behaviors:
 Croner-based reminder system. All schedules run through the full `processMessage()` pipeline with tool access, memory, and reasoning.
 
 Mechanics:
+
 - **Cron** -- Recurring schedules via cron expressions (with timezone support)
 - **One-shot** -- `run_at` timestamp; auto-cancelled after firing. Past-due one-shots fire immediately.
 - **Sync loop** -- Every 30s, polls the `schedules` table for new/cancelled entries and updates the in-memory job map
@@ -127,16 +129,19 @@ All modes run migrations, create a DB connection, and go through `processMessage
 User/agent-authored capabilities loaded from `EXTENSIONS_DIR`.
 
 **Identity files** (root of extensions dir):
+
 - `SOUL.md` -- Personality traits, values, communication style
 - `IDENTITY.md` -- Agent metadata: name, creature type, pronouns
 - `USER.md` -- Human context: name, location, preferences
 
 **Skills** (`skills/` subdir):
+
 - Markdown files with YAML frontmatter (`name`, `description`, optional `requires`)
 - Body injected into context preamble when selected by embedding similarity
 - Not tools -- they are instructions the agent follows
 
 **Dynamic tools** (`tools/` subdir):
+
 - TypeScript files loaded at runtime via jiti (no compile step)
 - Single `.ts` file = standalone pack; directory of `.ts` files = grouped pack
 - Export `{ name, description, parameters, execute }` (or factory function receiving `DynamicToolContext`)
@@ -144,6 +149,7 @@ User/agent-authored capabilities loaded from `EXTENSIONS_DIR`.
 - `node_modules` symlinked from project root for import resolution
 
 **Lifecycle**:
+
 1. `initExtensions()` at startup: create dirs, load everything, compute embeddings
 2. `extension_reload` tool: re-reads all files, rebuilds registry, recomputes embeddings
 3. Selection per message: skills and dynamic packs filtered by embedding similarity (same query embedding)
@@ -152,31 +158,31 @@ User/agent-authored capabilities loaded from `EXTENSIONS_DIR`.
 
 ## Key files
 
-| File | Role |
-|------|------|
-| `src/main.ts` | Entry point, boot sequence, graceful shutdown |
-| `src/agent.ts` | `processMessage()` pipeline, `AgentResponse` type, pi-agent adaptation |
-| `src/system-prompt.ts` | Base system prompt, identity injection, context preamble builder |
-| `src/env.ts` | Zod-validated environment config |
-| `src/logger.ts` | Logtape logging setup |
-| `src/cli/index.ts` | CLI: REPL, one-shot, tool invoke, reembed, backfill |
-| `src/telegram/bot.ts` | Grammy bot, authorization, queueing, reply threading, typing |
-| `src/telegram/format.ts` | Markdown-to-Telegram-HTML conversion |
-| `src/telegram/types.ts` | `TelegramContext`, `TelegramSideEffects` |
-| `src/scheduler/index.ts` | Croner scheduler, static/agent execution, sync loop |
-| `src/tools/packs.ts` | Tool pack definitions, embedding selection, `InternalTool` interface |
-| `src/tools/core/` | Memory, schedule, secret, identity, usage tools |
-| `src/tools/self/` | self_read, self_edit, self_test, self_logs, self_deploy, self_status, extension_reload |
-| `src/tools/web/` | web_search (Tavily), web_read (fetch + parse) |
-| `src/tools/telegram/` | react, reply_to, pin, unpin, get_pinned |
-| `src/extensions/index.ts` | Extension registry, init/reload, skill/dynamic-tool selection |
-| `src/extensions/loader.ts` | Skill parser, dynamic tool loader (jiti), requirement checker |
-| `src/extensions/embeddings.ts` | Skill + dynamic pack embedding cache and selection |
-| `src/extensions/secrets.ts` | Secrets table sync + builder |
-| `src/memory.ts` | ConstructMemoryManager (extends Cairn with custom prompts, expires_at) |
-| `src/db/schema.ts` | Construct-specific tables (extends CairnDatabase) |
-| `src/db/queries.ts` | All DB query helpers |
-| `src/db/migrate.ts` | Migration runner |
+| File                           | Role                                                                                   |
+| ------------------------------ | -------------------------------------------------------------------------------------- |
+| `src/main.ts`                  | Entry point, boot sequence, graceful shutdown                                          |
+| `src/agent.ts`                 | `processMessage()` pipeline, `AgentResponse` type, pi-agent adaptation                 |
+| `src/system-prompt.ts`         | Base system prompt, identity injection, context preamble builder                       |
+| `src/env.ts`                   | Zod-validated environment config                                                       |
+| `src/logger.ts`                | Logtape logging setup                                                                  |
+| `src/cli/index.ts`             | CLI: REPL, one-shot, tool invoke, reembed, backfill                                    |
+| `src/telegram/bot.ts`          | Grammy bot, authorization, queueing, reply threading, typing                           |
+| `src/telegram/format.ts`       | Markdown-to-Telegram-HTML conversion                                                   |
+| `src/telegram/types.ts`        | `TelegramContext`, `TelegramSideEffects`                                               |
+| `src/scheduler/index.ts`       | Croner scheduler, static/agent execution, sync loop                                    |
+| `src/tools/packs.ts`           | Tool pack definitions, embedding selection, `InternalTool` interface                   |
+| `src/tools/core/`              | Memory, schedule, secret, identity, usage tools                                        |
+| `src/tools/self/`              | self_read, self_edit, self_test, self_logs, self_deploy, self_status, extension_reload |
+| `src/tools/web/`               | web_search (Tavily), web_read (fetch + parse)                                          |
+| `src/tools/telegram/`          | react, reply_to, pin, unpin, get_pinned                                                |
+| `src/extensions/index.ts`      | Extension registry, init/reload, skill/dynamic-tool selection                          |
+| `src/extensions/loader.ts`     | Skill parser, dynamic tool loader (jiti), requirement checker                          |
+| `src/extensions/embeddings.ts` | Skill + dynamic pack embedding cache and selection                                     |
+| `src/extensions/secrets.ts`    | Secrets table sync + builder                                                           |
+| `src/memory.ts`                | ConstructMemoryManager (extends Cairn with custom prompts, expires_at)                 |
+| `src/db/schema.ts`             | Construct-specific tables (extends CairnDatabase)                                      |
+| `src/db/queries.ts`            | All DB query helpers                                                                   |
+| `src/db/migrate.ts`            | Migration runner                                                                       |
 
 ## Database tables
 
diff --git a/apps/construct/docs/memory.md b/apps/construct/docs/memory.md
index 15792b8..deaa404 100644
--- a/apps/construct/docs/memory.md
+++ b/apps/construct/docs/memory.md
@@ -16,6 +16,7 @@ Construct subclasses Cairn's `MemoryManager` as `ConstructMemoryManager` to add:
 ### Custom Observer/Reflector Prompts
 
 `CONSTRUCT_OBSERVER_PROMPT` and `CONSTRUCT_REFLECTOR_PROMPT` extend the defaults with Construct-specific guidance:
+
 - **expires_at support** -- The observer can tag observations with an `expires_at` datetime for time-bound items (reminders, deadlines). Expired observations are filtered out of context.
 - **Conversation style** -- Tuned for personal companion interactions rather than generic message compression.
 
@@ -35,12 +36,12 @@ The overridden `getUnobservedMessages()` selects `telegram_message_id` alongside
 
 The agent stores and recalls facts via tools:
 
-| Tool | Description |
-|------|-------------|
-| `memory_store` | Store a memory with content, category, tags. Triggers async embedding + graph extraction. |
-| `memory_recall` | Hybrid search: FTS5 + embedding cosine similarity + LIKE fallback, with graph expansion. |
-| `memory_forget` | Soft-delete by ID or search for candidates. |
-| `memory_graph` | Explore the knowledge graph: search nodes, explore connections, check connectivity. |
+| Tool            | Description                                                                               |
+| --------------- | ----------------------------------------------------------------------------------------- |
+| `memory_store`  | Store a memory with content, category, tags. Triggers async embedding + graph extraction. |
+| `memory_recall` | Hybrid search: FTS5 + embedding cosine similarity + LIKE fallback, with graph expansion.  |
+| `memory_forget` | Soft-delete by ID or search for candidates.                                               |
+| `memory_graph`  | Explore the knowledge graph: search nodes, explore connections, check connectivity.       |
 
 Categories: `general` (default), `preference`, `fact`, `reminder`, `note`.
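
For illustration, a `memory_store` call might carry arguments like these (values are hypothetical; field names follow the table above):

```typescript
// Hypothetical memory_store arguments.
const storeArgs = {
  content: "Alice's favorite restaurant is Nightshade",
  category: "preference", // one of: general, preference, fact, reminder, note
  tags: ["alice", "restaurants"],
};
```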
 
@@ -94,6 +95,7 @@ sequenceDiagram
 Embeddings are generated via OpenRouter's embeddings API. The default model is `qwen/qwen3-embedding-4b` (configurable via `EMBEDDING_MODEL`).
 
 Embeddings serve three purposes:
+
 1. **Memory recall** -- Semantic search (cosine similarity threshold 0.4 in processMessage, 0.3 in recallMemories)
 2. **Tool pack selection** -- Same query embedding reused
 3. **Skill selection** -- Same query embedding reused
@@ -102,8 +104,8 @@ If embedding generation fails, the system degrades gracefully: FTS5 and keyword
 
 ## Configuration
 
-| Variable | Required | Default | Description |
-|----------|----------|---------|-------------|
-| `MEMORY_WORKER_MODEL` | No | *(none)* | Model for observer, reflector, graph extraction. If unset, LLM-powered memory features disabled. |
-| `EMBEDDING_MODEL` | No | `qwen/qwen3-embedding-4b` | Model for embedding vectors. |
-| `OPENROUTER_API_KEY` | Yes | -- | Used for all API calls. |
+| Variable              | Required | Default                   | Description                                                                                      |
+| --------------------- | -------- | ------------------------- | ------------------------------------------------------------------------------------------------ |
+| `MEMORY_WORKER_MODEL` | No       | _(none)_                  | Model for observer, reflector, graph extraction. If unset, LLM-powered memory features disabled. |
+| `EMBEDDING_MODEL`     | No       | `qwen/qwen3-embedding-4b` | Model for embedding vectors.                                                                     |
+| `OPENROUTER_API_KEY`  | Yes      | --                        | Used for all API calls.                                                                          |
diff --git a/apps/construct/docs/scheduler.md b/apps/construct/docs/scheduler.md
index 67a08c2..8c5e05d 100644
--- a/apps/construct/docs/scheduler.md
+++ b/apps/construct/docs/scheduler.md
@@ -11,12 +11,12 @@ The scheduler enables Construct to fire actions at specific times or on recurrin
 
 ## Key Files
 
-| File | Role |
-|------|------|
-| `src/scheduler/index.ts` | Scheduler lifecycle: start, register, fire, sync, stop |
+| File                         | Role                                                                      |
+| ---------------------------- | ------------------------------------------------------------------------- |
+| `src/scheduler/index.ts`     | Scheduler lifecycle: start, register, fire, sync, stop                    |
 | `src/tools/core/schedule.ts` | `schedule_create`, `schedule_list`, `schedule_cancel` tools + dedup logic |
-| `src/db/schema.ts` | `ScheduleTable` type |
-| `src/db/queries.ts` | Schedule CRUD queries |
+| `src/db/schema.ts`           | `ScheduleTable` type                                                      |
+| `src/db/queries.ts`          | Schedule CRUD queries                                                     |
 
 ## How It Works
 
@@ -30,10 +30,10 @@ The scheduler enables Construct to fire actions at specific times or on recurrin
 
 ### Schedule Timing Types
 
-| Type | Database Column | Behavior |
-|------|----------------|----------|
+| Type          | Database Column   | Behavior                                             |
+| ------------- | ----------------- | ---------------------------------------------------- |
 | **Recurring** | `cron_expression` | Runs on a cron schedule indefinitely until cancelled |
-| **One-shot** | `run_at` | Fires once at the specified time, then auto-cancels |
+| **One-shot**  | `run_at`          | Fires once at the specified time, then auto-cancels  |
 
 ### Execution
 
@@ -47,6 +47,7 @@ When a schedule fires, `fireSchedule()` routes it through `fireAgentSchedule()`:
 6. The response is saved to the user's Telegram conversation history with a `[Scheduled: ...]` prefix
 
 This makes schedules useful for:
+
 - **Conditional notifications**: "Check if BTC is above $100k and only notify me if it is"
 - **Background tasks**: "Summarize my unread memories every Sunday"
 - **Reminders with context**: "Remind the user about their dentist appointment and check if they need directions"
@@ -84,12 +85,14 @@ The agent creates and manages schedules through three tools in the core pack (`s
 ### schedule_create
 
 Parameters:
+
 - `description` (required) -- Human-readable description (e.g. "Dentist appointment reminder")
 - `instruction` (required) -- What the agent should do when the schedule fires. The agent runs this with full context and tool access.
 - `cron_expression` (optional) -- Cron string (e.g., `"0 9 * * 1"` for Monday at 9am)
 - `run_at` (optional) -- Datetime in user's local timezone, without Z or offset (e.g. `"2025-03-05T09:00:00"`)
 
 Validation:
+
 - Must provide either `cron_expression` or `run_at` (timing)
 - `run_at` values have timezone offsets stripped so they're treated as local time
 - `chat_id` is automatically injected from the current conversation context
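
For illustration, the two timing shapes might look like this as tool arguments (values are hypothetical):

```typescript
// Hypothetical schedule_create arguments for a one-shot reminder.
const oneShot = {
  description: "Dentist appointment reminder",
  instruction: "Remind the user about their 10am dentist appointment and ask if they need directions.",
  run_at: "2025-03-05T09:00:00", // local time, no Z or offset
};

// And for a recurring Monday-morning task.
const recurring = {
  description: "Weekly standup briefing",
  instruction: "Summarize open reminders and today's schedule.",
  cron_expression: "0 9 * * 1", // Mondays at 9am
};
```
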
diff --git a/docs/guides/security.md b/apps/construct/docs/security.md
similarity index 97%
rename from docs/guides/security.md
rename to apps/construct/docs/security.md
index 6287774..47c18f7 100644
--- a/docs/guides/security.md
+++ b/apps/construct/docs/security.md
@@ -18,12 +18,14 @@ The agent has three self-modification tools: `self_read_source`, `self_edit_sour
 Both `self_read_source` (`src/tools/self/self-read.ts`) and `self_edit_source` (`src/tools/self/self-edit.ts`) enforce path restrictions:
 
 **Read access** is limited to:
+
 - `src/` -- application source
 - `cli/` -- CLI source
 - `extensions/` -- user/agent-authored extensions (resolved against `EXTENSIONS_DIR`)
 - `package.json`, `tsconfig.json`, `CLAUDE.md`, `PLAN.md` -- read-only config files
 
 **Write access** is limited to:
+
 - `src/` -- application source
 - `cli/` -- CLI source
 - `extensions/` -- extensions directory
@@ -33,8 +35,8 @@ Both tools resolve paths against the project root and perform explicit prefix ch
 The extension path prefix (`extensions/`) is resolved against `EXTENSIONS_DIR`, not the project root, with its own traversal guard:
 
 ```typescript
-if (!resolved.startsWith(resolve(extensionsDir) + '/') && resolved !== resolve(extensionsDir)) {
-  return { output: 'Access denied: escapes the extensions directory.' }
+if (!resolved.startsWith(resolve(extensionsDir) + "/") && resolved !== resolve(extensionsDir)) {
+  return { output: "Access denied: escapes the extensions directory." };
 }
 ```
 
@@ -66,7 +68,7 @@ In Docker mode, the auto-rollback mechanism is **not available**. The tool calls
 
 Secrets are stored in the `secrets` table in SQLite with columns: `key`, `value`, `source`, `updated_at`. The `source` field tracks whether a secret came from the environment (`'env'`) or was stored by the agent (`'agent'`).
 
-### EXT_* Environment Variable Sync
+### EXT\_\* Environment Variable Sync
 
 On startup, `syncEnvSecrets()` in `src/extensions/secrets.ts` scans `process.env` for variables prefixed with `EXT_`, strips the prefix, and upserts them into the `secrets` table with `source='env'`:
 
@@ -102,6 +104,7 @@ The Dockerfile (`deploy/Dockerfile`) and compose file (`deploy/docker-compose.ym
 **Base image** -- `node:22-alpine` is a minimal image. Alpine's small surface area reduces exposure.
 
 **Volume mount** -- `~/.construct:/data` gives the container read/write access to:
+
 - The SQLite database (`construct.db`)
 - The log file (`construct.log`)
 - The extensions directory (`extensions/`)
@@ -152,9 +155,9 @@ The container does not expose any ports. Construct communicates with Telegram vi
 Extension tools are TypeScript files loaded at runtime via `jiti` (a TypeScript-to-JavaScript transpiler). The loading happens in `src/extensions/loader.ts` via `loadSingleToolFile()`:
 
 ```typescript
-const { createJiti } = await import('jiti')
-const jiti = createJiti(import.meta.url, { interopDefault: true, moduleCache: false })
-const mod = await jiti.import(filePath)
+const { createJiti } = await import("jiti");
+const jiti = createJiti(import.meta.url, { interopDefault: true, moduleCache: false });
+const mod = await jiti.import(filePath);
 ```
 
 **Trust implication**: Any `.ts` file placed in `$EXTENSIONS_DIR/tools/` will be imported and executed with the full privileges of the Node.js process. There is no sandboxing. A malicious extension tool could:
@@ -165,6 +168,7 @@ const mod = await jiti.import(filePath)
 - Execute child processes
 
 **Mitigations**:
+
 - Extensions are loaded only from `EXTENSIONS_DIR`, which is a controlled directory
 - The agent can only create files within the `extensions/` scope via `self_edit_source`
 - `moduleCache: false` ensures tools are freshly loaded on each `reloadExtensions()` call, so stale or modified tools are not cached
@@ -192,13 +196,14 @@ The `ALLOWED_TELEGRAM_IDS` environment variable restricts which Telegram users c
 
 ```typescript
 function isAuthorized(userId: string): boolean {
-  return allowedIds.length === 0 || allowedIds.includes(userId)
+  return allowedIds.length === 0 || allowedIds.includes(userId);
 }
 ```
 
 If `ALLOWED_TELEGRAM_IDS` is empty (the default), **all users are allowed**. For production, always set this to a comma-separated list of trusted Telegram user IDs.
 
 Authorization is checked for:
+
 - Text messages (`bot.on('message:text')`)
 - Reactions (`bot.on('message_reaction')`)
 - Other message types (`bot.on('message')`)
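
The surrounding allowlist parsing might look like this (a sketch; the exact parsing in `src/telegram/bot.ts` may differ):

```typescript
// Illustrative sketch of turning ALLOWED_TELEGRAM_IDS into the allowedIds
// list consulted by isAuthorized().
function parseAllowedIds(raw: string | undefined): string[] {
  return (raw ?? "")
    .split(",")
    .map((id) => id.trim())
    .filter((id) => id.length > 0);
}

function isAuthorized(userId: string, allowedIds: string[]): boolean {
  // An empty list disables the allowlist: everyone is allowed.
  return allowedIds.length === 0 || allowedIds.includes(userId);
}
```
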
diff --git a/apps/construct/docs/system-prompt.md b/apps/construct/docs/system-prompt.md
index 7d16b3b..602709c 100644
--- a/apps/construct/docs/system-prompt.md
+++ b/apps/construct/docs/system-prompt.md
@@ -16,8 +16,8 @@ This separation means the LLM can cache the system prompt tokens and only proces
 
 ## Key Files
 
-| File | Role |
-|------|------|
+| File                   | Role                                                                           |
+| ---------------------- | ------------------------------------------------------------------------------ |
 | `src/system-prompt.ts` | `getSystemPrompt()`, `buildContextPreamble()`, `invalidateSystemPromptCache()` |
 
 ## Static System Prompt
diff --git a/apps/construct/docs/telegram.md b/apps/construct/docs/telegram.md
index 925e5c9..703e361 100644
--- a/apps/construct/docs/telegram.md
+++ b/apps/construct/docs/telegram.md
@@ -11,17 +11,18 @@ Telegram is the primary user-facing interface for Construct. The bot uses Grammy
 
 ## Key Files
 
-| File | Role |
-|------|------|
-| `src/telegram/bot.ts` | Bot creation, message/reaction handlers, markdown conversion, reply logic |
-| `src/telegram/index.ts` | Standalone Telegram-only entry point (runs bot without scheduler) |
-| `src/telegram/types.ts` | `TelegramContext` and `TelegramSideEffects` interfaces |
+| File                    | Role                                                                      |
+| ----------------------- | ------------------------------------------------------------------------- |
+| `src/telegram/bot.ts`   | Bot creation, message/reaction handlers, markdown conversion, reply logic |
+| `src/telegram/index.ts` | Standalone Telegram-only entry point (runs bot without scheduler)         |
+| `src/telegram/types.ts` | `TelegramContext` and `TelegramSideEffects` interfaces                    |
 
 ## Bot Setup
 
 `createBot(db)` in `src/telegram/bot.ts` creates a Grammy `Bot` instance with the token from `env.TELEGRAM_BOT_TOKEN`.
 
 The bot listens for two event types:
+
 - `message:text` -- Text messages from users
 - `message_reaction` -- Emoji reactions on messages
 
@@ -63,10 +64,10 @@ Each message creates a `TelegramContext` passed to `processMessage()`:
 
 ```typescript
 interface TelegramContext {
-  bot: Bot                       // Grammy bot instance
-  chatId: string                 // Telegram chat ID
-  incomingMessageId: number      // Message ID of the user's message
-  sideEffects: TelegramSideEffects  // Mutable object for tool side-effects
+  bot: Bot; // Grammy bot instance
+  chatId: string; // Telegram chat ID
+  incomingMessageId: number; // Message ID of the user's message
+  sideEffects: TelegramSideEffects; // Mutable object for tool side-effects
 }
 ```
 
@@ -76,13 +77,14 @@ Tools can set flags on `sideEffects` during execution:
 
 ```typescript
 interface TelegramSideEffects {
-  reactToUser?: string           // Emoji to react with
-  replyToMessageId?: number      // Message ID for reply threading
-  suppressText?: boolean         // If true, skip the text reply
+  reactToUser?: string; // Emoji to react with
+  replyToMessageId?: number; // Message ID for reply threading
+  suppressText?: boolean; // If true, skip the text reply
 }
 ```
 
 After the agent finishes:
+
 1. If `reactToUser` is set, the bot calls `setMessageReaction()` on the incoming message
 2. If `suppressText` is false (default) and the response has text, `sendReply()` sends it
 
@@ -106,7 +108,7 @@ This allows the agent to interpret reactions contextually (e.g., a thumbs-up on
 
 `markdownToTelegramHtml()` converts the agent's Markdown response to Telegram-compatible HTML:
 
-1. **Protect code**: Extract code blocks (``` ```) and inline code (`` ` ``) to prevent processing
+1. **Protect code**: Extract code blocks (``` ```) and inline code (`` ` ``) to prevent processing
 2. **Escape HTML entities**: `&`, `<`, `>` in remaining text
 3. **Convert headers**: `# Heading` to `Heading`
 4. **Convert formatting**: `***bold-italic***`, `**bold**`, `*italic*`
@@ -126,6 +128,7 @@ Telegram has a 4096-character message limit. The `sendReply()` function chunks l
 ## Telegram Message ID Tracking
 
 After sending a reply, the bot stores the Telegram message ID of the sent message in the database via `updateTelegramMessageId()`. This enables:
+
 - Future `telegram_reply_to` calls referencing the bot's own messages
 - Reaction handling on the bot's messages (to determine `whose` in the synthetic reaction message)
 
diff --git a/apps/construct/docs/tools.md b/apps/construct/docs/tools.md
index e4b9cb3..0bf637c 100644
--- a/apps/construct/docs/tools.md
+++ b/apps/construct/docs/tools.md
@@ -11,13 +11,13 @@ Construct's tools are organized into **packs** -- logical groups of related tool
 
 ## Key Files
 
-| File | Role |
-|------|------|
-| `src/tools/packs.ts` | Pack definitions, embedding cache, selection logic, `InternalTool` and `ToolPack` types |
-| `src/tools/core/` | Core pack: memory, schedule, secret, identity, usage tools |
-| `src/tools/self/` | Self pack: source read/edit, test, deploy, logs, status, extension reload |
-| `src/tools/web/` | Web pack: web page reading, web search |
-| `src/tools/telegram/` | Telegram pack: react, reply-to, pin/unpin, get-pinned, ask |
+| File                  | Role                                                                                    |
+| --------------------- | --------------------------------------------------------------------------------------- |
+| `src/tools/packs.ts`  | Pack definitions, embedding cache, selection logic, `InternalTool` and `ToolPack` types |
+| `src/tools/core/`     | Core pack: memory, schedule, secret, identity, usage tools                              |
+| `src/tools/self/`     | Self pack: source read/edit, test, deploy, logs, status, extension reload               |
+| `src/tools/web/`      | Web pack: web page reading, web search                                                  |
+| `src/tools/telegram/` | Telegram pack: react, reply-to, pin/unpin, get-pinned, ask                              |
 
 ## How Tools Are Defined
 
@@ -25,13 +25,10 @@ Every tool follows the `InternalTool` interface:
 
 ```typescript
 interface InternalTool<T> {
-  name: string
-  description: string
-  parameters: T               // TypeBox JSON Schema
-  execute: (
-    toolCallId: string,
-    args: unknown,
-  ) => Promise<{ output: string; details?: unknown }>
+  name: string;
+  description: string;
+  parameters: T; // TypeBox JSON Schema
+  execute: (toolCallId: string, args: unknown) => Promise<{ output: string; details?: unknown }>;
 }
 ```
 
@@ -39,19 +36,19 @@ Tools are created by **factory functions** that receive a `ToolContext`:
 
 ```typescript
 interface ToolContext {
-  db: Kysely<Database>        // Database connection
-  chatId: string              // Current chat identifier
-  apiKey: string              // OpenRouter API key
-  projectRoot: string         // Absolute path to project root
-  dbPath: string              // Path to SQLite database file
-  timezone: string            // User's configured timezone
-  tavilyApiKey?: string       // Tavily API key (for web search)
-  logFile?: string            // Path to log file
-  isDev: boolean              // Development mode flag
-  extensionsDir?: string      // Extensions directory path
-  telegram?: TelegramContext  // Telegram bot + chat context (absent in CLI)
-  memoryManager?: MemoryManager // Cairn memory manager instance
-  embeddingModel?: string     // Embedding model override
+  db: Kysely<Database>; // Database connection
+  chatId: string; // Current chat identifier
+  apiKey: string; // OpenRouter API key
+  projectRoot: string; // Absolute path to project root
+  dbPath: string; // Path to SQLite database file
+  timezone: string; // User's configured timezone
+  tavilyApiKey?: string; // Tavily API key (for web search)
+  logFile?: string; // Path to log file
+  isDev: boolean; // Development mode flag
+  extensionsDir?: string; // Extensions directory path
+  telegram?: TelegramContext; // Telegram bot + chat context (absent in CLI)
+  memoryManager?: MemoryManager; // Cairn memory manager instance
+  embeddingModel?: string; // Embedding model override
 }
 ```
 
@@ -63,21 +60,21 @@ A pack groups related tool factories under a name and description:
 
 ```typescript
 interface ToolPack {
-  name: string
-  description: string
-  alwaysLoad: boolean         // If true, skip embedding similarity check
-  factories: ToolFactory[]    // Functions that create tools from ToolContext
+  name: string;
+  description: string;
+  alwaysLoad: boolean; // If true, skip embedding similarity check
+  factories: ToolFactory[]; // Functions that create tools from ToolContext
 }
 ```
 
 ### Built-in Packs
 
-| Pack | `alwaysLoad` | Tools | Description |
-|------|:---:|-------|-------------|
-| **core** | Yes | `memory_store`, `memory_recall`, `memory_forget`, `memory_graph`, `schedule_create`, `schedule_list`, `schedule_cancel`, `secret_store`, `secret_list`, `secret_delete`, `usage_stats`, `identity_read`, `identity_update` | Long-term memory, scheduling, secrets, identity management |
-| **web** | No | `web_read`, `web_search` | Read web pages (via Jina Reader), search the web (via Tavily) |
-| **self** | No | `self_read_source`, `self_edit_source`, `self_run_tests`, `self_view_logs`, `self_deploy`, `self_system_status`, `extension_reload` | Self-modification, diagnostics, deployment |
-| **telegram** | Yes | `telegram_react`, `telegram_reply_to`, `telegram_pin`, `telegram_unpin`, `telegram_get_pinned`, `telegram_ask` | Telegram-specific message interactions |
+| Pack         | `alwaysLoad` | Tools                                                                                                                                                                                                                      | Description                                                   |
+| ------------ | :----------: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
+| **core**     |     Yes      | `memory_store`, `memory_recall`, `memory_forget`, `memory_graph`, `schedule_create`, `schedule_list`, `schedule_cancel`, `secret_store`, `secret_list`, `secret_delete`, `usage_stats`, `identity_read`, `identity_update` | Long-term memory, scheduling, secrets, identity management    |
+| **web**      |      No      | `web_read`, `web_search`                                                                                                                                                                                                   | Read web pages (via Jina Reader), search the web (via Tavily) |
+| **self**     |      No      | `self_read_source`, `self_edit_source`, `self_run_tests`, `self_view_logs`, `self_deploy`, `self_system_status`, `extension_reload`                                                                                        | Self-modification, diagnostics, deployment                    |
+| **telegram** |     Yes      | `telegram_react`, `telegram_reply_to`, `telegram_pin`, `telegram_unpin`, `telegram_get_pinned`, `telegram_ask`                                                                                                             | Telegram-specific message interactions                        |
 
 ### Pack Selection Algorithm
 
@@ -107,10 +104,11 @@ At message time, `selectPacks()` compares the user message embedding against pac
 **memory_store** -- Stores a memory with optional category and tags. Generates an embedding in the background (non-blocking) for future semantic search.
 
 **memory_recall** -- Searches memories using a three-tier hybrid approach:
+
 1. FTS5 full-text search on the `memories_fts` virtual table
 2. Embedding cosine similarity (threshold 0.3) against all memories with embeddings
 3. LIKE keyword fallback
-Results are merged and deduplicated.
+   Results are merged and deduplicated.
 
 **memory_forget** -- Soft-deletes (archives) a memory by ID, or searches for candidates if given a query.
 
@@ -145,6 +143,7 @@ Results are merged and deduplicated.
 **self_view_logs** -- Reads from the log file (with optional `since` and `grep` filtering) or falls back to `journalctl` for the systemd service.
 
 **self_deploy** -- Full deployment pipeline:
+
 1. Typecheck (`tsc --noEmit`)
 2. Test (`vitest run`)
 3. Git tag backup (`pre-deploy-TIMESTAMP`)
@@ -176,14 +175,14 @@ All Telegram tools require a `TelegramContext` (so they return null from CLI).
 Tool parameters use `@sinclair/typebox` for JSON Schema generation:
 
 ```typescript
-import { Type, type Static } from '@sinclair/typebox'
+import { Type, type Static } from "@sinclair/typebox";
 
 const Params = Type.Object({
-  query: Type.String({ description: 'Search query' }),
-  limit: Type.Optional(Type.Number({ description: 'Max results' })),
-})
+  query: Type.String({ description: "Search query" }),
+  limit: Type.Optional(Type.Number({ description: "Max results" })),
+});
 
-type Input = Static<typeof Params>
+type Input = Static<typeof Params>;
 ```
 
 TypeBox schemas are passed directly to pi-agent-core, which uses them for LLM function calling.
@@ -194,9 +193,9 @@ Telegram tools like `telegram_react` and `telegram_reply_to` don't perform their
 
 ```typescript
 interface TelegramSideEffects {
-  reactToUser?: string        // Emoji to react with
-  replyToMessageId?: number   // Message ID to reply to
-  suppressText?: boolean      // Skip sending text reply
+  reactToUser?: string; // Emoji to react with
+  replyToMessageId?: number; // Message ID to reply to
+  suppressText?: boolean; // Skip sending text reply
 }
 ```
 
diff --git a/apps/construct/package.json b/apps/construct/package.json
index 4c8d667..5c6105b 100644
--- a/apps/construct/package.json
+++ b/apps/construct/package.json
@@ -9,11 +9,12 @@
     "typecheck": "tsc --noEmit"
   },
   "dependencies": {
-    "@repo/db": "workspace:*",
-    "@repo/cairn": "workspace:*",
-    "@logtape/logtape": "^2.0.2",
     "@mariozechner/pi-agent-core": "^0.54.2",
     "@mariozechner/pi-ai": "^0.54.2",
+    "@repo/cairn": "workspace:*",
+    "@repo/db": "workspace:*",
+    "@repo/env": "workspace:*",
+    "@repo/log": "workspace:*",
     "@sinclair/typebox": "^0.34.48",
     "citty": "^0.2.1",
     "croner": "^10.0.1",
diff --git a/apps/construct/src/__tests__/context-assembly.test.ts b/apps/construct/src/__tests__/context-assembly.test.ts
index e4fc221..f6b9e98 100644
--- a/apps/construct/src/__tests__/context-assembly.test.ts
+++ b/apps/construct/src/__tests__/context-assembly.test.ts
@@ -6,229 +6,233 @@
  * No API key needed.
  */
 
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import type { Database } from '../db/schema.js'
-import { buildContextPreamble } from '../system-prompt.js'
-import { getOrCreateConversation } from '../db/queries.js'
-import { recallMemories, renderObservations } from '@repo/cairn'
-import type { Observation } from '@repo/cairn'
-import {
-  setupDb,
-  seedAll,
-  queryEmbeddings,
-  observationFixtures,
-} from './fixtures.js'
-
-describe('buildContextPreamble — full preamble', () => {
-  it('includes all sections when all inputs provided', () => {
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import type { Database } from "../db/schema.js";
+import { buildContextPreamble } from "../system-prompt.js";
+import { getOrCreateConversation } from "../db/queries.js";
+import { recallMemories, renderObservations } from "@repo/cairn";
+import type { Observation } from "@repo/cairn";
+import { setupDb, seedAll, queryEmbeddings, observationFixtures } from "./fixtures.js";
+
+describe("buildContextPreamble — full preamble", () => {
+  it("includes all sections when all inputs provided", () => {
     const preamble = buildContextPreamble({
-      timezone: 'America/Los_Angeles',
-      source: 'telegram',
-      observations: '! [2024-01-15] Dentist on March 5th\n- [2024-01-15] Learning Rust',
+      timezone: "America/Los_Angeles",
+      source: "telegram",
+      observations: "! [2024-01-15] Dentist on March 5th\n- [2024-01-15] Learning Rust",
       recentMemories: [
-        { content: 'Alex has a cat named Miso', category: 'personal', created_at: '2024-01-15T10:00:00Z' },
+        {
+          content: "Alex has a cat named Miso",
+          category: "personal",
+          created_at: "2024-01-15T10:00:00Z",
+        },
       ],
       relevantMemories: [
-        { content: 'Alex is allergic to shellfish', category: 'health', score: 0.95 },
-        { content: 'Alex works at DataPipe', category: 'work', score: 0.82 },
+        { content: "Alex is allergic to shellfish", category: "health", score: 0.95 },
+        { content: "Alex works at DataPipe", category: "work", score: 0.82 },
       ],
       skills: [
-        { name: 'Daily Standup', description: 'Run standup', requires: {}, body: 'Ask about blockers and progress.', filePath: '' },
+        {
+          name: "Daily Standup",
+          description: "Run standup",
+          requires: {},
+          body: "Ask about blockers and progress.",
+          filePath: "",
+        },
       ],
-      replyContext: 'What should I have for dinner?',
-    })
+      replyContext: "What should I have for dinner?",
+    });
 
     // Context line
-    expect(preamble).toContain('America/Los_Angeles')
-    expect(preamble).toContain('telegram')
+    expect(preamble).toContain("America/Los_Angeles");
+    expect(preamble).toContain("telegram");
 
     // Observations section
-    expect(preamble).toContain('[Conversation observations')
-    expect(preamble).toContain('Dentist on March 5th')
+    expect(preamble).toContain("[Conversation observations");
+    expect(preamble).toContain("Dentist on March 5th");
 
     // Recent memories section
-    expect(preamble).toContain('[Recent memories')
-    expect(preamble).toContain('(personal) Alex has a cat named Miso')
+    expect(preamble).toContain("[Recent memories");
+    expect(preamble).toContain("(personal) Alex has a cat named Miso");
 
     // Relevant memories with scores
-    expect(preamble).toContain('[Potentially relevant memories]')
-    expect(preamble).toContain('(health) Alex is allergic to shellfish (95% match)')
-    expect(preamble).toContain('(work) Alex works at DataPipe (82% match)')
+    expect(preamble).toContain("[Potentially relevant memories]");
+    expect(preamble).toContain("(health) Alex is allergic to shellfish (95% match)");
+    expect(preamble).toContain("(work) Alex works at DataPipe (82% match)");
 
     // Skills section
-    expect(preamble).toContain('[Active skills')
-    expect(preamble).toContain('### Daily Standup')
-    expect(preamble).toContain('Ask about blockers')
+    expect(preamble).toContain("[Active skills");
+    expect(preamble).toContain("### Daily Standup");
+    expect(preamble).toContain("Ask about blockers");
 
     // Reply context
-    expect(preamble).toContain('[Replying to: "What should I have for dinner?"]')
-  })
-})
+    expect(preamble).toContain('[Replying to: "What should I have for dinner?"]');
+  });
+});
 
-describe('buildContextPreamble — observations only', () => {
-  it('includes observations but omits memory/skill sections', () => {
+describe("buildContextPreamble — observations only", () => {
+  it("includes observations but omits memory/skill sections", () => {
     const preamble = buildContextPreamble({
-      timezone: 'UTC',
-      source: 'cli',
-      observations: '! [2024-01-15] Important observation',
-    })
-
-    expect(preamble).toContain('[Conversation observations')
-    expect(preamble).toContain('Important observation')
-    expect(preamble).not.toContain('[Recent memories')
-    expect(preamble).not.toContain('[Potentially relevant memories]')
-    expect(preamble).not.toContain('[Active skills')
-    expect(preamble).not.toContain('[Replying to')
-  })
-})
-
-describe('buildContextPreamble — relevant memories with scores', () => {
-  it('formats scores as percentages', () => {
+      timezone: "UTC",
+      source: "cli",
+      observations: "! [2024-01-15] Important observation",
+    });
+
+    expect(preamble).toContain("[Conversation observations");
+    expect(preamble).toContain("Important observation");
+    expect(preamble).not.toContain("[Recent memories");
+    expect(preamble).not.toContain("[Potentially relevant memories]");
+    expect(preamble).not.toContain("[Active skills");
+    expect(preamble).not.toContain("[Replying to");
+  });
+});
+
+describe("buildContextPreamble — relevant memories with scores", () => {
+  it("formats scores as percentages", () => {
     const preamble = buildContextPreamble({
-      timezone: 'UTC',
-      source: 'test',
+      timezone: "UTC",
+      source: "test",
       relevantMemories: [
-        { content: 'Memory A', category: 'work', score: 0.456 },
-        { content: 'Memory B', category: 'personal', score: 1.0 },
-        { content: 'Memory C', category: 'health' }, // no score
+        { content: "Memory A", category: "work", score: 0.456 },
+        { content: "Memory B", category: "personal", score: 1.0 },
+        { content: "Memory C", category: "health" }, // no score
       ],
-    })
+    });
 
-    expect(preamble).toContain('(work) Memory A (46% match)')
-    expect(preamble).toContain('(personal) Memory B (100% match)')
-    expect(preamble).toContain('(health) Memory C')
-    expect(preamble).not.toMatch(/Memory C.*%/)
-  })
-})
+    expect(preamble).toContain("(work) Memory A (46% match)");
+    expect(preamble).toContain("(personal) Memory B (100% match)");
+    expect(preamble).toContain("(health) Memory C");
+    expect(preamble).not.toMatch(/Memory C.*%/);
+  });
+});
 
-describe('buildContextPreamble — empty inputs', () => {
-  it('produces minimal preamble with only context line', () => {
+describe("buildContextPreamble — empty inputs", () => {
+  it("produces minimal preamble with only context line", () => {
     const preamble = buildContextPreamble({
-      timezone: 'UTC',
-      source: 'test',
-    })
+      timezone: "UTC",
+      source: "test",
+    });
 
     // Should have the context line
-    expect(preamble).toMatch(/\[Current time:.*UTC.*test\]/)
+    expect(preamble).toMatch(/\[Current time:.*UTC.*test\]/);
 
     // Should NOT have any optional sections
-    expect(preamble).not.toContain('[Conversation observations')
-    expect(preamble).not.toContain('[Recent memories')
-    expect(preamble).not.toContain('[Potentially relevant memories]')
-    expect(preamble).not.toContain('[Active skills')
-    expect(preamble).not.toContain('[Replying to')
-  })
-
-  it('omits sections for empty arrays', () => {
+    expect(preamble).not.toContain("[Conversation observations");
+    expect(preamble).not.toContain("[Recent memories");
+    expect(preamble).not.toContain("[Potentially relevant memories]");
+    expect(preamble).not.toContain("[Active skills");
+    expect(preamble).not.toContain("[Replying to");
+  });
+
+  it("omits sections for empty arrays", () => {
     const preamble = buildContextPreamble({
-      timezone: 'UTC',
-      source: 'test',
+      timezone: "UTC",
+      source: "test",
       recentMemories: [],
       relevantMemories: [],
       skills: [],
-    })
+    });
 
-    expect(preamble).not.toContain('[Recent memories')
-    expect(preamble).not.toContain('[Potentially relevant memories]')
-    expect(preamble).not.toContain('[Active skills')
-  })
-})
+    expect(preamble).not.toContain("[Recent memories");
+    expect(preamble).not.toContain("[Potentially relevant memories]");
+    expect(preamble).not.toContain("[Active skills");
+  });
+});
 
-describe('buildContextPreamble — reply context', () => {
-  it('includes reply context string', () => {
+describe("buildContextPreamble — reply context", () => {
+  it("includes reply context string", () => {
     const preamble = buildContextPreamble({
-      timezone: 'UTC',
-      source: 'test',
-      replyContext: 'Can you remind me about my appointment?',
-    })
+      timezone: "UTC",
+      source: "test",
+      replyContext: "Can you remind me about my appointment?",
+    });
 
-    expect(preamble).toContain('[Replying to: "Can you remind me about my appointment?"]')
-  })
+    expect(preamble).toContain('[Replying to: "Can you remind me about my appointment?"]');
+  });
 
-  it('truncates long reply context to 300 chars', () => {
-    const longReply = 'x'.repeat(500)
+  it("truncates long reply context to 300 chars", () => {
+    const longReply = "x".repeat(500);
     const preamble = buildContextPreamble({
-      timezone: 'UTC',
-      source: 'test',
+      timezone: "UTC",
+      source: "test",
       replyContext: longReply,
-    })
+    });
 
     // The truncated content should be exactly 300 chars
-    expect(preamble).toContain('[Replying to: "' + 'x'.repeat(300) + '"]')
-    expect(preamble).not.toContain('x'.repeat(301))
-  })
-})
+    expect(preamble).toContain('[Replying to: "' + "x".repeat(300) + '"]');
+    expect(preamble).not.toContain("x".repeat(301));
+  });
+});
 
-describe('buildContextPreamble — dev mode', () => {
-  it('includes dev mode indicator', () => {
+describe("buildContextPreamble — dev mode", () => {
+  it("includes dev mode indicator", () => {
     const preamble = buildContextPreamble({
-      timezone: 'UTC',
-      source: 'cli',
+      timezone: "UTC",
+      source: "cli",
       dev: true,
-    })
+    });
 
-    expect(preamble).toContain('DEV MODE')
-    expect(preamble).toContain('self_deploy is disabled')
-  })
-})
+    expect(preamble).toContain("DEV MODE");
+    expect(preamble).toContain("self_deploy is disabled");
+  });
+});
 
-describe('buildContextPreamble — end-to-end composition', () => {
-  let db: Kysely<Database>
+describe("buildContextPreamble — end-to-end composition", () => {
+  let db: Kysely<Database>;
 
   beforeEach(async () => {
-    db = await setupDb()
-  })
+    db = await setupDb();
+  });
 
   afterEach(async () => {
-    await db.destroy()
-  })
+    await db.destroy();
+  });
 
-  it('seeded data flows through recall into preamble', async () => {
-    const convId = await getOrCreateConversation(db, 'test', null)
-    await seedAll(db, convId)
+  it("seeded data flows through recall into preamble", async () => {
+    const convId = await getOrCreateConversation(db, "test", null);
+    await seedAll(db, convId);
 
     // Recall work-related memories using embedding search
-    const recalled = await recallMemories(db, 'engineering data', {
+    const recalled = await recallMemories(db, "engineering data", {
       queryEmbedding: queryEmbeddings.work,
       limit: 5,
-    })
+    });
 
     // Build preamble with recalled memories + observations
-    const observations = observationFixtures
-      .map((o) => ({
-        id: 'obs-test',
-        conversation_id: convId,
-        content: o.content,
-        priority: o.priority,
-        observation_date: o.observation_date,
-        source_message_ids: [],
-        token_count: 0,
-        generation: 0,
-        superseded_at: null,
-        created_at: '2024-01-15T00:00:00Z',
-      })) satisfies Observation[]
+    const observations = observationFixtures.map((o) => ({
+      id: "obs-test",
+      conversation_id: convId,
+      content: o.content,
+      priority: o.priority,
+      observation_date: o.observation_date,
+      source_message_ids: [],
+      token_count: 0,
+      generation: 0,
+      superseded_at: null,
+      created_at: "2024-01-15T00:00:00Z",
+    })) satisfies Observation[];
 
     const preamble = buildContextPreamble({
-      timezone: 'America/Los_Angeles',
-      source: 'telegram',
+      timezone: "America/Los_Angeles",
+      source: "telegram",
       observations: renderObservations(observations),
       relevantMemories: recalled.map((m) => ({
         content: m.content,
         category: m.category,
         score: m.score,
       })),
-    })
+    });
 
     // Preamble should contain recalled work memories
-    expect(preamble).toContain('DataPipe')
+    expect(preamble).toContain("DataPipe");
 
     // Preamble should contain observations
-    expect(preamble).toContain('dentist appointment')
-    expect(preamble).toContain('learning Rust')
+    expect(preamble).toContain("dentist appointment");
+    expect(preamble).toContain("learning Rust");
 
     // Preamble should have standard structure
-    expect(preamble).toContain('[Conversation observations')
-    expect(preamble).toContain('[Potentially relevant memories]')
-  })
-})
+    expect(preamble).toContain("[Conversation observations");
+    expect(preamble).toContain("[Potentially relevant memories]");
+  });
+});
diff --git a/apps/construct/src/__tests__/fixtures.ts b/apps/construct/src/__tests__/fixtures.ts
index 132e397..cac1305 100644
--- a/apps/construct/src/__tests__/fixtures.ts
+++ b/apps/construct/src/__tests__/fixtures.ts
@@ -6,29 +6,31 @@
  * Dims 6-15 are unused (available for orthogonal "no match" queries).
  */
 
-import type { Kysely } from 'kysely'
-import { nanoid } from 'nanoid'
-import type { Database } from '../db/schema.js'
-import { createDb } from '@repo/db'
-import { storeMemory, upsertNode, upsertEdge } from '@repo/cairn'
-import * as m001 from '../db/migrations/001-initial.js'
-import * as m002 from '../db/migrations/002-fts5-and-embeddings.js'
-import * as m004 from '../db/migrations/004-telegram-message-ids.js'
-import * as m005 from '../db/migrations/005-graph-memory.js'
-import * as m006 from '../db/migrations/006-observational-memory.js'
-import * as m008 from '../db/migrations/008-schedule-prompts.js'
-
-const DIM = 16
+import type { Kysely } from "kysely";
+import { nanoid } from "nanoid";
+import type { Database } from "../db/schema.js";
+import type { Observation, CairnMessage } from "@repo/cairn";
+import { createDb } from "@repo/db";
+import { storeMemory, upsertNode, upsertEdge } from "@repo/cairn";
+import type { AgentResponse, ProcessMessageOpts } from "../agent.js";
+import * as m001 from "../db/migrations/001-initial.js";
+import * as m002 from "../db/migrations/002-fts5-and-embeddings.js";
+import * as m004 from "../db/migrations/004-telegram-message-ids.js";
+import * as m005 from "../db/migrations/005-graph-memory.js";
+import * as m006 from "../db/migrations/006-observational-memory.js";
+import * as m008 from "../db/migrations/008-schedule-prompts.js";
+
+const DIM = 16;
 
 /** Create a normalized 16-d embedding with given dimension weights. */
 function makeEmbedding(weights: Record<number, number>): number[] {
-  const v = new Array(DIM).fill(0)
+  const v = Array.from({ length: DIM }, () => 0);
   for (const [dim, w] of Object.entries(weights)) {
-    v[Number(dim)] = w
+    v[Number(dim)] = w;
   }
-  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0))
-  if (norm === 0) return v
-  return v.map((x) => x / norm)
+  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0));
+  if (norm === 0) return v;
+  return v.map((x) => x / norm);
 }
 
 // --- Memory embeddings (one per fixture memory) ---
@@ -42,7 +44,7 @@ export const memoryEmbeddings = {
   darkMode: makeEmbedding({ 2: 0.5, 3: 0.5 }),
   clickstream: makeEmbedding({ 2: 0.9, 3: 0.2 }),
   sarah: makeEmbedding({ 4: 1.0, 5: 0.2 }),
-} as const
+} as const;
 
 // --- Query embeddings for test probes ---
 
@@ -52,90 +54,90 @@ export const queryEmbeddings = {
   work: makeEmbedding({ 2: 1.0 }),
   hobbies: makeEmbedding({ 3: 1.0 }),
   orthogonal: makeEmbedding({ 15: 1.0 }), // matches nothing
-} as const
+} as const;
 
 // --- Memory fixture data ---
 
 export const memoryFixtures = [
   {
-    key: 'shellfish',
-    content: 'Alex is allergic to shellfish - severe reaction, carries an EpiPen',
-    category: 'health',
-    tags: 'allergy,medical',
+    key: "shellfish",
+    content: "Alex is allergic to shellfish - severe reaction, carries an EpiPen",
+    category: "health",
+    tags: "allergy,medical",
   },
   {
-    key: 'datapipe',
-    content: 'Alex works at DataPipe as a senior backend engineer',
-    category: 'work',
-    tags: 'job,career',
+    key: "datapipe",
+    content: "Alex works at DataPipe as a senior backend engineer",
+    category: "work",
+    tags: "job,career",
   },
   {
-    key: 'miso',
-    content: 'Alex has a cat named Miso who is 3 years old',
-    category: 'personal',
-    tags: 'pet,cat',
+    key: "miso",
+    content: "Alex has a cat named Miso who is 3 years old",
+    category: "personal",
+    tags: "pet,cat",
   },
   {
-    key: 'rust',
-    content: 'Alex is learning Rust, working through the Rustlings exercises',
-    category: 'learning',
-    tags: 'programming,rust',
+    key: "rust",
+    content: "Alex is learning Rust, working through the Rustlings exercises",
+    category: "learning",
+    tags: "programming,rust",
   },
   {
-    key: 'portland',
-    content: 'Alex lives in Portland, Oregon, near Hawthorne Boulevard',
-    category: 'personal',
-    tags: 'location,home',
+    key: "portland",
+    content: "Alex lives in Portland, Oregon, near Hawthorne Boulevard",
+    category: "personal",
+    tags: "location,home",
   },
   {
-    key: 'darkMode',
-    content: 'Alex prefers dark mode in all editors and uses Neovim',
-    category: 'preference',
-    tags: 'editor,tooling',
+    key: "darkMode",
+    content: "Alex prefers dark mode in all editors and uses Neovim",
+    category: "preference",
+    tags: "editor,tooling",
   },
   {
-    key: 'clickstream',
-    content: 'DataPipe processes real-time clickstream data using Kafka and Flink',
-    category: 'work',
-    tags: 'infrastructure,streaming',
+    key: "clickstream",
+    content: "DataPipe processes real-time clickstream data using Kafka and Flink",
+    category: "work",
+    tags: "infrastructure,streaming",
   },
   {
-    key: 'sarah',
+    key: "sarah",
     content: "Alex's girlfriend Sarah loves hiking and visits on weekends",
-    category: 'personal',
-    tags: 'relationship,social',
+    category: "personal",
+    tags: "relationship,social",
   },
-] as const
+] as const;
 
 // --- Observation fixture data ---
 
 export const observationFixtures = [
   {
-    content: 'User has a dentist appointment on March 5th at 9am',
-    priority: 'high' as const,
-    observation_date: '2024-01-15',
+    content: "User has a dentist appointment on March 5th at 9am",
+    priority: "high" as const,
+    observation_date: "2024-01-15",
   },
   {
-    content: 'User is learning Rust, finding the borrow checker tricky',
-    priority: 'medium' as const,
-    observation_date: '2024-01-15',
+    content: "User is learning Rust, finding the borrow checker tricky",
+    priority: "medium" as const,
+    observation_date: "2024-01-15",
   },
   {
-    content: 'User works at DataPipe doing Flink stream processing',
-    priority: 'medium' as const,
-    observation_date: '2024-01-14',
+    content: "User works at DataPipe doing Flink stream processing",
+    priority: "medium" as const,
+    observation_date: "2024-01-14",
   },
   {
     content: "User's cat Miso likes to sit on the keyboard",
-    priority: 'low' as const,
-    observation_date: '2024-01-15',
+    priority: "low" as const,
+    observation_date: "2024-01-15",
   },
   {
     content: "User's girlfriend Sarah is visiting next weekend",
-    priority: 'high' as const,
-    observation_date: '2024-01-16',
+    priority: "high" as const,
+    observation_date: "2024-01-16",
   },
-]
+];
 
 // --- Identity file content ---
 
@@ -153,76 +155,76 @@ Interests: Rust programming, hiking, board games
 Pets: Miso (cat, 3 years old)
 Partner: Sarah (graphic designer)
 Health: Severe shellfish allergy (carries EpiPen)`,
-}
+};
 
 // --- DB setup ---
 
 export async function setupDb(): Promise<Kysely<Database>> {
-  const { db } = createDb(':memory:')
-  await m001.up(db as Kysely<any>)
-  await m002.up(db as Kysely<any>)
-  await m004.up(db as Kysely<any>)
-  await m005.up(db as Kysely<any>)
-  await m006.up(db as Kysely<any>)
-  await m008.up(db as Kysely<any>)
-  return db
+  const { db } = createDb(":memory:");
+  await m001.up(db as Kysely<any>);
+  await m002.up(db as Kysely<any>);
+  await m004.up(db as Kysely<any>);
+  await m005.up(db as Kysely<any>);
+  await m006.up(db as Kysely<any>);
+  await m008.up(db as Kysely<any>);
+  return db;
 }
 
 // --- Seed functions ---
 
 export interface SeededMemories {
-  ids: Record<string, string>
+  ids: Record<string, string>;
 }
 
 export async function seedMemories(db: Kysely<Database>): Promise<SeededMemories> {
-  const ids: Record<string, string> = {}
+  const ids: Record<string, string> = {};
   for (const mem of memoryFixtures) {
     const stored = await storeMemory(db, {
       content: mem.content,
       category: mem.category,
       tags: mem.tags,
-      source: 'user',
+      source: "user",
       embedding: JSON.stringify(memoryEmbeddings[mem.key]),
-    })
-    ids[mem.key] = stored.id
+    });
+    ids[mem.key] = stored.id;
   }
-  return { ids }
+  return { ids };
 }
 
 export interface SeededGraph {
-  nodeIds: Record<string, string>
+  nodeIds: Record<string, string>;
 }
 
 export async function seedGraph(
   db: Kysely<Database>,
   memoryIds: Record<string, string>,
 ): Promise<SeededGraph> {
-  const alex = await upsertNode(db, { name: 'Alex', type: 'person', description: 'The user' })
+  const alex = await upsertNode(db, { name: "Alex", type: "person", description: "The user" });
   const miso = await upsertNode(db, {
-    name: 'Miso',
-    type: 'entity',
+    name: "Miso",
+    type: "entity",
     description: "Alex's cat, 3 years old",
-  })
+  });
   const portland = await upsertNode(db, {
-    name: 'Portland',
-    type: 'place',
-    description: 'City in Oregon',
-  })
+    name: "Portland",
+    type: "place",
+    description: "City in Oregon",
+  });
   const datapipe = await upsertNode(db, {
-    name: 'DataPipe',
-    type: 'entity',
-    description: 'Tech company, real-time data pipelines',
-  })
+    name: "DataPipe",
+    type: "entity",
+    description: "Tech company, real-time data pipelines",
+  });
   const rust = await upsertNode(db, {
-    name: 'Rust',
-    type: 'concept',
-    description: 'Programming language',
-  })
+    name: "Rust",
+    type: "concept",
+    description: "Programming language",
+  });
   const shellfish = await upsertNode(db, {
-    name: 'Shellfish',
-    type: 'concept',
-    description: 'Food allergen',
-  })
+    name: "Shellfish",
+    type: "concept",
+    description: "Food allergen",
+  });
 
   const nodeIds: Record<string, string> = {
     alex: alex.id,
@@ -231,59 +233,59 @@ export async function seedGraph(
     datapipe: datapipe.id,
     rust: rust.id,
     shellfish: shellfish.id,
-  }
+  };
 
   // Edges — each linked to the relevant memory
   await upsertEdge(db, {
     source_id: alex.id,
     target_id: miso.id,
-    relation: 'owns',
+    relation: "owns",
     memory_id: memoryIds.miso,
-  })
+  });
   await upsertEdge(db, {
     source_id: alex.id,
     target_id: portland.id,
-    relation: 'lives_in',
+    relation: "lives_in",
     memory_id: memoryIds.portland,
-  })
+  });
   await upsertEdge(db, {
     source_id: alex.id,
     target_id: datapipe.id,
-    relation: 'works_at',
+    relation: "works_at",
     memory_id: memoryIds.datapipe,
-  })
+  });
   await upsertEdge(db, {
     source_id: alex.id,
     target_id: rust.id,
-    relation: 'learning',
+    relation: "learning",
     memory_id: memoryIds.rust,
-  })
+  });
   await upsertEdge(db, {
     source_id: alex.id,
     target_id: shellfish.id,
-    relation: 'allergic_to',
+    relation: "allergic_to",
     memory_id: memoryIds.shellfish,
-  })
+  });
   // DataPipe→Portland edge with no memory_id
   await upsertEdge(db, {
     source_id: datapipe.id,
     target_id: portland.id,
-    relation: 'located_in',
-  })
+    relation: "located_in",
+  });
 
-  return { nodeIds }
+  return { nodeIds };
 }
 
 export async function seedObservations(
   db: Kysely<Database>,
   conversationId: string,
 ): Promise<string[]> {
-  const ids: string[] = []
+  const ids: string[] = [];
   for (const obs of observationFixtures) {
-    const id = nanoid()
-    ids.push(id)
+    const id = nanoid();
+    ids.push(id);
     await db
-      .insertInto('observations')
+      .insertInto("observations")
       .values({
         id,
         conversation_id: conversationId,
@@ -294,21 +296,85 @@ export async function seedObservations(
         token_count: Math.ceil(obs.content.length / 4),
         generation: 0,
       })
-      .execute()
+      .execute();
   }
-  return ids
+  return ids;
+}
+
+// --- Factory functions ---
+
+/** Create a test message (CairnMessage shape). */
+export function createTestMessage(overrides: Partial<CairnMessage> = {}): CairnMessage {
+  return {
+    id: "msg-test-1",
+    role: "user",
+    content: "Test message content",
+    created_at: "2024-01-15T00:00:00Z",
+    ...overrides,
+  };
+}
+
+/** Create a test observation (runtime Observation shape). */
+export function createTestObservation(overrides: Partial<Observation> = {}): Observation {
+  return {
+    id: "obs-test-1",
+    conversation_id: "conv-test-1",
+    content: "Test observation content",
+    priority: "medium",
+    observation_date: "2024-01-15",
+    source_message_ids: [],
+    token_count: 10,
+    generation: 0,
+    superseded_at: null,
+    created_at: "2024-01-15T00:00:00Z",
+    ...overrides,
+  };
 }
 
+/** Create test ProcessMessageOpts. */
+export function createTestProcessOpts(
+  overrides: Partial<ProcessMessageOpts> = {},
+): ProcessMessageOpts {
+  return {
+    source: "cli",
+    externalId: null,
+    ...overrides,
+  };
+}
+
+/** Create a test tool call result entry (as returned in AgentResponse.toolCalls). */
+export function createTestToolResult(
+  overrides: Partial<AgentResponse["toolCalls"][number]> = {},
+): AgentResponse["toolCalls"][number] {
+  return {
+    name: "test_tool",
+    args: {},
+    result: "Tool executed successfully",
+    ...overrides,
+  };
+}
+
+/** Create a test AgentResponse. */
+export function createTestAgentResponse(overrides: Partial<AgentResponse> = {}): AgentResponse {
+  return {
+    text: "Test response",
+    toolCalls: [],
+    ...overrides,
+  };
+}
+
+// --- Seed functions ---
+
 export async function seedAll(
   db: Kysely<Database>,
   conversationId: string,
 ): Promise<{
-  memoryIds: Record<string, string>
-  nodeIds: Record<string, string>
-  observationIds: string[]
+  memoryIds: Record<string, string>;
+  nodeIds: Record<string, string>;
+  observationIds: string[];
 }> {
-  const { ids: memoryIds } = await seedMemories(db)
-  const { nodeIds } = await seedGraph(db, memoryIds)
-  const observationIds = await seedObservations(db, conversationId)
-  return { memoryIds, nodeIds, observationIds }
+  const { ids: memoryIds } = await seedMemories(db);
+  const { nodeIds } = await seedGraph(db, memoryIds);
+  const observationIds = await seedObservations(db, conversationId);
+  return { memoryIds, nodeIds, observationIds };
 }
diff --git a/apps/construct/src/__tests__/graph-recall.test.ts b/apps/construct/src/__tests__/graph-recall.test.ts
index d1f206c..f158bb9 100644
--- a/apps/construct/src/__tests__/graph-recall.test.ts
+++ b/apps/construct/src/__tests__/graph-recall.test.ts
@@ -8,32 +8,27 @@
  * No API key needed — uses synthetic data.
  */
 
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import type { Database } from '../db/schema.js'
-import {
-  recallMemories,
-  searchNodes,
-  traverseGraph,
-  getRelatedMemoryIds,
-} from '@repo/cairn'
-import { setupDb, seedMemories, seedGraph } from './fixtures.js'
-
-let db: Kysely<Database>
-let memIds: Record<string, string>
-let nodeIds: Record<string, string>
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import type { Database } from "../db/schema.js";
+import { recallMemories, searchNodes, traverseGraph, getRelatedMemoryIds } from "@repo/cairn";
+import { setupDb, seedMemories, seedGraph } from "./fixtures.js";
+
+let db: Kysely<Database>;
+let memIds: Record<string, string>;
+let nodeIds: Record<string, string>;
 
 beforeEach(async () => {
-  db = await setupDb()
-  const seeded = await seedMemories(db)
-  memIds = seeded.ids
-  const graph = await seedGraph(db, memIds)
-  nodeIds = graph.nodeIds
-})
+  db = await setupDb();
+  const seeded = await seedMemories(db);
+  memIds = seeded.ids;
+  const graph = await seedGraph(db, memIds);
+  nodeIds = graph.nodeIds;
+});
 
 afterEach(async () => {
-  await db.destroy()
-})
+  await db.destroy();
+});
 
 /**
  * Replicate the graph expansion logic from memory_recall tool.
@@ -42,139 +37,139 @@ afterEach(async () => {
 async function graphExpand(
   query: string,
 ): Promise<{ relatedMemIds: string[]; matchedNodeIds: string[] }> {
-  const nodes = await searchNodes(db, query, 5)
-  if (nodes.length === 0) return { relatedMemIds: [], matchedNodeIds: [] }
+  const nodes = await searchNodes(db, query, 5);
+  if (nodes.length === 0) return { relatedMemIds: [], matchedNodeIds: [] };
 
-  const allNodeIds = new Set<string>()
+  const allNodeIds = new Set<string>();
   for (const node of nodes) {
-    allNodeIds.add(node.id)
-    const traversed = await traverseGraph(db, node.id, 2)
+    allNodeIds.add(node.id);
+    const traversed = await traverseGraph(db, node.id, 2);
     for (const t of traversed) {
-      allNodeIds.add(t.node.id)
+      allNodeIds.add(t.node.id);
     }
   }
 
-  const relatedMemIds = await getRelatedMemoryIds(db, [...allNodeIds])
-  return { relatedMemIds, matchedNodeIds: nodes.map((n) => n.id) }
+  const relatedMemIds = await getRelatedMemoryIds(db, [...allNodeIds]);
+  return { relatedMemIds, matchedNodeIds: nodes.map((n) => n.id) };
 }
 
-describe('graph-augmented recall — graph surfaces related memories', () => {
-  it('Portland search traverses to DataPipe via Alex', async () => {
+describe("graph-augmented recall — graph surfaces related memories", () => {
+  it("Portland search traverses to DataPipe via Alex", async () => {
     // Search "portland" → finds Portland node → traverses Alex→DataPipe
     // → surfaces DataPipe memory that wouldn't appear in text search for "portland"
-    const { relatedMemIds, matchedNodeIds } = await graphExpand('portland')
+    const { relatedMemIds, matchedNodeIds } = await graphExpand("portland");
 
     // Portland node should be matched
-    expect(matchedNodeIds).toContain(nodeIds.portland)
+    expect(matchedNodeIds).toContain(nodeIds.portland);
 
     // Related memories should include portland memory (direct edge)
-    expect(relatedMemIds).toContain(memIds.portland)
+    expect(relatedMemIds).toContain(memIds.portland);
 
     // And DataPipe memory (via Alex→Portland edge + Alex→DataPipe edge)
-    expect(relatedMemIds).toContain(memIds.datapipe)
-  })
+    expect(relatedMemIds).toContain(memIds.datapipe);
+  });
 
-  it('Miso search reaches all Alex-connected memories', async () => {
-    const { relatedMemIds } = await graphExpand('miso')
+  it("Miso search reaches all Alex-connected memories", async () => {
+    const { relatedMemIds } = await graphExpand("miso");
 
     // Direct: Alex→Miso edge → miso memory
-    expect(relatedMemIds).toContain(memIds.miso)
+    expect(relatedMemIds).toContain(memIds.miso);
 
     // Via Alex hub: should reach other Alex-connected memories
-    expect(relatedMemIds).toContain(memIds.portland)
-    expect(relatedMemIds).toContain(memIds.datapipe)
-    expect(relatedMemIds).toContain(memIds.rust)
-    expect(relatedMemIds).toContain(memIds.shellfish)
-  })
+    expect(relatedMemIds).toContain(memIds.portland);
+    expect(relatedMemIds).toContain(memIds.datapipe);
+    expect(relatedMemIds).toContain(memIds.rust);
+    expect(relatedMemIds).toContain(memIds.shellfish);
+  });
 
-  it('DataPipe search reaches Portland and Alex-connected memories', async () => {
-    const { relatedMemIds, matchedNodeIds } = await graphExpand('datapipe')
+  it("DataPipe search reaches Portland and Alex-connected memories", async () => {
+    const { relatedMemIds, matchedNodeIds } = await graphExpand("datapipe");
 
-    expect(matchedNodeIds).toContain(nodeIds.datapipe)
+    expect(matchedNodeIds).toContain(nodeIds.datapipe);
 
     // Direct: Alex→DataPipe → datapipe memory
-    expect(relatedMemIds).toContain(memIds.datapipe)
+    expect(relatedMemIds).toContain(memIds.datapipe);
 
     // Via DataPipe→Portland edge (no memory) + Alex→Portland edge → portland memory
-    expect(relatedMemIds).toContain(memIds.portland)
-  })
-})
+    expect(relatedMemIds).toContain(memIds.portland);
+  });
+});
 
-describe('graph-augmented recall — merge with direct recall', () => {
-  it('combining direct recall + graph expansion produces no duplicates', async () => {
+describe("graph-augmented recall — merge with direct recall", () => {
+  it("combining direct recall + graph expansion produces no duplicates", async () => {
     // Direct recall for "Miso" → finds cat memory via FTS/keyword
-    const directResults = await recallMemories(db, 'Miso')
-    const directIds = new Set(directResults.map((r) => r.id))
+    const directResults = await recallMemories(db, "Miso");
+    const directIds = new Set(directResults.map((r) => r.id));
 
     // Graph expansion for "miso"
-    const { relatedMemIds } = await graphExpand('miso')
+    const { relatedMemIds } = await graphExpand("miso");
 
     // Merge like the tool does: direct first, then graph results not in direct
-    const seen = new Set(directIds)
-    const graphOnly = relatedMemIds.filter((id) => !seen.has(id))
+    const seen = new Set(directIds);
+    const graphOnly = relatedMemIds.filter((id) => !seen.has(id));
 
     // Cat memory should be in direct results
-    expect(directIds.has(memIds.miso)).toBe(true)
+    expect(directIds.has(memIds.miso)).toBe(true);
 
     // Graph should add memories not in direct results
     // (e.g. datapipe, portland, etc. depending on what FTS matched)
-    const allIds = [...directIds, ...graphOnly]
-    const uniqueIds = new Set(allIds)
-    expect(uniqueIds.size).toBe(allIds.length) // no duplicates
-  })
-})
+    const allIds = [...directIds, ...graphOnly];
+    const uniqueIds = new Set(allIds);
+    expect(uniqueIds.size).toBe(allIds.length); // no duplicates
+  });
+});
 
-describe('graph-augmented recall — isolated nodes', () => {
-  it('query matching no nodes returns empty', async () => {
-    const { relatedMemIds, matchedNodeIds } = await graphExpand('xyzzy_nonexistent')
+describe("graph-augmented recall — isolated nodes", () => {
+  it("query matching no nodes returns empty", async () => {
+    const { relatedMemIds, matchedNodeIds } = await graphExpand("xyzzy_nonexistent");
 
-    expect(matchedNodeIds).toHaveLength(0)
-    expect(relatedMemIds).toHaveLength(0)
-  })
+    expect(matchedNodeIds).toHaveLength(0);
+    expect(relatedMemIds).toHaveLength(0);
+  });
 
-  it('a node with no edges returns only its own edge memories', async () => {
+  it("a node with no edges returns only its own edge memories", async () => {
     // Create an isolated node with no connections
-    const { upsertNode } = await import('@repo/cairn')
+    const { upsertNode } = await import("@repo/cairn");
     const isolated = await upsertNode(db, {
-      name: 'IsolatedThing',
-      type: 'concept',
-      description: 'A thing with no connections',
-    })
+      name: "IsolatedThing",
+      type: "concept",
+      description: "A thing with no connections",
+    });
 
-    const nodes = await searchNodes(db, 'isolatedthing', 5)
-    expect(nodes.length).toBeGreaterThan(0)
+    const nodes = await searchNodes(db, "isolatedthing", 5);
+    expect(nodes.length).toBeGreaterThan(0);
 
     // Traverse from isolated node — should find no neighbors
-    const traversed = await traverseGraph(db, isolated.id, 2)
-    expect(traversed).toHaveLength(0)
+    const traversed = await traverseGraph(db, isolated.id, 2);
+    expect(traversed).toHaveLength(0);
 
     // getRelatedMemoryIds with just the isolated node — no edges, no memories
-    const memoryIds = await getRelatedMemoryIds(db, [isolated.id])
-    expect(memoryIds).toHaveLength(0)
-  })
-})
+    const memoryIds = await getRelatedMemoryIds(db, [isolated.id]);
+    expect(memoryIds).toHaveLength(0);
+  });
+});
 
-describe('traverseGraph depth control', () => {
-  it('depth=1 reaches direct neighbors only', async () => {
+describe("traverseGraph depth control", () => {
+  it("depth=1 reaches direct neighbors only", async () => {
     // From Portland: depth=1 should reach Alex and DataPipe (direct edges)
-    const traversed = await traverseGraph(db, nodeIds.portland, 1)
-    const reachedIds = traversed.map((t) => t.node.id)
+    const traversed = await traverseGraph(db, nodeIds.portland, 1);
+    const reachedIds = traversed.map((t) => t.node.id);
 
-    expect(reachedIds).toContain(nodeIds.alex)
-    expect(reachedIds).toContain(nodeIds.datapipe) // DataPipe→Portland edge
+    expect(reachedIds).toContain(nodeIds.alex);
+    expect(reachedIds).toContain(nodeIds.datapipe); // DataPipe→Portland edge
 
     // Miso is 2 hops away (Portland→Alex→Miso) — should NOT be reached at depth=1
-    expect(reachedIds).not.toContain(nodeIds.miso)
-  })
+    expect(reachedIds).not.toContain(nodeIds.miso);
+  });
 
-  it('depth=2 reaches 2-hop neighbors', async () => {
+  it("depth=2 reaches 2-hop neighbors", async () => {
     // From Portland: depth=2 should reach Alex's other connections
-    const traversed = await traverseGraph(db, nodeIds.portland, 2)
-    const reachedIds = traversed.map((t) => t.node.id)
-
-    expect(reachedIds).toContain(nodeIds.alex)
-    expect(reachedIds).toContain(nodeIds.miso) // 2 hops: Portland→Alex→Miso
-    expect(reachedIds).toContain(nodeIds.rust)
-    expect(reachedIds).toContain(nodeIds.shellfish)
-  })
-})
+    const traversed = await traverseGraph(db, nodeIds.portland, 2);
+    const reachedIds = traversed.map((t) => t.node.id);
+
+    expect(reachedIds).toContain(nodeIds.alex);
+    expect(reachedIds).toContain(nodeIds.miso); // 2 hops: Portland→Alex→Miso
+    expect(reachedIds).toContain(nodeIds.rust);
+    expect(reachedIds).toContain(nodeIds.shellfish);
+  });
+});
diff --git a/apps/construct/src/__tests__/memory/context.test.ts b/apps/construct/src/__tests__/memory/context.test.ts
index d3a5304..424ff20 100644
--- a/apps/construct/src/__tests__/memory/context.test.ts
+++ b/apps/construct/src/__tests__/memory/context.test.ts
@@ -1,144 +1,177 @@
-import { describe, it, expect } from 'vitest'
-import { renderObservations, buildContextWindow, renderObservationsWithBudget } from '@repo/cairn'
-import type { Observation } from '@repo/cairn'
+import { describe, it, expect } from "vitest";
+import { renderObservations, buildContextWindow, renderObservationsWithBudget } from "@repo/cairn";
+import type { Observation } from "@repo/cairn";
 
 function makeObs(overrides: Partial<Observation> = {}): Observation {
   return {
-    id: 'obs-1',
-    conversation_id: 'conv-1',
-    content: 'Test observation',
-    priority: 'medium',
-    observation_date: '2024-01-15',
+    id: "obs-1",
+    conversation_id: "conv-1",
+    content: "Test observation",
+    priority: "medium",
+    observation_date: "2024-01-15",
     source_message_ids: [],
     token_count: 10,
     generation: 0,
     superseded_at: null,
-    created_at: '2024-01-15T00:00:00',
+    created_at: "2024-01-15T00:00:00",
     ...overrides,
-  }
+  };
 }
 
-describe('renderObservations', () => {
-  it('renders empty string for no observations', () => {
-    expect(renderObservations([])).toBe('')
-  })
+describe("renderObservations", () => {
+  it("renders empty string for no observations", () => {
+    expect(renderObservations([])).toBe("");
+  });
 
-  it('renders medium priority with dash prefix', () => {
-    const result = renderObservations([makeObs({ content: 'A fact', priority: 'medium' })])
-    expect(result).toBe('- [2024-01-15] A fact')
-  })
+  it("renders medium priority with dash prefix", () => {
+    const result = renderObservations([makeObs({ content: "A fact", priority: "medium" })]);
+    expect(result).toBe("- [2024-01-15] A fact");
+  });
 
-  it('renders high priority with ! prefix', () => {
-    const result = renderObservations([makeObs({ content: 'Important', priority: 'high' })])
-    expect(result).toBe('! [2024-01-15] Important')
-  })
+  it("renders high priority with ! prefix", () => {
+    const result = renderObservations([makeObs({ content: "Important", priority: "high" })]);
+    expect(result).toBe("! [2024-01-15] Important");
+  });
 
-  it('renders low priority with ~ prefix', () => {
-    const result = renderObservations([makeObs({ content: 'Minor', priority: 'low' })])
-    expect(result).toBe('~ [2024-01-15] Minor')
-  })
+  it("renders low priority with ~ prefix", () => {
+    const result = renderObservations([makeObs({ content: "Minor", priority: "low" })]);
+    expect(result).toBe("~ [2024-01-15] Minor");
+  });
 
-  it('renders multiple observations separated by newlines', () => {
+  it("renders multiple observations separated by newlines", () => {
     const obs = [
-      makeObs({ id: '1', content: 'First', priority: 'high', observation_date: '2024-01-15' }),
-      makeObs({ id: '2', content: 'Second', priority: 'medium', observation_date: '2024-01-16' }),
-    ]
-    const result = renderObservations(obs)
-    expect(result).toBe('! [2024-01-15] First\n- [2024-01-16] Second')
-  })
-})
+      makeObs({ id: "1", content: "First", priority: "high", observation_date: "2024-01-15" }),
+      makeObs({ id: "2", content: "Second", priority: "medium", observation_date: "2024-01-16" }),
+    ];
+    const result = renderObservations(obs);
+    expect(result).toBe("! [2024-01-15] First\n- [2024-01-16] Second");
+  });
+});
 
-describe('renderObservationsWithBudget', () => {
-  it('returns empty result for no observations', () => {
-    const result = renderObservationsWithBudget([])
-    expect(result).toEqual({ text: '', included: 0, evicted: 0, totalTokens: 0 })
-  })
+describe("renderObservationsWithBudget", () => {
+  it("returns empty result for no observations", () => {
+    const result = renderObservationsWithBudget([]);
+    expect(result).toEqual({ text: "", included: 0, evicted: 0, totalTokens: 0 });
+  });
 
-  it('includes all observations when under budget', () => {
+  it("includes all observations when under budget", () => {
     const obs = [
-      makeObs({ id: '1', content: 'First', token_count: 10 }),
-      makeObs({ id: '2', content: 'Second', token_count: 10 }),
-    ]
-    const result = renderObservationsWithBudget(obs, 100)
-    expect(result.included).toBe(2)
-    expect(result.evicted).toBe(0)
-    expect(result.totalTokens).toBe(20)
-    expect(result.text).not.toContain('omitted')
+      makeObs({ id: "1", content: "First", token_count: 10 }),
+      makeObs({ id: "2", content: "Second", token_count: 10 }),
+    ];
+    const result = renderObservationsWithBudget(obs, 100);
+    expect(result.included).toBe(2);
+    expect(result.evicted).toBe(0);
+    expect(result.totalTokens).toBe(20);
+    expect(result.text).not.toContain("omitted");
     // Should match unbounded render
-    expect(result.text).toBe(renderObservations(obs))
-  })
+    expect(result.text).toBe(renderObservations(obs));
+  });
 
-  it('evicts low-priority observations first when over budget', () => {
+  it("evicts low-priority observations first when over budget", () => {
     const obs = [
-      makeObs({ id: '1', content: 'Important', priority: 'high', token_count: 50 }),
-      makeObs({ id: '2', content: 'Meh', priority: 'low', token_count: 50 }),
-      makeObs({ id: '3', content: 'Normal', priority: 'medium', token_count: 50 }),
-    ]
-    const result = renderObservationsWithBudget(obs, 100)
-    expect(result.included).toBe(2)
-    expect(result.evicted).toBe(1)
-    expect(result.text).toContain('Important')
-    expect(result.text).toContain('Normal')
-    expect(result.text).not.toContain('Meh')
-  })
+      makeObs({ id: "1", content: "Important", priority: "high", token_count: 50 }),
+      makeObs({ id: "2", content: "Meh", priority: "low", token_count: 50 }),
+      makeObs({ id: "3", content: "Normal", priority: "medium", token_count: 50 }),
+    ];
+    const result = renderObservationsWithBudget(obs, 100);
+    expect(result.included).toBe(2);
+    expect(result.evicted).toBe(1);
+    expect(result.text).toContain("Important");
+    expect(result.text).toContain("Normal");
+    expect(result.text).not.toContain("Meh");
+  });
 
-  it('uses recency as tiebreaker within same priority', () => {
+  it("uses recency as tiebreaker within same priority", () => {
     const obs = [
-      makeObs({ id: '1', content: 'Old', priority: 'medium', token_count: 60, created_at: '2024-01-01T00:00:00' }),
-      makeObs({ id: '2', content: 'New', priority: 'medium', token_count: 60, created_at: '2024-01-15T00:00:00' }),
-    ]
-    const result = renderObservationsWithBudget(obs, 60)
-    expect(result.included).toBe(1)
-    expect(result.text).toContain('New')
-    expect(result.text).not.toContain('Old')
-  })
+      makeObs({
+        id: "1",
+        content: "Old",
+        priority: "medium",
+        token_count: 60,
+        created_at: "2024-01-01T00:00:00",
+      }),
+      makeObs({
+        id: "2",
+        content: "New",
+        priority: "medium",
+        token_count: 60,
+        created_at: "2024-01-15T00:00:00",
+      }),
+    ];
+    const result = renderObservationsWithBudget(obs, 60);
+    expect(result.included).toBe(1);
+    expect(result.text).toContain("New");
+    expect(result.text).not.toContain("Old");
+  });
 
-  it('re-sorts included observations chronologically', () => {
+  it("re-sorts included observations chronologically", () => {
     const obs = [
-      makeObs({ id: '1', content: 'A', priority: 'high', token_count: 10, created_at: '2024-01-01T00:00:00', observation_date: '2024-01-01' }),
-      makeObs({ id: '2', content: 'B', priority: 'low', token_count: 10, created_at: '2024-01-02T00:00:00', observation_date: '2024-01-02' }),
-      makeObs({ id: '3', content: 'C', priority: 'high', token_count: 10, created_at: '2024-01-03T00:00:00', observation_date: '2024-01-03' }),
-    ]
-    const result = renderObservationsWithBudget(obs, 30)
+      makeObs({
+        id: "1",
+        content: "A",
+        priority: "high",
+        token_count: 10,
+        created_at: "2024-01-01T00:00:00",
+        observation_date: "2024-01-01",
+      }),
+      makeObs({
+        id: "2",
+        content: "B",
+        priority: "low",
+        token_count: 10,
+        created_at: "2024-01-02T00:00:00",
+        observation_date: "2024-01-02",
+      }),
+      makeObs({
+        id: "3",
+        content: "C",
+        priority: "high",
+        token_count: 10,
+        created_at: "2024-01-03T00:00:00",
+        observation_date: "2024-01-03",
+      }),
+    ];
+    const result = renderObservationsWithBudget(obs, 30);
     // All fit — should be in created_at order: A, B, C
-    const lines = result.text.split('\n')
-    expect(lines[0]).toContain('A')
-    expect(lines[1]).toContain('B')
-    expect(lines[2]).toContain('C')
-  })
+    const lines = result.text.split("\n");
+    expect(lines[0]).toContain("A");
+    expect(lines[1]).toContain("B");
+    expect(lines[2]).toContain("C");
+  });
 
-  it('always includes at least one observation even if it exceeds budget', () => {
-    const obs = [makeObs({ id: '1', content: 'Huge observation', token_count: 10000 })]
-    const result = renderObservationsWithBudget(obs, 100)
-    expect(result.included).toBe(1)
-    expect(result.evicted).toBe(0)
-    expect(result.totalTokens).toBe(10000)
-    expect(result.text).toContain('Huge observation')
-  })
+  it("always includes at least one observation even if it exceeds budget", () => {
+    const obs = [makeObs({ id: "1", content: "Huge observation", token_count: 10000 })];
+    const result = renderObservationsWithBudget(obs, 100);
+    expect(result.included).toBe(1);
+    expect(result.evicted).toBe(0);
+    expect(result.totalTokens).toBe(10000);
+    expect(result.text).toContain("Huge observation");
+  });
 
-  it('falls back to estimateTokens when token_count is 0', () => {
+  it("falls back to estimateTokens when token_count is 0", () => {
     const obs = [
-      makeObs({ id: '1', content: 'x'.repeat(400), token_count: 0 }), // ~100 tokens
-      makeObs({ id: '2', content: 'y'.repeat(400), token_count: 0 }), // ~100 tokens
-    ]
-    const result = renderObservationsWithBudget(obs, 100)
-    expect(result.included).toBe(1)
-    expect(result.evicted).toBe(1)
-  })
-})
+      makeObs({ id: "1", content: "x".repeat(400), token_count: 0 }), // ~100 tokens
+      makeObs({ id: "2", content: "y".repeat(400), token_count: 0 }), // ~100 tokens
+    ];
+    const result = renderObservationsWithBudget(obs, 100);
+    expect(result.included).toBe(1);
+    expect(result.evicted).toBe(1);
+  });
+});
 
-describe('buildContextWindow', () => {
-  it('returns empty observations and active messages', () => {
-    const result = buildContextWindow([], [])
-    expect(result.observations).toBe('')
-    expect(result.activeMessages).toEqual([])
-  })
+describe("buildContextWindow", () => {
+  it("returns empty observations and active messages", () => {
+    const result = buildContextWindow([], []);
+    expect(result.observations).toBe("");
+    expect(result.activeMessages).toEqual([]);
+  });
 
-  it('passes through observations and messages', () => {
-    const obs = [makeObs({ content: 'Fact' })]
-    const msgs = [{ role: 'user', content: 'hello' }]
-    const result = buildContextWindow(obs, msgs)
-    expect(result.observations).toContain('Fact')
-    expect(result.activeMessages).toHaveLength(1)
-  })
-})
+  it("passes through observations and messages", () => {
+    const obs = [makeObs({ content: "Fact" })];
+    const msgs = [{ role: "user", content: "hello" }];
+    const result = buildContextWindow(obs, msgs);
+    expect(result.observations).toContain("Fact");
+    expect(result.activeMessages).toHaveLength(1);
+  });
+});
diff --git a/apps/construct/src/__tests__/memory/graph-quality.ai.test.ts b/apps/construct/src/__tests__/memory/graph-quality.ai.test.ts
index 2b183dd..8536c9f 100644
--- a/apps/construct/src/__tests__/memory/graph-quality.ai.test.ts
+++ b/apps/construct/src/__tests__/memory/graph-quality.ai.test.ts
@@ -6,11 +6,11 @@
  * Run: OPENROUTER_API_KEY= npm run test:ai
  */
 
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import type { Database } from '../../db/schema.js'
-import type { WorkerModelConfig } from '@repo/cairn'
-import { setupDb } from '../../__tests__/fixtures.js'
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import type { Database } from "../../db/schema.js";
+import type { WorkerModelConfig } from "@repo/cairn";
+import { setupDb } from "../../__tests__/fixtures.js";
 import {
   storeMemory,
   processMemoryForGraph,
@@ -18,179 +18,169 @@ import {
   traverseGraph,
   getRelatedMemoryIds,
   getNodeEdges,
-} from '@repo/cairn'
+} from "@repo/cairn";
 
-const API_KEY = process.env.OPENROUTER_API_KEY ?? ''
+const API_KEY = process.env.OPENROUTER_API_KEY ?? "";
 
 const WORKER_CONFIG: WorkerModelConfig = {
   apiKey: API_KEY,
-  model: 'google/gemini-2.5-flash-lite',
-  baseUrl: 'https://openrouter.ai/api/v1/chat/completions',
-}
+  model: "google/gemini-2.5-flash-lite",
+  baseUrl: "https://openrouter.ai/api/v1/chat/completions",
+};
 
-const shouldRun = API_KEY.length > 0
-const describeGraph = shouldRun ? describe : describe.skip
+const shouldRun = API_KEY.length > 0;
+const describeGraph = shouldRun ? describe : describe.skip;
 
-let db: Kysely<Database>
+let db: Kysely<Database>;
 
-describeGraph('graph quality — real extraction', () => {
+describeGraph("graph quality — real extraction", () => {
   beforeEach(async () => {
-    db = await setupDb()
-  })
+    db = await setupDb();
+  });
 
   afterEach(async () => {
-    await db.destroy()
-  })
+    await db.destroy();
+  });
 
   // ── Accumulation: 5 memories → connected graph ──────────────────
 
-  it('builds a connected hub-and-spoke graph from related memories', async () => {
+  it("builds a connected hub-and-spoke graph from related memories", async () => {
     const inputs = [
-      'Alex works at DataPipe as a senior backend engineer',
-      'Alex has a cat named Miso who is 3 years old',
-      'Alex is allergic to shellfish, carries an EpiPen',
-      'Alex lives in Portland, Oregon',
-      'Alex is learning Rust programming',
-    ]
-
-    const memoryIds: string[] = []
+      "Alex works at DataPipe as a senior backend engineer",
+      "Alex has a cat named Miso who is 3 years old",
+      "Alex is allergic to shellfish, carries an EpiPen",
+      "Alex lives in Portland, Oregon",
+      "Alex is learning Rust programming",
+    ];
+
+    const memoryIds: string[] = [];
     for (const content of inputs) {
-      const mem = await storeMemory(db, { content, source: 'user' })
-      memoryIds.push(mem.id)
-      await processMemoryForGraph(db, WORKER_CONFIG, mem.id, mem.content)
+      const mem = await storeMemory(db, { content, source: "user" });
+      memoryIds.push(mem.id);
+      await processMemoryForGraph(db, WORKER_CONFIG, mem.id, mem.content);
     }
 
-    const allNodes = await db.selectFrom('graph_nodes').selectAll().execute()
-    const allEdges = await db.selectFrom('graph_edges').selectAll().execute()
+    const allNodes = await db.selectFrom("graph_nodes").selectAll().execute();
+    const allEdges = await db.selectFrom("graph_edges").selectAll().execute();
 
     // -- Reasonable node count (not too sparse, not explosive) --
-    expect(allNodes.length).toBeGreaterThanOrEqual(4)
-    expect(allNodes.length).toBeLessThanOrEqual(20)
+    expect(allNodes.length).toBeGreaterThanOrEqual(4);
+    expect(allNodes.length).toBeLessThanOrEqual(20);
 
     // -- Alex should exist as a person and act as hub --
-    const alex = await findNodeByName(db, 'alex')
-    expect(alex).toBeDefined()
-    expect(alex!.node_type).toBe('person')
+    const alex = await findNodeByName(db, "alex");
+    expect(alex).toBeDefined();
+    expect(alex!.node_type).toBe("person");
 
-    const alexEdges = allEdges.filter(
-      (e) => e.source_id === alex!.id || e.target_id === alex!.id,
-    )
+    const alexEdges = allEdges.filter((e) => e.source_id === alex!.id || e.target_id === alex!.id);
     // Alex should connect to at least 3 distinct entities
-    expect(alexEdges.length).toBeGreaterThanOrEqual(3)
+    expect(alexEdges.length).toBeGreaterThanOrEqual(3);
 
     // -- Every edge should carry a memory_id --
     for (const edge of allEdges) {
-      expect(edge.memory_id).toBeTruthy()
+      expect(edge.memory_id).toBeTruthy();
     }
 
     // -- Traversal from Alex reaches the neighborhood --
-    const traversed = await traverseGraph(db, alex!.id, 2)
-    expect(traversed.length).toBeGreaterThanOrEqual(3)
+    const traversed = await traverseGraph(db, alex!.id, 2);
+    expect(traversed.length).toBeGreaterThanOrEqual(3);
 
     // -- Most memory IDs should be reachable via graph --
-    const reachableNodeIds = [alex!.id, ...traversed.map((t) => t.node.id)]
-    const relatedMemIds = await getRelatedMemoryIds(db, reachableNodeIds)
+    const reachableNodeIds = [alex!.id, ...traversed.map((t) => t.node.id)];
+    const relatedMemIds = await getRelatedMemoryIds(db, reachableNodeIds);
     // At least 3 of our 5 memories should be graph-linked
-    expect(relatedMemIds.length).toBeGreaterThanOrEqual(3)
+    expect(relatedMemIds.length).toBeGreaterThanOrEqual(3);
 
-    const linkedSet = new Set(relatedMemIds)
-    const linkedCount = memoryIds.filter((id) => linkedSet.has(id)).length
-    expect(linkedCount).toBeGreaterThanOrEqual(3)
-  }, 60_000)
+    const linkedSet = new Set(relatedMemIds);
+    const linkedCount = memoryIds.filter((id) => linkedSet.has(id)).length;
+    expect(linkedCount).toBeGreaterThanOrEqual(3);
+  }, 60_000);
 
   // ── Deduplication: same entity across memories → one node ──────
 
-  it('deduplicates entities mentioned across multiple memories', async () => {
+  it("deduplicates entities mentioned across multiple memories", async () => {
     const mem1 = await storeMemory(db, {
-      content: 'Alex works at DataPipe',
-      source: 'user',
-    })
-    await processMemoryForGraph(db, WORKER_CONFIG, mem1.id, mem1.content)
+      content: "Alex works at DataPipe",
+      source: "user",
+    });
+    await processMemoryForGraph(db, WORKER_CONFIG, mem1.id, mem1.content);
 
     const mem2 = await storeMemory(db, {
-      content: 'Alex has a cat named Miso',
-      source: 'user',
-    })
-    await processMemoryForGraph(db, WORKER_CONFIG, mem2.id, mem2.content)
+      content: "Alex has a cat named Miso",
+      source: "user",
+    });
+    await processMemoryForGraph(db, WORKER_CONFIG, mem2.id, mem2.content);
 
     // Alex should be one node, not two
-    const allNodes = await db.selectFrom('graph_nodes').selectAll().execute()
-    const alexNodes = allNodes.filter((n) => n.name === 'alex')
-    expect(alexNodes).toHaveLength(1)
+    const allNodes = await db.selectFrom("graph_nodes").selectAll().execute();
+    const alexNodes = allNodes.filter((n) => n.name === "alex");
+    expect(alexNodes).toHaveLength(1);
 
     // But Alex should have edges from both memories
-    const alexEdges = await getNodeEdges(db, alexNodes[0].id)
-    const edgeMemoryIds = new Set(
-      alexEdges.map((e) => e.memory_id).filter(Boolean),
-    )
-    expect(edgeMemoryIds.size).toBeGreaterThanOrEqual(2)
-    expect(edgeMemoryIds.has(mem1.id)).toBe(true)
-    expect(edgeMemoryIds.has(mem2.id)).toBe(true)
-  }, 30_000)
+    const alexEdges = await getNodeEdges(db, alexNodes[0].id);
+    const edgeMemoryIds = new Set(alexEdges.map((e) => e.memory_id).filter(Boolean));
+    expect(edgeMemoryIds.size).toBeGreaterThanOrEqual(2);
+    expect(edgeMemoryIds.has(mem1.id)).toBe(true);
+    expect(edgeMemoryIds.has(mem2.id)).toBe(true);
+  }, 30_000);
 
   // ── Cross-topic discovery via hub ──────────────────────────────
 
-  it('graph bridges unrelated topics through a shared entity', async () => {
+  it("graph bridges unrelated topics through a shared entity", async () => {
     // Pet fact and health fact — completely different topics,
     // connected only through "Alex"
     const petMem = await storeMemory(db, {
-      content: 'Alex has a cat named Miso',
-      source: 'user',
-    })
-    await processMemoryForGraph(db, WORKER_CONFIG, petMem.id, petMem.content)
+      content: "Alex has a cat named Miso",
+      source: "user",
+    });
+    await processMemoryForGraph(db, WORKER_CONFIG, petMem.id, petMem.content);
 
     const healthMem = await storeMemory(db, {
-      content: 'Alex is severely allergic to shellfish',
-      source: 'user',
-    })
-    await processMemoryForGraph(
-      db,
-      WORKER_CONFIG,
-      healthMem.id,
-      healthMem.content,
-    )
+      content: "Alex is severely allergic to shellfish",
+      source: "user",
+    });
+    await processMemoryForGraph(db, WORKER_CONFIG, healthMem.id, healthMem.content);
 
     // Both memories should be reachable from the Alex hub
-    const alex = await findNodeByName(db, 'alex')
-    expect(alex).toBeDefined()
+    const alex = await findNodeByName(db, "alex");
+    expect(alex).toBeDefined();
 
-    const traversed = await traverseGraph(db, alex!.id, 1)
-    const neighborhood = [alex!.id, ...traversed.map((t) => t.node.id)]
-    const relatedMemIds = await getRelatedMemoryIds(db, neighborhood)
+    const traversed = await traverseGraph(db, alex!.id, 1);
+    const neighborhood = [alex!.id, ...traversed.map((t) => t.node.id)];
+    const relatedMemIds = await getRelatedMemoryIds(db, neighborhood);
 
     // The key assertion: both the pet AND health memories are
     // discoverable from Alex's neighborhood
-    expect(relatedMemIds).toContain(petMem.id)
-    expect(relatedMemIds).toContain(healthMem.id)
-  }, 30_000)
+    expect(relatedMemIds).toContain(petMem.id);
+    expect(relatedMemIds).toContain(healthMem.id);
+  }, 30_000);
 
   // ── Dense extraction: one sentence, multiple entities ──────────
 
-  it('extracts multiple entities from a single dense sentence', async () => {
+  it("extracts multiple entities from a single dense sentence", async () => {
     const mem = await storeMemory(db, {
-      content:
-        'Sarah and Alex went hiking at Forest Park in Portland last Saturday',
-      source: 'user',
-    })
-    await processMemoryForGraph(db, WORKER_CONFIG, mem.id, mem.content)
+      content: "Sarah and Alex went hiking at Forest Park in Portland last Saturday",
+      source: "user",
+    });
+    await processMemoryForGraph(db, WORKER_CONFIG, mem.id, mem.content);
 
-    const allNodes = await db.selectFrom('graph_nodes').selectAll().execute()
-    const allEdges = await db.selectFrom('graph_edges').selectAll().execute()
+    const allNodes = await db.selectFrom("graph_nodes").selectAll().execute();
+    const allEdges = await db.selectFrom("graph_edges").selectAll().execute();
 
     // Should extract at least 3 entities (Alex, Sarah, Portland/Forest Park)
-    expect(allNodes.length).toBeGreaterThanOrEqual(3)
+    expect(allNodes.length).toBeGreaterThanOrEqual(3);
 
     // Should create at least 2 relationships
-    expect(allEdges.length).toBeGreaterThanOrEqual(2)
+    expect(allEdges.length).toBeGreaterThanOrEqual(2);
 
     // All edges linked to this memory
     for (const edge of allEdges) {
-      expect(edge.memory_id).toBe(mem.id)
+      expect(edge.memory_id).toBe(mem.id);
     }
 
     // Node names should be lowercase (normalization)
     for (const node of allNodes) {
-      expect(node.name).toBe(node.name.toLowerCase())
+      expect(node.name).toBe(node.name.toLowerCase());
     }
-  }, 30_000)
-})
+  }, 30_000);
+});
diff --git a/apps/construct/src/__tests__/memory/graph.test.ts b/apps/construct/src/__tests__/memory/graph.test.ts
index 85b2a01..1687ece 100644
--- a/apps/construct/src/__tests__/memory/graph.test.ts
+++ b/apps/construct/src/__tests__/memory/graph.test.ts
@@ -1,9 +1,9 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import { Kysely } from 'kysely'
-import { createDb } from '@repo/db'
-import type { Database } from '../../db/schema.js'
-import * as migration001 from '../../db/migrations/001-initial.js'
-import * as migration005 from '../../db/migrations/005-graph-memory.js'
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import { Kysely } from "kysely";
+import { createDb } from "@repo/db";
+import type { Database } from "../../db/schema.js";
+import * as migration001 from "../../db/migrations/001-initial.js";
+import * as migration005 from "../../db/migrations/005-graph-memory.js";
 import {
   upsertNode,
   upsertEdge,
@@ -14,282 +14,286 @@ import {
   getRelatedMemoryIds,
   getMemoryNodes,
   storeMemory,
-} from '@repo/cairn'
-import * as migration002 from '../../db/migrations/002-fts5-and-embeddings.js'
+} from "@repo/cairn";
+import * as migration002 from "../../db/migrations/002-fts5-and-embeddings.js";
 
-let db: Kysely<Database>
+let db: Kysely<Database>;
 
 beforeEach(async () => {
-  const result = createDb(':memory:')
-  db = result.db
-  await migration001.up(db as Kysely<any>)
-  await migration002.up(db as Kysely<any>)
-  await migration005.up(db as Kysely<any>)
-})
+  const result = createDb(":memory:");
+  db = result.db;
+  await migration001.up(db as Kysely<any>);
+  await migration002.up(db as Kysely<any>);
+  await migration005.up(db as Kysely<any>);
+});
 
 afterEach(async () => {
-  await db.destroy()
-})
+  await db.destroy();
+});
 
-describe('upsertNode', () => {
-  it('creates a new node', async () => {
+describe("upsertNode", () => {
+  it("creates a new node", async () => {
     const node = await upsertNode(db, {
-      name: 'Alice',
-      type: 'person',
-      description: 'A friend',
-    })
-
-    expect(node.name).toBe('alice')
-    expect(node.display_name).toBe('Alice')
-    expect(node.node_type).toBe('person')
-    expect(node.description).toBe('A friend')
-  })
-
-  it('returns existing node on duplicate name+type', async () => {
-    const first = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const second = await upsertNode(db, { name: 'alice', type: 'person' })
-
-    expect(second.id).toBe(first.id)
-  })
-
-  it('fills in description if existing node has none', async () => {
-    await upsertNode(db, { name: 'Alice', type: 'person' })
+      name: "Alice",
+      type: "person",
+      description: "A friend",
+    });
+
+    expect(node.name).toBe("alice");
+    expect(node.display_name).toBe("Alice");
+    expect(node.node_type).toBe("person");
+    expect(node.description).toBe("A friend");
+  });
+
+  it("returns existing node on duplicate name+type", async () => {
+    const first = await upsertNode(db, { name: "Alice", type: "person" });
+    const second = await upsertNode(db, { name: "alice", type: "person" });
+
+    expect(second.id).toBe(first.id);
+  });
+
+  it("fills in description if existing node has none", async () => {
+    await upsertNode(db, { name: "Alice", type: "person" });
     const updated = await upsertNode(db, {
-      name: 'Alice',
-      type: 'person',
-      description: 'Best friend',
-    })
+      name: "Alice",
+      type: "person",
+      description: "Best friend",
+    });
 
-    expect(updated.description).toBe('Best friend')
-  })
+    expect(updated.description).toBe("Best friend");
+  });
 
-  it('does not overwrite existing description', async () => {
+  it("does not overwrite existing description", async () => {
     await upsertNode(db, {
-      name: 'Alice',
-      type: 'person',
-      description: 'Original',
-    })
+      name: "Alice",
+      type: "person",
+      description: "Original",
+    });
     const again = await upsertNode(db, {
-      name: 'Alice',
-      type: 'person',
-      description: 'New description',
-    })
+      name: "Alice",
+      type: "person",
+      description: "New description",
+    });
 
-    expect(again.description).toBe('Original')
-  })
+    expect(again.description).toBe("Original");
+  });
 
-  it('allows same name with different types', async () => {
-    const person = await upsertNode(db, { name: 'Java', type: 'concept' })
-    const place = await upsertNode(db, { name: 'Java', type: 'place' })
+  it("allows same name with different types", async () => {
+    const person = await upsertNode(db, { name: "Java", type: "concept" });
+    const place = await upsertNode(db, { name: "Java", type: "place" });
 
-    expect(person.id).not.toBe(place.id)
-  })
-})
+    expect(person.id).not.toBe(place.id);
+  });
+});
 
-describe('upsertEdge', () => {
-  it('creates an edge between two nodes', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
+describe("upsertEdge", () => {
+  it("creates an edge between two nodes", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
 
     const edge = await upsertEdge(db, {
       source_id: alice.id,
       target_id: bob.id,
-      relation: 'knows',
-    })
+      relation: "knows",
+    });
 
-    expect(edge.source_id).toBe(alice.id)
-    expect(edge.target_id).toBe(bob.id)
-    expect(edge.relation).toBe('knows')
-    expect(edge.weight).toBe(1)
-  })
+    expect(edge.source_id).toBe(alice.id);
+    expect(edge.target_id).toBe(bob.id);
+    expect(edge.relation).toBe("knows");
+    expect(edge.weight).toBe(1);
+  });
 
-  it('increments weight on duplicate edge', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
+  it("increments weight on duplicate edge", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
 
-    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: 'knows' })
-    const second = await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: 'knows' })
+    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: "knows" });
+    const second = await upsertEdge(db, {
+      source_id: alice.id,
+      target_id: bob.id,
+      relation: "knows",
+    });
 
-    expect(second.weight).toBe(2)
-  })
+    expect(second.weight).toBe(2);
+  });
 
-  it('stores memory_id on edge', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
-    const mem = await storeMemory(db, { content: 'Alice and Bob are friends', source: 'user' })
+  it("stores memory_id on edge", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
+    const mem = await storeMemory(db, { content: "Alice and Bob are friends", source: "user" });
 
     const edge = await upsertEdge(db, {
       source_id: alice.id,
       target_id: bob.id,
-      relation: 'friends with',
+      relation: "friends with",
       memory_id: mem.id,
-    })
-
-    expect(edge.memory_id).toBe(mem.id)
-  })
-})
-
-describe('findNodeByName', () => {
-  it('finds node case-insensitively', async () => {
-    await upsertNode(db, { name: 'Alice', type: 'person' })
-
-    const found = await findNodeByName(db, 'ALICE')
-    expect(found).toBeDefined()
-    expect(found!.display_name).toBe('Alice')
-  })
-
-  it('filters by type when specified', async () => {
-    await upsertNode(db, { name: 'Java', type: 'concept' })
-    await upsertNode(db, { name: 'Java', type: 'place' })
-
-    const concept = await findNodeByName(db, 'java', 'concept')
-    expect(concept!.node_type).toBe('concept')
-  })
-
-  it('returns undefined for missing node', async () => {
-    const found = await findNodeByName(db, 'nobody')
-    expect(found).toBeUndefined()
-  })
-})
-
-describe('searchNodes', () => {
-  it('finds nodes by partial name match', async () => {
-    await upsertNode(db, { name: 'Alice Smith', type: 'person' })
-    await upsertNode(db, { name: 'Bob Jones', type: 'person' })
-
-    const results = await searchNodes(db, 'alice')
-    expect(results).toHaveLength(1)
-    expect(results[0].display_name).toBe('Alice Smith')
-  })
-
-  it('returns empty array for no matches', async () => {
-    const results = await searchNodes(db, 'nobody')
-    expect(results).toHaveLength(0)
-  })
-})
-
-describe('getNodeEdges', () => {
-  it('returns edges where node is source or target', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
-    const carol = await upsertNode(db, { name: 'Carol', type: 'person' })
-
-    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: 'knows' })
-    await upsertEdge(db, { source_id: carol.id, target_id: alice.id, relation: 'works with' })
-
-    const edges = await getNodeEdges(db, alice.id)
-    expect(edges).toHaveLength(2)
-  })
-})
-
-describe('traverseGraph', () => {
-  it('traverses 1 hop from start node', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
-    const carol = await upsertNode(db, { name: 'Carol', type: 'person' })
-
-    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: 'knows' })
-    await upsertEdge(db, { source_id: bob.id, target_id: carol.id, relation: 'knows' })
-
-    const results = await traverseGraph(db, alice.id, 1)
-    expect(results).toHaveLength(1)
-    expect(results[0].node.name).toBe('bob')
-    expect(results[0].depth).toBe(1)
-  })
-
-  it('traverses 2 hops from start node', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
-    const carol = await upsertNode(db, { name: 'Carol', type: 'person' })
-
-    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: 'knows' })
-    await upsertEdge(db, { source_id: bob.id, target_id: carol.id, relation: 'knows' })
-
-    const results = await traverseGraph(db, alice.id, 2)
-    expect(results).toHaveLength(2)
-
-    const names = results.map((r) => r.node.name)
-    expect(names).toContain('bob')
-    expect(names).toContain('carol')
-  })
-
-  it('does not revisit nodes (prevents cycles)', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
+    });
+
+    expect(edge.memory_id).toBe(mem.id);
+  });
+});
+
+describe("findNodeByName", () => {
+  it("finds node case-insensitively", async () => {
+    await upsertNode(db, { name: "Alice", type: "person" });
+
+    const found = await findNodeByName(db, "ALICE");
+    expect(found).toBeDefined();
+    expect(found!.display_name).toBe("Alice");
+  });
+
+  it("filters by type when specified", async () => {
+    await upsertNode(db, { name: "Java", type: "concept" });
+    await upsertNode(db, { name: "Java", type: "place" });
+
+    const concept = await findNodeByName(db, "java", "concept");
+    expect(concept!.node_type).toBe("concept");
+  });
+
+  it("returns undefined for missing node", async () => {
+    const found = await findNodeByName(db, "nobody");
+    expect(found).toBeUndefined();
+  });
+});
+
+describe("searchNodes", () => {
+  it("finds nodes by partial name match", async () => {
+    await upsertNode(db, { name: "Alice Smith", type: "person" });
+    await upsertNode(db, { name: "Bob Jones", type: "person" });
+
+    const results = await searchNodes(db, "alice");
+    expect(results).toHaveLength(1);
+    expect(results[0].display_name).toBe("Alice Smith");
+  });
+
+  it("returns empty array for no matches", async () => {
+    const results = await searchNodes(db, "nobody");
+    expect(results).toHaveLength(0);
+  });
+});
+
+describe("getNodeEdges", () => {
+  it("returns edges where node is source or target", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
+    const carol = await upsertNode(db, { name: "Carol", type: "person" });
+
+    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: "knows" });
+    await upsertEdge(db, { source_id: carol.id, target_id: alice.id, relation: "works with" });
+
+    const edges = await getNodeEdges(db, alice.id);
+    expect(edges).toHaveLength(2);
+  });
+});
+
+describe("traverseGraph", () => {
+  it("traverses 1 hop from start node", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
+    const carol = await upsertNode(db, { name: "Carol", type: "person" });
+
+    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: "knows" });
+    await upsertEdge(db, { source_id: bob.id, target_id: carol.id, relation: "knows" });
+
+    const results = await traverseGraph(db, alice.id, 1);
+    expect(results).toHaveLength(1);
+    expect(results[0].node.name).toBe("bob");
+    expect(results[0].depth).toBe(1);
+  });
+
+  it("traverses 2 hops from start node", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
+    const carol = await upsertNode(db, { name: "Carol", type: "person" });
+
+    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: "knows" });
+    await upsertEdge(db, { source_id: bob.id, target_id: carol.id, relation: "knows" });
+
+    const results = await traverseGraph(db, alice.id, 2);
+    expect(results).toHaveLength(2);
+
+    const names = results.map((r) => r.node.name);
+    expect(names).toContain("bob");
+    expect(names).toContain("carol");
+  });
+
+  it("does not revisit nodes (prevents cycles)", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
 
     // Create a cycle: Alice → Bob → Alice
-    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: 'knows' })
-    await upsertEdge(db, { source_id: bob.id, target_id: alice.id, relation: 'knows' })
+    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: "knows" });
+    await upsertEdge(db, { source_id: bob.id, target_id: alice.id, relation: "knows" });
 
-    const results = await traverseGraph(db, alice.id, 3)
+    const results = await traverseGraph(db, alice.id, 3);
     // Should only find Bob (Alice is start, not revisited)
-    expect(results).toHaveLength(1)
-    expect(results[0].node.name).toBe('bob')
-  })
+    expect(results).toHaveLength(1);
+    expect(results[0].node.name).toBe("bob");
+  });
 
-  it('returns empty for isolated node', async () => {
-    const alone = await upsertNode(db, { name: 'Alone', type: 'person' })
+  it("returns empty for isolated node", async () => {
+    const alone = await upsertNode(db, { name: "Alone", type: "person" });
 
-    const results = await traverseGraph(db, alone.id, 2)
-    expect(results).toHaveLength(0)
-  })
-})
+    const results = await traverseGraph(db, alone.id, 2);
+    expect(results).toHaveLength(0);
+  });
+});
 
-describe('getRelatedMemoryIds', () => {
-  it('finds memory IDs connected to given nodes', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
-    const mem = await storeMemory(db, { content: 'Alice knows Bob', source: 'user' })
+describe("getRelatedMemoryIds", () => {
+  it("finds memory IDs connected to given nodes", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
+    const mem = await storeMemory(db, { content: "Alice knows Bob", source: "user" });
 
     await upsertEdge(db, {
       source_id: alice.id,
       target_id: bob.id,
-      relation: 'knows',
+      relation: "knows",
       memory_id: mem.id,
-    })
+    });
 
-    const memoryIds = await getRelatedMemoryIds(db, [alice.id])
-    expect(memoryIds).toContain(mem.id)
-  })
+    const memoryIds = await getRelatedMemoryIds(db, [alice.id]);
+    expect(memoryIds).toContain(mem.id);
+  });
 
-  it('returns empty for nodes with no memory links', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
-    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: 'knows' })
+  it("returns empty for nodes with no memory links", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
+    await upsertEdge(db, { source_id: alice.id, target_id: bob.id, relation: "knows" });
 
-    const memoryIds = await getRelatedMemoryIds(db, [alice.id])
-    expect(memoryIds).toHaveLength(0)
-  })
+    const memoryIds = await getRelatedMemoryIds(db, [alice.id]);
+    expect(memoryIds).toHaveLength(0);
+  });
 
-  it('returns empty for empty node list', async () => {
-    const memoryIds = await getRelatedMemoryIds(db, [])
-    expect(memoryIds).toHaveLength(0)
-  })
-})
+  it("returns empty for empty node list", async () => {
+    const memoryIds = await getRelatedMemoryIds(db, []);
+    expect(memoryIds).toHaveLength(0);
+  });
+});
 
-describe('getMemoryNodes', () => {
-  it('finds nodes connected to a memory', async () => {
-    const alice = await upsertNode(db, { name: 'Alice', type: 'person' })
-    const bob = await upsertNode(db, { name: 'Bob', type: 'person' })
-    const mem = await storeMemory(db, { content: 'Alice knows Bob', source: 'user' })
+describe("getMemoryNodes", () => {
+  it("finds nodes connected to a memory", async () => {
+    const alice = await upsertNode(db, { name: "Alice", type: "person" });
+    const bob = await upsertNode(db, { name: "Bob", type: "person" });
+    const mem = await storeMemory(db, { content: "Alice knows Bob", source: "user" });
 
     await upsertEdge(db, {
       source_id: alice.id,
       target_id: bob.id,
-      relation: 'knows',
+      relation: "knows",
       memory_id: mem.id,
-    })
+    });
 
-    const nodes = await getMemoryNodes(db, mem.id)
-    expect(nodes).toHaveLength(2)
+    const nodes = await getMemoryNodes(db, mem.id);
+    expect(nodes).toHaveLength(2);
 
-    const names = nodes.map((n) => n.name)
-    expect(names).toContain('alice')
-    expect(names).toContain('bob')
-  })
+    const names = nodes.map((n) => n.name);
+    expect(names).toContain("alice");
+    expect(names).toContain("bob");
+  });
 
-  it('returns empty for memory with no graph links', async () => {
-    const nodes = await getMemoryNodes(db, 'mem-nonexistent')
-    expect(nodes).toHaveLength(0)
-  })
-})
+  it("returns empty for memory with no graph links", async () => {
+    const nodes = await getMemoryNodes(db, "mem-nonexistent");
+    expect(nodes).toHaveLength(0);
+  });
+});
diff --git a/apps/construct/src/__tests__/memory/integration.ai.test.ts b/apps/construct/src/__tests__/memory/integration.ai.test.ts
index 69e267a..f0dafb2 100644
--- a/apps/construct/src/__tests__/memory/integration.ai.test.ts
+++ b/apps/construct/src/__tests__/memory/integration.ai.test.ts
@@ -6,15 +6,15 @@
  * Run: OPENROUTER_API_KEY= npm run test:ai
  */
 
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import { createDb } from '@repo/db'
-import type { Database } from '../../db/schema.js'
-import * as migration001 from '../../db/migrations/001-initial.js'
-import * as migration002 from '../../db/migrations/002-fts5-and-embeddings.js'
-import * as migration004 from '../../db/migrations/004-telegram-message-ids.js'
-import * as migration005 from '../../db/migrations/005-graph-memory.js'
-import * as migration006 from '../../db/migrations/006-observational-memory.js'
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import { createDb } from "@repo/db";
+import type { Database } from "../../db/schema.js";
+import * as migration001 from "../../db/migrations/001-initial.js";
+import * as migration002 from "../../db/migrations/002-fts5-and-embeddings.js";
+import * as migration004 from "../../db/migrations/004-telegram-message-ids.js";
+import * as migration005 from "../../db/migrations/005-graph-memory.js";
+import * as migration006 from "../../db/migrations/006-observational-memory.js";
 import {
   extractEntities,
   processMemoryForGraph,
@@ -25,87 +25,87 @@ import {
   OBSERVER_MAX_BATCH_TOKENS,
   storeMemory,
   estimateMessageTokens,
-} from '@repo/cairn'
-import type { WorkerModelConfig } from '@repo/cairn'
-import { saveMessage, getOrCreateConversation } from '../../db/queries.js'
+} from "@repo/cairn";
+import type { WorkerModelConfig } from "@repo/cairn";
+import { saveMessage, getOrCreateConversation } from "../../db/queries.js";
 
 // Read directly — can't import src/env.ts (requires TELEGRAM_BOT_TOKEN at parse time)
-const API_KEY = process.env.OPENROUTER_API_KEY ?? ''
+const API_KEY = process.env.OPENROUTER_API_KEY ?? "";
 
 const WORKER_CONFIG: WorkerModelConfig = {
   apiKey: API_KEY,
-  model: 'google/gemini-2.5-flash-lite',
-  baseUrl: 'https://openrouter.ai/api/v1/chat/completions',
-}
+  model: "google/gemini-2.5-flash-lite",
+  baseUrl: "https://openrouter.ai/api/v1/chat/completions",
+};
 
-const shouldRun = API_KEY.length > 0
-const describeIntegration = shouldRun ? describe : describe.skip
+const shouldRun = API_KEY.length > 0;
+const describeIntegration = shouldRun ? describe : describe.skip;
 
-let db: Kysely<Database>
+let db: Kysely<Database>;
 
 async function setupDb(): Promise<Kysely<Database>> {
-  const result = createDb(':memory:')
-  const d = result.db
-  await migration001.up(d as Kysely<any>)
-  await migration002.up(d as Kysely<any>)
-  await migration004.up(d as Kysely<any>)
-  await migration005.up(d as Kysely<any>)
-  await migration006.up(d as Kysely<any>)
-  return d
+  const result = createDb(":memory:");
+  const d = result.db;
+  await migration001.up(d as Kysely<any>);
+  await migration002.up(d as Kysely<any>);
+  await migration004.up(d as Kysely<any>);
+  await migration005.up(d as Kysely<any>);
+  await migration006.up(d as Kysely<any>);
+  return d;
 }
 
-describeIntegration('memory integration (LLM)', () => {
+describeIntegration("memory integration (LLM)", () => {
   beforeEach(async () => {
-    db = await setupDb()
-  })
+    db = await setupDb();
+  });
 
   afterEach(async () => {
-    await db.destroy()
-  })
+    await db.destroy();
+  });
 
   // ── extractEntities ──────────────────────────────────────────────
 
-  describe('extractEntities', () => {
-    it('extracts entities from factual text', async () => {
+  describe("extractEntities", () => {
+    it("extracts entities from factual text", async () => {
       const result = await extractEntities(
         WORKER_CONFIG,
-        'Alice works at Google in Mountain View. She is friends with Bob, who lives in San Francisco.',
-      )
+        "Alice works at Google in Mountain View. She is friends with Bob, who lives in San Francisco.",
+      );
 
-      expect(result.entities.length).toBeGreaterThan(0)
-      expect(result.relationships.length).toBeGreaterThan(0)
+      expect(result.entities.length).toBeGreaterThan(0);
+      expect(result.relationships.length).toBeGreaterThan(0);
 
-      const names = result.entities.map((e) => e.name.toLowerCase())
-      expect(names).toEqual(expect.arrayContaining([expect.stringContaining('alice')]))
+      const names = result.entities.map((e) => e.name.toLowerCase());
+      expect(names).toEqual(expect.arrayContaining([expect.stringContaining("alice")]));
 
-      const types = new Set(result.entities.map((e) => e.type))
-      expect(types.size).toBeGreaterThan(0)
+      const types = new Set(result.entities.map((e) => e.type));
+      expect(types.size).toBeGreaterThan(0);
       for (const t of types) {
-        expect(['person', 'place', 'concept', 'event', 'entity']).toContain(t)
+        expect(["person", "place", "concept", "event", "entity"]).toContain(t);
       }
 
       if (result.usage) {
-        expect(result.usage.input_tokens).toBeGreaterThan(0)
-        expect(result.usage.output_tokens).toBeGreaterThan(0)
+        expect(result.usage.input_tokens).toBeGreaterThan(0);
+        expect(result.usage.output_tokens).toBeGreaterThan(0);
       }
-    }, 30_000)
+    }, 30_000);
 
-    it('returns empty for mundane text', async () => {
-      const result = await extractEntities(
-        WORKER_CONFIG,
-        'ok sounds good, thanks!',
-      )
+    it("returns empty for mundane text", async () => {
+      const result = await extractEntities(WORKER_CONFIG, "ok sounds good, thanks!");
 
       // Should return empty or minimal — no throw
-      expect(result.entities.length + result.relationships.length).toBeLessThanOrEqual(2)
-    }, 30_000)
-  })
+      expect(result.entities.length + result.relationships.length).toBeLessThanOrEqual(2);
+    }, 30_000);
+  });
 
   // ── processMemoryForGraph ────────────────────────────────────────
 
-  describe('processMemoryForGraph', () => {
-    it('creates nodes and edges end-to-end', async () => {
-      const mem = await storeMemory(db, { content: 'Alice works at Google in Mountain View', source: 'user' })
+  describe("processMemoryForGraph", () => {
+    it("creates nodes and edges end-to-end", async () => {
+      const mem = await storeMemory(db, {
+        content: "Alice works at Google in Mountain View",
+        source: "user",
+      });
 
       const result = await processMemoryForGraph(
         db,
@@ -113,249 +113,296 @@ describeIntegration('memory integration (LLM)', () => {
         mem.id,
         mem.content,
         // skip embeddings in test — no embedding model needed
-      )
+      );
 
-      expect(result.entities.length).toBeGreaterThan(0)
+      expect(result.entities.length).toBeGreaterThan(0);
 
       // Nodes should be findable in the DB
-      const alice = await findNodeByName(db, 'alice')
-      expect(alice).toBeDefined()
-      expect(alice!.node_type).toBe('person')
+      const alice = await findNodeByName(db, "alice");
+      expect(alice).toBeDefined();
+      expect(alice!.node_type).toBe("person");
 
       // At least one edge should exist
       const edges = await db
-        .selectFrom('graph_edges')
+        .selectFrom("graph_edges")
         .selectAll()
-        .where('memory_id', '=', mem.id)
-        .execute()
-      expect(edges.length).toBeGreaterThan(0)
-    }, 30_000)
+        .where("memory_id", "=", mem.id)
+        .execute();
+      expect(edges.length).toBeGreaterThan(0);
+    }, 30_000);
 
-    it('handles text with no entities', async () => {
-      const mem = await storeMemory(db, { content: 'haha lol ok sure thing', source: 'user' })
+    it("handles text with no entities", async () => {
+      const mem = await storeMemory(db, { content: "haha lol ok sure thing", source: "user" });
 
-      const result = await processMemoryForGraph(
-        db,
-        WORKER_CONFIG,
-        mem.id,
-        mem.content,
-      )
+      const result = await processMemoryForGraph(db, WORKER_CONFIG, mem.id, mem.content);
 
-      expect(result.entities).toHaveLength(0)
+      expect(result.entities).toHaveLength(0);
 
       // No nodes should have been created
-      const nodes = await db.selectFrom('graph_nodes').selectAll().execute()
-      expect(nodes).toHaveLength(0)
-    }, 30_000)
-  })
+      const nodes = await db.selectFrom("graph_nodes").selectAll().execute();
+      expect(nodes).toHaveLength(0);
+    }, 30_000);
+  });
 
   // ── observe ──────────────────────────────────────────────────────
 
-  describe('observe', () => {
-    it('compresses conversation into observations', async () => {
+  describe("observe", () => {
+    it("compresses conversation into observations", async () => {
       const result = await observe(WORKER_CONFIG, {
         messages: [
-          { role: 'user', content: 'I have a dentist appointment on March 5th at 9am', created_at: '2024-01-15T10:00:00Z' },
-          { role: 'assistant', content: 'Got it! I\'ll remember your dentist appointment on March 5th at 9am.', created_at: '2024-01-15T10:00:05Z' },
-          { role: 'user', content: 'Also, I started learning Rust last week. Really enjoying it so far.', created_at: '2024-01-15T10:01:00Z' },
-          { role: 'assistant', content: 'Nice! Rust is a great language. How are you finding the borrow checker?', created_at: '2024-01-15T10:01:05Z' },
-          { role: 'user', content: 'It\'s tricky but I\'m getting used to it. My cat Max keeps sitting on my keyboard though.', created_at: '2024-01-15T10:02:00Z' },
+          {
+            role: "user",
+            content: "I have a dentist appointment on March 5th at 9am",
+            created_at: "2024-01-15T10:00:00Z",
+          },
+          {
+            role: "assistant",
+            content: "Got it! I'll remember your dentist appointment on March 5th at 9am.",
+            created_at: "2024-01-15T10:00:05Z",
+          },
+          {
+            role: "user",
+            content: "Also, I started learning Rust last week. Really enjoying it so far.",
+            created_at: "2024-01-15T10:01:00Z",
+          },
+          {
+            role: "assistant",
+            content: "Nice! Rust is a great language. How are you finding the borrow checker?",
+            created_at: "2024-01-15T10:01:05Z",
+          },
+          {
+            role: "user",
+            content:
+              "It's tricky but I'm getting used to it. My cat Max keeps sitting on my keyboard though.",
+            created_at: "2024-01-15T10:02:00Z",
+          },
         ],
-      })
+      });
 
-      expect(result.observations.length).toBeGreaterThan(0)
+      expect(result.observations.length).toBeGreaterThan(0);
 
       for (const obs of result.observations) {
-        expect(obs.content).toBeTruthy()
-        expect(['low', 'medium', 'high']).toContain(obs.priority)
-        expect(obs.observation_date).toBeTruthy()
+        expect(obs.content).toBeTruthy();
+        expect(["low", "medium", "high"]).toContain(obs.priority);
+        expect(obs.observation_date).toBeTruthy();
       }
-    }, 30_000)
-  })
+    }, 30_000);
+  });
 
   // ── reflect ──────────────────────────────────────────────────────
 
-  describe('reflect', () => {
-    it('condenses related observations', async () => {
+  describe("reflect", () => {
+    it("condenses related observations", async () => {
       const observations = [
         {
-          id: 'obs-1',
-          conversation_id: 'conv-1',
-          content: 'User has a dentist appointment on March 5th at 9am',
-          priority: 'high' as const,
-          observation_date: '2024-01-15',
+          id: "obs-1",
+          conversation_id: "conv-1",
+          content: "User has a dentist appointment on March 5th at 9am",
+          priority: "high" as const,
+          observation_date: "2024-01-15",
           source_message_ids: [],
           token_count: 15,
           generation: 0,
           superseded_at: null,
-          created_at: '2024-01-15T10:00:00Z',
+          created_at: "2024-01-15T10:00:00Z",
         },
         {
-          id: 'obs-2',
-          conversation_id: 'conv-1',
-          content: 'User started learning Rust last week',
-          priority: 'medium' as const,
-          observation_date: '2024-01-15',
+          id: "obs-2",
+          conversation_id: "conv-1",
+          content: "User started learning Rust last week",
+          priority: "medium" as const,
+          observation_date: "2024-01-15",
           source_message_ids: [],
           token_count: 10,
           generation: 0,
           superseded_at: null,
-          created_at: '2024-01-15T10:01:00Z',
+          created_at: "2024-01-15T10:01:00Z",
         },
         {
-          id: 'obs-3',
-          conversation_id: 'conv-1',
-          content: 'User is enjoying learning Rust, finding borrow checker tricky',
-          priority: 'medium' as const,
-          observation_date: '2024-01-15',
+          id: "obs-3",
+          conversation_id: "conv-1",
+          content: "User is enjoying learning Rust, finding borrow checker tricky",
+          priority: "medium" as const,
+          observation_date: "2024-01-15",
           source_message_ids: [],
           token_count: 15,
           generation: 0,
           superseded_at: null,
-          created_at: '2024-01-15T10:02:00Z',
+          created_at: "2024-01-15T10:02:00Z",
         },
         {
-          id: 'obs-4',
-          conversation_id: 'conv-1',
-          content: 'User has a cat named Max',
-          priority: 'low' as const,
-          observation_date: '2024-01-15',
+          id: "obs-4",
+          conversation_id: "conv-1",
+          content: "User has a cat named Max",
+          priority: "low" as const,
+          observation_date: "2024-01-15",
           source_message_ids: [],
           token_count: 8,
           generation: 0,
           superseded_at: null,
-          created_at: '2024-01-15T10:02:30Z',
+          created_at: "2024-01-15T10:02:30Z",
         },
-      ]
+      ];
 
-      const result = await reflect(WORKER_CONFIG, { observations })
+      const result = await reflect(WORKER_CONFIG, { observations });
 
       // Should produce fewer or same number of observations
-      expect(result.observations.length).toBeLessThanOrEqual(observations.length)
-      expect(result.observations.length).toBeGreaterThan(0)
+      expect(result.observations.length).toBeLessThanOrEqual(observations.length);
+      expect(result.observations.length).toBeGreaterThan(0);
 
       // superseded_ids should reference valid input IDs
-      const inputIds = new Set(observations.map((o) => o.id))
+      const inputIds = new Set(observations.map((o) => o.id));
       for (const id of result.superseded_ids) {
-        expect(inputIds.has(id)).toBe(true)
+        expect(inputIds.has(id)).toBe(true);
       }
 
       for (const obs of result.observations) {
-        expect(obs.content).toBeTruthy()
-        expect(['low', 'medium', 'high']).toContain(obs.priority)
+        expect(obs.content).toBeTruthy();
+        expect(["low", "medium", "high"]).toContain(obs.priority);
       }
-    }, 30_000)
-  })
+    }, 30_000);
+  });
 
   // ── MemoryManager.runObserver ────────────────────────────────────
 
-  describe('MemoryManager.runObserver', () => {
-    it('end-to-end pipeline with DB', async () => {
-      const mm = new MemoryManager(db, { workerConfig: WORKER_CONFIG, apiKey: API_KEY })
-      const convId = await getOrCreateConversation(db, 'test', null)
+  describe("MemoryManager.runObserver", () => {
+    it("end-to-end pipeline with DB", async () => {
+      const mm = new MemoryManager(db, { workerConfig: WORKER_CONFIG, apiKey: API_KEY });
+      const convId = await getOrCreateConversation(db, "test", null);
 
       // OBSERVER_THRESHOLD is 3000 tokens (~12000 chars). Seed a realistic conversation
       // then pad with additional exchanges to exceed the threshold.
       const seed = [
-        { role: 'user', content: 'I just moved to Portland, Oregon last month. I work as a backend engineer at a startup called DataPipe. We do real-time data pipeline orchestration using Kafka and Flink. I own the Flink jobs — tons of stateful processing, aggregating clickstream data in 5-minute windows.' },
-        { role: 'assistant', content: 'Welcome to Portland! DataPipe sounds interesting — Kafka and Flink are a solid combo for stream processing. Clickstream aggregation in windowed Flink jobs is no joke. Are you working on the core pipeline infrastructure or more on the consumer side?' },
-        { role: 'user', content: 'Core infra mostly. Also, my girlfriend Sarah is visiting next weekend so I need to plan some stuff — thinking Forest Park for hiking and Powell\'s Books since she loves bookstores. Oh and I need to pick up my prescription from Walgreens tomorrow before 6pm.' },
-        { role: 'assistant', content: 'Got it — prescription from Walgreens before 6pm tomorrow. Forest Park and Powell\'s are great choices for Sarah\'s visit. Powell\'s is especially wonderful if she loves bookstores. Want me to look up any restaurant recommendations too?' },
-        { role: 'user', content: 'Sure! I live near Hawthorne Boulevard so anything walkable from there. I\'ve been learning Rust on the side too — the borrow checker is tough but I\'m getting the hang of it. My cat Whiskers keeps sitting on my keyboard while I code though.' },
-        { role: 'assistant', content: 'Hawthorne has tons of great food options within walking distance. And Rust is a great complement to TypeScript — the borrow checker really clicks after a while. Classic cat behavior from Whiskers! How old is he?' },
-      ]
+        {
+          role: "user",
+          content:
+            "I just moved to Portland, Oregon last month. I work as a backend engineer at a startup called DataPipe. We do real-time data pipeline orchestration using Kafka and Flink. I own the Flink jobs — tons of stateful processing, aggregating clickstream data in 5-minute windows.",
+        },
+        {
+          role: "assistant",
+          content:
+            "Welcome to Portland! DataPipe sounds interesting — Kafka and Flink are a solid combo for stream processing. Clickstream aggregation in windowed Flink jobs is no joke. Are you working on the core pipeline infrastructure or more on the consumer side?",
+        },
+        {
+          role: "user",
+          content:
+            "Core infra mostly. Also, my girlfriend Sarah is visiting next weekend so I need to plan some stuff — thinking Forest Park for hiking and Powell's Books since she loves bookstores. Oh and I need to pick up my prescription from Walgreens tomorrow before 6pm.",
+        },
+        {
+          role: "assistant",
+          content:
+            "Got it — prescription from Walgreens before 6pm tomorrow. Forest Park and Powell's are great choices for Sarah's visit. Powell's is especially wonderful if she loves bookstores. Want me to look up any restaurant recommendations too?",
+        },
+        {
+          role: "user",
+          content:
+            "Sure! I live near Hawthorne Boulevard so anything walkable from there. I've been learning Rust on the side too — the borrow checker is tough but I'm getting the hang of it. My cat Whiskers keeps sitting on my keyboard while I code though.",
+        },
+        {
+          role: "assistant",
+          content:
+            "Hawthorne has tons of great food options within walking distance. And Rust is a great complement to TypeScript — the borrow checker really clicks after a while. Classic cat behavior from Whiskers! How old is he?",
+        },
+      ];
 
       // Pad with enough exchanges to exceed 3000 tokens. Each padded pair is ~500 chars = ~125 tokens.
       // 6 seed messages ≈ 500 tokens. Need ~2500 more = ~20 pairs.
       const padding = Array.from({ length: 25 }, (_, i) => [
-        { role: 'user', content: `Topic ${i + 1}: I've been exploring various aspects of software engineering lately, including distributed systems design patterns, microservice architectures with service mesh configurations, container orchestration best practices, and infrastructure as code tooling. Each area has its own set of challenges and tradeoffs that I find fascinating to reason about in depth.` },
-        { role: 'assistant', content: `That's a broad and interesting set of topics! Distributed systems especially have so many subtle tradeoffs — CAP theorem implications, consistency models, partition tolerance strategies, and the operational complexity of running service meshes at scale. Which area are you finding most relevant to your current work at DataPipe?` },
-      ]).flat()
+        {
+          role: "user",
+          content: `Topic ${i + 1}: I've been exploring various aspects of software engineering lately, including distributed systems design patterns, microservice architectures with service mesh configurations, container orchestration best practices, and infrastructure as code tooling. Each area has its own set of challenges and tradeoffs that I find fascinating to reason about in depth.`,
+        },
+        {
+          role: "assistant",
+          content: `That's a broad and interesting set of topics! Distributed systems especially have so many subtle tradeoffs — CAP theorem implications, consistency models, partition tolerance strategies, and the operational complexity of running service meshes at scale. Which area are you finding most relevant to your current work at DataPipe?`,
+        },
+      ]).flat();
 
       for (const msg of [...seed, ...padding]) {
-        await saveMessage(db, { conversation_id: convId, role: msg.role, content: msg.content })
+        await saveMessage(db, { conversation_id: convId, role: msg.role, content: msg.content });
       }
 
       // Sanity check: we actually exceed the threshold
-      const preCheck = await mm.getUnobservedMessages(convId)
-      const preTokens = estimateMessageTokens(preCheck)
-      expect(preTokens).toBeGreaterThan(3000)
+      const preCheck = await mm.getUnobservedMessages(convId);
+      const preTokens = estimateMessageTokens(preCheck);
+      expect(preTokens).toBeGreaterThan(3000);
 
-      const result = await mm.runObserver(convId)
-      expect(result).toBe(true)
+      const result = await mm.runObserver(convId);
+      expect(result).toBe(true);
 
       // Observations should be stored
-      const obs = await mm.getActiveObservations(convId)
-      expect(obs.length).toBeGreaterThan(0)
+      const obs = await mm.getActiveObservations(convId);
+      expect(obs.length).toBeGreaterThan(0);
 
       // Watermark should have advanced
       const conv = await db
-        .selectFrom('conversations')
-        .select(['observed_up_to_message_id', 'observation_token_count'])
-        .where('id', '=', convId)
-        .executeTakeFirst()
+        .selectFrom("conversations")
+        .select(["observed_up_to_message_id", "observation_token_count"])
+        .where("id", "=", convId)
+        .executeTakeFirst();
 
-      expect(conv!.observed_up_to_message_id).toBeTruthy()
-      expect(conv!.observation_token_count).toBeGreaterThan(0)
+      expect(conv!.observed_up_to_message_id).toBeTruthy();
+      expect(conv!.observation_token_count).toBeGreaterThan(0);
 
       // No unobserved messages should remain
-      const remaining = await mm.getUnobservedMessages(convId)
-      expect(remaining).toHaveLength(0)
-    }, 60_000)
+      const remaining = await mm.getUnobservedMessages(convId);
+      expect(remaining).toHaveLength(0);
+    }, 60_000);
 
-    it('batches large message sets', async () => {
-      const mm = new MemoryManager(db, { workerConfig: WORKER_CONFIG, apiKey: API_KEY })
-      const convId = await getOrCreateConversation(db, 'test', null)
+    it("batches large message sets", async () => {
+      const mm = new MemoryManager(db, { workerConfig: WORKER_CONFIG, apiKey: API_KEY });
+      const convId = await getOrCreateConversation(db, "test", null);
 
       // Generate messages that exceed OBSERVER_MAX_BATCH_TOKENS (16000 tokens = ~64000 chars).
       // Each message: ~500 chars content + "[Message N] " prefix = ~128 tokens + 4 overhead = ~132 tokens.
       // Need 16000/132 ≈ 122 messages minimum. Use 140 for margin.
-      const largeContent = 'The user discussed various topics including their work schedule, '
-        + 'favorite programming languages (TypeScript, Rust, Go), upcoming travel plans to Japan '
-        + 'in April, their cat named Whiskers who is 3 years old, their preference for dark mode '
-        + 'in all editors, and their ongoing project to build a home automation system using '
-        + 'Raspberry Pi and Zigbee sensors. They also mentioned their partner Alex who works in '
-        + 'graphic design and their weekly board game night on Thursdays with friends.'
+      const largeContent =
+        "The user discussed various topics including their work schedule, " +
+        "favorite programming languages (TypeScript, Rust, Go), upcoming travel plans to Japan " +
+        "in April, their cat named Whiskers who is 3 years old, their preference for dark mode " +
+        "in all editors, and their ongoing project to build a home automation system using " +
+        "Raspberry Pi and Zigbee sensors. They also mentioned their partner Alex who works in " +
+        "graphic design and their weekly board game night on Thursdays with friends.";
 
       // Insert enough messages to trigger batching
-      const msgCount = 140
+      const msgCount = 140;
       for (let i = 0; i < msgCount; i++) {
-        const role = i % 2 === 0 ? 'user' : 'assistant'
+        const role = i % 2 === 0 ? "user" : "assistant";
         await saveMessage(db, {
           conversation_id: convId,
           role,
           content: `[Message ${i + 1}] ${largeContent}`,
-        })
+        });
       }
 
       // Verify messages actually exceed the batch limit
-      const unobserved = await mm.getUnobservedMessages(convId)
-      const totalTokens = estimateMessageTokens(unobserved)
-      expect(totalTokens).toBeGreaterThan(OBSERVER_MAX_BATCH_TOKENS)
+      const unobserved = await mm.getUnobservedMessages(convId);
+      const totalTokens = estimateMessageTokens(unobserved);
+      expect(totalTokens).toBeGreaterThan(OBSERVER_MAX_BATCH_TOKENS);
 
       // Verify batching would produce multiple batches
-      const batches = mm.batchMessages(unobserved, OBSERVER_MAX_BATCH_TOKENS)
-      expect(batches.length).toBeGreaterThan(1)
+      const batches = mm.batchMessages(unobserved, OBSERVER_MAX_BATCH_TOKENS);
+      expect(batches.length).toBeGreaterThan(1);
 
-      const result = await mm.runObserver(convId)
-      expect(result).toBe(true)
+      const result = await mm.runObserver(convId);
+      expect(result).toBe(true);
 
       // Watermark should have advanced to the last message
       const conv = await db
-        .selectFrom('conversations')
-        .select('observed_up_to_message_id')
-        .where('id', '=', convId)
-        .executeTakeFirst()
+        .selectFrom("conversations")
+        .select("observed_up_to_message_id")
+        .where("id", "=", convId)
+        .executeTakeFirst();
 
-      expect(conv!.observed_up_to_message_id).toBe(unobserved[unobserved.length - 1].id)
+      expect(conv!.observed_up_to_message_id).toBe(unobserved[unobserved.length - 1].id);
 
       // All messages should now be observed
-      const remaining = await mm.getUnobservedMessages(convId)
-      expect(remaining).toHaveLength(0)
+      const remaining = await mm.getUnobservedMessages(convId);
+      expect(remaining).toHaveLength(0);
 
       // Observations should have been created
-      const obs = await mm.getActiveObservations(convId)
-      expect(obs.length).toBeGreaterThan(0)
-    }, 120_000)
-  })
-})
+      const obs = await mm.getActiveObservations(convId);
+      expect(obs.length).toBeGreaterThan(0);
+    }, 120_000);
+  });
+});
diff --git a/apps/construct/src/__tests__/memory/observer.test.ts b/apps/construct/src/__tests__/memory/observer.test.ts
index 468dd5d..7cb7c59 100644
--- a/apps/construct/src/__tests__/memory/observer.test.ts
+++ b/apps/construct/src/__tests__/memory/observer.test.ts
@@ -1,231 +1,242 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import { createDb } from '@repo/db'
-import type { Database } from '../../db/schema.js'
-import * as migration001 from '../../db/migrations/001-initial.js'
-import * as migration004 from '../../db/migrations/004-telegram-message-ids.js'
-import * as migration006 from '../../db/migrations/006-observational-memory.js'
-import { MemoryManager } from '@repo/cairn'
-import { saveMessage, getOrCreateConversation } from '../../db/queries.js'
-import { isDegenerateRaw, sanitizeObservations } from '@repo/cairn'
-
-let db: Kysely<Database>
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import { createDb } from "@repo/db";
+import type { Database } from "../../db/schema.js";
+import * as migration001 from "../../db/migrations/001-initial.js";
+import * as migration004 from "../../db/migrations/004-telegram-message-ids.js";
+import * as migration006 from "../../db/migrations/006-observational-memory.js";
+import { MemoryManager } from "@repo/cairn";
+import { saveMessage, getOrCreateConversation } from "../../db/queries.js";
+import { isDegenerateRaw, sanitizeObservations } from "@repo/cairn";
+
+let db: Kysely<Database>;
 
 beforeEach(async () => {
-  const result = createDb(':memory:')
-  db = result.db
-  await migration001.up(db as Kysely<any>)
-  await migration004.up(db as Kysely<any>)
-  await migration006.up(db as Kysely<any>)
-})
+  const result = createDb(":memory:");
+  db = result.db;
+  await migration001.up(db as Kysely<any>);
+  await migration004.up(db as Kysely<any>);
+  await migration006.up(db as Kysely<any>);
+});
 
 afterEach(async () => {
-  await db.destroy()
-})
+  await db.destroy();
+});
 
-describe('MemoryManager observations', () => {
-  it('getActiveObservations returns empty for new conversation', async () => {
-    const mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+describe("MemoryManager observations", () => {
+  it("getActiveObservations returns empty for new conversation", async () => {
+    const mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    const obs = await mm.getActiveObservations(convId)
-    expect(obs).toHaveLength(0)
-  })
+    const obs = await mm.getActiveObservations(convId);
+    expect(obs).toHaveLength(0);
+  });
 
-  it('getUnobservedMessages returns all messages when no watermark', async () => {
-    const mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("getUnobservedMessages returns all messages when no watermark", async () => {
+    const mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    await saveMessage(db, { conversation_id: convId, role: 'user', content: 'hello' })
-    await saveMessage(db, { conversation_id: convId, role: 'assistant', content: 'hi' })
+    await saveMessage(db, { conversation_id: convId, role: "user", content: "hello" });
+    await saveMessage(db, { conversation_id: convId, role: "assistant", content: "hi" });
 
-    const msgs = await mm.getUnobservedMessages(convId)
-    expect(msgs).toHaveLength(2)
-  })
+    const msgs = await mm.getUnobservedMessages(convId);
+    expect(msgs).toHaveLength(2);
+  });
 
-  it('getUnobservedMessages respects watermark', async () => {
-    const mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("getUnobservedMessages respects watermark", async () => {
+    const mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    await saveMessage(db, { conversation_id: convId, role: 'user', content: 'first' })
-    const id2 = await saveMessage(db, { conversation_id: convId, role: 'assistant', content: 'response' })
-    await saveMessage(db, { conversation_id: convId, role: 'user', content: 'second' })
+    await saveMessage(db, { conversation_id: convId, role: "user", content: "first" });
+    const id2 = await saveMessage(db, {
+      conversation_id: convId,
+      role: "assistant",
+      content: "response",
+    });
+    await saveMessage(db, { conversation_id: convId, role: "user", content: "second" });
 
     // Set watermark to id2
     await db
-      .updateTable('conversations')
+      .updateTable("conversations")
       .set({ observed_up_to_message_id: id2 })
-      .where('id', '=', convId)
-      .execute()
+      .where("id", "=", convId)
+      .execute();
 
-    const msgs = await mm.getUnobservedMessages(convId)
-    expect(msgs).toHaveLength(1)
-    expect(msgs[0].content).toBe('second')
-  })
+    const msgs = await mm.getUnobservedMessages(convId);
+    expect(msgs).toHaveLength(1);
+    expect(msgs[0].content).toBe("second");
+  });
 
-  it('runObserver skips when no worker model configured', async () => {
-    const mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("runObserver skips when no worker model configured", async () => {
+    const mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    const result = await mm.runObserver(convId)
-    expect(result).toBe(false)
-  })
+    const result = await mm.runObserver(convId);
+    expect(result).toBe(false);
+  });
 
-  it('runObserver skips when messages below threshold', async () => {
+  it("runObserver skips when messages below threshold", async () => {
     // Use a fake config — it won't be reached since threshold won't be met
-    const mm = new MemoryManager(db, { workerConfig: { apiKey: 'fake', model: 'fake' }, apiKey: 'fake' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+    const mm = new MemoryManager(db, {
+      workerConfig: { apiKey: "fake", model: "fake" },
+      apiKey: "fake",
+    });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    await saveMessage(db, { conversation_id: convId, role: 'user', content: 'hello' })
+    await saveMessage(db, { conversation_id: convId, role: "user", content: "hello" });
 
-    const result = await mm.runObserver(convId)
-    expect(result).toBe(false)
-  })
+    const result = await mm.runObserver(convId);
+    expect(result).toBe(false);
+  });
 
-  it('buildContext returns empty observations and all messages for new conversation', async () => {
-    const mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("buildContext returns empty observations and all messages for new conversation", async () => {
+    const mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    await saveMessage(db, { conversation_id: convId, role: 'user', content: 'hello' })
+    await saveMessage(db, { conversation_id: convId, role: "user", content: "hello" });
 
-    const ctx = await mm.buildContext(convId)
-    expect(ctx.hasObservations).toBe(false)
-    expect(ctx.observationsText).toBe('')
-    expect(ctx.activeMessages).toHaveLength(1)
-  })
+    const ctx = await mm.buildContext(convId);
+    expect(ctx.hasObservations).toBe(false);
+    expect(ctx.observationsText).toBe("");
+    expect(ctx.activeMessages).toHaveLength(1);
+  });
 
-  it('buildContext separates observations from active messages', async () => {
-    const mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("buildContext separates observations from active messages", async () => {
+    const mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
     // Insert a manual observation
-    const { nanoid } = await import('nanoid')
+    const { nanoid } = await import("nanoid");
     await db
-      .insertInto('observations')
+      .insertInto("observations")
       .values({
         id: nanoid(),
         conversation_id: convId,
-        content: 'User prefers TypeScript',
-        priority: 'high',
-        observation_date: '2024-01-15',
+        content: "User prefers TypeScript",
+        priority: "high",
+        observation_date: "2024-01-15",
         token_count: 10,
       })
-      .execute()
+      .execute();
 
-    const id1 = await saveMessage(db, { conversation_id: convId, role: 'user', content: 'old msg' })
+    const id1 = await saveMessage(db, {
+      conversation_id: convId,
+      role: "user",
+      content: "old msg",
+    });
     // Set watermark to id1 so only messages after it are "active"
     await db
-      .updateTable('conversations')
+      .updateTable("conversations")
       .set({ observed_up_to_message_id: id1 })
-      .where('id', '=', convId)
-      .execute()
-
-    await saveMessage(db, { conversation_id: convId, role: 'user', content: 'new msg' })
-
-    const ctx = await mm.buildContext(convId)
-    expect(ctx.hasObservations).toBe(true)
-    expect(ctx.observationsText).toContain('TypeScript')
-    expect(ctx.activeMessages).toHaveLength(1)
-    expect(ctx.activeMessages[0].content).toBe('new msg')
-  })
-})
-
-describe('isDegenerateRaw', () => {
-  it('returns true for text over 50KB', () => {
-    const text = 'x'.repeat(50_001)
-    expect(isDegenerateRaw(text)).toBe(true)
-  })
-
-  it('returns true for repeated 100-char blocks', () => {
-    const block = 'a'.repeat(100)
+      .where("id", "=", convId)
+      .execute();
+
+    await saveMessage(db, { conversation_id: convId, role: "user", content: "new msg" });
+
+    const ctx = await mm.buildContext(convId);
+    expect(ctx.hasObservations).toBe(true);
+    expect(ctx.observationsText).toContain("TypeScript");
+    expect(ctx.activeMessages).toHaveLength(1);
+    expect(ctx.activeMessages[0].content).toBe("new msg");
+  });
+});
+
+describe("isDegenerateRaw", () => {
+  it("returns true for text over 50KB", () => {
+    const text = "x".repeat(50_001);
+    expect(isDegenerateRaw(text)).toBe(true);
+  });
+
+  it("returns true for repeated 100-char blocks", () => {
+    const block = "a".repeat(100);
     // 3 repetitions at 100-char intervals triggers detection
-    const text = block + block + block
-    expect(isDegenerateRaw(text)).toBe(true)
-  })
+    const text = block + block + block;
+    expect(isDegenerateRaw(text)).toBe(true);
+  });
 
-  it('returns false for normal JSON text', () => {
+  it("returns false for normal JSON text", () => {
     const text = JSON.stringify({
       observations: [
-        { content: 'User likes TypeScript', priority: 'medium', observation_date: '2025-01-15' },
-        { content: 'User moved to Portland', priority: 'high', observation_date: '2025-01-15' },
+        { content: "User likes TypeScript", priority: "medium", observation_date: "2025-01-15" },
+        { content: "User moved to Portland", priority: "high", observation_date: "2025-01-15" },
       ],
-    })
-    expect(isDegenerateRaw(text)).toBe(false)
-  })
+    });
+    expect(isDegenerateRaw(text)).toBe(false);
+  });
 
-  it('returns false for short text', () => {
-    expect(isDegenerateRaw('hello')).toBe(false)
-  })
+  it("returns false for short text", () => {
+    expect(isDegenerateRaw("hello")).toBe(false);
+  });
 
-  it('returns false for text exactly at 50KB', () => {
-    const text = 'x'.repeat(50_000)
+  it("returns true for text exactly at 50KB when blocks repeat", () => {
+    const text = "x".repeat(50_000);
     // All blocks are identical but text is exactly at the limit, not over
     // However, repeated blocks will still trigger
-    expect(isDegenerateRaw(text)).toBe(true)
-  })
+    expect(isDegenerateRaw(text)).toBe(true);
+  });
 
-  it('returns false for varied text under 50KB', () => {
+  it("returns false for varied text under 50KB", () => {
     // Build text where every 100-char block is unique
-    let text = ''
+    let text = "";
     for (let i = 0; i < 100; i++) {
-      text += String(i).padStart(4, '0') + 'y'.repeat(96)
+      text += String(i).padStart(4, "0") + "y".repeat(96);
     }
-    expect(isDegenerateRaw(text)).toBe(false)
-  })
-})
-
-describe('sanitizeObservations', () => {
-  const obs = (content: string, priority: 'low' | 'medium' | 'high' = 'medium') => ({
-    content,
-    priority,
-    observation_date: '2025-01-15',
-  })
-
-  it('truncates content over 2000 chars', () => {
-    const longContent = 'a'.repeat(2500)
-    const result = sanitizeObservations([obs(longContent)], 10)
-    expect(result[0].content).toHaveLength(2003) // 2000 + '...'
-    expect(result[0].content.endsWith('...')).toBe(true)
-  })
-
-  it('does not truncate content at exactly 2000 chars', () => {
-    const content = 'a'.repeat(2000)
-    const result = sanitizeObservations([obs(content)], 10)
-    expect(result[0].content).toHaveLength(2000)
-  })
-
-  it('caps observation count at inputMessages * 3', () => {
-    const observations = Array.from({ length: 20 }, (_, i) => obs(`fact ${i}`))
-    const result = sanitizeObservations(observations, 5) // cap = max(15, 50) = 50
-    expect(result).toHaveLength(20) // 20 < 50, all kept
-  })
-
-  it('caps observation count with floor of 50', () => {
-    const observations = Array.from({ length: 60 }, (_, i) => obs(`fact ${i}`))
-    const result = sanitizeObservations(observations, 10) // cap = max(30, 50) = 50
-    expect(result).toHaveLength(50)
-  })
-
-  it('caps when inputMessages * 3 exceeds 50', () => {
-    const observations = Array.from({ length: 100 }, (_, i) => obs(`fact ${i}`))
-    const result = sanitizeObservations(observations, 20) // cap = max(60, 50) = 60
-    expect(result).toHaveLength(60)
-  })
-
-  it('deduplicates identical content', () => {
+    expect(isDegenerateRaw(text)).toBe(false);
+  });
+});
+
+const obs = (content: string, priority: "low" | "medium" | "high" = "medium") => ({
+  content,
+  priority,
+  observation_date: "2025-01-15",
+});
+
+describe("sanitizeObservations", () => {
+  it("truncates content over 2000 chars", () => {
+    const longContent = "a".repeat(2500);
+    const result = sanitizeObservations([obs(longContent)], 10);
+    expect(result[0].content).toHaveLength(2003); // 2000 + '...'
+    expect(result[0].content.endsWith("...")).toBe(true);
+  });
+
+  it("does not truncate content at exactly 2000 chars", () => {
+    const content = "a".repeat(2000);
+    const result = sanitizeObservations([obs(content)], 10);
+    expect(result[0].content).toHaveLength(2000);
+  });
+
+  it("keeps all observations when count is under the cap", () => {
+    const observations = Array.from({ length: 20 }, (_, i) => obs(`fact ${i}`));
+    const result = sanitizeObservations(observations, 5); // cap = max(15, 50) = 50
+    expect(result).toHaveLength(20); // 20 < 50, all kept
+  });
+
+  it("caps observation count with floor of 50", () => {
+    const observations = Array.from({ length: 60 }, (_, i) => obs(`fact ${i}`));
+    const result = sanitizeObservations(observations, 10); // cap = max(30, 50) = 50
+    expect(result).toHaveLength(50);
+  });
+
+  it("caps when inputMessages * 3 exceeds 50", () => {
+    const observations = Array.from({ length: 100 }, (_, i) => obs(`fact ${i}`));
+    const result = sanitizeObservations(observations, 20); // cap = max(60, 50) = 60
+    expect(result).toHaveLength(60);
+  });
+
+  it("deduplicates identical content", () => {
     const result = sanitizeObservations(
-      [obs('User likes cats'), obs('User likes dogs'), obs('User likes cats')],
+      [obs("User likes cats"), obs("User likes dogs"), obs("User likes cats")],
       10,
-    )
-    expect(result).toHaveLength(2)
-    expect(result.map((o: any) => o.content)).toEqual(['User likes cats', 'User likes dogs'])
-  })
+    );
+    expect(result).toHaveLength(2);
+    expect(result.map((o: any) => o.content)).toEqual(["User likes cats", "User likes dogs"]);
+  });
 
-  it('preserves order after deduplication', () => {
+  it("preserves order after deduplication", () => {
     const result = sanitizeObservations(
-      [obs('first'), obs('second'), obs('first'), obs('third')],
+      [obs("first"), obs("second"), obs("first"), obs("third")],
       10,
-    )
-    expect(result.map((o: any) => o.content)).toEqual(['first', 'second', 'third'])
-  })
-})
+    );
+    expect(result.map((o: any) => o.content)).toEqual(["first", "second", "third"]);
+  });
+});
diff --git a/apps/construct/src/__tests__/memory/promote.test.ts b/apps/construct/src/__tests__/memory/promote.test.ts
index 664df08..cfe8a1d 100644
--- a/apps/construct/src/__tests__/memory/promote.test.ts
+++ b/apps/construct/src/__tests__/memory/promote.test.ts
@@ -1,33 +1,33 @@
-import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
-import type { Kysely } from 'kysely'
-import { nanoid } from 'nanoid'
-import { createDb } from '@repo/db'
-import type { Database } from '../../db/schema.js'
-import * as m001 from '../../db/migrations/001-initial.js'
-import * as m002 from '../../db/migrations/002-fts5-and-embeddings.js'
-import * as m004 from '../../db/migrations/004-telegram-message-ids.js'
-import * as m005 from '../../db/migrations/005-graph-memory.js'
-import * as m006 from '../../db/migrations/006-observational-memory.js'
-import * as m007 from '../../db/migrations/007-observation-promoted-at.js'
-import { MemoryManager, storeMemory, generateEmbedding } from '@repo/cairn'
-import { getOrCreateConversation } from '../../db/queries.js'
+import { describe, it, expect, beforeEach, afterEach, vi } from "vitest";
+import type { Kysely } from "kysely";
+import { nanoid } from "nanoid";
+import { createDb } from "@repo/db";
+import type { Database } from "../../db/schema.js";
+import * as m001 from "../../db/migrations/001-initial.js";
+import * as m002 from "../../db/migrations/002-fts5-and-embeddings.js";
+import * as m004 from "../../db/migrations/004-telegram-message-ids.js";
+import * as m005 from "../../db/migrations/005-graph-memory.js";
+import * as m006 from "../../db/migrations/006-observational-memory.js";
+import * as m007 from "../../db/migrations/007-observation-promoted-at.js";
+import { MemoryManager, storeMemory, generateEmbedding } from "@repo/cairn";
+import { getOrCreateConversation } from "../../db/queries.js";
 
 // Mock the embeddings module that MemoryManager uses internally.
 // vitest v4 resolves mocks by file path, so mocking the subpath export
 // `@repo/cairn/embeddings` intercepts `./embeddings.js` within cairn too.
-vi.mock('@repo/cairn/embeddings', () => {
-  const embeddingMap = new Map()
-  let nextDim = 0
-  const DIM = 16
+vi.mock("@repo/cairn/embeddings", () => {
+  const embeddingMap = new Map();
+  let nextDim = 0;
+  const DIM = 16;
 
   function getEmbedding(text: string): number[] {
-    const cached = embeddingMap.get(text)
-    if (cached) return cached
-    const v = new Array(DIM).fill(0)
-    v[nextDim % DIM] = 1
-    nextDim++
-    embeddingMap.set(text, v)
-    return v
+    const cached = embeddingMap.get(text);
+    if (cached) return cached;
+    const v = Array.from({ length: DIM }, () => 0);
+    v[nextDim % DIM] = 1;
+    nextDim++;
+    embeddingMap.set(text, v);
+    return v;
   }
 
   return {
@@ -35,201 +35,228 @@ vi.mock('@repo/cairn/embeddings', () => {
       Promise.resolve(getEmbedding(text)),
     ),
     cosineSimilarity: vi.fn((a: number[], b: number[]): number => {
-      if (a.length !== b.length) return 0
-      let dot = 0, na = 0, nb = 0
+      if (a.length !== b.length) return 0;
+      let dot = 0,
+        na = 0,
+        nb = 0;
       for (let i = 0; i < a.length; i++) {
-        dot += a[i] * b[i]
-        na += a[i] * a[i]
-        nb += b[i] * b[i]
+        dot += a[i] * b[i];
+        na += a[i] * a[i];
+        nb += b[i] * b[i];
       }
-      const d = Math.sqrt(na) * Math.sqrt(nb)
-      return d === 0 ? 0 : dot / d
+      const d = Math.sqrt(na) * Math.sqrt(nb);
+      return d === 0 ? 0 : dot / d;
     }),
     _embeddingMap: embeddingMap,
-    _reset: () => { embeddingMap.clear(); nextDim = 0 },
-  }
-})
+    _reset: () => {
+      embeddingMap.clear();
+      nextDim = 0;
+    },
+  };
+});
 
-let db: Kysely<Database>
+let db: Kysely<Database>;
 
 beforeEach(async () => {
-  const result = createDb(':memory:')
-  db = result.db
-  await m001.up(db as Kysely<any>)
-  await m002.up(db as Kysely<any>)
-  await m004.up(db as Kysely<any>)
-  await m005.up(db as Kysely<any>)
-  await m006.up(db as Kysely<any>)
-  await m007.up(db as Kysely<any>)
-})
+  const result = createDb(":memory:");
+  db = result.db;
+  await m001.up(db as Kysely<any>);
+  await m002.up(db as Kysely<any>);
+  await m004.up(db as Kysely<any>);
+  await m005.up(db as Kysely<any>);
+  await m006.up(db as Kysely<any>);
+  await m007.up(db as Kysely<any>);
+});
 
 afterEach(async () => {
-  await db.destroy()
-  vi.clearAllMocks()
+  await db.destroy();
+  vi.clearAllMocks();
   // Reset the embedding dimension counter between tests
-  const mod = await import('@repo/cairn/embeddings') as any
-  mod._reset()
-})
+  const mod = (await import("@repo/cairn/embeddings")) as any;
+  mod._reset();
+});
 
 async function insertObservation(
   conversationId: string,
   content: string,
-  priority: 'low' | 'medium' | 'high',
+  priority: "low" | "medium" | "high",
 ) {
-  const id = nanoid()
+  const id = nanoid();
   await db
-    .insertInto('observations')
+    .insertInto("observations")
     .values({
       id,
       conversation_id: conversationId,
       content,
       priority,
-      observation_date: '2024-01-15',
+      observation_date: "2024-01-15",
       source_message_ids: null,
       token_count: Math.ceil(content.length / 4),
       generation: 0,
     })
-    .execute()
-  return id
+    .execute();
+  return id;
 }
 
-describe('promoteObservations', () => {
-  it('promotes novel medium/high observations to memories', async () => {
-    const mm = new MemoryManager(db, { workerConfig: { apiKey: 'test', model: 'test-model' }, apiKey: 'test' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+describe("promoteObservations", () => {
+  it("promotes novel medium/high observations to memories", async () => {
+    const mm = new MemoryManager(db, {
+      workerConfig: { apiKey: "test", model: "test-model" },
+      apiKey: "test",
+    });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    await insertObservation(convId, 'User has a dentist appointment on March 5th', 'high')
-    await insertObservation(convId, 'User prefers dark mode in all editors', 'medium')
+    await insertObservation(convId, "User has a dentist appointment on March 5th", "high");
+    await insertObservation(convId, "User prefers dark mode in all editors", "medium");
 
-    const promoted = await mm.promoteObservations(convId)
-    expect(promoted).toBe(2)
+    const promoted = await mm.promoteObservations(convId);
+    expect(promoted).toBe(2);
 
     // Verify memories were created
     const memories = await db
-      .selectFrom('memories')
+      .selectFrom("memories")
       .selectAll()
-      .where('source', '=', 'observer')
-      .execute()
-    expect(memories).toHaveLength(2)
-    expect(memories[0].category).toBe('observation')
-    expect(memories[0].embedding).not.toBeNull()
-  })
-
-  it('skips low-priority observations', async () => {
-    const mm = new MemoryManager(db, { workerConfig: { apiKey: 'test', model: 'test-model' }, apiKey: 'test' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
-
-    await insertObservation(convId, 'User said hello', 'low')
-    await insertObservation(convId, 'Important fact about user', 'high')
-
-    const promoted = await mm.promoteObservations(convId)
-    expect(promoted).toBe(1)
+      .where("source", "=", "observer")
+      .execute();
+    expect(memories).toHaveLength(2);
+    expect(memories[0].category).toBe("observation");
+    expect(memories[0].embedding).not.toBeNull();
+  });
+
+  it("skips low-priority observations", async () => {
+    const mm = new MemoryManager(db, {
+      workerConfig: { apiKey: "test", model: "test-model" },
+      apiKey: "test",
+    });
+    const convId = await getOrCreateConversation(db, "cli", null);
+
+    await insertObservation(convId, "User said hello", "low");
+    await insertObservation(convId, "Important fact about user", "high");
+
+    const promoted = await mm.promoteObservations(convId);
+    expect(promoted).toBe(1);
 
     const memories = await db
-      .selectFrom('memories')
+      .selectFrom("memories")
       .selectAll()
-      .where('source', '=', 'observer')
-      .execute()
-    expect(memories).toHaveLength(1)
-    expect(memories[0].content).toBe('Important fact about user')
-  })
-
-  it('skips near-duplicate observations (sim >= 0.85)', async () => {
-    const mm = new MemoryManager(db, { workerConfig: { apiKey: 'test', model: 'test-model' }, apiKey: 'test' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+      .where("source", "=", "observer")
+      .execute();
+    expect(memories).toHaveLength(1);
+    expect(memories[0].content).toBe("Important fact about user");
+  });
+
+  it("skips near-duplicate observations (sim >= 0.85)", async () => {
+    const mm = new MemoryManager(db, {
+      workerConfig: { apiKey: "test", model: "test-model" },
+      apiKey: "test",
+    });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
     // Pre-store a memory with the same content → embedding will match exactly
-    const content = 'User is allergic to shellfish'
-    const embedding = await generateEmbedding('test', content)
+    const content = "User is allergic to shellfish";
+    const embedding = await generateEmbedding("test", content);
     await storeMemory(db, {
       content,
-      category: 'health',
-      source: 'user',
+      category: "health",
+      source: "user",
       embedding: JSON.stringify(embedding),
       tags: null,
-    })
+    });
 
     // Insert observation with identical text → should be skipped
-    await insertObservation(convId, content, 'high')
+    await insertObservation(convId, content, "high");
 
-    const promoted = await mm.promoteObservations(convId)
-    expect(promoted).toBe(0)
-  })
+    const promoted = await mm.promoteObservations(convId);
+    expect(promoted).toBe(0);
+  });
 
-  it('marks all candidates promoted_at regardless of outcome', async () => {
-    const mm = new MemoryManager(db, { workerConfig: { apiKey: 'test', model: 'test-model' }, apiKey: 'test' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("marks all candidates promoted_at regardless of outcome", async () => {
+    const mm = new MemoryManager(db, {
+      workerConfig: { apiKey: "test", model: "test-model" },
+      apiKey: "test",
+    });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
     // One novel, one that will be a near-duplicate
-    const dupContent = 'Existing memory content'
-    const embedding = await generateEmbedding('test', dupContent)
+    const dupContent = "Existing memory content";
+    const embedding = await generateEmbedding("test", dupContent);
     await storeMemory(db, {
       content: dupContent,
-      category: 'general',
-      source: 'user',
+      category: "general",
+      source: "user",
       embedding: JSON.stringify(embedding),
       tags: null,
-    })
+    });
 
-    const id1 = await insertObservation(convId, dupContent, 'medium') // dup
-    const id2 = await insertObservation(convId, 'Completely novel observation content here xyz', 'high') // novel
+    const id1 = await insertObservation(convId, dupContent, "medium"); // dup
+    const id2 = await insertObservation(
+      convId,
+      "Completely novel observation content here xyz",
+      "high",
+    ); // novel
 
-    await mm.promoteObservations(convId)
+    await mm.promoteObservations(convId);
 
     // Both should have promoted_at set
     const obs = await db
-      .selectFrom('observations')
-      .select(['id', 'promoted_at'])
-      .where('id', 'in', [id1, id2])
-      .execute()
+      .selectFrom("observations")
+      .select(["id", "promoted_at"])
+      .where("id", "in", [id1, id2])
+      .execute();
 
     for (const o of obs) {
-      expect(o.promoted_at).not.toBeNull()
+      expect(o.promoted_at).not.toBeNull();
     }
-  })
+  });
 
-  it('does nothing without worker config', async () => {
-    const mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("does nothing without worker config", async () => {
+    const mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    await insertObservation(convId, 'Some observation', 'high')
+    await insertObservation(convId, "Some observation", "high");
 
-    const promoted = await mm.promoteObservations(convId)
-    expect(promoted).toBe(0)
-  })
+    const promoted = await mm.promoteObservations(convId);
+    expect(promoted).toBe(0);
+  });
 
-  it('does not re-promote already-promoted observations', async () => {
-    const mm = new MemoryManager(db, { workerConfig: { apiKey: 'test', model: 'test-model' }, apiKey: 'test' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("does not re-promote already-promoted observations", async () => {
+    const mm = new MemoryManager(db, {
+      workerConfig: { apiKey: "test", model: "test-model" },
+      apiKey: "test",
+    });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
-    await insertObservation(convId, 'First run observation', 'high')
+    await insertObservation(convId, "First run observation", "high");
 
-    const first = await mm.promoteObservations(convId)
-    expect(first).toBe(1)
+    const first = await mm.promoteObservations(convId);
+    expect(first).toBe(1);
 
     // Second run: no new candidates
-    const second = await mm.promoteObservations(convId)
-    expect(second).toBe(0)
-  })
+    const second = await mm.promoteObservations(convId);
+    expect(second).toBe(0);
+  });
 
-  it('deduplicates within the same batch', async () => {
-    const mm = new MemoryManager(db, { workerConfig: { apiKey: 'test', model: 'test-model' }, apiKey: 'test' })
-    const convId = await getOrCreateConversation(db, 'cli', null)
+  it("deduplicates within the same batch", async () => {
+    const mm = new MemoryManager(db, {
+      workerConfig: { apiKey: "test", model: "test-model" },
+      apiKey: "test",
+    });
+    const convId = await getOrCreateConversation(db, "cli", null);
 
     // Two observations with identical content → second should be deduped
-    const content = 'User bought a new laptop today'
-    await insertObservation(convId, content, 'medium')
-    await insertObservation(convId, content, 'high')
+    const content = "User bought a new laptop today";
+    await insertObservation(convId, content, "medium");
+    await insertObservation(convId, content, "high");
 
-    const promoted = await mm.promoteObservations(convId)
+    const promoted = await mm.promoteObservations(convId);
     // First promotes, second is a duplicate of the first
-    expect(promoted).toBe(1)
+    expect(promoted).toBe(1);
 
     const memories = await db
-      .selectFrom('memories')
+      .selectFrom("memories")
       .selectAll()
-      .where('source', '=', 'observer')
-      .execute()
-    expect(memories).toHaveLength(1)
-  })
-})
+      .where("source", "=", "observer")
+      .execute();
+    expect(memories).toHaveLength(1);
+  });
+});
diff --git a/apps/construct/src/__tests__/memory/reflector.test.ts b/apps/construct/src/__tests__/memory/reflector.test.ts
index 4407b59..e2127e2 100644
--- a/apps/construct/src/__tests__/memory/reflector.test.ts
+++ b/apps/construct/src/__tests__/memory/reflector.test.ts
@@ -1,70 +1,68 @@
-import { describe, it, expect } from 'vitest'
-import { validateSupersededIds } from '@repo/cairn'
-import { isDegenerateRaw, sanitizeObservations } from '@repo/cairn'
+import { describe, it, expect } from "vitest";
+import { validateSupersededIds } from "@repo/cairn";
+import { isDegenerateRaw, sanitizeObservations } from "@repo/cairn";
 
-describe('validateSupersededIds', () => {
-  it('keeps IDs present in input set', () => {
-    const inputIds = new Set(['a', 'b', 'c'])
-    const result = validateSupersededIds(['a', 'c'], inputIds)
-    expect(result).toEqual(['a', 'c'])
-  })
+describe("validateSupersededIds", () => {
+  it("keeps IDs present in input set", () => {
+    const inputIds = new Set(["a", "b", "c"]);
+    const result = validateSupersededIds(["a", "c"], inputIds);
+    expect(result).toEqual(["a", "c"]);
+  });
 
-  it('filters out IDs not in input set', () => {
-    const inputIds = new Set(['a', 'b'])
-    const result = validateSupersededIds(['a', 'x', 'y', 'b'], inputIds)
-    expect(result).toEqual(['a', 'b'])
-  })
+  it("filters out IDs not in input set", () => {
+    const inputIds = new Set(["a", "b"]);
+    const result = validateSupersededIds(["a", "x", "y", "b"], inputIds);
+    expect(result).toEqual(["a", "b"]);
+  });
 
-  it('filters out non-string values', () => {
-    const inputIds = new Set(['a'])
-    const result = validateSupersededIds(['a', 42, null, undefined, true], inputIds)
-    expect(result).toEqual(['a'])
-  })
+  it("filters out non-string values", () => {
+    const inputIds = new Set(["a"]);
+    const result = validateSupersededIds(["a", 42, null, undefined, true], inputIds);
+    expect(result).toEqual(["a"]);
+  });
 
-  it('returns empty array when no IDs match', () => {
-    const inputIds = new Set(['a', 'b'])
-    const result = validateSupersededIds(['x', 'y', 'z'], inputIds)
-    expect(result).toEqual([])
-  })
+  it("returns empty array when no IDs match", () => {
+    const inputIds = new Set(["a", "b"]);
+    const result = validateSupersededIds(["x", "y", "z"], inputIds);
+    expect(result).toEqual([]);
+  });
 
-  it('returns empty array for empty input', () => {
-    const inputIds = new Set(['a'])
-    const result = validateSupersededIds([], inputIds)
-    expect(result).toEqual([])
-  })
-})
+  it("returns empty array for empty input", () => {
+    const inputIds = new Set(["a"]);
+    const result = validateSupersededIds([], inputIds);
+    expect(result).toEqual([]);
+  });
+});
 
-describe('reflector robustness (shared functions)', () => {
-  const obs = (content: string, priority: 'low' | 'medium' | 'high' = 'medium') => ({
-    content,
-    priority,
-    observation_date: '2025-01-15',
-  })
+const obs = (content: string, priority: "low" | "medium" | "high" = "medium") => ({
+  content,
+  priority,
+  observation_date: "2025-01-15",
+});
 
-  it('isDegenerateRaw catches repeated output from reflector', () => {
+describe("reflector robustness (shared functions)", () => {
+  it("isDegenerateRaw catches repeated output from reflector", () => {
     // Simulate a model stuck in a loop producing the same JSON block
-    const block = '{"content":"User likes cats","priority":"high","observation_date":"2025-01-15"},'
-    const repeated = block.repeat(500)
-    expect(isDegenerateRaw(repeated)).toBe(true)
-  })
+    const block =
+      '{"content":"User likes cats","priority":"high","observation_date":"2025-01-15"},';
+    const repeated = block.repeat(500);
+    expect(isDegenerateRaw(repeated)).toBe(true);
+  });
 
-  it('sanitizeObservations deduplicates merged observations', () => {
+  it("sanitizeObservations deduplicates merged observations", () => {
     // Reflector might emit duplicates when merging
     const result = sanitizeObservations(
-      [obs('User lives in Portland'), obs('User works remotely'), obs('User lives in Portland')],
+      [obs("User lives in Portland"), obs("User works remotely"), obs("User lives in Portland")],
       10,
-    )
-    expect(result).toHaveLength(2)
-    expect(result.map((o) => o.content)).toEqual([
-      'User lives in Portland',
-      'User works remotely',
-    ])
-  })
+    );
+    expect(result).toHaveLength(2);
+    expect(result.map((o) => o.content)).toEqual(["User lives in Portland", "User works remotely"]);
+  });
 
-  it('sanitizeObservations caps based on input observation count', () => {
-    const observations = Array.from({ length: 80 }, (_, i) => obs(`merged fact ${i}`))
+  it("sanitizeObservations caps based on input observation count", () => {
+    const observations = Array.from({ length: 80 }, (_, i) => obs(`merged fact ${i}`));
     // 5 input observations → cap = max(15, 50) = 50
-    const result = sanitizeObservations(observations, 5)
-    expect(result).toHaveLength(50)
-  })
-})
+    const result = sanitizeObservations(observations, 5);
+    expect(result).toHaveLength(50);
+  });
+});
diff --git a/apps/construct/src/__tests__/memory/tokens.test.ts b/apps/construct/src/__tests__/memory/tokens.test.ts
index ffad506..5afbc39 100644
--- a/apps/construct/src/__tests__/memory/tokens.test.ts
+++ b/apps/construct/src/__tests__/memory/tokens.test.ts
@@ -1,39 +1,39 @@
-import { describe, it, expect } from 'vitest'
-import { estimateTokens, estimateMessageTokens } from '@repo/cairn'
+import { describe, it, expect } from "vitest";
+import { estimateTokens, estimateMessageTokens } from "@repo/cairn";
 
-describe('estimateTokens', () => {
-  it('estimates tokens from string length', () => {
+describe("estimateTokens", () => {
+  it("estimates tokens from string length", () => {
     // 20 chars → 5 tokens (20/4)
-    expect(estimateTokens('12345678901234567890')).toBe(5)
-  })
+    expect(estimateTokens("12345678901234567890")).toBe(5);
+  });
 
-  it('rounds up partial tokens', () => {
+  it("rounds up partial tokens", () => {
     // 5 chars → ceil(5/4) = 2
-    expect(estimateTokens('hello')).toBe(2)
-  })
+    expect(estimateTokens("hello")).toBe(2);
+  });
 
-  it('returns 0 for empty string', () => {
-    expect(estimateTokens('')).toBe(0)
-  })
-})
+  it("returns 0 for empty string", () => {
+    expect(estimateTokens("")).toBe(0);
+  });
+});
 
-describe('estimateMessageTokens', () => {
-  it('includes per-message overhead', () => {
+describe("estimateMessageTokens", () => {
+  it("includes per-message overhead", () => {
     const tokens = estimateMessageTokens([
-      { role: 'user', content: '12345678901234567890' }, // 5 content + 4 overhead = 9
-    ])
-    expect(tokens).toBe(9)
-  })
+      { role: "user", content: "12345678901234567890" }, // 5 content + 4 overhead = 9
+    ]);
+    expect(tokens).toBe(9);
+  });
 
-  it('sums across multiple messages', () => {
+  it("sums across multiple messages", () => {
     const tokens = estimateMessageTokens([
-      { role: 'user', content: '12345678901234567890' },     // 5 + 4 = 9
-      { role: 'assistant', content: '12345678901234567890' }, // 5 + 4 = 9
-    ])
-    expect(tokens).toBe(18)
-  })
+      { role: "user", content: "12345678901234567890" }, // 5 + 4 = 9
+      { role: "assistant", content: "12345678901234567890" }, // 5 + 4 = 9
+    ]);
+    expect(tokens).toBe(18);
+  });
 
-  it('returns 0 for empty array', () => {
-    expect(estimateMessageTokens([])).toBe(0)
-  })
-})
+  it("returns 0 for empty array", () => {
+    expect(estimateMessageTokens([])).toBe(0);
+  });
+});
diff --git a/apps/construct/src/__tests__/observation-context.test.ts b/apps/construct/src/__tests__/observation-context.test.ts
index f128e95..b2ff395 100644
--- a/apps/construct/src/__tests__/observation-context.test.ts
+++ b/apps/construct/src/__tests__/observation-context.test.ts
@@ -5,133 +5,133 @@
  * No API key needed. MemoryManager is instantiated with null workerConfig.
  */
 
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import type { Database } from '../db/schema.js'
-import { getOrCreateConversation, saveMessage } from '../db/queries.js'
-import { MemoryManager, renderObservations } from '@repo/cairn'
-import type { Observation } from '@repo/cairn'
-import { setupDb, seedObservations, observationFixtures } from './fixtures.js'
-
-let db: Kysely<Database>
-let mm: MemoryManager
-let convId: string
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import type { Database } from "../db/schema.js";
+import { getOrCreateConversation, saveMessage } from "../db/queries.js";
+import { MemoryManager, renderObservations } from "@repo/cairn";
+import type { Observation } from "@repo/cairn";
+import { setupDb, seedObservations, observationFixtures } from "./fixtures.js";
+
+let db: Kysely<Database>;
+let mm: MemoryManager;
+let convId: string;
 
 beforeEach(async () => {
-  db = await setupDb()
-  mm = new MemoryManager(db, { workerConfig: null, apiKey: '' })
-  convId = await getOrCreateConversation(db, 'test', null)
-})
+  db = await setupDb();
+  mm = new MemoryManager(db, { workerConfig: null, apiKey: "" });
+  convId = await getOrCreateConversation(db, "test", null);
+});
 
 afterEach(async () => {
-  await db.destroy()
-})
+  await db.destroy();
+});
 
-describe('MemoryManager.buildContext — observations replace old messages', () => {
-  it('returns observations text + only unobserved messages', async () => {
+describe("MemoryManager.buildContext — observations replace old messages", () => {
+  it("returns observations text + only unobserved messages", async () => {
     // Insert 15 messages
-    const msgIds: string[] = []
+    const msgIds: string[] = [];
     for (let i = 1; i <= 15; i++) {
       const id = await saveMessage(db, {
         conversation_id: convId,
-        role: i % 2 === 0 ? 'assistant' : 'user',
+        role: i % 2 === 0 ? "assistant" : "user",
         content: `Message ${i}`,
-      })
-      msgIds.push(id)
+      });
+      msgIds.push(id);
     }
 
     // Seed observations
-    await seedObservations(db, convId)
+    await seedObservations(db, convId);
 
     // Set watermark to message 10 (messages 1-10 are "observed")
     await db
-      .updateTable('conversations')
+      .updateTable("conversations")
       .set({ observed_up_to_message_id: msgIds[9] }) // 0-indexed, msg 10
-      .where('id', '=', convId)
-      .execute()
+      .where("id", "=", convId)
+      .execute();
 
-    const ctx = await mm.buildContext(convId)
+    const ctx = await mm.buildContext(convId);
 
     // Should have observations
-    expect(ctx.hasObservations).toBe(true)
-    expect(ctx.observationsText).toBeTruthy()
+    expect(ctx.hasObservations).toBe(true);
+    expect(ctx.observationsText).toBeTruthy();
 
     // Active messages should be only 11-15 (after watermark)
-    expect(ctx.activeMessages).toHaveLength(5)
-    expect(ctx.activeMessages[0].content).toBe('Message 11')
-    expect(ctx.activeMessages[4].content).toBe('Message 15')
-  })
-})
-
-describe('MemoryManager.buildContext — no observations fallback', () => {
-  it('returns empty observations + all messages when no observations exist', async () => {
+    expect(ctx.activeMessages).toHaveLength(5);
+    expect(ctx.activeMessages[0].content).toBe("Message 11");
+    expect(ctx.activeMessages[4].content).toBe("Message 15");
+  });
+});
+
+describe("MemoryManager.buildContext — no observations fallback", () => {
+  it("returns empty observations + all messages when no observations exist", async () => {
     // Insert some messages, no observations
     for (let i = 1; i <= 5; i++) {
       await saveMessage(db, {
         conversation_id: convId,
-        role: i % 2 === 0 ? 'assistant' : 'user',
+        role: i % 2 === 0 ? "assistant" : "user",
         content: `Message ${i}`,
-      })
+      });
     }
 
-    const ctx = await mm.buildContext(convId)
+    const ctx = await mm.buildContext(convId);
 
-    expect(ctx.hasObservations).toBe(false)
-    expect(ctx.observationsText).toBe('')
-    expect(ctx.activeMessages).toHaveLength(5)
-    expect(ctx.activeMessages[0].content).toBe('Message 1')
-  })
-})
+    expect(ctx.hasObservations).toBe(false);
+    expect(ctx.observationsText).toBe("");
+    expect(ctx.activeMessages).toHaveLength(5);
+    expect(ctx.activeMessages[0].content).toBe("Message 1");
+  });
+});
 
-describe('renderObservations — priority markers', () => {
-  it('renders high/medium/low with correct markers', () => {
+describe("renderObservations — priority markers", () => {
+  it("renders high/medium/low with correct markers", () => {
     const observations: Observation[] = [
       {
-        id: 'obs-1',
+        id: "obs-1",
         conversation_id: convId,
-        content: 'High priority item',
-        priority: 'high',
-        observation_date: '2024-01-15',
+        content: "High priority item",
+        priority: "high",
+        observation_date: "2024-01-15",
         source_message_ids: [],
         token_count: 10,
         generation: 0,
         superseded_at: null,
-        created_at: '2024-01-15T10:00:00Z',
+        created_at: "2024-01-15T10:00:00Z",
       },
       {
-        id: 'obs-2',
+        id: "obs-2",
         conversation_id: convId,
-        content: 'Medium priority item',
-        priority: 'medium',
-        observation_date: '2024-01-15',
+        content: "Medium priority item",
+        priority: "medium",
+        observation_date: "2024-01-15",
         source_message_ids: [],
         token_count: 10,
         generation: 0,
         superseded_at: null,
-        created_at: '2024-01-15T10:01:00Z',
+        created_at: "2024-01-15T10:01:00Z",
       },
       {
-        id: 'obs-3',
+        id: "obs-3",
         conversation_id: convId,
-        content: 'Low priority item',
-        priority: 'low',
-        observation_date: '2024-01-15',
+        content: "Low priority item",
+        priority: "low",
+        observation_date: "2024-01-15",
         source_message_ids: [],
         token_count: 10,
         generation: 0,
         superseded_at: null,
-        created_at: '2024-01-15T10:02:00Z',
+        created_at: "2024-01-15T10:02:00Z",
       },
-    ]
+    ];
 
-    const rendered = renderObservations(observations)
+    const rendered = renderObservations(observations);
 
-    expect(rendered).toContain('! [2024-01-15] High priority item')
-    expect(rendered).toContain('- [2024-01-15] Medium priority item')
-    expect(rendered).toContain('~ [2024-01-15] Low priority item')
-  })
+    expect(rendered).toContain("! [2024-01-15] High priority item");
+    expect(rendered).toContain("- [2024-01-15] Medium priority item");
+    expect(rendered).toContain("~ [2024-01-15] Low priority item");
+  });
 
-  it('renders observations in input order', () => {
+  it("renders observations in input order", () => {
     const observations: Observation[] = observationFixtures.map((o, i) => ({
       id: `obs-${i}`,
       conversation_id: convId,
@@ -143,120 +143,120 @@ describe('renderObservations — priority markers', () => {
       generation: 0,
       superseded_at: null,
       created_at: `2024-01-15T10:0${i}:00Z`,
-    }))
+    }));
 
-    const rendered = renderObservations(observations)
-    const lines = rendered.split('\n')
+    const rendered = renderObservations(observations);
+    const lines = rendered.split("\n");
 
     // First observation (high priority dentist)
-    expect(lines[0]).toMatch(/^! \[2024-01-15\] User has a dentist/)
+    expect(lines[0]).toMatch(/^! \[2024-01-15\] User has a dentist/);
 
     // Order should match input array order
-    expect(lines.length).toBe(observationFixtures.length)
-  })
+    expect(lines.length).toBe(observationFixtures.length);
+  });
 
-  it('returns empty string for no observations', () => {
-    expect(renderObservations([])).toBe('')
-  })
-})
+  it("returns empty string for no observations", () => {
+    expect(renderObservations([])).toBe("");
+  });
+});
 
-describe('MemoryManager.buildContext — watermark accuracy', () => {
-  it('watermark at message 10 returns only messages 11+', async () => {
-    const msgIds: string[] = []
+describe("MemoryManager.buildContext — watermark accuracy", () => {
+  it("watermark at message 10 returns only messages 11+", async () => {
+    const msgIds: string[] = [];
     for (let i = 1; i <= 20; i++) {
       const id = await saveMessage(db, {
         conversation_id: convId,
-        role: i % 2 === 0 ? 'assistant' : 'user',
+        role: i % 2 === 0 ? "assistant" : "user",
         content: `Message ${i}`,
-      })
-      msgIds.push(id)
+      });
+      msgIds.push(id);
     }
 
     // Set watermark to message 10
     await db
-      .updateTable('conversations')
+      .updateTable("conversations")
       .set({ observed_up_to_message_id: msgIds[9] })
-      .where('id', '=', convId)
-      .execute()
+      .where("id", "=", convId)
+      .execute();
 
-    const unobserved = await mm.getUnobservedMessages(convId)
+    const unobserved = await mm.getUnobservedMessages(convId);
 
-    expect(unobserved).toHaveLength(10)
-    expect(unobserved[0].content).toBe('Message 11')
-    expect(unobserved[9].content).toBe('Message 20')
-  })
+    expect(unobserved).toHaveLength(10);
+    expect(unobserved[0].content).toBe("Message 11");
+    expect(unobserved[9].content).toBe("Message 20");
+  });
 
-  it('no watermark returns all messages', async () => {
+  it("no watermark returns all messages", async () => {
     for (let i = 1; i <= 5; i++) {
       await saveMessage(db, {
         conversation_id: convId,
-        role: 'user',
+        role: "user",
         content: `Message ${i}`,
-      })
+      });
     }
 
-    const unobserved = await mm.getUnobservedMessages(convId)
-    expect(unobserved).toHaveLength(5)
-    expect(unobserved[0].content).toBe('Message 1')
-  })
+    const unobserved = await mm.getUnobservedMessages(convId);
+    expect(unobserved).toHaveLength(5);
+    expect(unobserved[0].content).toBe("Message 1");
+  });
 
-  it('watermark at last message returns no unobserved', async () => {
-    const msgIds: string[] = []
+  it("watermark at last message returns no unobserved", async () => {
+    const msgIds: string[] = [];
     for (let i = 1; i <= 5; i++) {
       const id = await saveMessage(db, {
         conversation_id: convId,
-        role: 'user',
+        role: "user",
         content: `Message ${i}`,
-      })
-      msgIds.push(id)
+      });
+      msgIds.push(id);
     }
 
     // Set watermark to the last message
     await db
-      .updateTable('conversations')
+      .updateTable("conversations")
       .set({ observed_up_to_message_id: msgIds[4] })
-      .where('id', '=', convId)
-      .execute()
+      .where("id", "=", convId)
+      .execute();
 
-    const unobserved = await mm.getUnobservedMessages(convId)
-    expect(unobserved).toHaveLength(0)
-  })
-})
+    const unobserved = await mm.getUnobservedMessages(convId);
+    expect(unobserved).toHaveLength(0);
+  });
+});
 
-describe('MemoryManager.buildContext — full integration', () => {
-  it('observations text matches DB state + messages split correctly', async () => {
+describe("MemoryManager.buildContext — full integration", () => {
+  it("observations text matches DB state + messages split correctly", async () => {
     // Insert messages and set watermark
-    const msgIds: string[] = []
+    const msgIds: string[] = [];
     for (let i = 1; i <= 10; i++) {
       const id = await saveMessage(db, {
         conversation_id: convId,
-        role: i % 2 === 0 ? 'assistant' : 'user',
+        role: i % 2 === 0 ? "assistant" : "user",
         content: `Msg ${i}`,
-      })
-      msgIds.push(id)
+      });
+      msgIds.push(id);
     }
 
     // Seed observations and set watermark at message 7
-    await seedObservations(db, convId)
+    await seedObservations(db, convId);
     await db
-      .updateTable('conversations')
+      .updateTable("conversations")
       .set({ observed_up_to_message_id: msgIds[6] })
-      .where('id', '=', convId)
-      .execute()
+      .where("id", "=", convId)
+      .execute();
 
-    const ctx = await mm.buildContext(convId)
+    const ctx = await mm.buildContext(convId);
 
     // Observations rendered
-    expect(ctx.hasObservations).toBe(true)
-    expect(ctx.observationsText).toContain('dentist appointment')
-    expect(ctx.observationsText).toContain('learning Rust')
-    expect(ctx.observationsText).toContain('DataPipe')
-    expect(ctx.observationsText).toContain('Miso')
-    expect(ctx.observationsText).toContain('Sarah')
+    expect(ctx.hasObservations).toBe(true);
+    expect(ctx.observationsText).toContain("dentist appointment");
+    expect(ctx.observationsText).toContain("learning Rust");
+    expect(ctx.observationsText).toContain("DataPipe");
+    expect(ctx.observationsText).toContain("Miso");
+    expect(ctx.observationsText).toContain("Sarah");
 
     // Active messages are 8-10
-    expect(ctx.activeMessages).toHaveLength(3)
-    expect(ctx.activeMessages[0].content).toBe('Msg 8')
-    expect(ctx.activeMessages[2].content).toBe('Msg 10')
-  })
-})
+    expect(ctx.activeMessages).toHaveLength(3);
+    expect(ctx.activeMessages[0].content).toBe("Msg 8");
+    expect(ctx.activeMessages[2].content).toBe("Msg 10");
+  });
+});
diff --git a/apps/construct/src/__tests__/recall-pipeline.test.ts b/apps/construct/src/__tests__/recall-pipeline.test.ts
index d4c162e..a061e5f 100644
--- a/apps/construct/src/__tests__/recall-pipeline.test.ts
+++ b/apps/construct/src/__tests__/recall-pipeline.test.ts
@@ -5,194 +5,189 @@
  * Uses synthetic 16-d embeddings. No API key needed.
  */
 
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import type { Database } from '../db/schema.js'
-import { recallMemories, cosineSimilarity } from '@repo/cairn'
-import {
-  setupDb,
-  seedMemories,
-  queryEmbeddings,
-  memoryEmbeddings,
-} from './fixtures.js'
-
-let db: Kysely<Database>
-let memIds: Record<string, string>
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import type { Database } from "../db/schema.js";
+import { recallMemories, cosineSimilarity } from "@repo/cairn";
+import { setupDb, seedMemories, queryEmbeddings, memoryEmbeddings } from "./fixtures.js";
+
+let db: Kysely<Database>;
+let memIds: Record<string, string>;
 
 beforeEach(async () => {
-  db = await setupDb()
-  const seeded = await seedMemories(db)
-  memIds = seeded.ids
-})
+  db = await setupDb();
+  const seeded = await seedMemories(db);
+  memIds = seeded.ids;
+});
 
 afterEach(async () => {
-  await db.destroy()
-})
+  await db.destroy();
+});
 
-describe('recallMemories — embedding recall', () => {
-  it('finds semantically related memories via embedding similarity', async () => {
+describe("recallMemories — embedding recall", () => {
+  it("finds semantically related memories via embedding similarity", async () => {
     // "food allergies" query → should find shellfish allergy via embedding, not keyword
-    const results = await recallMemories(db, 'what are my food restrictions', {
+    const results = await recallMemories(db, "what are my food restrictions", {
       queryEmbedding: queryEmbeddings.foodAllergies,
-    })
+    });
 
-    const ids = results.map((r) => r.id)
-    expect(ids).toContain(memIds.shellfish)
+    const ids = results.map((r) => r.id);
+    expect(ids).toContain(memIds.shellfish);
 
     // Verify it was found via embedding, not keyword (query has no matching keywords)
-    const shellfish = results.find((r) => r.id === memIds.shellfish)
-    expect(shellfish?.matchType).toBe('embedding')
-    expect(shellfish?.score).toBeGreaterThan(0.9)
-  })
+    const shellfish = results.find((r) => r.id === memIds.shellfish);
+    expect(shellfish?.matchType).toBe("embedding");
+    expect(shellfish?.score).toBeGreaterThan(0.9);
+  });
 
-  it('pet query returns cat memory with high score', async () => {
-    const results = await recallMemories(db, 'furry friends', {
+  it("pet query returns cat memory with high score", async () => {
+    const results = await recallMemories(db, "furry friends", {
       queryEmbedding: queryEmbeddings.pet,
-    })
+    });
 
-    const ids = results.map((r) => r.id)
-    expect(ids).toContain(memIds.miso)
+    const ids = results.map((r) => r.id);
+    expect(ids).toContain(memIds.miso);
 
     // Should NOT contain unrelated memories (work, health)
-    expect(ids).not.toContain(memIds.datapipe)
-    expect(ids).not.toContain(memIds.shellfish)
-  })
+    expect(ids).not.toContain(memIds.datapipe);
+    expect(ids).not.toContain(memIds.shellfish);
+  });
 
-  it('work query returns both work-category memories', async () => {
-    const results = await recallMemories(db, 'engineering job', {
+  it("work query returns both work-category memories", async () => {
+    const results = await recallMemories(db, "engineering job", {
       queryEmbedding: queryEmbeddings.work,
-    })
+    });
 
-    const ids = results.map((r) => r.id)
-    expect(ids).toContain(memIds.datapipe)
-    expect(ids).toContain(memIds.clickstream)
+    const ids = results.map((r) => r.id);
+    expect(ids).toContain(memIds.datapipe);
+    expect(ids).toContain(memIds.clickstream);
 
     // darkMode has a work component — verify it also matches
-    expect(ids).toContain(memIds.darkMode)
-  })
-})
+    expect(ids).toContain(memIds.darkMode);
+  });
+});
 
-describe('recallMemories — FTS + embedding dedup', () => {
-  it('returns a memory once even when both FTS and embedding match', async () => {
+describe("recallMemories — FTS + embedding dedup", () => {
+  it("returns a memory once even when both FTS and embedding match", async () => {
     // "shellfish" keyword → FTS match, plus health embedding → embedding match
-    const results = await recallMemories(db, 'shellfish', {
+    const results = await recallMemories(db, "shellfish", {
       queryEmbedding: queryEmbeddings.foodAllergies,
-    })
+    });
 
-    const shellfish = results.filter((r) => r.id === memIds.shellfish)
-    expect(shellfish).toHaveLength(1)
+    const shellfish = results.filter((r) => r.id === memIds.shellfish);
+    expect(shellfish).toHaveLength(1);
 
     // FTS should have found it first
-    expect(shellfish[0].matchType).toBe('fts5')
-  })
+    expect(shellfish[0].matchType).toBe("fts5");
+  });
 
-  it('FTS results come before embedding-only results', async () => {
+  it("FTS results come before embedding-only results", async () => {
     // "Rust" is a keyword match AND has embedding similarity to hobbies
-    const results = await recallMemories(db, 'Rust', {
+    const results = await recallMemories(db, "Rust", {
       queryEmbedding: queryEmbeddings.hobbies,
-    })
+    });
 
     // Rust memory should be found via FTS first
-    const rustIdx = results.findIndex((r) => r.id === memIds.rust)
-    expect(rustIdx).toBeGreaterThanOrEqual(0)
-    expect(results[rustIdx].matchType).toBe('fts5')
+    const rustIdx = results.findIndex((r) => r.id === memIds.rust);
+    expect(rustIdx).toBeGreaterThanOrEqual(0);
+    expect(results[rustIdx].matchType).toBe("fts5");
 
     // Any embedding-only results should come after FTS results
-    const ftsResults = results.filter((r) => r.matchType === 'fts5')
-    const embeddingResults = results.filter((r) => r.matchType === 'embedding')
+    const ftsResults = results.filter((r) => r.matchType === "fts5");
+    const embeddingResults = results.filter((r) => r.matchType === "embedding");
     if (embeddingResults.length > 0) {
-      const lastFtsIdx = results.findIndex((r) => r.id === ftsResults[ftsResults.length - 1].id)
-      const firstEmbIdx = results.findIndex((r) => r.id === embeddingResults[0].id)
-      expect(firstEmbIdx).toBeGreaterThan(lastFtsIdx)
+      const lastFtsIdx = results.findIndex((r) => r.id === ftsResults[ftsResults.length - 1].id);
+      const firstEmbIdx = results.findIndex((r) => r.id === embeddingResults[0].id);
+      expect(firstEmbIdx).toBeGreaterThan(lastFtsIdx);
     }
-  })
-})
+  });
+});
 
-describe('recallMemories — threshold filtering', () => {
-  it('returns nothing when query embedding is orthogonal to all memories', async () => {
+describe("recallMemories — threshold filtering", () => {
+  it("returns nothing when query embedding is orthogonal to all memories", async () => {
     // orthogonal query (dim 15) has zero similarity with all memory embeddings
-    const results = await recallMemories(db, 'xyzzy_no_keyword_match_here', {
+    const results = await recallMemories(db, "xyzzy_no_keyword_match_here", {
       queryEmbedding: queryEmbeddings.orthogonal,
-    })
+    });
 
     // No FTS match (gibberish query), no embedding match (orthogonal), no LIKE match
-    expect(results).toHaveLength(0)
-  })
+    expect(results).toHaveLength(0);
+  });
 
-  it('respects custom similarity threshold', async () => {
+  it("respects custom similarity threshold", async () => {
     // With a very high threshold, even similar embeddings should be filtered out
-    const results = await recallMemories(db, 'xyzzy_no_match', {
+    const results = await recallMemories(db, "xyzzy_no_match", {
       queryEmbedding: queryEmbeddings.work,
       similarityThreshold: 0.99,
-    })
+    });
 
     // datapipe embedding has cosine ~0.995 with work query, so it might pass
     // but clickstream (0.9 work + 0.2 hobby) will be slightly under 0.99
     // The exact threshold behavior depends on normalized vectors
     for (const r of results) {
-      expect(r.score).toBeGreaterThanOrEqual(0.99)
+      expect(r.score).toBeGreaterThanOrEqual(0.99);
     }
-  })
-})
+  });
+});
 
-describe('recallMemories — category filtering', () => {
-  it('filters to work category even when other embeddings score higher', async () => {
+describe("recallMemories — category filtering", () => {
+  it("filters to work category even when other embeddings score higher", async () => {
     // hobbies query would normally match rust (learning) and darkMode (preference)
     // but with category=work, only work memories should return
-    const results = await recallMemories(db, 'engineering', {
+    const results = await recallMemories(db, "engineering", {
       queryEmbedding: queryEmbeddings.work,
-      category: 'work',
-    })
+      category: "work",
+    });
 
     for (const r of results) {
-      expect(r.category).toBe('work')
+      expect(r.category).toBe("work");
     }
 
-    const ids = results.map((r) => r.id)
-    expect(ids).toContain(memIds.datapipe)
-    expect(ids).toContain(memIds.clickstream)
+    const ids = results.map((r) => r.id);
+    expect(ids).toContain(memIds.datapipe);
+    expect(ids).toContain(memIds.clickstream);
     // Not personal/learning even if they have work-like embeddings
-    expect(ids).not.toContain(memIds.rust)
-    expect(ids).not.toContain(memIds.darkMode)
-  })
+    expect(ids).not.toContain(memIds.rust);
+    expect(ids).not.toContain(memIds.darkMode);
+  });
 
-  it('returns empty when category has no matches', async () => {
-    const results = await recallMemories(db, 'xyzzy', {
+  it("returns empty when category has no matches", async () => {
+    const results = await recallMemories(db, "xyzzy", {
       queryEmbedding: queryEmbeddings.pet,
-      category: 'nonexistent',
-    })
+      category: "nonexistent",
+    });
 
-    expect(results).toHaveLength(0)
-  })
-})
+    expect(results).toHaveLength(0);
+  });
+});
 
-describe('synthetic embedding sanity checks', () => {
-  it('same-cluster vectors have high cosine similarity', () => {
+describe("synthetic embedding sanity checks", () => {
+  it("same-cluster vectors have high cosine similarity", () => {
     // Pet query vs cat memory embedding
-    const sim = cosineSimilarity(queryEmbeddings.pet, memoryEmbeddings.miso)
-    expect(sim).toBeGreaterThan(0.9)
-  })
+    const sim = cosineSimilarity(queryEmbeddings.pet, memoryEmbeddings.miso);
+    expect(sim).toBeGreaterThan(0.9);
+  });
 
-  it('cross-cluster vectors have near-zero cosine similarity', () => {
+  it("cross-cluster vectors have near-zero cosine similarity", () => {
     // Pet query vs work memory embedding
-    const sim = cosineSimilarity(queryEmbeddings.pet, memoryEmbeddings.datapipe)
-    expect(sim).toBeCloseTo(0, 1)
-  })
+    const sim = cosineSimilarity(queryEmbeddings.pet, memoryEmbeddings.datapipe);
+    expect(sim).toBeCloseTo(0, 1);
+  });
 
-  it('orthogonal query has zero similarity with all memories', () => {
+  it("orthogonal query has zero similarity with all memories", () => {
     for (const [, emb] of Object.entries(memoryEmbeddings)) {
-      const sim = cosineSimilarity(queryEmbeddings.orthogonal, emb)
-      expect(sim).toBeCloseTo(0, 5)
+      const sim = cosineSimilarity(queryEmbeddings.orthogonal, emb);
+      expect(sim).toBeCloseTo(0, 5);
     }
-  })
+  });
 
-  it('blended embeddings have partial similarity with both clusters', () => {
+  it("blended embeddings have partial similarity with both clusters", () => {
     // darkMode is 50/50 work+hobby → should have ~0.7 similarity with both
-    const workSim = cosineSimilarity(queryEmbeddings.work, memoryEmbeddings.darkMode)
-    const hobbySim = cosineSimilarity(queryEmbeddings.hobbies, memoryEmbeddings.darkMode)
-    expect(workSim).toBeGreaterThan(0.6)
-    expect(hobbySim).toBeGreaterThan(0.6)
-    expect(workSim).toBeLessThan(0.8)
-    expect(hobbySim).toBeLessThan(0.8)
-  })
-})
+    const workSim = cosineSimilarity(queryEmbeddings.work, memoryEmbeddings.darkMode);
+    const hobbySim = cosineSimilarity(queryEmbeddings.hobbies, memoryEmbeddings.darkMode);
+    expect(workSim).toBeGreaterThan(0.6);
+    expect(hobbySim).toBeGreaterThan(0.6);
+    expect(workSim).toBeLessThan(0.8);
+    expect(hobbySim).toBeLessThan(0.8);
+  });
+});
diff --git a/apps/construct/src/__tests__/store-pipeline.test.ts b/apps/construct/src/__tests__/store-pipeline.test.ts
index 04f3c03..e1752ca 100644
--- a/apps/construct/src/__tests__/store-pipeline.test.ts
+++ b/apps/construct/src/__tests__/store-pipeline.test.ts
@@ -11,9 +11,9 @@
  * No API key needed.
  */
 
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import type { Kysely } from 'kysely'
-import type { Database } from '../db/schema.js'
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import type { Kysely } from "kysely";
+import type { Database } from "../db/schema.js";
 import {
   storeMemory,
   updateMemoryEmbedding,
@@ -26,575 +26,583 @@ import {
   traverseGraph,
   getRelatedMemoryIds,
   getNodeEdges,
-} from '@repo/cairn'
-import { setupDb, memoryEmbeddings, queryEmbeddings } from './fixtures.js'
+} from "@repo/cairn";
+import { setupDb, memoryEmbeddings, queryEmbeddings } from "./fixtures.js";
 
-let db: Kysely<Database>
+let db: Kysely<Database>;
 
 beforeEach(async () => {
-  db = await setupDb()
-})
+  db = await setupDb();
+});
 
 afterEach(async () => {
-  await db.destroy()
-})
+  await db.destroy();
+});
 
 // ── Memory → FTS5 write consistency ─────────────────────────────────
 
-describe('memory writes → FTS5 sync', () => {
-  it('storeMemory makes content searchable via FTS5', async () => {
+describe("memory writes → FTS5 sync", () => {
+  it("storeMemory makes content searchable via FTS5", async () => {
     await storeMemory(db, {
-      content: 'Alex is allergic to shellfish',
-      category: 'health',
-      source: 'user',
-    })
-
-    const results = await recallMemories(db, 'shellfish')
-    expect(results.length).toBeGreaterThan(0)
-    expect(results[0].matchType).toBe('fts5')
-    expect(results[0].content).toContain('shellfish')
-  })
-
-  it('storeMemory makes tags searchable via FTS5', async () => {
+      content: "Alex is allergic to shellfish",
+      category: "health",
+      source: "user",
+    });
+
+    const results = await recallMemories(db, "shellfish");
+    expect(results.length).toBeGreaterThan(0);
+    expect(results[0].matchType).toBe("fts5");
+    expect(results[0].content).toContain("shellfish");
+  });
+
+  it("storeMemory makes tags searchable via FTS5", async () => {
     await storeMemory(db, {
-      content: 'Some memory about health',
-      category: 'health',
-      tags: 'epipen,medical,allergy',
-      source: 'user',
-    })
+      content: "Some memory about health",
+      category: "health",
+      tags: "epipen,medical,allergy",
+      source: "user",
+    });
 
     // Search by tag keyword, not in content
-    const results = await recallMemories(db, 'epipen')
-    expect(results.length).toBeGreaterThan(0)
-    expect(results[0].tags).toContain('epipen')
-  })
+    const results = await recallMemories(db, "epipen");
+    expect(results.length).toBeGreaterThan(0);
+    expect(results[0].tags).toContain("epipen");
+  });
 
-  it('archived memory excluded from FTS5 recall', async () => {
+  it("archived memory excluded from FTS5 recall", async () => {
     const mem = await storeMemory(db, {
-      content: 'Alex has a unique pet iguana named Spike',
-      category: 'personal',
-      source: 'user',
-    })
+      content: "Alex has a unique pet iguana named Spike",
+      category: "personal",
+      source: "user",
+    });
 
     // Findable before archiving
-    let results = await recallMemories(db, 'iguana')
-    expect(results.length).toBeGreaterThan(0)
+    let results = await recallMemories(db, "iguana");
+    expect(results.length).toBeGreaterThan(0);
 
-    await forgetMemory(db, mem.id)
+    await forgetMemory(db, mem.id);
 
     // Not findable after archiving
-    results = await recallMemories(db, 'iguana')
-    expect(results).toHaveLength(0)
-  })
+    results = await recallMemories(db, "iguana");
+    expect(results).toHaveLength(0);
+  });
 
-  it('multiple memories with overlapping keywords all appear in FTS5', async () => {
-    await storeMemory(db, { content: 'Alex likes Python for scripting', source: 'user' })
-    await storeMemory(db, { content: 'Alex prefers Python over Ruby', source: 'user' })
-    await storeMemory(db, { content: 'Alex uses Python at work daily', source: 'user' })
+  it("multiple memories with overlapping keywords all appear in FTS5", async () => {
+    await storeMemory(db, { content: "Alex likes Python for scripting", source: "user" });
+    await storeMemory(db, { content: "Alex prefers Python over Ruby", source: "user" });
+    await storeMemory(db, { content: "Alex uses Python at work daily", source: "user" });
 
-    const results = await recallMemories(db, 'Python')
-    expect(results.length).toBe(3)
-  })
-})
+    const results = await recallMemories(db, "Python");
+    expect(results.length).toBe(3);
+  });
+});
 
 // ── Memory → embedding write consistency ────────────────────────────
 
-describe('memory writes → embedding recall', () => {
-  it('memory without embedding is not found by embedding search', async () => {
+describe("memory writes → embedding recall", () => {
+  it("memory without embedding is not found by embedding search", async () => {
     await storeMemory(db, {
-      content: 'A memory with no embedding vector',
-      category: 'general',
-      source: 'user',
+      content: "A memory with no embedding vector",
+      category: "general",
+      source: "user",
       // no embedding
-    })
+    });
 
-    const results = await recallMemories(db, 'xyzzy_no_keyword', {
+    const results = await recallMemories(db, "xyzzy_no_keyword", {
       queryEmbedding: queryEmbeddings.pet,
-    })
+    });
 
     // Should not find it — no embedding to compare against
-    expect(results).toHaveLength(0)
-  })
+    expect(results).toHaveLength(0);
+  });
 
-  it('updateMemoryEmbedding makes memory findable by embedding recall', async () => {
+  it("updateMemoryEmbedding makes memory findable by embedding recall", async () => {
     const mem = await storeMemory(db, {
-      content: 'A fact about pets stored without embedding',
-      category: 'personal',
-      source: 'user',
-    })
+      content: "A fact about pets stored without embedding",
+      category: "personal",
+      source: "user",
+    });
 
     // Not findable before embedding
-    let results = await recallMemories(db, 'xyzzy_no_keyword', {
+    let results = await recallMemories(db, "xyzzy_no_keyword", {
       queryEmbedding: queryEmbeddings.pet,
-    })
-    expect(results.find((r) => r.id === mem.id)).toBeUndefined()
+    });
+    expect(results.find((r) => r.id === mem.id)).toBeUndefined();
 
     // Add embedding in pet direction
-    await updateMemoryEmbedding(db, mem.id, memoryEmbeddings.miso)
+    await updateMemoryEmbedding(db, mem.id, memoryEmbeddings.miso);
 
     // Now findable via embedding
-    results = await recallMemories(db, 'xyzzy_no_keyword', {
+    results = await recallMemories(db, "xyzzy_no_keyword", {
       queryEmbedding: queryEmbeddings.pet,
-    })
-    const found = results.find((r) => r.id === mem.id)
-    expect(found).toBeDefined()
-    expect(found!.matchType).toBe('embedding')
-    expect(found!.score).toBeGreaterThan(0.9)
-  })
-
-  it('updateMemoryEmbedding changes which queries match', async () => {
+    });
+    const found = results.find((r) => r.id === mem.id);
+    expect(found).toBeDefined();
+    expect(found!.matchType).toBe("embedding");
+    expect(found!.score).toBeGreaterThan(0.9);
+  });
+
+  it("updateMemoryEmbedding changes which queries match", async () => {
     const mem = await storeMemory(db, {
-      content: 'A fact that changes topic cluster',
-      category: 'general',
-      source: 'user',
+      content: "A fact that changes topic cluster",
+      category: "general",
+      source: "user",
       embedding: JSON.stringify(memoryEmbeddings.miso), // starts in pet cluster
-    })
+    });
 
     // Initially findable by pet query
-    let results = await recallMemories(db, 'xyzzy', {
+    let results = await recallMemories(db, "xyzzy", {
       queryEmbedding: queryEmbeddings.pet,
-    })
-    expect(results.find((r) => r.id === mem.id)).toBeDefined()
+    });
+    expect(results.find((r) => r.id === mem.id)).toBeDefined();
 
     // Re-embed into work cluster
-    await updateMemoryEmbedding(db, mem.id, memoryEmbeddings.datapipe)
+    await updateMemoryEmbedding(db, mem.id, memoryEmbeddings.datapipe);
 
     // No longer findable by pet query
-    results = await recallMemories(db, 'xyzzy', {
+    results = await recallMemories(db, "xyzzy", {
       queryEmbedding: queryEmbeddings.pet,
-    })
-    expect(results.find((r) => r.id === mem.id)).toBeUndefined()
+    });
+    expect(results.find((r) => r.id === mem.id)).toBeUndefined();
 
     // Now findable by work query
-    results = await recallMemories(db, 'xyzzy', {
+    results = await recallMemories(db, "xyzzy", {
       queryEmbedding: queryEmbeddings.work,
-    })
-    expect(results.find((r) => r.id === mem.id)).toBeDefined()
-  })
-})
+    });
+    expect(results.find((r) => r.id === mem.id)).toBeDefined();
+  });
+});
 
 // ── Graph node write integrity ──────────────────────────────────────
 
-describe('graph node writes', () => {
-  it('upsertNode normalizes name to lowercase', async () => {
-    const node = await upsertNode(db, { name: 'AlExAnDeR', type: 'person' })
-    expect(node.name).toBe('alexander')
-    expect(node.display_name).toBe('AlExAnDeR')
-  })
-
-  it('upsertNode is idempotent on same name+type', async () => {
-    const first = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const second = await upsertNode(db, { name: 'Alex', type: 'person' })
-    expect(first.id).toBe(second.id)
-  })
-
-  it('upsertNode creates separate nodes for different types', async () => {
-    const person = await upsertNode(db, { name: 'Rust', type: 'person' })
-    const concept = await upsertNode(db, { name: 'Rust', type: 'concept' })
-    expect(person.id).not.toBe(concept.id)
-  })
-
-  it('findNodeByName is case-insensitive', async () => {
-    await upsertNode(db, { name: 'Portland', type: 'place' })
-
-    const found1 = await findNodeByName(db, 'Portland')
-    const found2 = await findNodeByName(db, 'portland')
-    const found3 = await findNodeByName(db, 'PORTLAND')
-
-    expect(found1).toBeDefined()
-    expect(found1!.id).toBe(found2!.id)
-    expect(found2!.id).toBe(found3!.id)
-  })
-
-  it('upsertNode fills description on existing node without one', async () => {
-    const bare = await upsertNode(db, { name: 'DataPipe', type: 'entity' })
-    expect(bare.description).toBeNull()
+describe("graph node writes", () => {
+  it("upsertNode normalizes name to lowercase", async () => {
+    const node = await upsertNode(db, { name: "AlExAnDeR", type: "person" });
+    expect(node.name).toBe("alexander");
+    expect(node.display_name).toBe("AlExAnDeR");
+  });
+
+  it("upsertNode is idempotent on same name+type", async () => {
+    const first = await upsertNode(db, { name: "Alex", type: "person" });
+    const second = await upsertNode(db, { name: "Alex", type: "person" });
+    expect(first.id).toBe(second.id);
+  });
+
+  it("upsertNode creates separate nodes for different types", async () => {
+    const person = await upsertNode(db, { name: "Rust", type: "person" });
+    const concept = await upsertNode(db, { name: "Rust", type: "concept" });
+    expect(person.id).not.toBe(concept.id);
+  });
+
+  it("findNodeByName is case-insensitive", async () => {
+    await upsertNode(db, { name: "Portland", type: "place" });
+
+    const found1 = await findNodeByName(db, "Portland");
+    const found2 = await findNodeByName(db, "portland");
+    const found3 = await findNodeByName(db, "PORTLAND");
+
+    expect(found1).toBeDefined();
+    expect(found1!.id).toBe(found2!.id);
+    expect(found2!.id).toBe(found3!.id);
+  });
+
+  it("upsertNode fills description on existing node without one", async () => {
+    const bare = await upsertNode(db, { name: "DataPipe", type: "entity" });
+    expect(bare.description).toBeNull();
 
     const updated = await upsertNode(db, {
-      name: 'DataPipe',
-      type: 'entity',
-      description: 'Real-time data pipeline company',
-    })
+      name: "DataPipe",
+      type: "entity",
+      description: "Real-time data pipeline company",
+    });
 
-    expect(updated.id).toBe(bare.id)
-    expect(updated.description).toBe('Real-time data pipeline company')
-  })
+    expect(updated.id).toBe(bare.id);
+    expect(updated.description).toBe("Real-time data pipeline company");
+  });
 
-  it('searchNodes finds nodes by partial name match', async () => {
-    await upsertNode(db, { name: 'Portland', type: 'place' })
-    await upsertNode(db, { name: 'Port Angeles', type: 'place' })
+  it("searchNodes finds nodes by partial name match", async () => {
+    await upsertNode(db, { name: "Portland", type: "place" });
+    await upsertNode(db, { name: "Port Angeles", type: "place" });
 
-    const results = await searchNodes(db, 'port', 10)
-    expect(results.length).toBe(2)
-  })
-})
+    const results = await searchNodes(db, "port", 10);
+    expect(results.length).toBe(2);
+  });
+});
 
 // ── Graph edge write integrity ──────────────────────────────────────
 
-describe('graph edge writes', () => {
-  it('upsertEdge with memory_id → getRelatedMemoryIds returns it', async () => {
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const miso = await upsertNode(db, { name: 'Miso', type: 'entity' })
+describe("graph edge writes", () => {
+  it("upsertEdge with memory_id → getRelatedMemoryIds returns it", async () => {
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const miso = await upsertNode(db, { name: "Miso", type: "entity" });
     const mem = await storeMemory(db, {
-      content: 'Alex has a cat named Miso',
-      source: 'user',
-    })
+      content: "Alex has a cat named Miso",
+      source: "user",
+    });
 
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: miso.id,
-      relation: 'owns',
+      relation: "owns",
       memory_id: mem.id,
-    })
+    });
 
-    const memIds = await getRelatedMemoryIds(db, [alex.id, miso.id])
-    expect(memIds).toContain(mem.id)
-  })
+    const memIds = await getRelatedMemoryIds(db, [alex.id, miso.id]);
+    expect(memIds).toContain(mem.id);
+  });
 
-  it('upsertEdge without memory_id → getRelatedMemoryIds skips it', async () => {
-    const a = await upsertNode(db, { name: 'A', type: 'entity' })
-    const b = await upsertNode(db, { name: 'B', type: 'entity' })
+  it("upsertEdge without memory_id → getRelatedMemoryIds skips it", async () => {
+    const a = await upsertNode(db, { name: "A", type: "entity" });
+    const b = await upsertNode(db, { name: "B", type: "entity" });
 
     await upsertEdge(db, {
       source_id: a.id,
       target_id: b.id,
-      relation: 'related_to',
+      relation: "related_to",
       // no memory_id
-    })
+    });
 
-    const memIds = await getRelatedMemoryIds(db, [a.id, b.id])
-    expect(memIds).toHaveLength(0)
-  })
+    const memIds = await getRelatedMemoryIds(db, [a.id, b.id]);
+    expect(memIds).toHaveLength(0);
+  });
 
-  it('upsertEdge increments weight on duplicate', async () => {
-    const a = await upsertNode(db, { name: 'A', type: 'entity' })
-    const b = await upsertNode(db, { name: 'B', type: 'entity' })
+  it("upsertEdge increments weight on duplicate", async () => {
+    const a = await upsertNode(db, { name: "A", type: "entity" });
+    const b = await upsertNode(db, { name: "B", type: "entity" });
 
     const first = await upsertEdge(db, {
       source_id: a.id,
       target_id: b.id,
-      relation: 'knows',
-    })
-    expect(first.weight).toBe(1)
+      relation: "knows",
+    });
+    expect(first.weight).toBe(1);
 
     const second = await upsertEdge(db, {
       source_id: a.id,
       target_id: b.id,
-      relation: 'knows',
-    })
-    expect(second.weight).toBe(2)
+      relation: "knows",
+    });
+    expect(second.weight).toBe(2);
 
     const third = await upsertEdge(db, {
       source_id: a.id,
       target_id: b.id,
-      relation: 'knows',
-    })
-    expect(third.weight).toBe(3)
-  })
+      relation: "knows",
+    });
+    expect(third.weight).toBe(3);
+  });
 
-  it('different relations create separate edges', async () => {
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const portland = await upsertNode(db, { name: 'Portland', type: 'place' })
+  it("different relations create separate edges", async () => {
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const portland = await upsertNode(db, { name: "Portland", type: "place" });
 
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: portland.id,
-      relation: 'lives_in',
-    })
+      relation: "lives_in",
+    });
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: portland.id,
-      relation: 'was_born_in',
-    })
+      relation: "was_born_in",
+    });
 
-    const edges = await getNodeEdges(db, alex.id)
-    const portlandEdges = edges.filter((e) => e.target_id === portland.id)
-    expect(portlandEdges).toHaveLength(2)
-    expect(portlandEdges.map((e) => e.relation).sort()).toEqual(['lives_in', 'was_born_in'])
-  })
+    const edges = await getNodeEdges(db, alex.id);
+    const portlandEdges = edges.filter((e) => e.target_id === portland.id);
+    expect(portlandEdges).toHaveLength(2);
+    expect(portlandEdges.map((e) => e.relation).toSorted()).toEqual(["lives_in", "was_born_in"]);
+  });
 
-  it('edges reachable from both source and target via traversal', async () => {
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const miso = await upsertNode(db, { name: 'Miso', type: 'entity' })
+  it("edges reachable from both source and target via traversal", async () => {
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const miso = await upsertNode(db, { name: "Miso", type: "entity" });
 
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: miso.id,
-      relation: 'owns',
-    })
+      relation: "owns",
+    });
 
     // Traverse from Alex → should reach Miso
-    const fromAlex = await traverseGraph(db, alex.id, 1)
-    expect(fromAlex.map((t) => t.node.id)).toContain(miso.id)
+    const fromAlex = await traverseGraph(db, alex.id, 1);
+    expect(fromAlex.map((t) => t.node.id)).toContain(miso.id);
 
     // Traverse from Miso → should reach Alex (edges are bidirectional in traversal)
-    const fromMiso = await traverseGraph(db, miso.id, 1)
-    expect(fromMiso.map((t) => t.node.id)).toContain(alex.id)
-  })
-})
+    const fromMiso = await traverseGraph(db, miso.id, 1);
+    expect(fromMiso.map((t) => t.node.id)).toContain(alex.id);
+  });
+});
 
 // ── Write → retrieval roundtrip ─────────────────────────────────────
 
-describe('write → retrieval roundtrip', () => {
-  it('memory findable via all three paths: FTS5, embedding, graph', async () => {
+describe("write → retrieval roundtrip", () => {
+  it("memory findable via all three paths: FTS5, embedding, graph", async () => {
     // 1. Store memory with embedding
     const mem = await storeMemory(db, {
-      content: 'Alex has a cat named Miso who is 3 years old',
-      category: 'personal',
-      tags: 'pet,cat',
-      source: 'user',
+      content: "Alex has a cat named Miso who is 3 years old",
+      category: "personal",
+      tags: "pet,cat",
+      source: "user",
       embedding: JSON.stringify(memoryEmbeddings.miso),
-    })
+    });
 
     // 2. Create graph nodes + edge linked to this memory
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const miso = await upsertNode(db, { name: 'Miso', type: 'entity' })
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const miso = await upsertNode(db, { name: "Miso", type: "entity" });
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: miso.id,
-      relation: 'owns',
+      relation: "owns",
       memory_id: mem.id,
-    })
+    });
 
     // Path 1: FTS5 keyword search
-    const ftsResults = await recallMemories(db, 'cat Miso')
-    expect(ftsResults.find((r) => r.id === mem.id)).toBeDefined()
+    const ftsResults = await recallMemories(db, "cat Miso");
+    expect(ftsResults.find((r) => r.id === mem.id)).toBeDefined();
 
     // Path 2: Embedding cosine similarity
-    const embResults = await recallMemories(db, 'xyzzy_no_keyword', {
+    const embResults = await recallMemories(db, "xyzzy_no_keyword", {
       queryEmbedding: queryEmbeddings.pet,
-    })
-    const embMatch = embResults.find((r) => r.id === mem.id)
-    expect(embMatch).toBeDefined()
-    expect(embMatch!.score).toBeGreaterThan(0.9)
+    });
+    const embMatch = embResults.find((r) => r.id === mem.id);
+    expect(embMatch).toBeDefined();
+    expect(embMatch!.score).toBeGreaterThan(0.9);
 
     // Path 3: Graph traversal → getRelatedMemoryIds
-    const traversed = await traverseGraph(db, alex.id, 1)
-    const reachedNodeIds = [alex.id, ...traversed.map((t) => t.node.id)]
-    const graphMemIds = await getRelatedMemoryIds(db, reachedNodeIds)
-    expect(graphMemIds).toContain(mem.id)
-  })
+    const traversed = await traverseGraph(db, alex.id, 1);
+    const reachedNodeIds = [alex.id, ...traversed.map((t) => t.node.id)];
+    const graphMemIds = await getRelatedMemoryIds(db, reachedNodeIds);
+    expect(graphMemIds).toContain(mem.id);
+  });
 
-  it('archived memory excluded from all recall paths', async () => {
+  it("archived memory excluded from all recall paths", async () => {
     const mem = await storeMemory(db, {
-      content: 'Alex used to have a hamster named Biscuit',
-      category: 'personal',
-      tags: 'pet,hamster',
-      source: 'user',
+      content: "Alex used to have a hamster named Biscuit",
+      category: "personal",
+      tags: "pet,hamster",
+      source: "user",
       embedding: JSON.stringify(memoryEmbeddings.miso), // pet direction
-    })
+    });
 
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const biscuit = await upsertNode(db, { name: 'Biscuit', type: 'entity' })
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const biscuit = await upsertNode(db, { name: "Biscuit", type: "entity" });
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: biscuit.id,
-      relation: 'owned',
+      relation: "owned",
       memory_id: mem.id,
-    })
+    });
 
     // Archive the memory
-    await forgetMemory(db, mem.id)
+    await forgetMemory(db, mem.id);
 
     // FTS5: excluded
-    const ftsResults = await recallMemories(db, 'hamster Biscuit')
-    expect(ftsResults.find((r) => r.id === mem.id)).toBeUndefined()
+    const ftsResults = await recallMemories(db, "hamster Biscuit");
+    expect(ftsResults.find((r) => r.id === mem.id)).toBeUndefined();
 
     // Embedding: excluded (recallMemories filters archived_at IS NULL)
-    const embResults = await recallMemories(db, 'xyzzy', {
+    const embResults = await recallMemories(db, "xyzzy", {
       queryEmbedding: queryEmbeddings.pet,
-    })
-    expect(embResults.find((r) => r.id === mem.id)).toBeUndefined()
+    });
+    expect(embResults.find((r) => r.id === mem.id)).toBeUndefined();
 
     // Graph: edge still exists, getRelatedMemoryIds still returns the ID...
-    const graphMemIds = await getRelatedMemoryIds(db, [alex.id, biscuit.id])
-    expect(graphMemIds).toContain(mem.id)
+    const graphMemIds = await getRelatedMemoryIds(db, [alex.id, biscuit.id]);
+    expect(graphMemIds).toContain(mem.id);
 
     // ...but fetching the memory with archived_at filter excludes it
     // (this is what memory_recall tool does)
     const fetchedMems = await db
-      .selectFrom('memories')
+      .selectFrom("memories")
       .selectAll()
-      .where('id', 'in', graphMemIds)
-      .where('archived_at', 'is', null)
-      .execute()
-    expect(fetchedMems.find((m) => m.id === mem.id)).toBeUndefined()
-  })
-})
+      .where("id", "in", graphMemIds)
+      .where("archived_at", "is", null)
+      .execute();
+    expect(fetchedMems.find((m) => m.id === mem.id)).toBeUndefined();
+  });
+});
 
 // ── Graph write patterns (simulating processMemoryForGraph) ─────────
 
-describe('graph write patterns — entity extraction simulation', () => {
-  it('multiple memories build up a connected graph', async () => {
+describe("graph write patterns — entity extraction simulation", () => {
+  it("multiple memories build up a connected graph", async () => {
     // Simulate processing three memories through the graph pipeline:
     // Memory 1: "Alex works at DataPipe"
     const mem1 = await storeMemory(db, {
-      content: 'Alex works at DataPipe as a backend engineer',
-      source: 'user',
-      category: 'work',
-    })
+      content: "Alex works at DataPipe as a backend engineer",
+      source: "user",
+      category: "work",
+    });
     // Simulated extraction: entities=[Alex(person), DataPipe(entity)], rels=[Alex works_at DataPipe]
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const datapipe = await upsertNode(db, { name: 'DataPipe', type: 'entity' })
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const datapipe = await upsertNode(db, { name: "DataPipe", type: "entity" });
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: datapipe.id,
-      relation: 'works_at',
+      relation: "works_at",
       memory_id: mem1.id,
-    })
+    });
 
     // Memory 2: "Alex lives in Portland"
     const mem2 = await storeMemory(db, {
-      content: 'Alex lives in Portland, Oregon',
-      source: 'user',
-      category: 'personal',
-    })
-    const alex2 = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const portland = await upsertNode(db, { name: 'Portland', type: 'place' })
+      content: "Alex lives in Portland, Oregon",
+      source: "user",
+      category: "personal",
+    });
+    const alex2 = await upsertNode(db, { name: "Alex", type: "person" });
+    const portland = await upsertNode(db, { name: "Portland", type: "place" });
     await upsertEdge(db, {
       source_id: alex2.id,
       target_id: portland.id,
-      relation: 'lives_in',
+      relation: "lives_in",
       memory_id: mem2.id,
-    })
+    });
 
     // Memory 3: "DataPipe is based in Portland"
     const mem3 = await storeMemory(db, {
-      content: 'DataPipe is headquartered in Portland',
-      source: 'user',
-      category: 'work',
-    })
-    const datapipe2 = await upsertNode(db, { name: 'DataPipe', type: 'entity' })
-    const portland2 = await upsertNode(db, { name: 'Portland', type: 'place' })
+      content: "DataPipe is headquartered in Portland",
+      source: "user",
+      category: "work",
+    });
+    const datapipe2 = await upsertNode(db, { name: "DataPipe", type: "entity" });
+    const portland2 = await upsertNode(db, { name: "Portland", type: "place" });
     await upsertEdge(db, {
       source_id: datapipe2.id,
       target_id: portland2.id,
-      relation: 'based_in',
+      relation: "based_in",
       memory_id: mem3.id,
-    })
+    });
 
     // Verify: Alex node was reused, not duplicated
-    expect(alex.id).toBe(alex2.id)
-    expect(datapipe.id).toBe(datapipe2.id)
-    expect(portland.id).toBe(portland2.id)
+    expect(alex.id).toBe(alex2.id);
+    expect(datapipe.id).toBe(datapipe2.id);
+    expect(portland.id).toBe(portland2.id);
 
     // Verify: traversal from Portland reaches both Alex and DataPipe
-    const fromPortland = await traverseGraph(db, portland.id, 1)
-    const reached = fromPortland.map((t) => t.node.id)
-    expect(reached).toContain(alex.id)
-    expect(reached).toContain(datapipe.id)
+    const fromPortland = await traverseGraph(db, portland.id, 1);
+    const reached = fromPortland.map((t) => t.node.id);
+    expect(reached).toContain(alex.id);
+    expect(reached).toContain(datapipe.id);
 
     // Verify: all three memories reachable from the Portland neighborhood
-    const allNodes = [portland.id, ...reached]
-    const memIds = await getRelatedMemoryIds(db, allNodes)
-    expect(memIds).toContain(mem1.id)
-    expect(memIds).toContain(mem2.id)
-    expect(memIds).toContain(mem3.id)
-  })
-
-  it('relationship referencing unknown entity creates generic node', async () => {
+    const allNodes = [portland.id, ...reached];
+    const memIds = await getRelatedMemoryIds(db, allNodes);
+    expect(memIds).toContain(mem1.id);
+    expect(memIds).toContain(mem2.id);
+    expect(memIds).toContain(mem3.id);
+  });
+
+  it("relationship referencing unknown entity creates generic node", async () => {
     // processMemoryForGraph creates 'entity' type nodes for unknown rel endpoints
     const mem = await storeMemory(db, {
-      content: 'Alex mentioned Sarah but extraction only found Alex as an entity',
-      source: 'user',
-    })
+      content: "Alex mentioned Sarah but extraction only found Alex as an entity",
+      source: "user",
+    });
 
     // Extraction found Alex but not Sarah
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
     // Relationship references Sarah → create as generic entity (like processMemoryForGraph does)
-    let sarah = await findNodeByName(db, 'Sarah')
+    let sarah = await findNodeByName(db, "Sarah");
     if (!sarah) {
-      sarah = await upsertNode(db, { name: 'Sarah', type: 'entity' })
+      sarah = await upsertNode(db, { name: "Sarah", type: "entity" });
     }
     await upsertEdge(db, {
       source_id: alex.id,
       target_id: sarah.id,
-      relation: 'knows',
+      relation: "knows",
       memory_id: mem.id,
-    })
+    });
 
     // Sarah should be findable and connected
-    const found = await findNodeByName(db, 'sarah')
-    expect(found).toBeDefined()
-    expect(found!.node_type).toBe('entity') // generic type
+    const found = await findNodeByName(db, "sarah");
+    expect(found).toBeDefined();
+    expect(found!.node_type).toBe("entity"); // generic type
 
     // Later extraction provides proper type — upsertNode returns existing
-    const sarah2 = await upsertNode(db, { name: 'Sarah', type: 'entity' })
-    expect(sarah2.id).toBe(sarah.id) // same node
-  })
+    const sarah2 = await upsertNode(db, { name: "Sarah", type: "entity" });
+    expect(sarah2.id).toBe(sarah.id); // same node
+  });
 
-  it('repeated processing of same fact increments edge weight', async () => {
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const portland = await upsertNode(db, { name: 'Portland', type: 'place' })
+  it("repeated processing of same fact increments edge weight", async () => {
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const portland = await upsertNode(db, { name: "Portland", type: "place" });
 
     // Three separate memories all mention Alex lives in Portland
     const mems = await Promise.all([
-      storeMemory(db, { content: 'Alex lives in Portland', source: 'user' }),
-      storeMemory(db, { content: 'Alex resides in Portland, OR', source: 'user' }),
-      storeMemory(db, { content: 'Alex moved to Portland last year', source: 'user' }),
-    ])
+      storeMemory(db, { content: "Alex lives in Portland", source: "user" }),
+      storeMemory(db, { content: "Alex resides in Portland, OR", source: "user" }),
+      storeMemory(db, { content: "Alex moved to Portland last year", source: "user" }),
+    ]);
 
     for (const mem of mems) {
       await upsertEdge(db, {
         source_id: alex.id,
         target_id: portland.id,
-        relation: 'lives_in',
+        relation: "lives_in",
         memory_id: mem.id,
-      })
+      });
     }
 
     // Edge weight should be 3
-    const edges = await getNodeEdges(db, alex.id)
-    const livesIn = edges.find(
-      (e) => e.target_id === portland.id && e.relation === 'lives_in',
-    )
-    expect(livesIn).toBeDefined()
-    expect(livesIn!.weight).toBe(3)
-  })
-
-  it('hub node connects disparate facts for cross-topic discovery', async () => {
+    const edges = await getNodeEdges(db, alex.id);
+    const livesIn = edges.find((e) => e.target_id === portland.id && e.relation === "lives_in");
+    expect(livesIn).toBeDefined();
+    expect(livesIn!.weight).toBe(3);
+  });
+
+  it("hub node connects disparate facts for cross-topic discovery", async () => {
     // This tests the key value proposition of the graph:
     // searching for "Miso" (a cat) can surface "shellfish allergy" (health)
     // because they're both connected to Alex
 
     const mem1 = await storeMemory(db, {
-      content: 'Alex has a cat named Miso',
-      source: 'user',
+      content: "Alex has a cat named Miso",
+      source: "user",
       embedding: JSON.stringify(memoryEmbeddings.miso),
-    })
+    });
     const mem2 = await storeMemory(db, {
-      content: 'Alex is allergic to shellfish',
-      source: 'user',
+      content: "Alex is allergic to shellfish",
+      source: "user",
       embedding: JSON.stringify(memoryEmbeddings.shellfish),
-    })
+    });
 
-    const alex = await upsertNode(db, { name: 'Alex', type: 'person' })
-    const miso = await upsertNode(db, { name: 'Miso', type: 'entity' })
-    const shellfish = await upsertNode(db, { name: 'Shellfish', type: 'concept' })
+    const alex = await upsertNode(db, { name: "Alex", type: "person" });
+    const miso = await upsertNode(db, { name: "Miso", type: "entity" });
+    const shellfish = await upsertNode(db, { name: "Shellfish", type: "concept" });
 
-    await upsertEdge(db, { source_id: alex.id, target_id: miso.id, relation: 'owns', memory_id: mem1.id })
-    await upsertEdge(db, { source_id: alex.id, target_id: shellfish.id, relation: 'allergic_to', memory_id: mem2.id })
+    await upsertEdge(db, {
+      source_id: alex.id,
+      target_id: miso.id,
+      relation: "owns",
+      memory_id: mem1.id,
+    });
+    await upsertEdge(db, {
+      source_id: alex.id,
+      target_id: shellfish.id,
+      relation: "allergic_to",
+      memory_id: mem2.id,
+    });
 
     // Direct recall for "Miso" — only finds cat memory (different embedding clusters)
-    const directResults = await recallMemories(db, 'Miso', {
+    const directResults = await recallMemories(db, "Miso", {
       queryEmbedding: queryEmbeddings.pet,
-    })
-    const directIds = new Set(directResults.map((r) => r.id))
-    expect(directIds.has(mem1.id)).toBe(true)
-    expect(directIds.has(mem2.id)).toBe(false) // shellfish allergy NOT in direct results
+    });
+    const directIds = new Set(directResults.map((r) => r.id));
+    expect(directIds.has(mem1.id)).toBe(true);
+    expect(directIds.has(mem2.id)).toBe(false); // shellfish allergy NOT in direct results
 
     // Graph expansion from Miso → Alex → Shellfish → surfaces allergy memory
-    const misoNode = await findNodeByName(db, 'miso')
-    const traversed = await traverseGraph(db, misoNode!.id, 2)
-    const allNodeIds = [misoNode!.id, ...traversed.map((t) => t.node.id)]
-    const graphMemIds = await getRelatedMemoryIds(db, allNodeIds)
+    const misoNode = await findNodeByName(db, "miso");
+    const traversed = await traverseGraph(db, misoNode!.id, 2);
+    const allNodeIds = [misoNode!.id, ...traversed.map((t) => t.node.id)];
+    const graphMemIds = await getRelatedMemoryIds(db, allNodeIds);
 
     // Graph found the allergy memory through the Alex hub!
-    expect(graphMemIds).toContain(mem2.id)
+    expect(graphMemIds).toContain(mem2.id);
 
     // Combined: direct recall found pet fact, graph added health fact
-    const allMemIds = new Set([...directIds, ...graphMemIds])
-    expect(allMemIds.has(mem1.id)).toBe(true) // cat
-    expect(allMemIds.has(mem2.id)).toBe(true) // allergy — cross-topic discovery
-  })
-})
+    const allMemIds = new Set([...directIds, ...graphMemIds]);
+    expect(allMemIds.has(mem1.id)).toBe(true); // cat
+    expect(allMemIds.has(mem2.id)).toBe(true); // allergy — cross-topic discovery
+  });
+});
diff --git a/apps/construct/src/agent.ts b/apps/construct/src/agent.ts
index 3ddfd1e..08c83b9 100644
--- a/apps/construct/src/agent.ts
+++ b/apps/construct/src/agent.ts
@@ -1,17 +1,13 @@
-import {
-  Agent,
-  type AgentTool,
-  type AgentToolResult,
-} from '@mariozechner/pi-agent-core'
-import { getModel, type Usage } from '@mariozechner/pi-ai'
-import type { Static, TSchema } from '@sinclair/typebox'
-import type { Kysely } from 'kysely'
+import { Agent, type AgentTool, type AgentToolResult } from "@mariozechner/pi-agent-core";
+import { getModel, type Usage } from "@mariozechner/pi-ai";
+import type { Static, TSchema } from "@sinclair/typebox";
+import type { Kysely } from "kysely";
 
-import { env } from './env.js'
-import { getSystemPrompt, buildContextPreamble } from './system-prompt.js'
-import { agentLog, toolLog } from './logger.js'
-import type { Database } from './db/schema.js'
-import type { TelegramContext } from './telegram/types.js'
+import { env } from "./env.js";
+import { getSystemPrompt, buildContextPreamble } from "./system-prompt.js";
+import { agentLog, toolLog } from "./logger.js";
+import type { Database } from "./db/schema.js";
+import type { TelegramContext } from "./telegram/types.js";
 import {
   getOrCreateConversation,
   getRecentMessages,
@@ -19,136 +15,147 @@ import {
   recallMemories,
   saveMessage,
   trackUsage,
-} from './db/queries.js'
-import { generateEmbedding, estimateTokens, type WorkerModelConfig } from '@repo/cairn'
-import { ConstructMemoryManager, CONSTRUCT_OBSERVER_PROMPT, CONSTRUCT_REFLECTOR_PROMPT } from './memory.js'
-import { selectAndCreateTools, type InternalTool } from './tools/packs.js'
-import { selectSkills, getExtensionRegistry, selectAndCreateDynamicTools } from './extensions/index.js'
+} from "./db/queries.js";
+import { generateEmbedding, estimateTokens, SIMILARITY, type WorkerModelConfig } from "@repo/cairn";
+import {
+  ConstructMemoryManager,
+  CONSTRUCT_OBSERVER_PROMPT,
+  CONSTRUCT_REFLECTOR_PROMPT,
+} from "./memory.js";
+import { selectAndCreateTools, type InternalTool } from "./tools/packs.js";
+import {
+  selectSkills,
+  getExtensionRegistry,
+  selectAndCreateDynamicTools,
+} from "./extensions/index.js";
 
 // Adapt internal tool → pi-agent-core AgentTool
-function createPiTool(
-  tool: InternalTool,
-): AgentTool {
+function createPiTool(tool: InternalTool): AgentTool {
   return {
     name: tool.name,
-    label: tool.name.replace(/_/g, ' '),
+    label: tool.name.replace(/_/g, " "),
     description: tool.description,
     parameters: tool.parameters,
-    execute: async (
-      toolCallId: string,
-      params: Static<TSchema>,
-    ): Promise<AgentToolResult> => {
-      toolLog.info`Executing tool: ${tool.name}`
-      toolLog.debug`Tool params: ${JSON.stringify(params)}`
+    execute: async (toolCallId: string, params: Static<TSchema>): Promise<AgentToolResult> => {
+      toolLog.info`Executing tool: ${tool.name}`;
+      toolLog.debug`Tool params: ${JSON.stringify(params)}`;
       try {
-        const result = await tool.execute(toolCallId, params)
-        toolLog.info`Tool ${tool.name} completed`
+        const result = await tool.execute(toolCallId, params);
+        toolLog.info`Tool ${tool.name} completed`;
         return {
-          content: [{ type: 'text', text: result.output }],
+          content: [{ type: "text", text: result.output }],
           details: result.details,
-        }
+        };
       } catch (err) {
-        const msg = err instanceof Error ? err.message : String(err)
-        toolLog.error`Tool ${tool.name} failed: ${msg}`
+        const msg = err instanceof Error ? err.message : String(err);
+        toolLog.error`Tool ${tool.name} failed: ${msg}`;
         return {
-          content: [{ type: 'text', text: `Tool error: ${msg}` }],
+          content: [{ type: "text", text: `Tool error: ${msg}` }],
           details: { error: msg },
-        }
+        };
       }
     },
-  }
+  };
 }
 
 export interface AgentResponse {
-  text: string
-  toolCalls: Array<{ name: string; args: unknown; result: string }>
-  usage?: { input: number; output: number; cost: number }
-  messageId?: string
+  text: string;
+  toolCalls: Array<{ name: string; args: unknown; result: string }>;
+  usage?: { input: number; output: number; cost: number };
+  messageId?: string;
 }
 
 export interface ProcessMessageOpts {
-  source: 'telegram' | 'cli' | 'scheduler'
-  externalId: string | null
-  chatId?: string
-  telegram?: TelegramContext
-  replyContext?: string
-  incomingTelegramMessageId?: number
+  source: "telegram" | "cli" | "scheduler";
+  externalId: string | null;
+  chatId?: string;
+  telegram?: TelegramContext;
+  replyContext?: string;
+  incomingTelegramMessageId?: number;
 }
 
-export const isDev = env.NODE_ENV === 'development'
+export const isDev = env.NODE_ENV === "development";
 
+/**
+ * End-to-end message processing pipeline: context assembly, tool selection,
+ * agent prompting, response persistence, and async memory pipeline.
+ * @param db - Construct database handle.
+ * @param message - Raw user message text.
+ * @param opts - Source metadata (telegram/cli/scheduler), chat IDs, reply context.
+ * @returns Agent response text, tool call log, usage stats, and persisted message ID.
+ */
 export async function processMessage(
   db: Kysely<Database>,
   message: string,
   opts: ProcessMessageOpts,
 ): Promise<AgentResponse> {
-  agentLog.info`Processing message from ${opts.source}${opts.chatId ? ` (chat ${opts.chatId})` : ''}`
+  agentLog.info`Processing message from ${opts.source}${opts.chatId ? ` (chat ${opts.chatId})` : ""}`;
 
   // 1. Get or create conversation
-  const conversationId = await getOrCreateConversation(
-    db,
-    opts.source,
-    opts.externalId,
-  )
+  const conversationId = await getOrCreateConversation(db, opts.source, opts.externalId);
 
   // 2. Create MemoryManager for this conversation
   const workerConfig: WorkerModelConfig | null = env.MEMORY_WORKER_MODEL
-    ? { apiKey: env.OPENROUTER_API_KEY, model: env.MEMORY_WORKER_MODEL, extraBody: { reasoning: { max_tokens: 1 } } }
-    : null
+    ? {
+        apiKey: env.OPENROUTER_API_KEY,
+        model: env.MEMORY_WORKER_MODEL,
+        extraBody: { reasoning: { max_tokens: 1 } },
+      }
+    : null;
   const memoryManager = new ConstructMemoryManager(db, {
     workerConfig,
     embeddingModel: env.EMBEDDING_MODEL,
     apiKey: env.OPENROUTER_API_KEY,
     observerPrompt: CONSTRUCT_OBSERVER_PROMPT,
     reflectorPrompt: CONSTRUCT_REFLECTOR_PROMPT,
-  })
+  });
 
   // 3. Load context: observations (stable prefix) + un-observed messages (active suffix)
   // Falls back to last 20 messages if no observations exist yet
   const { observationsText, activeMessages, hasObservations, evictedObservations } =
-    await memoryManager.buildContext(conversationId)
+    await memoryManager.buildContext(conversationId);
 
-  let historyMessages: typeof activeMessages
+  let historyMessages: typeof activeMessages;
   if (hasObservations) {
     // Use only un-observed messages — observations cover the rest
-    historyMessages = activeMessages
-    agentLog.debug`Context: ${observationsText.split('\n').length} observations, ${activeMessages.length} active messages${evictedObservations > 0 ? `, ${evictedObservations} evicted` : ''}`
+    historyMessages = activeMessages;
+    agentLog.debug`Context: ${observationsText.split("\n").length} observations, ${activeMessages.length} active messages${evictedObservations > 0 ? `, ${evictedObservations} evicted` : ""}`;
   } else {
     // No observations yet — fall back to recent messages (current behavior)
-    historyMessages = await getRecentMessages(db, conversationId, 20)
-    agentLog.debug`Loaded ${historyMessages.length} history messages (no observations)`
+    historyMessages = await getRecentMessages(db, conversationId, 20);
+    agentLog.debug`Loaded ${historyMessages.length} history messages (no observations)`;
   }
 
   // 4. Load memories for context injection
-  const recentMemories = await getRecentMemories(db, 10)
+  const recentMemories = await getRecentMemories(db, 10);
 
   // Try to find semantically relevant memories for this specific message
   // queryEmbedding is also reused for tool pack selection below
-  let queryEmbedding: number[] | undefined
-  let relevantMemories: Array<{ content: string; category: string; score?: number }> = []
+  let queryEmbedding: number[] | undefined;
+  let relevantMemories: Array<{ content: string; category: string; score?: number }> = [];
   try {
-    queryEmbedding = await generateEmbedding(env.OPENROUTER_API_KEY, message, env.EMBEDDING_MODEL)
+    queryEmbedding = await generateEmbedding(env.OPENROUTER_API_KEY, message, env.EMBEDDING_MODEL);
     const results = await recallMemories(db, message, {
       limit: 5,
       queryEmbedding,
-      similarityThreshold: 0.4,
-    })
+      similarityThreshold: SIMILARITY.RECALL_STRICT,
+    });
     // Filter out any that are already in recent memories
-    const recentIds = new Set(recentMemories.map((m) => m.id))
+    const recentIds = new Set(recentMemories.map((m) => m.id));
     relevantMemories = results
       .filter((m) => !recentIds.has(m.id))
-      .map((m) => ({ content: m.content, category: m.category, score: m.score }))
+      .map((m) => ({ content: m.content, category: m.category, score: m.score }));
   } catch {
     // Embedding call failed — no relevant memories, that's fine
     // queryEmbedding stays undefined → all tool packs will load (graceful fallback)
   }
 
-  agentLog.debug`Context: ${recentMemories.length} recent memories, ${relevantMemories.length} relevant memories`
+  agentLog.debug`Context: ${recentMemories.length} recent memories, ${relevantMemories.length} relevant memories`;
 
   // 5. Select relevant skills based on query embedding
-  const selectedSkills = selectSkills(queryEmbedding)
+  const selectedSkills = selectSkills(queryEmbedding);
   if (selectedSkills.length > 0) {
-    agentLog.debug`Selected skills: ${selectedSkills.map((s) => s.name).join(', ')}`
+    agentLog.debug`Selected skills: ${selectedSkills.map((s) => s.name).join(", ")}`;
   }
 
   // 6. Build context preamble (dynamic, prepended to user message)
@@ -165,22 +172,22 @@ export async function processMessage(
     relevantMemories,
     skills: selectedSkills,
     replyContext: opts.replyContext,
-  })
+  });
 
   // 7. Create agent with system prompt (base + identity files)
-  const { identity } = getExtensionRegistry()
-  const model = getModel('openrouter', env.OPENROUTER_MODEL as Parameters<typeof getModel>[1])
+  const { identity } = getExtensionRegistry();
+  const model = getModel("openrouter", env.OPENROUTER_MODEL as Parameters<typeof getModel>[1]);
   const agent = new Agent({
     initialState: {
       systemPrompt: getSystemPrompt(identity),
       model,
     },
-  })
+  });
 
-  agent.setModel(model)
+  agent.setModel(model);
 
   // 8. Select tool packs based on message embedding and create tools
-  const chatId = opts.chatId ?? opts.externalId ?? 'unknown'
+  const chatId = opts.chatId ?? opts.externalId ?? "unknown";
   const toolCtx = {
     db,
     chatId,
@@ -195,128 +202,136 @@ export async function processMessage(
     telegram: opts.telegram,
     memoryManager,
     embeddingModel: env.EMBEDDING_MODEL,
-  }
-  const builtinTools = selectAndCreateTools(queryEmbedding, toolCtx)
-  const dynamicTools = selectAndCreateDynamicTools(queryEmbedding, toolCtx)
-  const tools = [...builtinTools, ...dynamicTools]
-  agent.setTools(tools.map((t) => createPiTool(t)))
+  };
+  const builtinTools = selectAndCreateTools(queryEmbedding, toolCtx);
+  const dynamicTools = selectAndCreateDynamicTools(queryEmbedding, toolCtx);
+  const tools = [...builtinTools, ...dynamicTools];
+  agent.setTools(tools.map((t) => createPiTool(t)));
 
   // 9. Replay conversation history so the agent has multi-turn context
   for (const msg of historyMessages) {
-    const tgPrefix = msg.telegram_message_id ? `[tg:${msg.telegram_message_id}] ` : ''
-    if (msg.role === 'user') {
-      agent.appendMessage({ role: 'user', content: tgPrefix + msg.content, timestamp: Date.now() })
-    } else if (msg.role === 'assistant') {
+    const tgPrefix = msg.telegram_message_id ? `[tg:${msg.telegram_message_id}] ` : "";
+    if (msg.role === "user") {
+      agent.appendMessage({ role: "user", content: tgPrefix + msg.content, timestamp: Date.now() });
+    } else if (msg.role === "assistant") {
       agent.appendMessage({
-        role: 'assistant',
-        content: [{ type: 'text', text: tgPrefix + msg.content }],
-        api: 'openrouter',
-        provider: 'openrouter',
+        role: "assistant",
+        content: [{ type: "text", text: tgPrefix + msg.content }],
+        api: "openrouter",
+        provider: "openrouter",
         model: env.OPENROUTER_MODEL,
-        usage: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, totalTokens: 0, cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 } },
-        stopReason: 'stop',
+        usage: {
+          input: 0,
+          output: 0,
+          cacheRead: 0,
+          cacheWrite: 0,
+          totalTokens: 0,
+          cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
+        },
+        stopReason: "stop",
         timestamp: Date.now(),
-      })
+      });
     }
   }
 
   // 10. Track tool calls, response text, and usage
-  let responseText = ''
-  const toolCalls: AgentResponse['toolCalls'] = []
-  const totalUsage = { input: 0, output: 0, cost: 0 }
-  let hasUsage = false
+  let responseText = "";
+  const toolCalls: AgentResponse["toolCalls"] = [];
+  const totalUsage = { input: 0, output: 0, cost: 0 };
+  let hasUsage = false;
 
   agent.subscribe((event) => {
-    if (event.type === 'message_update') {
-      if (event.assistantMessageEvent.type === 'text_delta') {
-        responseText += event.assistantMessageEvent.delta
+    if (event.type === "message_update") {
+      if (event.assistantMessageEvent.type === "text_delta") {
+        responseText += event.assistantMessageEvent.delta;
       }
     }
-    if (event.type === 'message_end') {
-      const msg = event.message
-      if ('usage' in msg) {
-        const u = msg.usage as Usage
-        totalUsage.input += u.input
-        totalUsage.output += u.output
-        totalUsage.cost += u.cost.total
-        hasUsage = true
+    if (event.type === "message_end") {
+      const msg = event.message;
+      if ("usage" in msg) {
+        const u = msg.usage as Usage;
+        totalUsage.input += u.input;
+        totalUsage.output += u.output;
+        totalUsage.cost += u.cost.total;
+        hasUsage = true;
       }
     }
-    if (event.type === 'tool_execution_end') {
+    if (event.type === "tool_execution_end") {
       toolCalls.push({
         name: event.toolName,
         args: undefined,
         result: String(event.result),
-      })
+      });
     }
-  })
+  });
 
   // 11. Save user message
   await saveMessage(db, {
     conversation_id: conversationId,
-    role: 'user',
+    role: "user",
     content: message,
     telegram_message_id: opts.incomingTelegramMessageId ?? null,
-  })
+  });
 
   // 12. Log context breakdown for auditing
-  const systemPromptText = getSystemPrompt(identity)
-  const systemTokens = estimateTokens(systemPromptText)
-  const observationTokens = observationsText ? estimateTokens(observationsText) : 0
-  const recentMemTokens = recentMemories.reduce((sum, m) => sum + estimateTokens(m.content), 0)
-  const relevantMemTokens = relevantMemories.reduce((sum, m) => sum + estimateTokens(m.content), 0)
-  const skillTokens = selectedSkills.reduce((sum, s) => sum + estimateTokens(s.body), 0)
-  const historyTokens = historyMessages.reduce((sum, m) => sum + estimateTokens(m.content), 0)
-  const toolCount = tools.length
-  const preambleTokens = estimateTokens(preamble)
-  const totalContextTokens = systemTokens + preambleTokens + historyTokens
-  agentLog.info`Context breakdown: system=${systemTokens} observations=${observationTokens} recentMem=${recentMemTokens}(${recentMemories.length}) relevantMem=${relevantMemTokens}(${relevantMemories.length}) skills=${skillTokens}(${selectedSkills.length}) history=${historyTokens}(${historyMessages.length}msgs) tools=${toolCount} preamble=${preambleTokens} total=${totalContextTokens}`
+  const systemPromptText = getSystemPrompt(identity);
+  const systemTokens = estimateTokens(systemPromptText);
+  const observationTokens = observationsText ? estimateTokens(observationsText) : 0;
+  const recentMemTokens = recentMemories.reduce((sum, m) => sum + estimateTokens(m.content), 0);
+  const relevantMemTokens = relevantMemories.reduce((sum, m) => sum + estimateTokens(m.content), 0);
+  const skillTokens = selectedSkills.reduce((sum, s) => sum + estimateTokens(s.body), 0);
+  const historyTokens = historyMessages.reduce((sum, m) => sum + estimateTokens(m.content), 0);
+  const toolCount = tools.length;
+  const preambleTokens = estimateTokens(preamble);
+  const totalContextTokens = systemTokens + preambleTokens + historyTokens;
+  agentLog.info`Context breakdown: system=${systemTokens} observations=${observationTokens} recentMem=${recentMemTokens}(${recentMemories.length}) relevantMem=${relevantMemTokens}(${relevantMemories.length}) skills=${skillTokens}(${selectedSkills.length}) history=${historyTokens}(${historyMessages.length}msgs) tools=${toolCount} preamble=${preambleTokens} total=${totalContextTokens}`;
 
   // 13. Run agent — prepend context preamble to first message
   // (subsequent steps renumbered: save=14, usage=15, observer=16)
-  agentLog.debug`Prompting agent`
-  await agent.prompt(preamble + message)
-  await agent.waitForIdle()
-  agentLog.info`Agent finished. Response length: ${responseText.length}, tool calls: ${toolCalls.length}`
+  agentLog.debug`Prompting agent`;
+  await agent.prompt(preamble + message);
+  await agent.waitForIdle();
+  agentLog.info`Agent finished. Response length: ${responseText.length}, tool calls: ${toolCalls.length}`;
 
   // Strip leaked [tg:ID] prefixes from response (LLM sometimes echoes them from history)
-  responseText = responseText.replace(/\[tg:\d+\]\s*/g, '')
+  responseText = responseText.replace(/\[tg:\d+\]\s*/g, "");
 
   // 14. Save assistant response
   const assistantMessageId = await saveMessage(db, {
     conversation_id: conversationId,
-    role: 'assistant',
+    role: "assistant",
     content: responseText,
     tool_calls: toolCalls.length > 0 ? JSON.stringify(toolCalls) : null,
-  })
+  });
 
   // 15. Track usage
   if (hasUsage) {
-    agentLog.info`Usage: ${totalUsage.input} in / ${totalUsage.output} out / $${totalUsage.cost.toFixed(4)}`
+    agentLog.info`Usage: ${totalUsage.input} in / ${totalUsage.output} out / $${totalUsage.cost.toFixed(4)}`;
     await trackUsage(db, {
       model: env.OPENROUTER_MODEL,
       input_tokens: totalUsage.input,
       output_tokens: totalUsage.output,
       cost_usd: totalUsage.cost,
       source: opts.source,
-    })
+    });
   }
 
   // 16. Run observer async after response (next turn benefits)
   // Non-blocking — fires and forgets. Observer only runs if un-observed
   // messages exceed the token threshold.
-  memoryManager.runObserver(conversationId)
+  memoryManager
+    .runObserver(conversationId)
     .then(async (ran: boolean) => {
       if (ran) {
         // Promote novel observations to searchable memories before reflector condenses them
-        await memoryManager.promoteObservations(conversationId)
+        await memoryManager.promoteObservations(conversationId);
         // Then check if reflector should condense
-        return memoryManager.runReflector(conversationId)
+        return memoryManager.runReflector(conversationId);
       }
     })
-    .catch((err: unknown) => agentLog.error`Post-response observation failed: ${err}`)
+    .catch((err: unknown) => agentLog.error`Post-response observation failed: ${err}`);
 
-  const usage = hasUsage ? totalUsage : undefined
+  const usage = hasUsage ? totalUsage : undefined;
 
-  return { text: responseText, toolCalls, usage, messageId: assistantMessageId }
+  return { text: responseText, toolCalls, usage, messageId: assistantMessageId };
 }
diff --git a/apps/construct/src/cli/index.ts b/apps/construct/src/cli/index.ts
index 36a559d..b98401d 100644
--- a/apps/construct/src/cli/index.ts
+++ b/apps/construct/src/cli/index.ts
@@ -1,121 +1,118 @@
-import { defineCommand, runMain } from 'citty'
-import { createInterface } from 'node:readline'
-import type { Kysely } from 'kysely'
-import { createDb } from '@repo/db'
-import { generateEmbedding, MemoryManager, processMemoryForGraph } from '@repo/cairn'
-import type { Database } from '../db/schema.js'
-import { runMigrations } from '../db/migrate.js'
-import { env } from '../env.js'
-import { processMessage, isDev } from '../agent.js'
-import { selectAndCreateTools } from '../tools/packs.js'
-import { initExtensions, selectAndCreateDynamicTools } from '../extensions/index.js'
+import { defineCommand, runMain } from "citty";
+import { createInterface } from "node:readline";
+import type { Kysely } from "kysely";
+import { createDb } from "@repo/db";
+import { generateEmbedding, MemoryManager, processMemoryForGraph } from "@repo/cairn";
+import type { Database } from "../db/schema.js";
+import { runMigrations } from "../db/migrate.js";
+import { env } from "../env.js";
+import { processMessage, isDev } from "../agent.js";
+import { selectAndCreateTools } from "../tools/packs.js";
+import { initExtensions, selectAndCreateDynamicTools } from "../extensions/index.js";
 
 const main = defineCommand({
   meta: {
-    name: 'construct',
-    description: 'Construct CLI — personal braindump companion',
+    name: "construct",
+    description: "Construct CLI — personal braindump companion",
   },
   args: {
     message: {
-      type: 'positional',
-      description: 'One-shot message to send to the agent',
+      type: "positional",
+      description: "One-shot message to send to the agent",
       required: false,
     },
     tool: {
-      type: 'string',
-      description: 'Invoke a specific tool directly (for testing)',
+      type: "string",
+      description: "Invoke a specific tool directly (for testing)",
     },
     args: {
-      type: 'string',
-      alias: 'a',
-      description: 'JSON arguments for --tool',
+      type: "string",
+      alias: "a",
+      description: "JSON arguments for --tool",
     },
     reembed: {
-      type: 'boolean',
-      description: 'Re-embed all memories and graph nodes using the current EMBEDDING_MODEL',
+      type: "boolean",
+      description: "Re-embed all memories and graph nodes using the current EMBEDDING_MODEL",
     },
     backfill: {
-      type: 'boolean',
-      description: 'Backfill graph memory, embeddings, observer, and reflector for all existing data',
+      type: "boolean",
+      description:
+        "Backfill graph memory, embeddings, observer, and reflector for all existing data",
     },
   },
   async run({ args }) {
     // Run migrations
-    await runMigrations(env.DATABASE_URL)
+    await runMigrations(env.DATABASE_URL);
 
-    const { db } = createDb(env.DATABASE_URL)
+    const { db } = createDb(env.DATABASE_URL);
 
     // Re-embed all memories
     if (args.reembed) {
-      await reembedAll(db)
-      process.exit(0)
+      await reembedAll(db);
+      process.exit(0);
     }
 
     // Backfill graph memory, observer, reflector
     if (args.backfill) {
-      await backfillAll(db)
-      process.exit(0)
+      await backfillAll(db);
+      process.exit(0);
     }
 
     // Direct tool invocation mode
     if (args.tool) {
-      await runTool(db, args.tool, args.args)
-      process.exit(0)
+      await runTool(db, args.tool, args.args);
+      process.exit(0);
     }
 
     // One-shot mode
     if (args.message) {
       const response = await processMessage(db, args.message, {
-        source: 'cli',
-        externalId: 'cli',
-      })
-      console.log(response.text)
-      process.exit(0)
+        source: "cli",
+        externalId: "cli",
+      });
+      console.log(response.text);
+      process.exit(0);
     }
 
     // Interactive REPL mode
-    console.log('Construct interactive mode. Type "exit" or Ctrl+C to quit.\n')
+    console.log('Construct interactive mode. Type "exit" or Ctrl+C to quit.\n');
 
     const rl = createInterface({
       input: process.stdin,
       output: process.stdout,
-    })
+    });
 
     const prompt = () => {
-      rl.question('you> ', async (input) => {
-        const trimmed = input.trim()
-        if (!trimmed || trimmed === 'exit' || trimmed === 'quit') {
-          rl.close()
-          process.exit(0)
+      rl.question("you> ", async (input) => {
+        const trimmed = input.trim();
+        if (!trimmed || trimmed === "exit" || trimmed === "quit") {
+          rl.close();
+          process.exit(0);
         }
 
         try {
           const response = await processMessage(db, trimmed, {
-            source: 'cli',
-            externalId: 'cli',
-          })
-          console.log(`\nconstruct> ${response.text}\n`)
+            source: "cli",
+            externalId: "cli",
+          });
+          console.log(`\nconstruct> ${response.text}\n`);
         } catch (err) {
-          console.error('Error:', err)
+          console.error("Error:", err);
         }
 
-        prompt()
-      })
-    }
+        prompt();
+      });
+    };
 
-    prompt()
+    prompt();
   },
-})
+});
 
-async function runTool(
-  db: Kysely<Database>,
-  toolName: string,
-  argsJson?: string,
-) {
+async function runTool(db: Kysely<Database>, toolName: string, argsJson?: string) {
   // Load all tools (no query embedding → all packs selected)
   const ctx = {
     db,
-    chatId: 'cli',
+    chatId: "cli",
     apiKey: env.OPENROUTER_API_KEY,
     projectRoot: env.PROJECT_ROOT,
     dbPath: env.DATABASE_URL,
@@ -123,116 +120,116 @@ async function runTool(
     tavilyApiKey: env.TAVILY_API_KEY,
     logFile: env.LOG_FILE,
     isDev,
-  }
-  const builtinTools = selectAndCreateTools(undefined, ctx)
+  };
+  const builtinTools = selectAndCreateTools(undefined, ctx);
 
   // Also load dynamic extension tools
-  await initExtensions(env.EXTENSIONS_DIR, env.OPENROUTER_API_KEY, db, env.EMBEDDING_MODEL)
-  const dynamicTools = selectAndCreateDynamicTools(undefined, ctx)
+  await initExtensions(env.EXTENSIONS_DIR, env.OPENROUTER_API_KEY, db, env.EMBEDDING_MODEL);
+  const dynamicTools = selectAndCreateDynamicTools(undefined, ctx);
 
-  const tools = [...builtinTools, ...dynamicTools]
-  const tool = tools.find((t) => t.name === toolName)
+  const tools = [...builtinTools, ...dynamicTools];
+  const tool = tools.find((t) => t.name === toolName);
 
   if (!tool) {
-    const available = tools.map((t) => t.name).join(', ')
-    console.error(`Unknown tool: ${toolName}`)
-    console.error(`Available tools: ${available}`)
-    process.exit(1)
+    const available = tools.map((t) => t.name).join(", ");
+    console.error(`Unknown tool: ${toolName}`);
+    console.error(`Available tools: ${available}`);
+    process.exit(1);
   }
 
-  const parsedArgs = argsJson ? JSON.parse(argsJson) : {}
+  const parsedArgs = argsJson ? JSON.parse(argsJson) : {};
 
-  console.log(`Running tool: ${toolName}`)
-  console.log(`Args: ${JSON.stringify(parsedArgs, null, 2)}\n`)
+  console.log(`Running tool: ${toolName}`);
+  console.log(`Args: ${JSON.stringify(parsedArgs, null, 2)}\n`);
 
-  const result = await tool.execute(`cli-${Date.now()}`, parsedArgs)
-  console.log(result.output)
+  const result = await tool.execute(`cli-${Date.now()}`, parsedArgs);
+  console.log(result.output);
 }
 
 async function reembedAll(db: Kysely<Database>) {
-  const model = env.EMBEDDING_MODEL
-  console.log(`Re-embedding all data using model: ${model}\n`)
+  const model = env.EMBEDDING_MODEL;
+  console.log(`Re-embedding all data using model: ${model}\n`);
 
   // Phase 1 — Memories
-  console.log('=== Phase 1: Memories ===')
+  console.log("=== Phase 1: Memories ===");
 
   const memories = await db
-    .selectFrom('memories')
-    .select(['id', 'content'])
-    .where('archived_at', 'is', null)
-    .execute()
+    .selectFrom("memories")
+    .select(["id", "content"])
+    .where("archived_at", "is", null)
+    .execute();
 
-  console.log(`Found ${memories.length} memories to re-embed`)
+  console.log(`Found ${memories.length} memories to re-embed`);
 
-  let m_success = 0
-  let m_failed = 0
+  let m_success = 0;
+  let m_failed = 0;
 
   await pooled(memories, 10, async (memory) => {
     try {
-      const embedding = await generateEmbedding(env.OPENROUTER_API_KEY, memory.content, model)
+      const embedding = await generateEmbedding(env.OPENROUTER_API_KEY, memory.content, model);
       await db
-        .updateTable('memories')
+        .updateTable("memories")
         .set({ embedding: JSON.stringify(embedding) })
-        .where('id', '=', memory.id)
-        .execute()
-      m_success++
+        .where("id", "=", memory.id)
+        .execute();
+      m_success++;
     } catch (err) {
-      m_failed++
-      console.error(`\n  Failed memory ${memory.id}: ${err instanceof Error ? err.message : err}`)
+      m_failed++;
+      console.error(`\n  Failed memory ${memory.id}: ${err instanceof Error ? err.message : err}`);
     }
-    process.stdout.write(`\r  ${m_success + m_failed}/${memories.length} (${m_failed} failed)`)
-  })
+    process.stdout.write(`\r  ${m_success + m_failed}/${memories.length} (${m_failed} failed)`);
+  });
 
-  if (memories.length > 0) console.log()
-  console.log(`Done: ${m_success} re-embedded, ${m_failed} failed`)
+  if (memories.length > 0) console.log();
+  console.log(`Done: ${m_success} re-embedded, ${m_failed} failed`);
 
   // Phase 2 — Graph nodes
-  console.log('\n=== Phase 2: Graph nodes ===')
+  console.log("\n=== Phase 2: Graph nodes ===");
 
   const nodes = await db
-    .selectFrom('graph_nodes')
-    .select(['id', 'display_name', 'description'])
-    .execute()
+    .selectFrom("graph_nodes")
+    .select(["id", "display_name", "description"])
+    .execute();
 
-  console.log(`Found ${nodes.length} graph nodes to re-embed`)
+  console.log(`Found ${nodes.length} graph nodes to re-embed`);
 
-  let n_success = 0
-  let n_failed = 0
+  let n_success = 0;
+  let n_failed = 0;
 
   await pooled(nodes, 10, async (node) => {
     try {
       const text = node.description
         ? `${node.display_name}: ${node.description}`
-        : node.display_name
-      const embedding = await generateEmbedding(env.OPENROUTER_API_KEY, text, model)
+        : node.display_name;
+      const embedding = await generateEmbedding(env.OPENROUTER_API_KEY, text, model);
       await db
-        .updateTable('graph_nodes')
+        .updateTable("graph_nodes")
         .set({ embedding: JSON.stringify(embedding) })
-        .where('id', '=', node.id)
-        .execute()
-      n_success++
+        .where("id", "=", node.id)
+        .execute();
+      n_success++;
     } catch (err) {
-      n_failed++
-      console.error(`\n  Failed node ${node.id}: ${err instanceof Error ? err.message : err}`)
+      n_failed++;
+      console.error(`\n  Failed node ${node.id}: ${err instanceof Error ? err.message : err}`);
     }
-    process.stdout.write(`\r  ${n_success + n_failed}/${nodes.length} (${n_failed} failed)`)
-  })
+    process.stdout.write(`\r  ${n_success + n_failed}/${nodes.length} (${n_failed} failed)`);
+  });
 
-  if (nodes.length > 0) console.log()
-  console.log(`Done: ${n_success} re-embedded, ${n_failed} failed`)
+  if (nodes.length > 0) console.log();
+  console.log(`Done: ${n_success} re-embedded, ${n_failed} failed`);
 
-  console.log('\n=== Re-embed complete ===')
+  console.log("\n=== Re-embed complete ===");
 }
 
-const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms))
+const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
 
 async function withRetry<T>(fn: () => Promise<T>, retries = 2, delayMs = 2000): Promise<T> {
   for (let attempt = 0; ; attempt++) {
     try {
-      return await fn()
+      return await fn();
     } catch (err) {
-      if (attempt >= retries) throw err
-      await sleep(delayMs)
+      if (attempt >= retries) throw err;
+      await sleep(delayMs);
     }
   }
 }
@@ -243,156 +240,177 @@ async function pooled<T>(
   concurrency: number,
   fn: (item: T) => Promise<void>,
 ): Promise<void> {
-  let i = 0
+  let i = 0;
   async function next(): Promise<void> {
     while (i < items.length) {
-      const idx = i++
-      await fn(items[idx])
+      const idx = i++;
+      await fn(items[idx]);
     }
   }
-  await Promise.all(Array.from({ length: Math.min(concurrency, items.length) }, () => next()))
+  await Promise.all(Array.from({ length: Math.min(concurrency, items.length) }, () => next()));
 }
 
 async function backfillAll(db: Kysely<Database>) {
   if (!env.MEMORY_WORKER_MODEL) {
-    console.error('Error: MEMORY_WORKER_MODEL must be set for backfill')
-    process.exit(1)
+    console.error("Error: MEMORY_WORKER_MODEL must be set for backfill");
+    process.exit(1);
   }
 
-  const workerConfig = { apiKey: env.OPENROUTER_API_KEY, model: env.MEMORY_WORKER_MODEL, extraBody: { reasoning: { max_tokens: 1 } } }
-  const embeddingOpts = { apiKey: env.OPENROUTER_API_KEY, embeddingModel: env.EMBEDDING_MODEL }
+  const workerConfig = {
+    apiKey: env.OPENROUTER_API_KEY,
+    model: env.MEMORY_WORKER_MODEL,
+    extraBody: { reasoning: { max_tokens: 1 } },
+  };
+  const embeddingOpts = { apiKey: env.OPENROUTER_API_KEY, embeddingModel: env.EMBEDDING_MODEL };
   const mm = new MemoryManager(db, {
     workerConfig,
     embeddingModel: env.EMBEDDING_MODEL,
     apiKey: env.OPENROUTER_API_KEY,
-  })
+  });
 
   // Phase 1 — Graph extraction for memories with no edges
-  console.log('\n=== Phase 1: Graph extraction ===')
+  console.log("\n=== Phase 1: Graph extraction ===");
 
   const memoriesWithoutEdges = await db
-    .selectFrom('memories')
-    .select(['id', 'content'])
-    .where('archived_at', 'is', null)
+    .selectFrom("memories")
+    .select(["id", "content"])
+    .where("archived_at", "is", null)
     .where(({ not, exists, selectFrom }) =>
-      not(exists(
-        selectFrom('graph_edges')
-          .select('id')
-          .whereRef('graph_edges.memory_id', '=', 'memories.id'),
-      )),
+      not(
+        exists(
+          selectFrom("graph_edges")
+            .select("id")
+            .whereRef("graph_edges.memory_id", "=", "memories.id"),
+        ),
+      ),
     )
-    .execute()
+    .execute();
 
-  console.log(`Found ${memoriesWithoutEdges.length} memories without graph edges`)
+  console.log(`Found ${memoriesWithoutEdges.length} memories without graph edges`);
 
-  let p1Success = 0
-  let p1Failed = 0
+  let p1Success = 0;
+  let p1Failed = 0;
 
   await pooled(memoriesWithoutEdges, 5, async (mem) => {
     try {
-      await withRetry(() => processMemoryForGraph(db, workerConfig, mem.id, mem.content, embeddingOpts))
-      p1Success++
+      await withRetry(() =>
+        processMemoryForGraph(db, workerConfig, mem.id, mem.content, embeddingOpts),
+      );
+      p1Success++;
     } catch (err) {
-      p1Failed++
-      console.error(`\n  Failed memory ${mem.id}: ${err instanceof Error ? err.message : err}`)
+      p1Failed++;
+      console.error(`\n  Failed memory ${mem.id}: ${err instanceof Error ? err.message : err}`);
     }
-    process.stdout.write(`\r  ${p1Success + p1Failed}/${memoriesWithoutEdges.length} (${p1Failed} failed)`)
-  })
+    process.stdout.write(
+      `\r  ${p1Success + p1Failed}/${memoriesWithoutEdges.length} (${p1Failed} failed)`,
+    );
+  });
 
-  if (memoriesWithoutEdges.length > 0) console.log()
-  console.log(`Done: ${p1Success} extracted, ${p1Failed} failed`)
+  if (memoriesWithoutEdges.length > 0) console.log();
+  console.log(`Done: ${p1Success} extracted, ${p1Failed} failed`);
 
   // Phase 2 — Graph node embeddings
-  console.log('\n=== Phase 2: Graph node embeddings ===')
+  console.log("\n=== Phase 2: Graph node embeddings ===");
 
   const nodesWithoutEmbeddings = await db
-    .selectFrom('graph_nodes')
-    .select(['id', 'display_name', 'description'])
-    .where('embedding', 'is', null)
-    .execute()
+    .selectFrom("graph_nodes")
+    .select(["id", "display_name", "description"])
+    .where("embedding", "is", null)
+    .execute();
 
-  console.log(`Found ${nodesWithoutEmbeddings.length} nodes without embeddings`)
+  console.log(`Found ${nodesWithoutEmbeddings.length} nodes without embeddings`);
 
-  let p2Success = 0
-  let p2Failed = 0
+  let p2Success = 0;
+  let p2Failed = 0;
 
   await pooled(nodesWithoutEmbeddings, 10, async (node) => {
     try {
       const text = node.description
         ? `${node.display_name}: ${node.description}`
-        : node.display_name
-      const embedding = await generateEmbedding(env.OPENROUTER_API_KEY, text, env.EMBEDDING_MODEL)
+        : node.display_name;
+      const embedding = await generateEmbedding(env.OPENROUTER_API_KEY, text, env.EMBEDDING_MODEL);
       await db
-        .updateTable('graph_nodes')
+        .updateTable("graph_nodes")
         .set({ embedding: JSON.stringify(embedding) })
-        .where('id', '=', node.id)
-        .execute()
-      p2Success++
+        .where("id", "=", node.id)
+        .execute();
+      p2Success++;
     } catch (err) {
-      p2Failed++
-      console.error(`\n  Failed node ${node.id}: ${err instanceof Error ? err.message : err}`)
+      p2Failed++;
+      console.error(`\n  Failed node ${node.id}: ${err instanceof Error ? err.message : err}`);
     }
-    process.stdout.write(`\r  ${p2Success + p2Failed}/${nodesWithoutEmbeddings.length} (${p2Failed} failed)`)
-  })
+    process.stdout.write(
+      `\r  ${p2Success + p2Failed}/${nodesWithoutEmbeddings.length} (${p2Failed} failed)`,
+    );
+  });
 
-  if (nodesWithoutEmbeddings.length > 0) console.log()
-  console.log(`Done: ${p2Success} embedded, ${p2Failed} failed`)
+  if (nodesWithoutEmbeddings.length > 0) console.log();
+  console.log(`Done: ${p2Success} embedded, ${p2Failed} failed`);
 
   // Phase 3 — Observer
-  console.log('\n=== Phase 3: Observer ===')
+  console.log("\n=== Phase 3: Observer ===");
 
-  const conversations = await db
-    .selectFrom('conversations')
-    .select('id')
-    .execute()
+  const conversations = await db.selectFrom("conversations").select("id").execute();
 
-  console.log(`Found ${conversations.length} conversations`)
+  console.log(`Found ${conversations.length} conversations`);
 
-  let p3Triggered = 0
-  let p3Skipped = 0
-  let p3Failed = 0
+  let p3Triggered = 0;
+  let p3Skipped = 0;
+  let p3Failed = 0;
 
   for (let i = 0; i < conversations.length; i++) {
     try {
-      const ran = await mm.runObserver(conversations[i].id)
-      if (ran) p3Triggered++
-      else p3Skipped++
-      process.stdout.write(`\r  ${i + 1}/${conversations.length} (${p3Triggered} triggered, ${p3Skipped} skipped, ${p3Failed} failed)`)
+      const ran = await mm.runObserver(conversations[i].id);
+      if (ran) p3Triggered++;
+      else p3Skipped++;
+      process.stdout.write(
+        `\r  ${i + 1}/${conversations.length} (${p3Triggered} triggered, ${p3Skipped} skipped, ${p3Failed} failed)`,
+      );
     } catch (err) {
-      p3Failed++
-      process.stdout.write(`\r  ${i + 1}/${conversations.length} (${p3Triggered} triggered, ${p3Skipped} skipped, ${p3Failed} failed)`)
-      console.error(`\n  Failed conversation ${conversations[i].id}: ${err instanceof Error ? err.message : err}`)
+      p3Failed++;
+      process.stdout.write(
+        `\r  ${i + 1}/${conversations.length} (${p3Triggered} triggered, ${p3Skipped} skipped, ${p3Failed} failed)`,
+      );
+      console.error(
+        `\n  Failed conversation ${conversations[i].id}: ${err instanceof Error ? err.message : err}`,
+      );
     }
   }
 
-  if (conversations.length > 0) console.log()
-  console.log(`Done: ${p3Triggered} triggered, ${p3Skipped} below threshold, ${p3Failed} failed`)
+  if (conversations.length > 0) console.log();
+  console.log(`Done: ${p3Triggered} triggered, ${p3Skipped} below threshold, ${p3Failed} failed`);
 
   // Phase 4 — Reflector
-  console.log('\n=== Phase 4: Reflector ===')
-  console.log(`Processing ${conversations.length} conversations`)
+  console.log("\n=== Phase 4: Reflector ===");
+  console.log(`Processing ${conversations.length} conversations`);
 
-  let p4Triggered = 0
-  let p4Skipped = 0
-  let p4Failed = 0
+  let p4Triggered = 0;
+  let p4Skipped = 0;
+  let p4Failed = 0;
 
   for (let i = 0; i < conversations.length; i++) {
     try {
-      const ran = await mm.runReflector(conversations[i].id)
-      if (ran) p4Triggered++
-      else p4Skipped++
-      process.stdout.write(`\r  ${i + 1}/${conversations.length} (${p4Triggered} triggered, ${p4Skipped} skipped, ${p4Failed} failed)`)
+      const ran = await mm.runReflector(conversations[i].id);
+      if (ran) p4Triggered++;
+      else p4Skipped++;
+      process.stdout.write(
+        `\r  ${i + 1}/${conversations.length} (${p4Triggered} triggered, ${p4Skipped} skipped, ${p4Failed} failed)`,
+      );
     } catch (err) {
-      p4Failed++
-      process.stdout.write(`\r  ${i + 1}/${conversations.length} (${p4Triggered} triggered, ${p4Skipped} skipped, ${p4Failed} failed)`)
-      console.error(`\n  Failed conversation ${conversations[i].id}: ${err instanceof Error ? err.message : err}`)
+      p4Failed++;
+      process.stdout.write(
+        `\r  ${i + 1}/${conversations.length} (${p4Triggered} triggered, ${p4Skipped} skipped, ${p4Failed} failed)`,
+      );
+      console.error(
+        `\n  Failed conversation ${conversations[i].id}: ${err instanceof Error ? err.message : err}`,
+      );
     }
   }
 
-  if (conversations.length > 0) console.log()
-  console.log(`Done: ${p4Triggered} triggered, ${p4Skipped} below threshold, ${p4Failed} failed`)
+  if (conversations.length > 0) console.log();
+  console.log(`Done: ${p4Triggered} triggered, ${p4Skipped} below threshold, ${p4Failed} failed`);
 
-  console.log('\n=== Backfill complete ===')
+  console.log("\n=== Backfill complete ===");
 }
 
-runMain(main)
+runMain(main);
diff --git a/apps/construct/src/db/migrate.ts b/apps/construct/src/db/migrate.ts
index eb2d018..a952e68 100644
--- a/apps/construct/src/db/migrate.ts
+++ b/apps/construct/src/db/migrate.ts
@@ -1,26 +1,25 @@
-import { fileURLToPath } from 'node:url'
-import { dirname, join } from 'node:path'
-import { createDb } from '@repo/db'
-import { runMigrations as runMigrationsGeneric } from '@repo/db/migrate'
-import { env } from '../env.js'
-import type { Database } from './schema.js'
+import { fileURLToPath } from "node:url";
+import { dirname, join } from "node:path";
+import { createDb } from "@repo/db";
+import { runMigrations as runMigrationsGeneric } from "@repo/db/migrate";
+import { env } from "../env.js";
+import type { Database } from "./schema.js";
 
-const __dirname = dirname(fileURLToPath(import.meta.url))
+const __dirname = dirname(fileURLToPath(import.meta.url));
 
 export async function runMigrations(databaseUrl?: string) {
-  const { db } = createDb(databaseUrl ?? env.DATABASE_URL)
+  const { db } = createDb(databaseUrl ?? env.DATABASE_URL);
 
-  await runMigrationsGeneric(db, join(__dirname, 'migrations'))
+  await runMigrationsGeneric(db, join(__dirname, "migrations"));
 
-  await db.destroy()
+  await db.destroy();
 }
 
 // Run directly: tsx src/db/migrate.ts
 const isDirectRun =
   process.argv[1] &&
-  (process.argv[1].endsWith('migrate.ts') ||
-    process.argv[1].endsWith('migrate.js'))
+  (process.argv[1].endsWith("migrate.ts") || process.argv[1].endsWith("migrate.js"));
 
 if (isDirectRun) {
-  runMigrations()
+  runMigrations();
 }
diff --git a/apps/construct/src/db/migrations/001-initial.ts b/apps/construct/src/db/migrations/001-initial.ts
index fca0ef1..2c5ec14 100644
--- a/apps/construct/src/db/migrations/001-initial.ts
+++ b/apps/construct/src/db/migrations/001-initial.ts
@@ -1,124 +1,98 @@
-import type { Kysely } from 'kysely'
-import { sql } from 'kysely'
+import type { Kysely } from "kysely";
+import { sql } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
   await db.schema
-    .createTable('memories')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('content', 'text', (col) => col.notNull())
-    .addColumn('category', 'text', (col) => col.notNull().defaultTo('general'))
-    .addColumn('tags', 'text')
-    .addColumn('source', 'text', (col) => col.notNull().defaultTo('user'))
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .addColumn('updated_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .addColumn('archived_at', 'text')
-    .execute()
+    .createTable("memories")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("content", "text", (col) => col.notNull())
+    .addColumn("category", "text", (col) => col.notNull().defaultTo("general"))
+    .addColumn("tags", "text")
+    .addColumn("source", "text", (col) => col.notNull().defaultTo("user"))
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .addColumn("updated_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .addColumn("archived_at", "text")
+    .execute();
 
   await db.schema
-    .createTable('conversations')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('source', 'text', (col) => col.notNull())
-    .addColumn('external_id', 'text')
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .addColumn('updated_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("conversations")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("source", "text", (col) => col.notNull())
+    .addColumn("external_id", "text")
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .addColumn("updated_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
   await db.schema
-    .createTable('messages')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('conversation_id', 'text', (col) =>
-      col.notNull().references('conversations.id'),
-    )
-    .addColumn('role', 'text', (col) => col.notNull())
-    .addColumn('content', 'text', (col) => col.notNull())
-    .addColumn('tool_calls', 'text')
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("messages")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("conversation_id", "text", (col) => col.notNull().references("conversations.id"))
+    .addColumn("role", "text", (col) => col.notNull())
+    .addColumn("content", "text", (col) => col.notNull())
+    .addColumn("tool_calls", "text")
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
   await db.schema
-    .createTable('schedules')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('description', 'text', (col) => col.notNull())
-    .addColumn('cron_expression', 'text')
-    .addColumn('run_at', 'text')
-    .addColumn('message', 'text', (col) => col.notNull())
-    .addColumn('chat_id', 'text', (col) => col.notNull())
-    .addColumn('active', 'integer', (col) => col.notNull().defaultTo(1))
-    .addColumn('last_run_at', 'text')
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("schedules")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("description", "text", (col) => col.notNull())
+    .addColumn("cron_expression", "text")
+    .addColumn("run_at", "text")
+    .addColumn("message", "text", (col) => col.notNull())
+    .addColumn("chat_id", "text", (col) => col.notNull())
+    .addColumn("active", "integer", (col) => col.notNull().defaultTo(1))
+    .addColumn("last_run_at", "text")
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
   await db.schema
-    .createTable('ai_usage')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('model', 'text', (col) => col.notNull())
-    .addColumn('input_tokens', 'integer')
-    .addColumn('output_tokens', 'integer')
-    .addColumn('cost_usd', 'real')
-    .addColumn('source', 'text', (col) => col.notNull())
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("ai_usage")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("model", "text", (col) => col.notNull())
+    .addColumn("input_tokens", "integer")
+    .addColumn("output_tokens", "integer")
+    .addColumn("cost_usd", "real")
+    .addColumn("source", "text", (col) => col.notNull())
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
   await db.schema
-    .createTable('settings')
-    .addColumn('key', 'text', (col) => col.primaryKey())
-    .addColumn('value', 'text', (col) => col.notNull())
-    .addColumn('updated_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("settings")
+    .addColumn("key", "text", (col) => col.primaryKey())
+    .addColumn("value", "text", (col) => col.notNull())
+    .addColumn("updated_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
   // Indexes for common queries
-  await db.schema
-    .createIndex('idx_memories_category')
-    .on('memories')
-    .column('category')
-    .execute()
+  await db.schema.createIndex("idx_memories_category").on("memories").column("category").execute();
 
   await db.schema
-    .createIndex('idx_memories_archived')
-    .on('memories')
-    .column('archived_at')
-    .execute()
+    .createIndex("idx_memories_archived")
+    .on("memories")
+    .column("archived_at")
+    .execute();
 
   await db.schema
-    .createIndex('idx_messages_conversation')
-    .on('messages')
-    .column('conversation_id')
-    .execute()
+    .createIndex("idx_messages_conversation")
+    .on("messages")
+    .column("conversation_id")
+    .execute();
 
   await db.schema
-    .createIndex('idx_conversations_external')
-    .on('conversations')
-    .column('external_id')
-    .execute()
+    .createIndex("idx_conversations_external")
+    .on("conversations")
+    .column("external_id")
+    .execute();
 
-  await db.schema
-    .createIndex('idx_schedules_active')
-    .on('schedules')
-    .column('active')
-    .execute()
+  await db.schema.createIndex("idx_schedules_active").on("schedules").column("active").execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema.dropTable('settings').execute()
-  await db.schema.dropTable('ai_usage').execute()
-  await db.schema.dropTable('schedules').execute()
-  await db.schema.dropTable('messages').execute()
-  await db.schema.dropTable('conversations').execute()
-  await db.schema.dropTable('memories').execute()
+  await db.schema.dropTable("settings").execute();
+  await db.schema.dropTable("ai_usage").execute();
+  await db.schema.dropTable("schedules").execute();
+  await db.schema.dropTable("messages").execute();
+  await db.schema.dropTable("conversations").execute();
+  await db.schema.dropTable("memories").execute();
 }
diff --git a/apps/construct/src/db/migrations/002-fts5-and-embeddings.ts b/apps/construct/src/db/migrations/002-fts5-and-embeddings.ts
index 156ce5c..ba0e038 100644
--- a/apps/construct/src/db/migrations/002-fts5-and-embeddings.ts
+++ b/apps/construct/src/db/migrations/002-fts5-and-embeddings.ts
@@ -1,5 +1,5 @@
-import type { Kysely } from 'kysely'
-import { sql } from 'kysely'
+import type { Kysely } from "kysely";
+import { sql } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
   // FTS5 virtual table for full-text search on memories
@@ -12,7 +12,7 @@ export async function up(db: Kysely<any>): Promise<void> {
       content=memories,
       content_rowid=rowid
     )
-  `.execute(db)
+  `.execute(db);
 
   // Triggers to keep FTS5 in sync with memories table
   await sql`
@@ -20,14 +20,14 @@ export async function up(db: Kysely<any>): Promise<void> {
       INSERT INTO memories_fts(rowid, id, content, tags, category)
       VALUES (NEW.rowid, NEW.id, NEW.content, NEW.tags, NEW.category);
     END
-  `.execute(db)
+  `.execute(db);
 
   await sql`
     CREATE TRIGGER IF NOT EXISTS memories_ad AFTER DELETE ON memories BEGIN
       INSERT INTO memories_fts(memories_fts, rowid, id, content, tags, category)
       VALUES ('delete', OLD.rowid, OLD.id, OLD.content, OLD.tags, OLD.category);
     END
-  `.execute(db)
+  `.execute(db);
 
   await sql`
     CREATE TRIGGER IF NOT EXISTS memories_au AFTER UPDATE ON memories BEGIN
@@ -36,25 +36,22 @@ export async function up(db: Kysely<any>): Promise<void> {
       INSERT INTO memories_fts(rowid, id, content, tags, category)
       VALUES (NEW.rowid, NEW.id, NEW.content, NEW.tags, NEW.category);
     END
-  `.execute(db)
+  `.execute(db);
 
   // Populate FTS5 from existing memories
   await sql`
     INSERT INTO memories_fts(rowid, id, content, tags, category)
     SELECT rowid, id, content, tags, category FROM memories
-  `.execute(db)
+  `.execute(db);
 
   // Embedding column on memories table
-  await db.schema
-    .alterTable('memories')
-    .addColumn('embedding', 'text')
-    .execute()
+  await db.schema.alterTable("memories").addColumn("embedding", "text").execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await sql`DROP TRIGGER IF EXISTS memories_au`.execute(db)
-  await sql`DROP TRIGGER IF EXISTS memories_ad`.execute(db)
-  await sql`DROP TRIGGER IF EXISTS memories_ai`.execute(db)
-  await sql`DROP TABLE IF EXISTS memories_fts`.execute(db)
+  await sql`DROP TRIGGER IF EXISTS memories_au`.execute(db);
+  await sql`DROP TRIGGER IF EXISTS memories_ad`.execute(db);
+  await sql`DROP TRIGGER IF EXISTS memories_ai`.execute(db);
+  await sql`DROP TABLE IF EXISTS memories_fts`.execute(db);
   // SQLite doesn't support DROP COLUMN before 3.35.0, so we leave embedding
 }
diff --git a/apps/construct/src/db/migrations/003-secrets.ts b/apps/construct/src/db/migrations/003-secrets.ts
index 8baa810..f0172c7 100644
--- a/apps/construct/src/db/migrations/003-secrets.ts
+++ b/apps/construct/src/db/migrations/003-secrets.ts
@@ -1,21 +1,17 @@
-import type { Kysely } from 'kysely'
-import { sql } from 'kysely'
+import type { Kysely } from "kysely";
+import { sql } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
   await db.schema
-    .createTable('secrets')
-    .addColumn('key', 'text', (col) => col.primaryKey())
-    .addColumn('value', 'text', (col) => col.notNull())
-    .addColumn('source', 'text', (col) => col.notNull().defaultTo('agent'))
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .addColumn('updated_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("secrets")
+    .addColumn("key", "text", (col) => col.primaryKey())
+    .addColumn("value", "text", (col) => col.notNull())
+    .addColumn("source", "text", (col) => col.notNull().defaultTo("agent"))
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .addColumn("updated_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema.dropTable('secrets').execute()
+  await db.schema.dropTable("secrets").execute();
 }
diff --git a/apps/construct/src/db/migrations/004-telegram-message-ids.ts b/apps/construct/src/db/migrations/004-telegram-message-ids.ts
index b39ff63..30b6c44 100644
--- a/apps/construct/src/db/migrations/004-telegram-message-ids.ts
+++ b/apps/construct/src/db/migrations/004-telegram-message-ids.ts
@@ -1,25 +1,17 @@
-import type { Kysely } from 'kysely'
+import type { Kysely } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
-  await db.schema
-    .alterTable('messages')
-    .addColumn('telegram_message_id', 'integer')
-    .execute()
+  await db.schema.alterTable("messages").addColumn("telegram_message_id", "integer").execute();
 
   await db.schema
-    .createIndex('idx_messages_telegram_message_id')
-    .on('messages')
-    .column('telegram_message_id')
-    .execute()
+    .createIndex("idx_messages_telegram_message_id")
+    .on("messages")
+    .column("telegram_message_id")
+    .execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema
-    .dropIndex('idx_messages_telegram_message_id')
-    .execute()
+  await db.schema.dropIndex("idx_messages_telegram_message_id").execute();
 
-  await db.schema
-    .alterTable('messages')
-    .dropColumn('telegram_message_id')
-    .execute()
+  await db.schema.alterTable("messages").dropColumn("telegram_message_id").execute();
 }
diff --git a/apps/construct/src/db/migrations/005-graph-memory.ts b/apps/construct/src/db/migrations/005-graph-memory.ts
index abacd67..3039404 100644
--- a/apps/construct/src/db/migrations/005-graph-memory.ts
+++ b/apps/construct/src/db/migrations/005-graph-memory.ts
@@ -1,72 +1,52 @@
-import type { Kysely } from 'kysely'
-import { sql } from 'kysely'
+import type { Kysely } from "kysely";
+import { sql } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
   await db.schema
-    .createTable('graph_nodes')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('name', 'text', (col) => col.notNull())
-    .addColumn('display_name', 'text', (col) => col.notNull())
-    .addColumn('node_type', 'text', (col) => col.notNull().defaultTo('entity'))
-    .addColumn('description', 'text')
-    .addColumn('embedding', 'text')
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .addColumn('updated_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("graph_nodes")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("name", "text", (col) => col.notNull())
+    .addColumn("display_name", "text", (col) => col.notNull())
+    .addColumn("node_type", "text", (col) => col.notNull().defaultTo("entity"))
+    .addColumn("description", "text")
+    .addColumn("embedding", "text")
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .addColumn("updated_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
   await db.schema
-    .createIndex('idx_gn_name_type')
-    .on('graph_nodes')
-    .columns(['name', 'node_type'])
+    .createIndex("idx_gn_name_type")
+    .on("graph_nodes")
+    .columns(["name", "node_type"])
     .unique()
-    .execute()
+    .execute();
 
   await db.schema
-    .createTable('graph_edges')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('source_id', 'text', (col) =>
-      col.notNull().references('graph_nodes.id'),
-    )
-    .addColumn('target_id', 'text', (col) =>
-      col.notNull().references('graph_nodes.id'),
-    )
-    .addColumn('relation', 'text', (col) => col.notNull())
-    .addColumn('weight', 'real', (col) => col.defaultTo(1.0))
-    .addColumn('properties', 'text')
-    .addColumn('memory_id', 'text', (col) => col.references('memories.id'))
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .addColumn('updated_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("graph_edges")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("source_id", "text", (col) => col.notNull().references("graph_nodes.id"))
+    .addColumn("target_id", "text", (col) => col.notNull().references("graph_nodes.id"))
+    .addColumn("relation", "text", (col) => col.notNull())
+    .addColumn("weight", "real", (col) => col.defaultTo(1.0))
+    .addColumn("properties", "text")
+    .addColumn("memory_id", "text", (col) => col.references("memories.id"))
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .addColumn("updated_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
-  await db.schema
-    .createIndex('idx_ge_source')
-    .on('graph_edges')
-    .column('source_id')
-    .execute()
+  await db.schema.createIndex("idx_ge_source").on("graph_edges").column("source_id").execute();
 
-  await db.schema
-    .createIndex('idx_ge_target')
-    .on('graph_edges')
-    .column('target_id')
-    .execute()
+  await db.schema.createIndex("idx_ge_target").on("graph_edges").column("target_id").execute();
 
   await db.schema
-    .createIndex('idx_ge_unique')
-    .on('graph_edges')
-    .columns(['source_id', 'target_id', 'relation'])
+    .createIndex("idx_ge_unique")
+    .on("graph_edges")
+    .columns(["source_id", "target_id", "relation"])
     .unique()
-    .execute()
+    .execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema.dropTable('graph_edges').execute()
-  await db.schema.dropTable('graph_nodes').execute()
+  await db.schema.dropTable("graph_edges").execute();
+  await db.schema.dropTable("graph_nodes").execute();
 }
diff --git a/apps/construct/src/db/migrations/006-observational-memory.ts b/apps/construct/src/db/migrations/006-observational-memory.ts
index cdcffd3..6f03ff5 100644
--- a/apps/construct/src/db/migrations/006-observational-memory.ts
+++ b/apps/construct/src/db/migrations/006-observational-memory.ts
@@ -1,60 +1,50 @@
-import type { Kysely } from 'kysely'
-import { sql } from 'kysely'
+import type { Kysely } from "kysely";
+import { sql } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
   await db.schema
-    .createTable('observations')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('conversation_id', 'text', (col) =>
-      col.notNull().references('conversations.id'),
-    )
-    .addColumn('content', 'text', (col) => col.notNull())
-    .addColumn('priority', 'text', (col) => col.defaultTo('medium'))
-    .addColumn('observation_date', 'text', (col) => col.notNull())
-    .addColumn('source_message_ids', 'text')
-    .addColumn('token_count', 'integer')
-    .addColumn('generation', 'integer', (col) => col.defaultTo(0))
-    .addColumn('superseded_at', 'text')
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .execute()
+    .createTable("observations")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("conversation_id", "text", (col) => col.notNull().references("conversations.id"))
+    .addColumn("content", "text", (col) => col.notNull())
+    .addColumn("priority", "text", (col) => col.defaultTo("medium"))
+    .addColumn("observation_date", "text", (col) => col.notNull())
+    .addColumn("source_message_ids", "text")
+    .addColumn("token_count", "integer")
+    .addColumn("generation", "integer", (col) => col.defaultTo(0))
+    .addColumn("superseded_at", "text")
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .execute();
 
   await db.schema
-    .createIndex('idx_obs_conv')
-    .on('observations')
-    .column('conversation_id')
-    .execute()
+    .createIndex("idx_obs_conv")
+    .on("observations")
+    .column("conversation_id")
+    .execute();
 
   await db.schema
-    .createIndex('idx_obs_active')
-    .on('observations')
-    .columns(['conversation_id', 'superseded_at'])
-    .execute()
+    .createIndex("idx_obs_active")
+    .on("observations")
+    .columns(["conversation_id", "superseded_at"])
+    .execute();
 
   // Add observation tracking columns to conversations
   await db.schema
-    .alterTable('conversations')
-    .addColumn('observed_up_to_message_id', 'text')
-    .execute()
+    .alterTable("conversations")
+    .addColumn("observed_up_to_message_id", "text")
+    .execute();
 
   await db.schema
-    .alterTable('conversations')
-    .addColumn('observation_token_count', 'integer', (col) => col.defaultTo(0))
-    .execute()
+    .alterTable("conversations")
+    .addColumn("observation_token_count", "integer", (col) => col.defaultTo(0))
+    .execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema.dropTable('observations').execute()
+  await db.schema.dropTable("observations").execute();
 
   // SQLite doesn't support DROP COLUMN before 3.35.0, but Kysely handles it
-  await db.schema
-    .alterTable('conversations')
-    .dropColumn('observed_up_to_message_id')
-    .execute()
+  await db.schema.alterTable("conversations").dropColumn("observed_up_to_message_id").execute();
 
-  await db.schema
-    .alterTable('conversations')
-    .dropColumn('observation_token_count')
-    .execute()
+  await db.schema.alterTable("conversations").dropColumn("observation_token_count").execute();
 }
diff --git a/apps/construct/src/db/migrations/007-observation-promoted-at.ts b/apps/construct/src/db/migrations/007-observation-promoted-at.ts
index e1f5eb1..2086c5a 100644
--- a/apps/construct/src/db/migrations/007-observation-promoted-at.ts
+++ b/apps/construct/src/db/migrations/007-observation-promoted-at.ts
@@ -1,15 +1,9 @@
-import type { Kysely } from 'kysely'
+import type { Kysely } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
-  await db.schema
-    .alterTable('observations')
-    .addColumn('promoted_at', 'text')
-    .execute()
+  await db.schema.alterTable("observations").addColumn("promoted_at", "text").execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema
-    .alterTable('observations')
-    .dropColumn('promoted_at')
-    .execute()
+  await db.schema.alterTable("observations").dropColumn("promoted_at").execute();
 }
diff --git a/apps/construct/src/db/migrations/008-schedule-prompts.ts b/apps/construct/src/db/migrations/008-schedule-prompts.ts
index 8fbf4b2..7ca4685 100644
--- a/apps/construct/src/db/migrations/008-schedule-prompts.ts
+++ b/apps/construct/src/db/migrations/008-schedule-prompts.ts
@@ -1,15 +1,9 @@
-import type { Kysely } from 'kysely'
+import type { Kysely } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
-  await db.schema
-    .alterTable('schedules')
-    .addColumn('prompt', 'text')
-    .execute()
+  await db.schema.alterTable("schedules").addColumn("prompt", "text").execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema
-    .alterTable('schedules')
-    .dropColumn('prompt')
-    .execute()
+  await db.schema.alterTable("schedules").dropColumn("prompt").execute();
 }
diff --git a/apps/construct/src/db/migrations/009-pending-asks.ts b/apps/construct/src/db/migrations/009-pending-asks.ts
index cc62740..b31e75a 100644
--- a/apps/construct/src/db/migrations/009-pending-asks.ts
+++ b/apps/construct/src/db/migrations/009-pending-asks.ts
@@ -1,28 +1,26 @@
-import { type Kysely, sql } from 'kysely'
+import { type Kysely, sql } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
   await db.schema
-    .createTable('pending_asks')
-    .addColumn('id', 'text', (col) => col.primaryKey())
-    .addColumn('conversation_id', 'text', (col) => col.notNull())
-    .addColumn('chat_id', 'text', (col) => col.notNull())
-    .addColumn('question', 'text', (col) => col.notNull())
-    .addColumn('options', 'text') // JSON string[] | null
-    .addColumn('telegram_message_id', 'integer')
-    .addColumn('created_at', 'text', (col) =>
-      col.notNull().defaultTo(sql`(datetime('now'))`),
-    )
-    .addColumn('resolved_at', 'text')
-    .addColumn('response', 'text')
-    .execute()
+    .createTable("pending_asks")
+    .addColumn("id", "text", (col) => col.primaryKey())
+    .addColumn("conversation_id", "text", (col) => col.notNull())
+    .addColumn("chat_id", "text", (col) => col.notNull())
+    .addColumn("question", "text", (col) => col.notNull())
+    .addColumn("options", "text") // JSON string[] | null
+    .addColumn("telegram_message_id", "integer")
+    .addColumn("created_at", "text", (col) => col.notNull().defaultTo(sql`(datetime('now'))`))
+    .addColumn("resolved_at", "text")
+    .addColumn("response", "text")
+    .execute();
 
   await db.schema
-    .createIndex('idx_pending_asks_chat_resolved')
-    .on('pending_asks')
-    .columns(['chat_id', 'resolved_at'])
-    .execute()
+    .createIndex("idx_pending_asks_chat_resolved")
+    .on("pending_asks")
+    .columns(["chat_id", "resolved_at"])
+    .execute();
 }
 
 export async function down(db: Kysely<any>): Promise<void> {
-  await db.schema.dropTable('pending_asks').execute()
+  await db.schema.dropTable("pending_asks").execute();
 }
diff --git a/apps/construct/src/db/migrations/010-observation-expires-at.ts b/apps/construct/src/db/migrations/010-observation-expires-at.ts
index a9506d6..72f18cc 100644
--- a/apps/construct/src/db/migrations/010-observation-expires-at.ts
+++ b/apps/construct/src/db/migrations/010-observation-expires-at.ts
@@ -1,7 +1,7 @@
-import { type Kysely, sql } from 'kysely'
+import { type Kysely, sql } from "kysely";
 
 export async function up(db: Kysely<any>): Promise<void> {
-  await sql`ALTER TABLE observations ADD COLUMN expires_at TEXT`.execute(db)
+  await sql`ALTER TABLE observations ADD COLUMN expires_at TEXT`.execute(db);
 }
 
 export async function down(_db: Kysely<any>): Promise<void> {
diff --git a/apps/construct/src/db/queries.ts b/apps/construct/src/db/queries.ts
index 859fdb1..ed00d35 100644
--- a/apps/construct/src/db/queries.ts
+++ b/apps/construct/src/db/queries.ts
@@ -1,15 +1,20 @@
-import type { Kysely } from 'kysely'
-import { sql } from 'kysely'
-import { nanoid } from 'nanoid'
-import type {
-  Database,
-  NewSchedule,
-} from './schema.js'
+import type { Kysely } from "kysely";
+import { sql } from "kysely";
+import { nanoid } from "nanoid";
+import type { Database, NewSchedule } from "./schema.js";
 
-type DB = Kysely<Database>
+type DB = Kysely<Database>;
 
 // Re-export cairn query functions that construct code needs
-export { storeMemory, recallMemories, getRecentMemories, forgetMemory, searchMemoriesForForget, trackUsage, updateMemoryEmbedding } from '@repo/cairn'
+export {
+  storeMemory,
+  recallMemories,
+  getRecentMemories,
+  forgetMemory,
+  searchMemoriesForForget,
+  trackUsage,
+  updateMemoryEmbedding,
+} from "@repo/cairn";
 
 // --- Conversations ---
 
@@ -20,53 +25,52 @@ export async function getOrCreateConversation(
 ): Promise<string> {
   if (externalId) {
     const existing = await db
-      .selectFrom('conversations')
-      .select('id')
-      .where('source', '=', source)
-      .where('external_id', '=', externalId)
-      .executeTakeFirst()
+      .selectFrom("conversations")
+      .select("id")
+      .where("source", "=", source)
+      .where("external_id", "=", externalId)
+      .executeTakeFirst();
 
     if (existing) {
       await db
-        .updateTable('conversations')
+        .updateTable("conversations")
         .set({ updated_at: sql`datetime('now')` })
-        .where('id', '=', existing.id)
-        .execute()
-      return existing.id
+        .where("id", "=", existing.id)
+        .execute();
+      return existing.id;
     }
   }
 
-  const id = nanoid()
-  await db
-    .insertInto('conversations')
-    .values({ id, source, external_id: externalId })
-    .execute()
+  const id = nanoid();
+  await db.insertInto("conversations").values({ id, source, external_id: externalId }).execute();
 
-  return id
+  return id;
 }
 
-export async function getRecentMessages(
-  db: DB,
-  conversationId: string,
-  limit = 20,
-) {
+export async function getRecentMessages(db: DB, conversationId: string, limit = 20) {
   return db
-    .selectFrom('messages')
+    .selectFrom("messages")
     .selectAll()
-    .where('conversation_id', '=', conversationId)
-    .orderBy('created_at', 'desc')
+    .where("conversation_id", "=", conversationId)
+    .orderBy("created_at", "desc")
     .limit(limit)
     .execute()
-    .then((msgs) => msgs.reverse())
+    .then((msgs) => msgs.reverse());
 }
 
 export async function saveMessage(
   db: DB,
-  message: { conversation_id: string; role: string; content: string; tool_calls?: string | null; telegram_message_id?: number | null },
+  message: {
+    conversation_id: string;
+    role: string;
+    content: string;
+    tool_calls?: string | null;
+    telegram_message_id?: number | null;
+  },
 ) {
-  const id = nanoid()
+  const id = nanoid();
   await db
-    .insertInto('messages')
+    .insertInto("messages")
     .values({
       id,
       conversation_id: message.conversation_id,
@@ -75,20 +79,16 @@ export async function saveMessage(
       tool_calls: message.tool_calls ?? null,
       telegram_message_id: message.telegram_message_id ?? null,
     })
-    .execute()
-  return id
+    .execute();
+  return id;
 }
 
-export async function updateTelegramMessageId(
-  db: DB,
-  internalId: string,
-  telegramMsgId: number,
-) {
+export async function updateTelegramMessageId(db: DB, internalId: string, telegramMsgId: number) {
   await db
-    .updateTable('messages')
+    .updateTable("messages")
     .set({ telegram_message_id: telegramMsgId })
-    .where('id', '=', internalId)
-    .execute()
+    .where("id", "=", internalId)
+    .execute();
 }
 
 export async function getMessageByTelegramId(
@@ -97,22 +97,19 @@ export async function getMessageByTelegramId(
   telegramMsgId: number,
 ) {
   return db
-    .selectFrom('messages')
+    .selectFrom("messages")
     .selectAll()
-    .where('conversation_id', '=', conversationId)
-    .where('telegram_message_id', '=', telegramMsgId)
-    .executeTakeFirst()
+    .where("conversation_id", "=", conversationId)
+    .where("telegram_message_id", "=", telegramMsgId)
+    .executeTakeFirst();
 }
 
 // --- Schedules ---
 
-export async function createSchedule(
-  db: DB,
-  schedule: Omit<NewSchedule, 'id' | 'last_run_at'>,
-) {
-  const id = nanoid()
+export async function createSchedule(db: DB, schedule: Omit<NewSchedule, "id" | "last_run_at">) {
+  const id = nanoid();
   await db
-    .insertInto('schedules')
+    .insertInto("schedules")
     .values({
       id,
       description: schedule.description,
@@ -123,89 +120,85 @@ export async function createSchedule(
       chat_id: schedule.chat_id,
       last_run_at: null,
     })
-    .execute()
+    .execute();
 
-  return db
-    .selectFrom('schedules')
-    .selectAll()
-    .where('id', '=', id)
-    .executeTakeFirstOrThrow()
+  return db.selectFrom("schedules").selectAll().where("id", "=", id).executeTakeFirstOrThrow();
 }
 
 export async function listSchedules(db: DB, activeOnly = true) {
-  let qb = db.selectFrom('schedules').selectAll()
+  let qb = db.selectFrom("schedules").selectAll();
   if (activeOnly) {
-    qb = qb.where('active', '=', 1)
+    qb = qb.where("active", "=", 1);
   }
-  return qb.orderBy('created_at', 'desc').execute()
+  return qb.orderBy("created_at", "desc").execute();
 }
 
 export async function cancelSchedule(db: DB, id: string) {
   const result = await db
-    .updateTable('schedules')
+    .updateTable("schedules")
     .set({ active: 0 })
-    .where('id', '=', id)
-    .where('active', '=', 1)
-    .executeTakeFirst()
+    .where("id", "=", id)
+    .where("active", "=", 1)
+    .executeTakeFirst();
 
-  return (result.numUpdatedRows ?? 0n) > 0n
+  return (result.numUpdatedRows ?? 0n) > 0n;
 }
 
 export async function markScheduleRun(db: DB, id: string) {
   await db
-    .updateTable('schedules')
+    .updateTable("schedules")
     .set({ last_run_at: sql`datetime('now')` })
-    .where('id', '=', id)
-    .execute()
+    .where("id", "=", id)
+    .execute();
 }
 
 // --- AI Usage ---
 
 export interface UsageStats {
-  total_cost: number
-  total_input_tokens: number
-  total_output_tokens: number
-  message_count: number
-  daily: { date: string; cost: number; messages: number }[]
+  total_cost: number;
+  total_input_tokens: number;
+  total_output_tokens: number;
+  message_count: number;
+  daily: { date: string; cost: number; messages: number }[];
 }
 
 export async function getUsageStats(
   db: DB,
   opts?: { days?: number; source?: string },
 ): Promise<UsageStats> {
-  const days = opts?.days ?? 30
-  const cutoff = sql`datetime('now', ${`-${days} days`})`
+  const days = opts?.days ?? 30;
+  const cutoff = sql`datetime('now', ${`-${days} days`})`;
 
   let totalsQuery = db
-    .selectFrom('ai_usage')
+    .selectFrom("ai_usage")
     .select([
-      sql`coalesce(sum(cost_usd), 0)`.as('total_cost'),
-      sql`coalesce(sum(input_tokens), 0)`.as('total_input_tokens'),
-      sql`coalesce(sum(output_tokens), 0)`.as('total_output_tokens'),
-      sql`count(*)`.as('message_count'),
+      sql`coalesce(sum(cost_usd), 0)`.as("total_cost"),
+      sql`coalesce(sum(input_tokens), 0)`.as("total_input_tokens"),
+      sql`coalesce(sum(output_tokens), 0)`.as("total_output_tokens"),
+      sql`count(*)`.as("message_count"),
     ])
-    .where('created_at', '>=', cutoff)
+    .where("created_at", ">=", cutoff);
 
   let dailyQuery = db
-    .selectFrom('ai_usage')
+    .selectFrom("ai_usage")
     .select([
-      sql`date(created_at)`.as('date'),
-      sql`coalesce(sum(cost_usd), 0)`.as('cost'),
-      sql`count(*)`.as('messages'),
+      sql`date(created_at)`.as("date"),
+      sql`coalesce(sum(cost_usd), 0)`.as("cost"),
+      sql`count(*)`.as("messages"),
     ])
-    .where('created_at', '>=', cutoff)
+    .where("created_at", ">=", cutoff)
     .groupBy(sql`date(created_at)`)
-    .orderBy(sql`date(created_at)`, 'desc')
+    .orderBy(sql`date(created_at)`, "desc");
 
   if (opts?.source) {
-    totalsQuery = totalsQuery.where('source', '=', opts.source)
-    dailyQuery = dailyQuery.where('source', '=', opts.source)
+    totalsQuery = totalsQuery.where("source", "=", opts.source);
+    dailyQuery = dailyQuery.where("source", "=", opts.source);
   }
 
   const [totals, daily] = await Promise.all([
     totalsQuery.executeTakeFirstOrThrow(),
     dailyQuery.execute(),
-  ])
+  ]);
 
   return {
     total_cost: Number(totals.total_cost),
@@ -217,7 +210,7 @@ export async function getUsageStats(
       cost: Number(d.cost),
       messages: Number(d.messages),
     })),
-  }
+  };
 }
 
 // --- Pending Asks ---
@@ -228,14 +221,14 @@ export async function createPendingAsk(
 ) {
   // Auto-resolve any existing pending ask for this chat
   await db
-    .updateTable('pending_asks')
-    .set({ resolved_at: sql`datetime('now')`, response: '[superseded]' })
-    .where('chat_id', '=', ask.chatId)
-    .where('resolved_at', 'is', null)
-    .execute()
+    .updateTable("pending_asks")
+    .set({ resolved_at: sql`datetime('now')`, response: "[superseded]" })
+    .where("chat_id", "=", ask.chatId)
+    .where("resolved_at", "is", null)
+    .execute();
 
   await db
-    .insertInto('pending_asks')
+    .insertInto("pending_asks")
     .values({
       id: ask.id,
       conversation_id: ask.conversationId,
@@ -243,41 +236,37 @@ export async function createPendingAsk(
       question: ask.question,
       options: ask.options ? JSON.stringify(ask.options) : null,
     })
-    .execute()
+    .execute();
 }
 
 export async function getPendingAsk(db: DB, chatId: string) {
   return db
-    .selectFrom('pending_asks')
+    .selectFrom("pending_asks")
     .selectAll()
-    .where('chat_id', '=', chatId)
-    .where('resolved_at', 'is', null)
-    .where('created_at', '>=', sql`datetime('now', '-10 minutes')`)
-    .executeTakeFirst()
+    .where("chat_id", "=", chatId)
+    .where("resolved_at", "is", null)
+    .where("created_at", ">=", sql`datetime('now', '-10 minutes')`)
+    .executeTakeFirst();
 }
 
 export async function getPendingAskById(db: DB, askId: string) {
-  return db
-    .selectFrom('pending_asks')
-    .selectAll()
-    .where('id', '=', askId)
-    .executeTakeFirst()
+  return db.selectFrom("pending_asks").selectAll().where("id", "=", askId).executeTakeFirst();
 }
 
 export async function resolvePendingAsk(db: DB, askId: string, response: string) {
   await db
-    .updateTable('pending_asks')
+    .updateTable("pending_asks")
     .set({ resolved_at: sql`datetime('now')`, response })
-    .where('id', '=', askId)
-    .execute()
+    .where("id", "=", askId)
+    .execute();
 }
 
 export async function setPendingAskTelegramId(db: DB, askId: string, msgId: number) {
   await db
-    .updateTable('pending_asks')
+    .updateTable("pending_asks")
     .set({ telegram_message_id: msgId })
-    .where('id', '=', askId)
-    .execute()
+    .where("id", "=", askId)
+    .execute();
 }
 
 /**
@@ -286,34 +275,32 @@ export async function setPendingAskTelegramId(db: DB, askId: string, msgId: numb
  */
 export async function getLastResolvedAsk(db: DB, chatId: string) {
   return db
-    .selectFrom('pending_asks')
+    .selectFrom("pending_asks")
     .selectAll()
-    .where('chat_id', '=', chatId)
-    .where('resolved_at', 'is not', null)
-    .where('resolved_at', '>=', sql`datetime('now', '-5 minutes')`)
-    .orderBy('resolved_at', 'desc')
-    .executeTakeFirst()
+    .where("chat_id", "=", chatId)
+    .where("resolved_at", "is not", null)
+    .where("resolved_at", ">=", sql`datetime('now', '-5 minutes')`)
+    .orderBy("resolved_at", "desc")
+    .executeTakeFirst();
 }
 
 // --- Settings ---
 
 export async function getSetting(db: DB, key: string) {
   const row = await db
-    .selectFrom('settings')
-    .select('value')
-    .where('key', '=', key)
-    .executeTakeFirst()
+    .selectFrom("settings")
+    .select("value")
+    .where("key", "=", key)
+    .executeTakeFirst();
 
-  return row?.value ?? null
+  return row?.value ?? null;
 }
 
 export async function setSetting(db: DB, key: string, value: string) {
-  const now = new Date().toISOString()
+  const now = new Date().toISOString();
   await db
-    .insertInto('settings')
+    .insertInto("settings")
     .values({ key, value, updated_at: now })
-    .onConflict((oc) =>
-      oc.column('key').doUpdateSet({ value, updated_at: now }),
-    )
-    .execute()
+    .onConflict((oc) => oc.column("key").doUpdateSet({ value, updated_at: now }))
+    .execute();
 }
diff --git a/apps/construct/src/db/schema.ts b/apps/construct/src/db/schema.ts
index 9c792e7..87c4abc 100644
--- a/apps/construct/src/db/schema.ts
+++ b/apps/construct/src/db/schema.ts
@@ -1,66 +1,66 @@
-import type { Generated, Insertable, Selectable, Updateable } from 'kysely'
-import type { CairnDatabase, MessageTable as CairnMessageTable } from '@repo/cairn'
+import type { Generated, Insertable, Selectable, Updateable } from "kysely";
+import type { CairnDatabase, MessageTable as CairnMessageTable } from "@repo/cairn";
 
 interface ConstructMessageTable extends CairnMessageTable {
-  telegram_message_id: number | null
+  telegram_message_id: number | null;
 }
 
 // Index signature needed so Kysely<Database> is assignable to cairn's
 // Kysely<CairnDatabase> (Kysely is invariant).
 export interface Database extends CairnDatabase {
-  messages: ConstructMessageTable
-  schedules: ScheduleTable
-  settings: SettingTable
-  secrets: SecretTable
-  pending_asks: PendingAskTable
-  [key: string]: any  // eslint-disable-line @typescript-eslint/no-explicit-any
+  messages: ConstructMessageTable;
+  schedules: ScheduleTable;
+  settings: SettingTable;
+  secrets: SecretTable;
+  pending_asks: PendingAskTable;
+  [key: string]: any; // eslint-disable-line @typescript-eslint/no-explicit-any
 }
 
 export interface ScheduleTable {
-  id: string
-  description: string
-  cron_expression: string | null
-  run_at: string | null
-  message: string
-  prompt: string | null
-  chat_id: string
-  active: Generated<number>
-  last_run_at: string | null
-  created_at: Generated<string>
+  id: string;
+  description: string;
+  cron_expression: string | null;
+  run_at: string | null;
+  message: string;
+  prompt: string | null;
+  chat_id: string;
+  active: Generated<number>;
+  last_run_at: string | null;
+  created_at: Generated<string>;
 }
 
-export type Schedule = Selectable<ScheduleTable>
-export type NewSchedule = Insertable<ScheduleTable>
-export type ScheduleUpdate = Updateable<ScheduleTable>
+export type Schedule = Selectable<ScheduleTable>;
+export type NewSchedule = Insertable<ScheduleTable>;
+export type ScheduleUpdate = Updateable<ScheduleTable>;
 
 export interface SettingTable {
-  key: string
-  value: string
-  updated_at: Generated<string>
+  key: string;
+  value: string;
+  updated_at: Generated<string>;
 }
 
-export type Setting = Selectable<SettingTable>
+export type Setting = Selectable<SettingTable>;
 
 export interface SecretTable {
-  key: string
-  value: string
-  source: Generated<string>
-  created_at: Generated<string>
-  updated_at: Generated<string>
+  key: string;
+  value: string;
+  source: Generated<string>;
+  created_at: Generated<string>;
+  updated_at: Generated<string>;
 }
 
-export type Secret = Selectable<SecretTable>
+export type Secret = Selectable<SecretTable>;
 
 export interface PendingAskTable {
-  id: string
-  conversation_id: string
-  chat_id: string
-  question: string
-  options: string | null // JSON string[]
-  telegram_message_id: number | null
-  created_at: Generated<string>
-  resolved_at: string | null
-  response: string | null
+  id: string;
+  conversation_id: string;
+  chat_id: string;
+  question: string;
+  options: string | null; // JSON string[]
+  telegram_message_id: number | null;
+  created_at: Generated<string>;
+  resolved_at: string | null;
+  response: string | null;
 }
 
-export type PendingAsk = Selectable<PendingAskTable>
+export type PendingAsk = Selectable<PendingAskTable>;
diff --git a/apps/construct/src/env.ts b/apps/construct/src/env.ts
index bfc3ec3..527dcfe 100644
--- a/apps/construct/src/env.ts
+++ b/apps/construct/src/env.ts
@@ -1,10 +1,11 @@
-import { z } from 'zod'
-import { resolve, join } from 'node:path'
+import { z } from "zod";
+import { csvFromEnv } from "@repo/env";
+import { resolve, join } from "node:path";
 
 function defaultExtensionsDir(): string {
-  if (process.env.NODE_ENV === 'development') return './data'
-  const xdg = process.env.XDG_DATA_HOME || join(process.env.HOME || '~', '.local', 'share')
-  return join(xdg, 'construct')
+  if (process.env.NODE_ENV === "development") return "./data";
+  const xdg = process.env.XDG_DATA_HOME || join(process.env.HOME || "~", ".local", "share");
+  return join(xdg, "construct");
 }
 
 const envSchema = z.object({
@@ -13,27 +14,24 @@ const envSchema = z.object({
   TELEGRAM_BOT_TOKEN: z.string(),
 
   // Optional
-  NODE_ENV: z.string().default('production'),
+  NODE_ENV: z.string().default("production"),
   TAVILY_API_KEY: z.string().optional(),
-  OPENROUTER_MODEL: z.string().default('google/gemini-3-flash-preview'),
-  DATABASE_URL: z.string().default('./data/construct.db'),
-  ALLOWED_TELEGRAM_IDS: z
-    .string()
-    .default('')
-    .transform((s) => s.split(',').filter(Boolean)),
-  TIMEZONE: z.string().default('UTC'),
-  LOG_LEVEL: z.string().default('info'),
-  LOG_FILE: z.string().default('./data/construct.log'),
+  OPENROUTER_MODEL: z.string().default("google/gemini-3-flash-preview"),
+  DATABASE_URL: z.string().default("./data/construct.db"),
+  ALLOWED_TELEGRAM_IDS: csvFromEnv(""),
+  TIMEZONE: z.string().default("UTC"),
+  LOG_LEVEL: z.string().default("info"),
+  LOG_FILE: z.string().default("./data/construct.log"),
   PROJECT_ROOT: z
     .string()
-    .default('.')
+    .default(".")
     .transform((p) => resolve(p)),
   EXTENSIONS_DIR: z
     .string()
     .default(defaultExtensionsDir())
     .transform((p) => resolve(p)),
-  EMBEDDING_MODEL: z.string().default('qwen/qwen3-embedding-4b'),
+  EMBEDDING_MODEL: z.string().default("qwen/qwen3-embedding-4b"),
   MEMORY_WORKER_MODEL: z.string().optional(),
-})
+});
 
-export const env = envSchema.parse(process.env)
+export const env = envSchema.parse(process.env);
diff --git a/apps/construct/src/errors.ts b/apps/construct/src/errors.ts
new file mode 100644
index 0000000..27f2ce0
--- /dev/null
+++ b/apps/construct/src/errors.ts
@@ -0,0 +1,19 @@
+/** Error during tool execution. */
+export class ToolError extends Error {
+  override name = "ToolError" as const;
+}
+
+/** Error loading or running extensions (skills, dynamic tools). */
+export class ExtensionError extends Error {
+  override name = "ExtensionError" as const;
+}
+
+/** Error in agent message processing pipeline. */
+export class AgentError extends Error {
+  override name = "AgentError" as const;
+}
+
+/** Error due to missing or invalid configuration. */
+export class ConfigError extends Error {
+  override name = "ConfigError" as const;
+}
diff --git a/apps/construct/src/extensions/__tests__/dynamic-tools.test.ts b/apps/construct/src/extensions/__tests__/dynamic-tools.test.ts
index e71c5bb..e95a9de 100644
--- a/apps/construct/src/extensions/__tests__/dynamic-tools.test.ts
+++ b/apps/construct/src/extensions/__tests__/dynamic-tools.test.ts
@@ -1,38 +1,38 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import { mkdtemp, writeFile, mkdir, rm } from 'node:fs/promises'
-import { join } from 'node:path'
-import { tmpdir } from 'node:os'
-import { loadDynamicTools } from '../loader.js'
-import type { DynamicToolContext } from '../types.js'
-
-describe('loadDynamicTools', () => {
-  let tmpDir: string
-  const toolCtx: DynamicToolContext = { secrets: new Map() }
-  const availableSecrets = new Set<string>()
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import { mkdtemp, writeFile, mkdir, rm } from "node:fs/promises";
+import { join } from "node:path";
+import { tmpdir } from "node:os";
+import { loadDynamicTools } from "../loader.js";
+import type { DynamicToolContext } from "../types.js";
+
+describe("loadDynamicTools", () => {
+  let tmpDir: string;
+  const toolCtx: DynamicToolContext = { secrets: new Map() };
+  const availableSecrets = new Set();
 
   beforeEach(async () => {
-    tmpDir = await mkdtemp(join(tmpdir(), 'ext-dyn-'))
-    await mkdir(join(tmpDir, 'tools'), { recursive: true })
-  })
+    tmpDir = await mkdtemp(join(tmpdir(), "ext-dyn-"));
+    await mkdir(join(tmpDir, "tools"), { recursive: true });
+  });
 
   afterEach(async () => {
-    await rm(tmpDir, { recursive: true })
-  })
+    await rm(tmpDir, { recursive: true });
+  });
 
-  it('returns empty when tools/ has no files', async () => {
-    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets)
-    expect(packs).toEqual([])
-  })
+  it("returns empty when tools/ has no files", async () => {
+    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets);
+    expect(packs).toEqual([]);
+  });
 
-  it('returns empty when tools/ does not exist', async () => {
-    await rm(join(tmpDir, 'tools'), { recursive: true })
-    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets)
-    expect(packs).toEqual([])
-  })
+  it("returns empty when tools/ does not exist", async () => {
+    await rm(join(tmpDir, "tools"), { recursive: true });
+    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets);
+    expect(packs).toEqual([]);
+  });
 
-  it('loads a standalone tool file as a single-tool pack', async () => {
+  it("loads a standalone tool file as a single-tool pack", async () => {
     await writeFile(
-      join(tmpDir, 'tools', 'echo.ts'),
+      join(tmpDir, "tools", "echo.ts"),
       `import { Type } from '@sinclair/typebox'
 
 export default function create(ctx) {
@@ -47,23 +47,23 @@ export default function create(ctx) {
     },
   }
 }`,
-    )
+    );
 
-    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets)
-    expect(packs).toHaveLength(1)
-    expect(packs[0].name).toBe('ext:echo')
-    expect(packs[0].factories).toHaveLength(1)
+    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets);
+    expect(packs).toHaveLength(1);
+    expect(packs[0].name).toBe("ext:echo");
+    expect(packs[0].factories).toHaveLength(1);
 
     // Instantiate and test the tool
-    const tool = packs[0].factories[0]({} as any)
-    expect(tool).not.toBeNull()
-    expect(tool!.name).toBe('echo')
-  })
+    const tool = packs[0].factories[0]({} as any);
+    expect(tool).not.toBeNull();
+    expect(tool!.name).toBe("echo");
+  });
 
-  it('groups directory tools into a pack', async () => {
-    await mkdir(join(tmpDir, 'tools', 'utils'), { recursive: true })
+  it("groups directory tools into a pack", async () => {
+    await mkdir(join(tmpDir, "tools", "utils"), { recursive: true });
     await writeFile(
-      join(tmpDir, 'tools', 'utils', 'upper.ts'),
+      join(tmpDir, "tools", "utils", "upper.ts"),
       `import { Type } from '@sinclair/typebox'
 
 export default {
@@ -72,18 +72,18 @@ export default {
   parameters: Type.Object({ text: Type.String() }),
   execute: async (_id, args) => ({ output: args.text.toUpperCase() }),
 }`,
-    )
+    );
 
-    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets)
-    expect(packs).toHaveLength(1)
-    expect(packs[0].name).toBe('ext:utils')
-  })
+    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets);
+    expect(packs).toHaveLength(1);
+    expect(packs[0].name).toBe("ext:utils");
+  });
 
-  it('uses pack.md for description when present', async () => {
-    await mkdir(join(tmpDir, 'tools', 'mypack'), { recursive: true })
-    await writeFile(join(tmpDir, 'tools', 'mypack', 'pack.md'), 'Custom pack description')
+  it("uses pack.md for description when present", async () => {
+    await mkdir(join(tmpDir, "tools", "mypack"), { recursive: true });
+    await writeFile(join(tmpDir, "tools", "mypack", "pack.md"), "Custom pack description");
     await writeFile(
-      join(tmpDir, 'tools', 'mypack', 'tool.ts'),
+      join(tmpDir, "tools", "mypack", "tool.ts"),
       `import { Type } from '@sinclair/typebox'
 
 export default {
@@ -92,15 +92,15 @@ export default {
   parameters: Type.Object({}),
   execute: async () => ({ output: 'ok' }),
 }`,
-    )
+    );
 
-    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets)
-    expect(packs[0].description).toBe('Custom pack description')
-  })
+    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets);
+    expect(packs[0].description).toBe("Custom pack description");
+  });
 
-  it('skips tools with unmet secret requirements', async () => {
+  it("skips tools with unmet secret requirements", async () => {
     await writeFile(
-      join(tmpDir, 'tools', 'needs-secret.ts'),
+      join(tmpDir, "tools", "needs-secret.ts"),
       `import { Type } from '@sinclair/typebox'
 
 export const meta = {
@@ -113,15 +113,15 @@ export default {
   parameters: Type.Object({}),
   execute: async () => ({ output: 'ok' }),
 }`,
-    )
+    );
 
-    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets)
-    expect(packs).toEqual([])
-  })
+    const packs = await loadDynamicTools(tmpDir, toolCtx, availableSecrets);
+    expect(packs).toEqual([]);
+  });
 
-  it('loads tools when secret requirements are met', async () => {
+  it("loads tools when secret requirements are met", async () => {
     await writeFile(
-      join(tmpDir, 'tools', 'has-secret.ts'),
+      join(tmpDir, "tools", "has-secret.ts"),
       `import { Type } from '@sinclair/typebox'
 
 export const meta = {
@@ -136,11 +136,11 @@ export default function create(ctx) {
     execute: async () => ({ output: ctx.secrets.get('MY_KEY') || 'missing' }),
   }
 }`,
-    )
-
-    const secretsMap = new Map([['MY_KEY', 'secret_value']])
-    const secrets = new Set(['MY_KEY'])
-    const packs = await loadDynamicTools(tmpDir, { secrets: secretsMap }, secrets)
-    expect(packs).toHaveLength(1)
-  })
-})
+    );
+
+    const secretsMap = new Map([["MY_KEY", "secret_value"]]);
+    const secrets = new Set(["MY_KEY"]);
+    const packs = await loadDynamicTools(tmpDir, { secrets: secretsMap }, secrets);
+    expect(packs).toHaveLength(1);
+  });
+});
diff --git a/apps/construct/src/extensions/__tests__/loader.test.ts b/apps/construct/src/extensions/__tests__/loader.test.ts
index 60c8619..efe0308 100644
--- a/apps/construct/src/extensions/__tests__/loader.test.ts
+++ b/apps/construct/src/extensions/__tests__/loader.test.ts
@@ -1,11 +1,17 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import { mkdtemp, writeFile, mkdir, rm } from 'node:fs/promises'
-import { join } from 'node:path'
-import { tmpdir } from 'node:os'
-import { parseSkillFile, checkRequirements, loadSoul, loadIdentityFiles, loadSkills } from '../loader.js'
-
-describe('parseSkillFile', () => {
-  it('parses valid skill with frontmatter', () => {
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import { mkdtemp, writeFile, mkdir, rm } from "node:fs/promises";
+import { join } from "node:path";
+import { tmpdir } from "node:os";
+import {
+  parseSkillFile,
+  checkRequirements,
+  loadSoul,
+  loadIdentityFiles,
+  loadSkills,
+} from "../loader.js";
+
+describe("parseSkillFile", () => {
+  it("parses valid skill with frontmatter", () => {
     const content = `---
 name: daily-standup
 description: Guide the user through a daily standup check-in
@@ -17,236 +23,230 @@ requires:
 When the user asks for a standup, guide them through:
 1. What did you do yesterday?
 2. What are you doing today?
-3. Any blockers?`
-
-    const skill = parseSkillFile(content, '/test/standup.md')
-    expect(skill).not.toBeNull()
-    expect(skill!.name).toBe('daily-standup')
-    expect(skill!.description).toBe('Guide the user through a daily standup check-in')
-    expect(skill!.requires).toEqual({ env: [], secrets: [] })
-    expect(skill!.body).toContain('What did you do yesterday?')
-    expect(skill!.filePath).toBe('/test/standup.md')
-  })
-
-  it('returns null for missing frontmatter', () => {
-    const content = 'Just some text without frontmatter'
-    expect(parseSkillFile(content, '/test/bad.md')).toBeNull()
-  })
-
-  it('returns null for missing name', () => {
+3. Any blockers?`;
+
+    const skill = parseSkillFile(content, "/test/standup.md");
+    expect(skill).not.toBeNull();
+    expect(skill!.name).toBe("daily-standup");
+    expect(skill!.description).toBe("Guide the user through a daily standup check-in");
+    expect(skill!.requires).toEqual({ env: [], secrets: [] });
+    expect(skill!.body).toContain("What did you do yesterday?");
+    expect(skill!.filePath).toBe("/test/standup.md");
+  });
+
+  it("returns null for missing frontmatter", () => {
+    const content = "Just some text without frontmatter";
+    expect(parseSkillFile(content, "/test/bad.md")).toBeNull();
+  });
+
+  it("returns null for missing name", () => {
     const content = `---
 description: Has a description but no name
 ---
 
-Some body text.`
-    expect(parseSkillFile(content, '/test/noname.md')).toBeNull()
-  })
+Some body text.`;
+    expect(parseSkillFile(content, "/test/noname.md")).toBeNull();
+  });
 
-  it('returns null for missing description', () => {
+  it("returns null for missing description", () => {
     const content = `---
 name: nodesc
 ---
 
-Some body text.`
-    expect(parseSkillFile(content, '/test/nodesc.md')).toBeNull()
-  })
+Some body text.`;
+    expect(parseSkillFile(content, "/test/nodesc.md")).toBeNull();
+  });
 
-  it('defaults requires to empty object', () => {
+  it("defaults requires to empty object", () => {
     const content = `---
 name: minimal
 description: A minimal skill
 ---
 
-Body.`
-    const skill = parseSkillFile(content, '/test/minimal.md')
-    expect(skill!.requires).toEqual({})
-  })
-})
-
-describe('checkRequirements', () => {
-  it('returns empty array when all requirements met', () => {
-    const secrets = new Set(['API_KEY'])
-    process.env.TEST_ENV_VAR = '1'
-    const unmet = checkRequirements(
-      { env: ['TEST_ENV_VAR'], secrets: ['API_KEY'] },
-      secrets,
-    )
-    expect(unmet).toEqual([])
-    delete process.env.TEST_ENV_VAR
-  })
-
-  it('reports unmet env vars', () => {
-    const unmet = checkRequirements(
-      { env: ['NONEXISTENT_VAR'] },
-      new Set(),
-    )
-    expect(unmet).toEqual(['env: NONEXISTENT_VAR'])
-  })
-
-  it('reports unmet secrets', () => {
-    const unmet = checkRequirements(
-      { secrets: ['MISSING_SECRET'] },
-      new Set(['OTHER_SECRET']),
-    )
-    expect(unmet).toEqual(['secret: MISSING_SECRET'])
-  })
-
-  it('returns empty for no requirements', () => {
-    const unmet = checkRequirements({}, new Set())
-    expect(unmet).toEqual([])
-  })
-})
-
-describe('loadSoul', () => {
-  let tmpDir: string
+Body.`;
+    const skill = parseSkillFile(content, "/test/minimal.md");
+    expect(skill!.requires).toEqual({});
+  });
+});
+
+describe("checkRequirements", () => {
+  it("returns empty array when all requirements met", () => {
+    const secrets = new Set(["API_KEY"]);
+    process.env.TEST_ENV_VAR = "1";
+    const unmet = checkRequirements({ env: ["TEST_ENV_VAR"], secrets: ["API_KEY"] }, secrets);
+    expect(unmet).toEqual([]);
+    delete process.env.TEST_ENV_VAR;
+  });
+
+  it("reports unmet env vars", () => {
+    const unmet = checkRequirements({ env: ["NONEXISTENT_VAR"] }, new Set());
+    expect(unmet).toEqual(["env: NONEXISTENT_VAR"]);
+  });
+
+  it("reports unmet secrets", () => {
+    const unmet = checkRequirements({ secrets: ["MISSING_SECRET"] }, new Set(["OTHER_SECRET"]));
+    expect(unmet).toEqual(["secret: MISSING_SECRET"]);
+  });
+
+  it("returns empty for no requirements", () => {
+    const unmet = checkRequirements({}, new Set());
+    expect(unmet).toEqual([]);
+  });
+});
+
+describe("loadSoul", () => {
+  let tmpDir: string;
 
   beforeEach(async () => {
-    tmpDir = await mkdtemp(join(tmpdir(), 'ext-test-'))
-  })
+    tmpDir = await mkdtemp(join(tmpdir(), "ext-test-"));
+  });
 
   afterEach(async () => {
-    await rm(tmpDir, { recursive: true })
-  })
-
-  it('loads SOUL.md when present', async () => {
-    await writeFile(join(tmpDir, 'SOUL.md'), 'I am a friendly bot.')
-    const soul = await loadSoul(tmpDir)
-    expect(soul).toBe('I am a friendly bot.')
-  })
-
-  it('returns null when SOUL.md is missing', async () => {
-    const soul = await loadSoul(tmpDir)
-    expect(soul).toBeNull()
-  })
-
-  it('returns null for empty SOUL.md', async () => {
-    await writeFile(join(tmpDir, 'SOUL.md'), '  \n  ')
-    const soul = await loadSoul(tmpDir)
-    expect(soul).toBeNull()
-  })
-})
-
-describe('loadIdentityFiles', () => {
-  let tmpDir: string
+    await rm(tmpDir, { recursive: true });
+  });
+
+  it("loads SOUL.md when present", async () => {
+    await writeFile(join(tmpDir, "SOUL.md"), "I am a friendly bot.");
+    const soul = await loadSoul(tmpDir);
+    expect(soul).toBe("I am a friendly bot.");
+  });
+
+  it("returns null when SOUL.md is missing", async () => {
+    const soul = await loadSoul(tmpDir);
+    expect(soul).toBeNull();
+  });
+
+  it("returns null for empty SOUL.md", async () => {
+    await writeFile(join(tmpDir, "SOUL.md"), "  \n  ");
+    const soul = await loadSoul(tmpDir);
+    expect(soul).toBeNull();
+  });
+});
+
+describe("loadIdentityFiles", () => {
+  let tmpDir: string;
 
   beforeEach(async () => {
-    tmpDir = await mkdtemp(join(tmpdir(), 'ext-test-'))
-  })
+    tmpDir = await mkdtemp(join(tmpdir(), "ext-test-"));
+  });
 
   afterEach(async () => {
-    await rm(tmpDir, { recursive: true })
-  })
-
-  it('loads all three files when present', async () => {
-    await writeFile(join(tmpDir, 'SOUL.md'), 'I am a friendly bot.')
-    await writeFile(join(tmpDir, 'IDENTITY.md'), 'Name: Construct')
-    await writeFile(join(tmpDir, 'USER.md'), 'Name: Reed')
-
-    const result = await loadIdentityFiles(tmpDir)
-    expect(result.soul).toBe('I am a friendly bot.')
-    expect(result.identity).toBe('Name: Construct')
-    expect(result.user).toBe('Name: Reed')
-  })
-
-  it('returns nulls when no files exist', async () => {
-    const result = await loadIdentityFiles(tmpDir)
-    expect(result.soul).toBeNull()
-    expect(result.identity).toBeNull()
-    expect(result.user).toBeNull()
-  })
-
-  it('handles partial files (only SOUL.md)', async () => {
-    await writeFile(join(tmpDir, 'SOUL.md'), 'Soul content')
-
-    const result = await loadIdentityFiles(tmpDir)
-    expect(result.soul).toBe('Soul content')
-    expect(result.identity).toBeNull()
-    expect(result.user).toBeNull()
-  })
-
-  it('handles partial files (only USER.md)', async () => {
-    await writeFile(join(tmpDir, 'USER.md'), 'User content')
-
-    const result = await loadIdentityFiles(tmpDir)
-    expect(result.soul).toBeNull()
-    expect(result.identity).toBeNull()
-    expect(result.user).toBe('User content')
-  })
-
-  it('returns null for empty files', async () => {
-    await writeFile(join(tmpDir, 'SOUL.md'), '  \n  ')
-    await writeFile(join(tmpDir, 'IDENTITY.md'), '')
-    await writeFile(join(tmpDir, 'USER.md'), '   ')
-
-    const result = await loadIdentityFiles(tmpDir)
-    expect(result.soul).toBeNull()
-    expect(result.identity).toBeNull()
-    expect(result.user).toBeNull()
-  })
-})
-
-describe('loadSkills', () => {
-  let tmpDir: string
+    await rm(tmpDir, { recursive: true });
+  });
+
+  it("loads all three files when present", async () => {
+    await writeFile(join(tmpDir, "SOUL.md"), "I am a friendly bot.");
+    await writeFile(join(tmpDir, "IDENTITY.md"), "Name: Construct");
+    await writeFile(join(tmpDir, "USER.md"), "Name: Reed");
+
+    const result = await loadIdentityFiles(tmpDir);
+    expect(result.soul).toBe("I am a friendly bot.");
+    expect(result.identity).toBe("Name: Construct");
+    expect(result.user).toBe("Name: Reed");
+  });
+
+  it("returns nulls when no files exist", async () => {
+    const result = await loadIdentityFiles(tmpDir);
+    expect(result.soul).toBeNull();
+    expect(result.identity).toBeNull();
+    expect(result.user).toBeNull();
+  });
+
+  it("handles partial files (only SOUL.md)", async () => {
+    await writeFile(join(tmpDir, "SOUL.md"), "Soul content");
+
+    const result = await loadIdentityFiles(tmpDir);
+    expect(result.soul).toBe("Soul content");
+    expect(result.identity).toBeNull();
+    expect(result.user).toBeNull();
+  });
+
+  it("handles partial files (only USER.md)", async () => {
+    await writeFile(join(tmpDir, "USER.md"), "User content");
+
+    const result = await loadIdentityFiles(tmpDir);
+    expect(result.soul).toBeNull();
+    expect(result.identity).toBeNull();
+    expect(result.user).toBe("User content");
+  });
+
+  it("returns null for empty files", async () => {
+    await writeFile(join(tmpDir, "SOUL.md"), "  \n  ");
+    await writeFile(join(tmpDir, "IDENTITY.md"), "");
+    await writeFile(join(tmpDir, "USER.md"), "   ");
+
+    const result = await loadIdentityFiles(tmpDir);
+    expect(result.soul).toBeNull();
+    expect(result.identity).toBeNull();
+    expect(result.user).toBeNull();
+  });
+});
+
+describe("loadSkills", () => {
+  let tmpDir: string;
 
   beforeEach(async () => {
-    tmpDir = await mkdtemp(join(tmpdir(), 'ext-test-'))
-    await mkdir(join(tmpDir, 'skills'), { recursive: true })
-  })
+    tmpDir = await mkdtemp(join(tmpdir(), "ext-test-"));
+    await mkdir(join(tmpDir, "skills"), { recursive: true });
+  });
 
   afterEach(async () => {
-    await rm(tmpDir, { recursive: true })
-  })
+    await rm(tmpDir, { recursive: true });
+  });
 
-  it('loads skills from skills/ directory', async () => {
+  it("loads skills from skills/ directory", async () => {
     await writeFile(
-      join(tmpDir, 'skills', 'test.md'),
+      join(tmpDir, "skills", "test.md"),
       `---
 name: test-skill
 description: A test skill
 ---
 
 Do the test thing.`,
-    )
+    );
 
-    const skills = await loadSkills(tmpDir)
-    expect(skills).toHaveLength(1)
-    expect(skills[0].name).toBe('test-skill')
-  })
+    const skills = await loadSkills(tmpDir);
+    expect(skills).toHaveLength(1);
+    expect(skills[0].name).toBe("test-skill");
+  });
 
-  it('loads nested skills from subdirectories', async () => {
-    await mkdir(join(tmpDir, 'skills', 'coding'), { recursive: true })
+  it("loads nested skills from subdirectories", async () => {
+    await mkdir(join(tmpDir, "skills", "coding"), { recursive: true });
     await writeFile(
-      join(tmpDir, 'skills', 'coding', 'review.md'),
+      join(tmpDir, "skills", "coding", "review.md"),
       `---
 name: code-review
 description: Review code changes
 ---
 
 Look at the diff and provide feedback.`,
-    )
+    );
 
-    const skills = await loadSkills(tmpDir)
-    expect(skills).toHaveLength(1)
-    expect(skills[0].name).toBe('code-review')
-  })
+    const skills = await loadSkills(tmpDir);
+    expect(skills).toHaveLength(1);
+    expect(skills[0].name).toBe("code-review");
+  });
 
-  it('returns empty array when skills/ does not exist', async () => {
-    await rm(join(tmpDir, 'skills'), { recursive: true })
-    const skills = await loadSkills(tmpDir)
-    expect(skills).toEqual([])
-  })
+  it("returns empty array when skills/ does not exist", async () => {
+    await rm(join(tmpDir, "skills"), { recursive: true });
+    const skills = await loadSkills(tmpDir);
+    expect(skills).toEqual([]);
+  });
 
-  it('skips invalid skill files', async () => {
-    await writeFile(join(tmpDir, 'skills', 'valid.md'), `---
+  it("skips invalid skill files", async () => {
+    await writeFile(
+      join(tmpDir, "skills", "valid.md"),
+      `---
 name: valid
 description: Valid skill
 ---
 
-Body.`)
-    await writeFile(join(tmpDir, 'skills', 'invalid.md'), 'No frontmatter')
+Body.`,
+    );
+    await writeFile(join(tmpDir, "skills", "invalid.md"), "No frontmatter");
 
-    const skills = await loadSkills(tmpDir)
-    expect(skills).toHaveLength(1)
-    expect(skills[0].name).toBe('valid')
-  })
-})
+    const skills = await loadSkills(tmpDir);
+    expect(skills).toHaveLength(1);
+    expect(skills[0].name).toBe("valid");
+  });
+});
diff --git a/apps/construct/src/extensions/__tests__/secrets.test.ts b/apps/construct/src/extensions/__tests__/secrets.test.ts
index aa914b7..62071fe 100644
--- a/apps/construct/src/extensions/__tests__/secrets.test.ts
+++ b/apps/construct/src/extensions/__tests__/secrets.test.ts
@@ -1,10 +1,10 @@
-import { describe, it, expect, beforeEach, afterEach } from 'vitest'
-import { Kysely } from 'kysely'
-import { createDb } from '@repo/db'
-import type { Database } from '../../db/schema.js'
-import * as migration001 from '../../db/migrations/001-initial.js'
-import * as migration002 from '../../db/migrations/002-fts5-and-embeddings.js'
-import * as migration003 from '../../db/migrations/003-secrets.js'
+import { describe, it, expect, beforeEach, afterEach } from "vitest";
+import { Kysely } from "kysely";
+import { createDb } from "@repo/db";
+import type { Database } from "../../db/schema.js";
+import * as migration001 from "../../db/migrations/001-initial.js";
+import * as migration002 from "../../db/migrations/002-fts5-and-embeddings.js";
+import * as migration003 from "../../db/migrations/003-secrets.js";
 import {
   syncEnvSecrets,
   getSecret,
@@ -12,114 +12,114 @@ import {
   listSecretKeys,
   deleteSecret,
   buildSecretsMap,
-} from '../secrets.js'
+} from "../secrets.js";
 
-describe('secrets', () => {
-  let db: Kysely<Database>
+describe("secrets", () => {
+  let db: Kysely<Database>;
 
   beforeEach(async () => {
-    const result = createDb(':memory:')
-    db = result.db
-    await migration001.up(db as Kysely<any>)
-    await migration002.up(db as Kysely<any>)
-    await migration003.up(db as Kysely<any>)
-  })
+    const result = createDb(":memory:");
+    db = result.db;
+    await migration001.up(db as Kysely<any>);
+    await migration002.up(db as Kysely<any>);
+    await migration003.up(db as Kysely<any>);
+  });
 
   afterEach(async () => {
-    await db.destroy()
-  })
-
-  describe('setSecret / getSecret', () => {
-    it('stores and retrieves a secret', async () => {
-      await setSecret(db, 'MY_KEY', 'my_value')
-      const value = await getSecret(db, 'MY_KEY')
-      expect(value).toBe('my_value')
-    })
-
-    it('returns null for missing secret', async () => {
-      const value = await getSecret(db, 'NONEXISTENT')
-      expect(value).toBeNull()
-    })
-
-    it('upserts on duplicate key', async () => {
-      await setSecret(db, 'MY_KEY', 'v1')
-      await setSecret(db, 'MY_KEY', 'v2')
-      const value = await getSecret(db, 'MY_KEY')
-      expect(value).toBe('v2')
-    })
-  })
-
-  describe('listSecretKeys', () => {
-    it('lists keys and sources', async () => {
-      await setSecret(db, 'KEY_A', 'a', 'agent')
-      await setSecret(db, 'KEY_B', 'b', 'env')
-      const keys = await listSecretKeys(db)
+    await db.destroy();
+  });
+
+  describe("setSecret / getSecret", () => {
+    it("stores and retrieves a secret", async () => {
+      await setSecret(db, "MY_KEY", "my_value");
+      const value = await getSecret(db, "MY_KEY");
+      expect(value).toBe("my_value");
+    });
+
+    it("returns null for missing secret", async () => {
+      const value = await getSecret(db, "NONEXISTENT");
+      expect(value).toBeNull();
+    });
+
+    it("upserts on duplicate key", async () => {
+      await setSecret(db, "MY_KEY", "v1");
+      await setSecret(db, "MY_KEY", "v2");
+      const value = await getSecret(db, "MY_KEY");
+      expect(value).toBe("v2");
+    });
+  });
+
+  describe("listSecretKeys", () => {
+    it("lists keys and sources", async () => {
+      await setSecret(db, "KEY_A", "a", "agent");
+      await setSecret(db, "KEY_B", "b", "env");
+      const keys = await listSecretKeys(db);
       expect(keys).toEqual([
-        { key: 'KEY_A', source: 'agent' },
-        { key: 'KEY_B', source: 'env' },
-      ])
-    })
-
-    it('returns empty for no secrets', async () => {
-      const keys = await listSecretKeys(db)
-      expect(keys).toEqual([])
-    })
-  })
-
-  describe('deleteSecret', () => {
-    it('deletes an existing secret', async () => {
-      await setSecret(db, 'TO_DELETE', 'val')
-      const deleted = await deleteSecret(db, 'TO_DELETE')
-      expect(deleted).toBe(true)
-      const value = await getSecret(db, 'TO_DELETE')
-      expect(value).toBeNull()
-    })
-
-    it('returns false for nonexistent key', async () => {
-      const deleted = await deleteSecret(db, 'NOPE')
-      expect(deleted).toBe(false)
-    })
-  })
-
-  describe('syncEnvSecrets', () => {
-    it('syncs EXT_* env vars', async () => {
-      process.env.EXT_TEST_KEY = 'test_val'
-      process.env.EXT_ANOTHER = 'another_val'
-
-      const count = await syncEnvSecrets(db)
-      expect(count).toBe(2)
-
-      const val = await getSecret(db, 'TEST_KEY')
-      expect(val).toBe('test_val')
-      const val2 = await getSecret(db, 'ANOTHER')
-      expect(val2).toBe('another_val')
+        { key: "KEY_A", source: "agent" },
+        { key: "KEY_B", source: "env" },
+      ]);
+    });
+
+    it("returns empty for no secrets", async () => {
+      const keys = await listSecretKeys(db);
+      expect(keys).toEqual([]);
+    });
+  });
+
+  describe("deleteSecret", () => {
+    it("deletes an existing secret", async () => {
+      await setSecret(db, "TO_DELETE", "val");
+      const deleted = await deleteSecret(db, "TO_DELETE");
+      expect(deleted).toBe(true);
+      const value = await getSecret(db, "TO_DELETE");
+      expect(value).toBeNull();
+    });
+
+    it("returns false for nonexistent key", async () => {
+      const deleted = await deleteSecret(db, "NOPE");
+      expect(deleted).toBe(false);
+    });
+  });
+
+  describe("syncEnvSecrets", () => {
+    it("syncs EXT_* env vars", async () => {
+      process.env.EXT_TEST_KEY = "test_val";
+      process.env.EXT_ANOTHER = "another_val";
+
+      const count = await syncEnvSecrets(db);
+      expect(count).toBe(2);
+
+      const val = await getSecret(db, "TEST_KEY");
+      expect(val).toBe("test_val");
+      const val2 = await getSecret(db, "ANOTHER");
+      expect(val2).toBe("another_val");
 
       // Cleanup
-      delete process.env.EXT_TEST_KEY
-      delete process.env.EXT_ANOTHER
-    })
-
-    it('env source overwrites agent source on sync', async () => {
-      await setSecret(db, 'SHARED', 'agent_value', 'agent')
-
-      process.env.EXT_SHARED = 'env_value'
-      await syncEnvSecrets(db)
-
-      const val = await getSecret(db, 'SHARED')
-      expect(val).toBe('env_value')
-
-      delete process.env.EXT_SHARED
-    })
-  })
-
-  describe('buildSecretsMap', () => {
-    it('builds a Map from all secrets', async () => {
-      await setSecret(db, 'A', 'va')
-      await setSecret(db, 'B', 'vb')
-      const map = await buildSecretsMap(db)
-      expect(map.get('A')).toBe('va')
-      expect(map.get('B')).toBe('vb')
-      expect(map.size).toBe(2)
-    })
-  })
-})
+      delete process.env.EXT_TEST_KEY;
+      delete process.env.EXT_ANOTHER;
+    });
+
+    it("env source overwrites agent source on sync", async () => {
+      await setSecret(db, "SHARED", "agent_value", "agent");
+
+      process.env.EXT_SHARED = "env_value";
+      await syncEnvSecrets(db);
+
+      const val = await getSecret(db, "SHARED");
+      expect(val).toBe("env_value");
+
+      delete process.env.EXT_SHARED;
+    });
+  });
+
+  describe("buildSecretsMap", () => {
+    it("builds a Map from all secrets", async () => {
+      await setSecret(db, "A", "va");
+      await setSecret(db, "B", "vb");
+      const map = await buildSecretsMap(db);
+      expect(map.get("A")).toBe("va");
+      expect(map.get("B")).toBe("vb");
+      expect(map.size).toBe(2);
+    });
+  });
+});
diff --git a/apps/construct/src/extensions/__tests__/skills.test.ts b/apps/construct/src/extensions/__tests__/skills.test.ts
index 3ab4883..44ccb6c 100644
--- a/apps/construct/src/extensions/__tests__/skills.test.ts
+++ b/apps/construct/src/extensions/__tests__/skills.test.ts
@@ -1,52 +1,52 @@
-import { describe, it, expect } from 'vitest'
-import { selectSkills } from '../embeddings.js'
-import type { Skill } from '../types.js'
+import { describe, it, expect } from "vitest";
+import { selectSkills } from "../embeddings.js";
+import type { Skill } from "../types.js";
 
 // Synthetic skills for testing
 const skills: Skill[] = [
   {
-    name: 'standup',
-    description: 'Guide through daily standup',
+    name: "standup",
+    description: "Guide through daily standup",
     requires: {},
-    body: 'Ask about yesterday, today, blockers.',
-    filePath: '/test/standup.md',
+    body: "Ask about yesterday, today, blockers.",
+    filePath: "/test/standup.md",
   },
   {
-    name: 'haiku',
-    description: 'Help write haikus',
+    name: "haiku",
+    description: "Help write haikus",
     requires: {},
-    body: 'Write a 5-7-5 syllable poem.',
-    filePath: '/test/haiku.md',
+    body: "Write a 5-7-5 syllable poem.",
+    filePath: "/test/haiku.md",
   },
   {
-    name: 'workout',
-    description: 'Plan a workout routine',
+    name: "workout",
+    description: "Plan a workout routine",
     requires: {},
-    body: 'Create a balanced exercise plan.',
-    filePath: '/test/workout.md',
+    body: "Create a balanced exercise plan.",
+    filePath: "/test/workout.md",
   },
-]
+];
 
-describe('selectSkills', () => {
-  it('returns empty when no query embedding', () => {
-    const result = selectSkills(undefined, skills)
-    expect(result).toEqual([])
-  })
+describe("selectSkills", () => {
+  it("returns empty when no query embedding", () => {
+    const result = selectSkills(undefined, skills);
+    expect(result).toEqual([]);
+  });
 
-  it('returns empty when no skills', () => {
-    const result = selectSkills([1, 0, 0], [])
-    expect(result).toEqual([])
-  })
+  it("returns empty when no skills", () => {
+    const result = selectSkills([1, 0, 0], []);
+    expect(result).toEqual([]);
+  });
 
-  it('returns empty when skill embeddings are not initialized', () => {
+  it("returns empty when skill embeddings are not initialized", () => {
     // Skills exist but no embeddings computed → all get score 0
-    const result = selectSkills([1, 0, 0], skills, 0.3)
-    expect(result).toEqual([])
-  })
+    const result = selectSkills([1, 0, 0], skills, 0.3);
+    expect(result).toEqual([]);
+  });
 
-  it('respects maxSkills limit', () => {
+  it("respects maxSkills limit", () => {
     // Even with no embeddings (score 0), if threshold is 0, all match
-    const result = selectSkills([1, 0, 0], skills, 0, 2)
-    expect(result.length).toBeLessThanOrEqual(2)
-  })
-})
+    const result = selectSkills([1, 0, 0], skills, 0, 2);
+    expect(result.length).toBeLessThanOrEqual(2);
+  });
+});
diff --git a/apps/construct/src/extensions/embeddings.ts b/apps/construct/src/extensions/embeddings.ts
index 1343929..1e045ec 100644
--- a/apps/construct/src/extensions/embeddings.ts
+++ b/apps/construct/src/extensions/embeddings.ts
@@ -1,13 +1,13 @@
-import { generateEmbedding, cosineSimilarity } from '@repo/cairn'
-import { agentLog } from '../logger.js'
-import type { Skill } from './types.js'
-import type { ToolPack } from '../tools/packs.js'
+import { generateEmbedding, cosineSimilarity, SIMILARITY } from "@repo/cairn";
+import { agentLog } from "../logger.js";
+import type { Skill } from "./types.js";
+import type { ToolPack } from "../tools/packs.js";
 
 /** Skill embedding cache: skill name → embedding vector */
-const skillEmbeddings = new Map<string, number[]>()
+const skillEmbeddings = new Map<string, number[]>();
 
 /** Dynamic pack embedding cache: pack name → embedding vector */
-const dynamicPackEmbeddings = new Map<string, number[]>()
+const dynamicPackEmbeddings = new Map<string, number[]>();
 
 /** Compute embeddings for all skills. Non-fatal on failure. */
 export async function initSkillEmbeddings(
@@ -15,24 +15,24 @@ export async function initSkillEmbeddings(
   skills: Skill[],
   embeddingModel?: string,
 ): Promise<void> {
-  skillEmbeddings.clear()
+  skillEmbeddings.clear();
 
-  if (skills.length === 0) return
+  if (skills.length === 0) return;
 
   const results = await Promise.allSettled(
     skills.map(async (skill) => {
-      const text = `${skill.name}: ${skill.description}`
-      const embedding = await generateEmbedding(apiKey, text, embeddingModel)
-      skillEmbeddings.set(skill.name, embedding)
+      const text = `${skill.name}: ${skill.description}`;
+      const embedding = await generateEmbedding(apiKey, text, embeddingModel);
+      skillEmbeddings.set(skill.name, embedding);
     }),
-  )
+  );
 
-  let failed = 0
+  let failed = 0;
   for (const r of results) {
-    if (r.status === 'rejected') failed++
+    if (r.status === "rejected") failed++;
   }
 
-  agentLog.info`Skill embeddings: ${skillEmbeddings.size}/${skills.length} cached${failed > 0 ? `, ${failed} failed` : ''}`
+  agentLog.info`Skill embeddings: ${skillEmbeddings.size}/${skills.length} cached${failed > 0 ? `, ${failed} failed` : ""}`;
 }
 
 /** Compute embeddings for dynamic tool packs. Non-fatal on failure. */
@@ -41,24 +41,24 @@ export async function initDynamicPackEmbeddings(
   packs: ToolPack[],
   embeddingModel?: string,
 ): Promise<void> {
-  dynamicPackEmbeddings.clear()
+  dynamicPackEmbeddings.clear();
 
-  const toEmbed = packs.filter((p) => !p.alwaysLoad)
-  if (toEmbed.length === 0) return
+  const toEmbed = packs.filter((p) => !p.alwaysLoad);
+  if (toEmbed.length === 0) return;
 
   const results = await Promise.allSettled(
     toEmbed.map(async (pack) => {
-      const embedding = await generateEmbedding(apiKey, pack.description, embeddingModel)
-      dynamicPackEmbeddings.set(pack.name, embedding)
+      const embedding = await generateEmbedding(apiKey, pack.description, embeddingModel);
+      dynamicPackEmbeddings.set(pack.name, embedding);
     }),
-  )
+  );
 
-  let failed = 0
+  let failed = 0;
   for (const r of results) {
-    if (r.status === 'rejected') failed++
+    if (r.status === "rejected") failed++;
   }
 
-  agentLog.info`Dynamic pack embeddings: ${dynamicPackEmbeddings.size}/${toEmbed.length} cached${failed > 0 ? `, ${failed} failed` : ''}`
+  agentLog.info`Dynamic pack embeddings: ${dynamicPackEmbeddings.size}/${toEmbed.length} cached${failed > 0 ? `, ${failed} failed` : ""}`;
 }
 
 /**
@@ -68,25 +68,25 @@ export async function initDynamicPackEmbeddings(
 export function selectSkills(
   queryEmbedding: number[] | undefined,
   skills: Skill[],
-  threshold = 0.35,
+  threshold = SIMILARITY.SKILL_SELECTION,
   maxSkills = 3,
 ): Skill[] {
-  if (skills.length === 0) return []
+  if (skills.length === 0) return [];
 
   // No query embedding → return nothing (skills are optional context)
-  if (!queryEmbedding) return []
+  if (!queryEmbedding) return [];
 
   const scored = skills
     .map((skill) => {
-      const emb = skillEmbeddings.get(skill.name)
-      if (!emb) return { skill, score: 0 }
-      return { skill, score: cosineSimilarity(queryEmbedding, emb) }
+      const emb = skillEmbeddings.get(skill.name);
+      if (!emb) return { skill, score: 0 };
+      return { skill, score: cosineSimilarity(queryEmbedding, emb) };
     })
     .filter((s) => s.score >= threshold)
-    .sort((a, b) => b.score - a.score)
-    .slice(0, maxSkills)
+    .toSorted((a, b) => b.score - a.score)
+    .slice(0, maxSkills);
 
-  return scored.map((s) => s.skill)
+  return scored.map((s) => s.skill);
 }
 
 /**
@@ -96,24 +96,24 @@ export function selectSkills(
 export function selectDynamicPacks(
   queryEmbedding: number[] | undefined,
   packs: ToolPack[],
-  threshold = 0.3,
+  threshold = SIMILARITY.PACK_SELECTION,
 ): ToolPack[] {
   // No query embedding → load all dynamic packs (graceful fallback)
-  if (!queryEmbedding) return packs
+  if (!queryEmbedding) return packs;
 
   return packs.filter((pack) => {
-    if (pack.alwaysLoad) return true
+    if (pack.alwaysLoad) return true;
 
-    const emb = dynamicPackEmbeddings.get(pack.name)
+    const emb = dynamicPackEmbeddings.get(pack.name);
     // No embedding → load it (graceful fallback)
-    if (!emb) return true
+    if (!emb) return true;
 
-    return cosineSimilarity(queryEmbedding, emb) >= threshold
-  })
+    return cosineSimilarity(queryEmbedding, emb) >= threshold;
+  });
 }
 
 /** Clear all extension embedding caches. Called on reload. */
 export function clearExtensionEmbeddings(): void {
-  skillEmbeddings.clear()
-  dynamicPackEmbeddings.clear()
+  skillEmbeddings.clear();
+  dynamicPackEmbeddings.clear();
 }
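The threshold-plus-top-k selection used by `selectSkills` above can be sketched as a standalone function — a minimal illustration over plain arrays instead of the module-level embedding caches (`selectByEmbedding` and its parameter names are illustrative, not part of the codebase):

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every item against the query, drop anything below the threshold,
// then keep the top `max` by descending similarity — mirroring selectSkills.
function selectByEmbedding<T>(
  query: number[],
  items: Array<{ item: T; embedding: number[] }>,
  threshold = 0.35,
  max = 3,
): T[] {
  return items
    .map(({ item, embedding }) => ({ item, score: cosineSimilarity(query, embedding) }))
    .filter((s) => s.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, max)
    .map((s) => s.item);
}
```

The same shape covers `selectDynamicPacks`, except that packs fall back to "load everything" when no query embedding exists, while skills fall back to "load nothing".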
diff --git a/apps/construct/src/extensions/index.ts b/apps/construct/src/extensions/index.ts
index 33ed61e..9fccf82 100644
--- a/apps/construct/src/extensions/index.ts
+++ b/apps/construct/src/extensions/index.ts
@@ -1,38 +1,39 @@
-import { mkdir } from 'node:fs/promises'
-import { join } from 'node:path'
-import type { Kysely } from 'kysely'
-import type { Database } from '../db/schema.js'
-import type { ExtensionRegistry, Skill } from './types.js'
-import type { ToolContext, InternalTool } from '../tools/packs.js'
-import type { TSchema } from '@sinclair/typebox'
-import { loadIdentityFiles, loadSkills, loadDynamicTools } from './loader.js'
-import { buildSecretsMap } from './secrets.js'
+import { mkdir } from "node:fs/promises";
+import { join } from "node:path";
+import type { Kysely } from "kysely";
+import type { Database } from "../db/schema.js";
+import type { ExtensionRegistry, Skill } from "./types.js";
+import type { ToolContext, InternalTool } from "../tools/packs.js";
+import type { TSchema } from "@sinclair/typebox";
+import { loadIdentityFiles, loadSkills, loadDynamicTools } from "./loader.js";
+import { buildSecretsMap } from "./secrets.js";
 import {
   initSkillEmbeddings,
   initDynamicPackEmbeddings,
   selectSkills as selectSkillsByEmbedding,
   selectDynamicPacks as selectDynamicPacksByEmbedding,
   clearExtensionEmbeddings,
-} from './embeddings.js'
-import { agentLog } from '../logger.js'
+} from "./embeddings.js";
+import { agentLog } from "../logger.js";
+import { ExtensionError } from "../errors.js";
 
 // Singleton registry
 let registry: ExtensionRegistry = {
   identity: { soul: null, identity: null, user: null },
   skills: [],
   dynamicPacks: [],
-}
+};
 
-let extensionsDir: string = ''
-let apiKey: string = ''
-let embeddingModel: string | undefined
-let dbRef: Kysely<Database> | null = null
-let reloadLock: Promise<ExtensionRegistry> | null = null
+let extensionsDir: string = "";
+let apiKey: string = "";
+let embeddingModel: string | undefined;
+let dbRef: Kysely<Database> | null = null;
+let reloadLock: Promise<ExtensionRegistry> | null = null;
 
 /** Ensure extensions directory and subdirectories exist. */
 async function ensureDirs(dir: string): Promise<void> {
-  await mkdir(join(dir, 'skills'), { recursive: true })
-  await mkdir(join(dir, 'tools'), { recursive: true })
+  await mkdir(join(dir, "skills"), { recursive: true });
+  await mkdir(join(dir, "tools"), { recursive: true });
 }
 
 /**
@@ -44,14 +45,14 @@ export async function initExtensions(
   db: Kysely<Database>,
   model?: string,
 ): Promise<ExtensionRegistry> {
-  extensionsDir = dir
-  apiKey = key
-  embeddingModel = model
-  dbRef = db
+  extensionsDir = dir;
+  apiKey = key;
+  embeddingModel = model;
+  dbRef = db;
 
-  await ensureDirs(dir)
+  await ensureDirs(dir);
 
-  return reloadExtensions()
+  return reloadExtensions();
 }
 
 /**
@@ -60,67 +61,67 @@ export async function initExtensions(
  */
 export async function reloadExtensions(): Promise<ExtensionRegistry> {
   if (reloadLock) {
-    agentLog.info`Reload already in progress, waiting…`
-    return reloadLock
+    agentLog.info`Reload already in progress, waiting…`;
+    return reloadLock;
   }
 
-  reloadLock = doReload()
+  reloadLock = doReload();
   try {
-    return await reloadLock
+    return await reloadLock;
   } finally {
-    reloadLock = null
+    reloadLock = null;
   }
 }
 
 async function doReload(): Promise<ExtensionRegistry> {
   if (!extensionsDir || !dbRef) {
-    throw new Error('Extensions not initialized — call initExtensions() first')
+    throw new ExtensionError("Extensions not initialized — call initExtensions() first");
   }
 
-  agentLog.info`Reloading extensions from ${extensionsDir}`
+  agentLog.info`Reloading extensions from ${extensionsDir}`;
 
   // Clear embedding caches
-  clearExtensionEmbeddings()
+  clearExtensionEmbeddings();
 
   // Load identity files (SOUL.md, IDENTITY.md, USER.md)
-  const identity = await loadIdentityFiles(extensionsDir)
+  const identity = await loadIdentityFiles(extensionsDir);
 
   // Load skills
-  const skills = await loadSkills(extensionsDir)
-  agentLog.info`Loaded ${skills.length} skill(s)`
+  const skills = await loadSkills(extensionsDir);
+  agentLog.info`Loaded ${skills.length} skill(s)`;
 
   // Build secrets map for dynamic tool context
-  const secretsMap = await buildSecretsMap(dbRef!)
-  const availableSecrets = new Set(secretsMap.keys())
+  const secretsMap = await buildSecretsMap(dbRef!);
+  const availableSecrets = new Set(secretsMap.keys());
 
   // Load dynamic tools
   const dynamicPacks = await loadDynamicTools(
     extensionsDir,
     { secrets: secretsMap },
     availableSecrets,
-  )
-  agentLog.info`Loaded ${dynamicPacks.length} dynamic pack(s)`
+  );
+  agentLog.info`Loaded ${dynamicPacks.length} dynamic pack(s)`;
 
   // Update registry
-  registry = { identity, skills, dynamicPacks }
+  registry = { identity, skills, dynamicPacks };
 
   // Compute embeddings (non-blocking failures)
   await Promise.all([
     initSkillEmbeddings(apiKey, skills, embeddingModel),
     initDynamicPackEmbeddings(apiKey, dynamicPacks, embeddingModel),
-  ])
+  ]);
 
-  return registry
+  return registry;
 }
 
 /** Get the current extension registry. */
 export function getExtensionRegistry(): ExtensionRegistry {
-  return registry
+  return registry;
 }
 
 /** Select skills relevant to a query using embedding similarity. */
 export function selectSkills(queryEmbedding: number[] | undefined): Skill[] {
-  return selectSkillsByEmbedding(queryEmbedding, registry.skills)
+  return selectSkillsByEmbedding(queryEmbedding, registry.skills);
 }
 
 /**
@@ -131,18 +132,18 @@ export function selectAndCreateDynamicTools(
   queryEmbedding: number[] | undefined,
   ctx: ToolContext,
 ): InternalTool[] {
-  const selected = selectDynamicPacksByEmbedding(queryEmbedding, registry.dynamicPacks)
+  const selected = selectDynamicPacksByEmbedding(queryEmbedding, registry.dynamicPacks);
 
   if (selected.length > 0) {
-    agentLog.info`Selected dynamic packs: ${selected.map((p) => p.name).join(', ')}`
+    agentLog.info`Selected dynamic packs: ${selected.map((p) => p.name).join(", ")}`;
   }
 
-  const tools: InternalTool[] = []
+  const tools: InternalTool[] = [];
   for (const pack of selected) {
     for (const factory of pack.factories) {
-      const tool = factory(ctx)
-      if (tool) tools.push(tool)
+      const tool = factory(ctx);
+      if (tool) tools.push(tool);
     }
   }
-  return tools
+  return tools;
 }
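The `reloadLock` pattern in `reloadExtensions()` deduplicates concurrent reloads by handing every caller the same in-flight promise. A minimal self-contained sketch of that pattern, with `doWork` standing in for `doReload` and a counter standing in for the registry:

```typescript
let lock: Promise<number> | null = null;
let reloads = 0;

// Stand-in for doReload(): some async work that should never run twice at once.
async function doWork(): Promise<number> {
  await new Promise((r) => setTimeout(r, 10)); // simulate I/O
  return ++reloads;
}

async function reload(): Promise<number> {
  if (lock) return lock; // join the reload already in flight
  lock = doWork();
  try {
    return await lock;
  } finally {
    lock = null; // allow the next caller to start a fresh reload
  }
}
```

Because `lock` is assigned synchronously before the first `await`, a second caller arriving in the same tick still sees it and joins the same promise; the `finally` guarantees the lock clears even if the reload throws.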
diff --git a/apps/construct/src/extensions/loader.ts b/apps/construct/src/extensions/loader.ts
index 01931fb..e81e018 100644
--- a/apps/construct/src/extensions/loader.ts
+++ b/apps/construct/src/extensions/loader.ts
@@ -1,33 +1,38 @@
-import { readFile, readdir, access, symlink, lstat } from 'node:fs/promises'
-import { join, basename, extname, dirname } from 'node:path'
-import { fileURLToPath } from 'node:url'
-import { parse as parseYaml } from 'yaml'
-import type { TSchema } from '@sinclair/typebox'
-import type { Skill, ExtensionRequirements, DynamicToolContext, DynamicToolExport } from './types.js'
-import type { InternalTool, ToolPack } from '../tools/packs.js'
-import { toolLog } from '../logger.js'
+import { readFile, readdir, access, symlink, lstat } from "node:fs/promises";
+import { join, basename, extname, dirname } from "node:path";
+import { fileURLToPath } from "node:url";
+import { parse as parseYaml } from "yaml";
+import type { TSchema } from "@sinclair/typebox";
+import type {
+  Skill,
+  ExtensionRequirements,
+  DynamicToolContext,
+  DynamicToolExport,
+} from "./types.js";
+import type { InternalTool, ToolPack } from "../tools/packs.js";
+import { toolLog } from "../logger.js";
 
 /**
  * Parse a skill markdown file with YAML frontmatter.
  * Returns null if the file is invalid.
  */
 export function parseSkillFile(content: string, filePath: string): Skill | null {
-  const fmMatch = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/)
+  const fmMatch = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
   if (!fmMatch) {
-    toolLog.warning`Skill file has no frontmatter: ${filePath}`
-    return null
+    toolLog.warning`Skill file has no frontmatter: ${filePath}`;
+    return null;
   }
 
   try {
     const frontmatter = parseYaml(fmMatch[1]) as {
-      name?: string
-      description?: string
-      requires?: ExtensionRequirements
-    }
+      name?: string;
+      description?: string;
+      requires?: ExtensionRequirements;
+    };
 
     if (!frontmatter.name || !frontmatter.description) {
-      toolLog.warning`Skill missing name or description: ${filePath}`
-      return null
+      toolLog.warning`Skill missing name or description: ${filePath}`;
+      return null;
     }
 
     return {
@@ -36,10 +41,10 @@ export function parseSkillFile(content: string, filePath: string): Skill | null
       requires: frontmatter.requires ?? {},
       body: fmMatch[2].trim(),
       filePath,
-    }
+    };
   } catch (err) {
-    toolLog.warning`Failed to parse skill frontmatter in ${filePath}: ${err}`
-    return null
+    toolLog.warning`Failed to parse skill frontmatter in ${filePath}: ${err}`;
+    return null;
   }
 }
 
@@ -51,12 +56,12 @@ export function checkRequirements(
   requires: ExtensionRequirements,
   availableSecrets: Set<string>,
 ): string[] {
-  const unmet: string[] = []
+  const unmet: string[] = [];
 
   if (requires.env) {
     for (const key of requires.env) {
       if (!process.env[key]) {
-        unmet.push(`env: ${key}`)
+        unmet.push(`env: ${key}`);
       }
     }
   }
@@ -64,7 +69,7 @@ export function checkRequirements(
   if (requires.secrets) {
     for (const key of requires.secrets) {
       if (!availableSecrets.has(key)) {
-        unmet.push(`secret: ${key}`)
+        unmet.push(`secret: ${key}`);
       }
     }
   }
@@ -73,75 +78,79 @@ export function checkRequirements(
   // For now we just log a note
   if (requires.bins && requires.bins.length > 0) {
     // We don't block on bins, just note it
-    toolLog.debug`Extension requires binaries: ${requires.bins.join(', ')}`
+    toolLog.debug`Extension requires binaries: ${requires.bins.join(", ")}`;
   }
 
-  return unmet
+  return unmet;
 }
 
 /** Load a single markdown file from the extensions directory. Returns null if not found or empty. */
 async function loadMarkdownFile(extensionsDir: string, filename: string): Promise<string | null> {
   try {
-    const filePath = join(extensionsDir, filename)
-    const content = await readFile(filePath, 'utf-8')
-    return content.trim() || null
+    const filePath = join(extensionsDir, filename);
+    const content = await readFile(filePath, "utf-8");
+    return content.trim() || null;
   } catch {
-    return null
+    return null;
   }
 }
 
 /** Load SOUL.md from the extensions directory. Returns null if not found. */
 export async function loadSoul(extensionsDir: string): Promise<string | null> {
-  return loadMarkdownFile(extensionsDir, 'SOUL.md')
+  return loadMarkdownFile(extensionsDir, "SOUL.md");
 }
 
 /** Load all identity files (SOUL.md, IDENTITY.md, USER.md) from the extensions directory. */
 export async function loadIdentityFiles(extensionsDir: string): Promise<{
-  soul: string | null
-  identity: string | null
-  user: string | null
+  soul: string | null;
+  identity: string | null;
+  user: string | null;
 }> {
   const [soul, identity, user] = await Promise.all([
-    loadMarkdownFile(extensionsDir, 'SOUL.md'),
-    loadMarkdownFile(extensionsDir, 'IDENTITY.md'),
-    loadMarkdownFile(extensionsDir, 'USER.md'),
-  ])
-  return { soul, identity, user }
+    loadMarkdownFile(extensionsDir, "SOUL.md"),
+    loadMarkdownFile(extensionsDir, "IDENTITY.md"),
+    loadMarkdownFile(extensionsDir, "USER.md"),
+  ]);
+  return { soul, identity, user };
 }
 
-/** Recursively discover and parse all skill .md files under skills/. */
+/**
+ * Recursively discover and parse all skill .md files under skills/.
+ * Each skill file must have YAML frontmatter with name + description.
+ * @param extensionsDir - Root extensions directory (contains skills/ subdirectory).
+ */
 export async function loadSkills(extensionsDir: string): Promise<Skill[]> {
-  const skillsDir = join(extensionsDir, 'skills')
-  const skills: Skill[] = []
+  const skillsDir = join(extensionsDir, "skills");
+  const skills: Skill[] = [];
 
   try {
-    await access(skillsDir)
+    await access(skillsDir);
   } catch {
-    return skills
+    return skills;
   }
 
   async function walk(dir: string) {
-    const entries = await readdir(dir, { withFileTypes: true })
+    const entries = await readdir(dir, { withFileTypes: true });
     for (const entry of entries) {
-      const fullPath = join(dir, entry.name)
+      const fullPath = join(dir, entry.name);
       if (entry.isDirectory()) {
-        await walk(fullPath)
-      } else if (entry.isFile() && extname(entry.name) === '.md') {
+        await walk(fullPath);
+      } else if (entry.isFile() && extname(entry.name) === ".md") {
         try {
-          const content = await readFile(fullPath, 'utf-8')
-          const skill = parseSkillFile(content, fullPath)
+          const content = await readFile(fullPath, "utf-8");
+          const skill = parseSkillFile(content, fullPath);
           if (skill) {
-            skills.push(skill)
+            skills.push(skill);
           }
         } catch (err) {
-          toolLog.warning`Failed to read skill file ${fullPath}: ${err}`
+          toolLog.warning`Failed to read skill file ${fullPath}: ${err}`;
         }
       }
     }
   }
 
-  await walk(skillsDir)
-  return skills
+  await walk(skillsDir);
+  return skills;
 }
 
 /**
@@ -150,129 +159,138 @@ export async function loadSkills(extensionsDir: string): Promise<Skill[]> {
  * to import project dependencies like @sinclair/typebox.
  */
 async function ensureNodeModulesLink(extensionsDir: string): Promise<void> {
-  const linkPath = join(extensionsDir, 'node_modules')
+  const linkPath = join(extensionsDir, "node_modules");
 
   // Already exists (real dir or symlink) — skip
   try {
-    await lstat(linkPath)
-    return
+    await lstat(linkPath);
+    return;
   } catch {
     // Doesn't exist — create it
   }
 
   // Walk up from the project source to find node_modules
-  const thisFile = fileURLToPath(import.meta.url)
-  let dir = dirname(thisFile)
+  const thisFile = fileURLToPath(import.meta.url);
+  let dir = dirname(thisFile);
   while (dir !== dirname(dir)) {
-    const candidate = join(dir, 'node_modules')
+    const candidate = join(dir, "node_modules");
     try {
-      await access(candidate)
-      await symlink(candidate, linkPath)
-      toolLog.debug`Symlinked node_modules from ${candidate} to ${linkPath}`
-      return
+      await access(candidate);
+      await symlink(candidate, linkPath);
+      toolLog.debug`Symlinked node_modules from ${candidate} to ${linkPath}`;
+      return;
     } catch {
-      dir = dirname(dir)
+      dir = dirname(dir);
     }
   }
 }
 
 /** Skip config, test, and spec files that aren't tool exports. */
 function isNonToolFile(filename: string): boolean {
-  return /\.(config|test|spec)\.[cm]?ts$/.test(filename)
+  return /\.(config|test|spec)\.[cm]?ts$/.test(filename);
 }
 
 /**
  * Load dynamic tool .ts files from the tools/ directory within extensions.
- * - Root-level .ts files → standalone packs (single tool)
- * - Subdirectories → grouped into packs (dir name = pack name)
+ * Root-level .ts files become standalone packs; subdirectories become grouped packs.
+ * Each file is hot-loaded via jiti (no compile step). Requirements are checked before loading.
+ * @param extensionsDir - Root extensions directory (contains tools/ subdirectory).
+ * @param toolCtx - Runtime context (db handle, secrets, API keys) passed to tool factories.
+ * @param availableSecrets - Set of secret names available for requirement checking.
  */
 export async function loadDynamicTools(
   extensionsDir: string,
   toolCtx: DynamicToolContext,
   availableSecrets: Set<string>,
 ): Promise<ToolPack[]> {
-  const toolsDir = join(extensionsDir, 'tools')
-  const packs: ToolPack[] = []
+  const toolsDir = join(extensionsDir, "tools");
+  const packs: ToolPack[] = [];
 
   try {
-    await access(toolsDir)
+    await access(toolsDir);
   } catch {
-    return packs
+    return packs;
   }
 
   // Ensure tool files can resolve project dependencies
-  await ensureNodeModulesLink(extensionsDir)
+  await ensureNodeModulesLink(extensionsDir);
 
-  const entries = await readdir(toolsDir, { withFileTypes: true })
+  const entries = await readdir(toolsDir, { withFileTypes: true });
 
   for (const entry of entries) {
-    const fullPath = join(toolsDir, entry.name)
+    const fullPath = join(toolsDir, entry.name);
 
-    if (entry.isFile() && extname(entry.name) === '.ts' && !isNonToolFile(entry.name)) {
+    if (entry.isFile() && extname(entry.name) === ".ts" && !isNonToolFile(entry.name)) {
       // Standalone tool file → single-tool pack
-      const tool = await loadSingleToolFile(fullPath, toolCtx, availableSecrets)
+      const tool = await loadSingleToolFile(fullPath, toolCtx, availableSecrets);
       if (tool) {
-        const packName = `ext:${basename(entry.name, '.ts')}`
+        const packName = `ext:${basename(entry.name, ".ts")}`;
         packs.push({
           name: packName,
           description: tool.description,
           alwaysLoad: false,
           factories: [() => tool],
-        })
+        });
       }
     } else if (entry.isDirectory()) {
       // Directory → grouped pack
-      const pack = await directoryToPack(fullPath, entry.name, toolCtx, availableSecrets)
+      const pack = await directoryToPack(fullPath, entry.name, toolCtx, availableSecrets);
       if (pack) {
-        packs.push(pack)
+        packs.push(pack);
       }
     }
   }
 
-  return packs
+  return packs;
 }
 
-/** Load a single .ts tool file using jiti. Returns null if invalid or requirements unmet. */
+/**
+ * Load a single .ts tool file using jiti. Returns null if invalid or requirements unmet.
+ * Supports both factory functions (receives DynamicToolContext) and plain tool objects.
+ * @param filePath - Absolute path to the .ts file.
+ * @param toolCtx - Runtime context passed to factory-style exports.
+ * @param availableSecrets - Secret names for requirement validation.
+ */
 export async function loadSingleToolFile(
   filePath: string,
   toolCtx: DynamicToolContext,
   availableSecrets: Set<string>,
 ): Promise<InternalTool<TSchema> | null> {
   try {
-    const { createJiti } = await import('jiti')
-    const jiti = createJiti(import.meta.url, { interopDefault: true, moduleCache: false })
+    const { createJiti } = await import("jiti");
+    const jiti = createJiti(import.meta.url, { interopDefault: true, moduleCache: false });
 
-    const mod = (await jiti.import(filePath)) as DynamicToolExport
+    const mod = (await jiti.import(filePath)) as DynamicToolExport;
 
     // Check requirements
     if (mod.meta?.requires) {
-      const unmet = checkRequirements(mod.meta.requires, availableSecrets)
+      const unmet = checkRequirements(mod.meta.requires, availableSecrets);
       if (unmet.length > 0) {
-        toolLog.info`Skipping tool ${filePath}: unmet requirements: ${unmet.join(', ')}`
-        return null
+        toolLog.info`Skipping tool ${filePath}: unmet requirements: ${unmet.join(", ")}`;
+        return null;
       }
     }
 
     // Resolve tool: factory function or plain object
-    const exported = mod.default
-    let tool: InternalTool
+    const exported = mod.default;
+    let tool: InternalTool;
 
-    if (typeof exported === 'function') {
-      tool = exported(toolCtx) as InternalTool
+    if (typeof exported === "function") {
+      tool = exported(toolCtx) as InternalTool;
     } else {
-      tool = exported as InternalTool
+      tool = exported as InternalTool;
     }
 
     // Validate tool shape
     if (!tool?.name || !tool?.description || !tool?.parameters || !tool?.execute) {
-      toolLog.warning`Invalid tool export in ${filePath}: missing name/description/parameters/execute`
-      return null
+      toolLog.warning`Invalid tool export in ${filePath}: missing name/description/parameters/execute`;
+      return null;
     }
 
-    return tool
+    return tool;
   } catch (err) {
-    toolLog.warning`Failed to load dynamic tool ${filePath}: ${err}`
-    return null
+    toolLog.warning`Failed to load dynamic tool ${filePath}: ${err}`;
+    return null;
   }
 }
 
@@ -283,35 +301,31 @@ async function directoryToPack(
   toolCtx: DynamicToolContext,
   availableSecrets: Set<string>,
 ): Promise<ToolPack | null> {
-  const packName = `ext:${dirName}`
-  const tools: InternalTool[] = []
+  const packName = `ext:${dirName}`;
+  const tools: InternalTool[] = [];
 
   // Check for pack.md description override
-  let description = ''
+  let description = "";
   try {
-    const packMd = await readFile(join(dirPath, 'pack.md'), 'utf-8')
-    description = packMd.trim()
+    const packMd = await readFile(join(dirPath, "pack.md"), "utf-8");
+    description = packMd.trim();
   } catch {
     // No pack.md — auto-generate from tool descriptions below
   }
 
-  const entries = await readdir(dirPath, { withFileTypes: true })
+  const entries = await readdir(dirPath, { withFileTypes: true });
   for (const entry of entries) {
-    if (entry.isFile() && extname(entry.name) === '.ts' && !isNonToolFile(entry.name)) {
-      const tool = await loadSingleToolFile(
-        join(dirPath, entry.name),
-        toolCtx,
-        availableSecrets,
-      )
-      if (tool) tools.push(tool)
+    if (entry.isFile() && extname(entry.name) === ".ts" && !isNonToolFile(entry.name)) {
+      const tool = await loadSingleToolFile(join(dirPath, entry.name), toolCtx, availableSecrets);
+      if (tool) tools.push(tool);
     }
   }
 
-  if (tools.length === 0) return null
+  if (tools.length === 0) return null;
 
   // Auto-generate description from tool descriptions if no pack.md
   if (!description) {
-    description = tools.map((t) => t.description).join('. ')
+    description = tools.map((t) => t.description).join(". ");
   }
 
   return {
@@ -319,5 +333,5 @@ async function directoryToPack(
     description,
     alwaysLoad: false,
     factories: tools.map((t) => () => t),
-  }
+  };
 }
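The frontmatter regex in `parseSkillFile` splits a skill file into its YAML header and markdown body before any YAML parsing happens. A stripped-down sketch of just that split — the YAML step is omitted, and `splitFrontmatter` is an illustrative name, not a function in the codebase:

```typescript
// Match "---\n<yaml>\n---\n<body>" anchored at the start of the file.
// Group 1 captures the raw YAML block, group 2 the remaining markdown body.
function splitFrontmatter(content: string): { frontmatter: string; body: string } | null {
  const m = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!m) return null;
  return { frontmatter: m[1], body: m[2].trim() };
}
```

The lazy `[\s\S]*?` in the first group stops at the first closing `---`, so a `---` horizontal rule inside the body doesn't confuse the split; a file with no frontmatter at all simply fails the match and returns null, which is what lets `parseSkillFile` log a warning and skip the file.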
diff --git a/apps/construct/src/extensions/secrets.ts b/apps/construct/src/extensions/secrets.ts
index 97f1850..b0db86f 100644
--- a/apps/construct/src/extensions/secrets.ts
+++ b/apps/construct/src/extensions/secrets.ts
@@ -1,56 +1,54 @@
-import type { Kysely } from 'kysely'
-import { sql } from 'kysely'
-import type { Database } from '../db/schema.js'
-import { toolLog } from '../logger.js'
+import type { Kysely } from "kysely";
+import { sql } from "kysely";
+import type { Database } from "../db/schema.js";
+import { toolLog } from "../logger.js";
 
-type DB = Kysely<Database>
+type DB = Kysely<Database>;
 
 /**
  * Sync all EXT_* environment variables into the secrets table.
  * .env values always win on restart (source='env' overwrite).
  */
 export async function syncEnvSecrets(db: DB): Promise<number> {
-  const extVars = Object.entries(process.env).filter(
-    ([key]) => key.startsWith('EXT_'),
-  )
+  const extVars = Object.entries(process.env).filter(([key]) => key.startsWith("EXT_"));
 
-  let synced = 0
+  let synced = 0;
   for (const [key, value] of extVars) {
-    if (!value) continue
+    if (!value) continue;
     // Strip the EXT_ prefix for the secret key name
-    const secretKey = key.slice(4)
+    const secretKey = key.slice(4);
     await db
-      .insertInto('secrets')
+      .insertInto("secrets")
       .values({
         key: secretKey,
         value,
-        source: 'env',
+        source: "env",
       })
       .onConflict((oc) =>
-        oc.column('key').doUpdateSet({
+        oc.column("key").doUpdateSet({
           value,
-          source: 'env',
+          source: "env",
           updated_at: sql`datetime('now')`,
         }),
       )
-      .execute()
-    synced++
+      .execute();
+    synced++;
   }
 
   if (synced > 0) {
-    toolLog.info`Synced ${synced} env secrets to database`
+    toolLog.info`Synced ${synced} env secrets to database`;
   }
-  return synced
+  return synced;
 }
 
 /** Get a secret value by key. Returns null if not found. */
 export async function getSecret(db: DB, key: string): Promise<string | null> {
   const row = await db
-    .selectFrom('secrets')
-    .select('value')
-    .where('key', '=', key)
-    .executeTakeFirst()
-  return row?.value ?? null
+    .selectFrom("secrets")
+    .select("value")
+    .where("key", "=", key)
+    .executeTakeFirst();
+  return row?.value ?? null;
 }
 
 /** Store or update a secret. */
@@ -58,44 +56,34 @@ export async function setSecret(
   db: DB,
   key: string,
   value: string,
-  source: 'agent' | 'env' = 'agent',
+  source: "agent" | "env" = "agent",
 ): Promise<void> {
   await db
-    .insertInto('secrets')
+    .insertInto("secrets")
     .values({ key, value, source })
     .onConflict((oc) =>
-      oc.column('key').doUpdateSet({
+      oc.column("key").doUpdateSet({
         value,
         source,
         updated_at: sql`datetime('now')`,
       }),
     )
-    .execute()
+    .execute();
 }
 
 /** List all secret key names (never exposes values). */
 export async function listSecretKeys(db: DB): Promise<Array<{ key: string; source: string }>> {
-  return db
-    .selectFrom('secrets')
-    .select(['key', 'source'])
-    .orderBy('key')
-    .execute()
+  return db.selectFrom("secrets").select(["key", "source"]).orderBy("key").execute();
 }
 
 /** Delete a secret by key. Returns true if deleted. */
 export async function deleteSecret(db: DB, key: string): Promise<boolean> {
-  const result = await db
-    .deleteFrom('secrets')
-    .where('key', '=', key)
-    .executeTakeFirst()
-  return (result.numDeletedRows ?? 0n) > 0n
+  const result = await db.deleteFrom("secrets").where("key", "=", key).executeTakeFirst();
+  return (result.numDeletedRows ?? 0n) > 0n;
 }
 
 /** Build a secrets Map for use in dynamic tool contexts. */
 export async function buildSecretsMap(db: DB): Promise<Map<string, string>> {
-  const rows = await db
-    .selectFrom('secrets')
-    .select(['key', 'value'])
-    .execute()
-  return new Map(rows.map((r) => [r.key, r.value]))
+  const rows = await db.selectFrom("secrets").select(["key", "value"]).execute();
+  return new Map(rows.map((r) => [r.key, r.value]));
 }
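`syncEnvSecrets` filters `process.env` for `EXT_`-prefixed variables and strips the prefix before upserting each one. The filtering step in isolation, using a plain `Map` in place of the database upsert (`collectExtSecrets` is a hypothetical name for this sketch):

```typescript
// Keep only EXT_* variables with non-empty values, keyed without the prefix —
// the same key transformation syncEnvSecrets applies before writing to SQLite.
function collectExtSecrets(env: Record<string, string | undefined>): Map<string, string> {
  const out = new Map<string, string>();
  for (const [key, value] of Object.entries(env)) {
    if (!key.startsWith("EXT_") || !value) continue;
    out.set(key.slice(4), value); // "EXT_GITHUB_TOKEN" -> "GITHUB_TOKEN"
  }
  return out;
}
```

In the real module the map also picks up agent-written secrets from the table, which is why `.env` values are marked `source: 'env'` and re-win on every restart.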
diff --git a/apps/construct/src/extensions/types.ts b/apps/construct/src/extensions/types.ts
index b32f646..d9d332e 100644
--- a/apps/construct/src/extensions/types.ts
+++ b/apps/construct/src/extensions/types.ts
@@ -1,46 +1,46 @@
-import type { TSchema } from '@sinclair/typebox'
-import type { InternalTool, ToolPack } from '../tools/packs.js'
+import type { TSchema } from "@sinclair/typebox";
+import type { InternalTool, ToolPack } from "../tools/packs.js";
 
 /** Requirements that an extension declares in its frontmatter / meta */
 export interface ExtensionRequirements {
-  env?: string[]
-  bins?: string[]
-  secrets?: string[]
+  env?: string[];
+  bins?: string[];
+  secrets?: string[];
 }
 
 /** Parsed skill from a markdown file with YAML frontmatter */
 export interface Skill {
-  name: string
-  description: string
-  requires: ExtensionRequirements
-  body: string
-  filePath: string
+  name: string;
+  description: string;
+  requires: ExtensionRequirements;
+  body: string;
+  filePath: string;
 }
 
 /** What a dynamic tool .ts file exports */
 export interface DynamicToolExport {
   meta?: {
-    requires?: ExtensionRequirements
-  }
+    requires?: ExtensionRequirements;
+  };
   /** Factory function (receives context) or plain tool object */
-  default: ((ctx: DynamicToolContext) => InternalTool) | InternalTool
+  default: ((ctx: DynamicToolContext) => InternalTool) | InternalTool;
 }
 
 /** Context passed to dynamic tool factory functions */
 export interface DynamicToolContext {
-  secrets: Map<string, string>
+  secrets: Map<string, string>;
 }
 
 /** Identity files loaded from the extensions directory */
 export interface IdentityFiles {
-  soul: string | null
-  identity: string | null
-  user: string | null
+  soul: string | null;
+  identity: string | null;
+  user: string | null;
 }
 
 /** The full loaded extension registry */
 export interface ExtensionRegistry {
-  identity: IdentityFiles
-  skills: Skill[]
-  dynamicPacks: ToolPack[]
+  identity: IdentityFiles;
+  skills: Skill[];
+  dynamicPacks: ToolPack[];
 }
diff --git a/apps/construct/src/logger.ts b/apps/construct/src/logger.ts
index 3084555..1f24eb1 100644
--- a/apps/construct/src/logger.ts
+++ b/apps/construct/src/logger.ts
@@ -1,116 +1,18 @@
-import { configure, getConsoleSink, getLogger } from '@logtape/logtape'
-import {
-  createWriteStream,
-  statSync,
-  renameSync,
-  unlinkSync,
-  mkdirSync,
-  type WriteStream,
-} from 'node:fs'
-import { dirname } from 'node:path'
+import { setupLogging as setup, rotateLogs, createLogger } from "@repo/log";
 
-const MAX_LOG_SIZE = 5 * 1024 * 1024 // 5 MB
-const MAX_ROTATED = 3
+export { rotateLogs };
 
-// --- Rotation ---
-
-function shiftFiles(filePath: string) {
-  for (let i = MAX_ROTATED; i >= 1; i--) {
-    const src = i === 1 ? filePath : `${filePath}.${i - 1}`
-    const dst = `${filePath}.${i}`
-    try { if (i === MAX_ROTATED) unlinkSync(dst) } catch {}
-    try { renameSync(src, dst) } catch {}
-  }
-}
-
-function rotateIfOversized(filePath: string) {
-  try {
-    if (statSync(filePath).size >= MAX_LOG_SIZE) shiftFiles(filePath)
-  } catch {}
-}
-
-// --- Custom file sink (swappable stream for runtime rotation) ---
-
-let logFilePath: string | undefined
-let logStream: WriteStream | undefined
-
-function openStream(filePath: string) {
-  mkdirSync(dirname(filePath), { recursive: true })
-  logStream = createWriteStream(filePath, { flags: 'a' })
-}
-
-/**
- * Rotate the current log file. Safe to call at any time.
- * Closes the current stream, shifts files, opens a fresh stream.
- */
-export async function rotateLogs(): Promise<void> {
-  if (!logFilePath) return
-
-  // Close current stream
-  if (logStream) {
-    await new Promise<void>((resolve, reject) => {
-      logStream!.end((err: Error | null) => (err ? reject(err) : resolve()))
-    })
-    logStream = undefined
-  }
-
-  shiftFiles(logFilePath)
-  openStream(logFilePath)
-}
-
-// --- Formatter (shared between console + file) ---
-
-const formatter = ({ level, category, message, timestamp, properties }: {
-  level: string
-  category: readonly string[]
-  message: readonly unknown[]
-  timestamp: number
-  properties: Record<string, unknown>
-}) => {
-  const ts = new Date(timestamp).toISOString()
-  const cat = category.join('.')
-  const msg = message.map(String).join('')
-  const props = Object.keys(properties).length > 0
-    ? ' ' + JSON.stringify(properties)
-    : ''
-  return `${ts} [${level}] ${cat}: ${msg}${props}`
-}
-
-// --- Setup ---
-
-export async function setupLogging(level: string = 'info', logFile?: string) {
-  const sinks: Record<string, Sink> = {
-    console: getConsoleSink({ formatter }),
-  }
-
-  if (logFile) {
-    logFilePath = logFile
-    mkdirSync(dirname(logFile), { recursive: true })
-    rotateIfOversized(logFile)
-    openStream(logFile)
-
-    // Custom sink that writes to the swappable logStream
-    sinks.file = (record: Parameters<typeof formatter>[0]) => {
-      logStream?.write(formatter(record) + '\n')
-    }
-  }
-
-  await configure({
-    sinks,
-    filters: {},
-    loggers: [
-      {
-        category: 'construct',
-        lowestLevel: level as 'debug' | 'info' | 'warning' | 'error' | 'fatal',
-        sinks: Object.keys(sinks),
-      },
-    ],
-  })
+export async function setupLogging(level?: string, logFile?: string): Promise<void> {
+  await setup({
+    appName: "construct",
+    level: level as "debug" | "info" | "warning" | "error" | "fatal",
+    logFile,
+  });
 }
 
-export const log = getLogger(['construct'])
-export const agentLog = getLogger(['construct', 'agent'])
-export const toolLog = getLogger(['construct', 'tool'])
-export const telegramLog = getLogger(['construct', 'telegram'])
-export const schedulerLog = getLogger(['construct', 'scheduler'])
-export const dbLog = getLogger(['construct', 'db'])
+export const log = createLogger("construct");
+export const agentLog = createLogger("construct", "agent");
+export const toolLog = createLogger("construct", "tool");
+export const telegramLog = createLogger("construct", "telegram");
+export const schedulerLog = createLogger("construct", "scheduler");
+export const dbLog = createLogger("construct", "db");
diff --git a/apps/construct/src/main.ts b/apps/construct/src/main.ts
index 26cb615..bf4a615 100644
--- a/apps/construct/src/main.ts
+++ b/apps/construct/src/main.ts
@@ -1,61 +1,61 @@
-import { env } from './env.js'
-import { setupLogging, log } from './logger.js'
-import { createDb } from '@repo/db'
-import type { Database } from './db/schema.js'
-import { runMigrations } from './db/migrate.js'
-import { createBot } from './telegram/bot.js'
-import { startScheduler, stopScheduler } from './scheduler/index.js'
-import { initPackEmbeddings } from './tools/packs.js'
-import { syncEnvSecrets } from './extensions/secrets.js'
-import { initExtensions } from './extensions/index.js'
+import { env } from "./env.js";
+import { setupLogging, log } from "./logger.js";
+import { createDb } from "@repo/db";
+import type { Database } from "./db/schema.js";
+import { runMigrations } from "./db/migrate.js";
+import { createBot } from "./telegram/bot.js";
+import { startScheduler, stopScheduler } from "./scheduler/index.js";
+import { initPackEmbeddings } from "./tools/packs.js";
+import { syncEnvSecrets } from "./extensions/secrets.js";
+import { initExtensions } from "./extensions/index.js";
 
 async function main() {
-  await setupLogging(env.LOG_LEVEL, env.LOG_FILE)
+  await setupLogging(env.LOG_LEVEL, env.LOG_FILE);
 
-  log.info`Starting Construct`
-  log.info`Model: ${env.OPENROUTER_MODEL}`
-  log.info`Database: ${env.DATABASE_URL}`
-  log.info`Timezone: ${env.TIMEZONE}`
+  log.info`Starting Construct`;
+  log.info`Model: ${env.OPENROUTER_MODEL}`;
+  log.info`Database: ${env.DATABASE_URL}`;
+  log.info`Timezone: ${env.TIMEZONE}`;
 
   // Run migrations
-  await runMigrations(env.DATABASE_URL)
+  await runMigrations(env.DATABASE_URL);
 
   // Create database connection
-  const { db } = createDb(env.DATABASE_URL)
+  const { db } = createDb(env.DATABASE_URL);
 
   // Sync EXT_* env vars into secrets table
-  await syncEnvSecrets(db)
+  await syncEnvSecrets(db);
 
   // Initialize extensions (SOUL.md, skills, dynamic tools + their embeddings)
-  await initExtensions(env.EXTENSIONS_DIR, env.OPENROUTER_API_KEY, db, env.EMBEDDING_MODEL)
+  await initExtensions(env.EXTENSIONS_DIR, env.OPENROUTER_API_KEY, db, env.EMBEDDING_MODEL);
 
   // Pre-compute tool pack embeddings for semantic selection
-  await initPackEmbeddings(env.OPENROUTER_API_KEY, env.EMBEDDING_MODEL)
+  await initPackEmbeddings(env.OPENROUTER_API_KEY, env.EMBEDDING_MODEL);
 
   // Create Telegram bot
-  const bot = createBot(db)
+  const bot = createBot(db);
 
   // Start scheduler
-  await startScheduler(db, bot, env.TIMEZONE)
+  await startScheduler(db, bot, env.TIMEZONE);
 
   // Start Telegram long polling
-  log.info`Construct is running`
-  bot.start({ allowed_updates: ['message', 'message_reaction', 'callback_query'] })
+  log.info`Construct is running`;
+  bot.start({ allowed_updates: ["message", "message_reaction", "callback_query"] });
 
   // Graceful shutdown
   const shutdown = async () => {
-    log.info`Shutting down`
-    stopScheduler()
-    bot.stop()
-    await db.destroy()
-    process.exit(0)
-  }
-
-  process.on('SIGINT', shutdown)
-  process.on('SIGTERM', shutdown)
+    log.info`Shutting down`;
+    stopScheduler();
+    bot.stop();
+    await db.destroy();
+    process.exit(0);
+  };
+
+  process.on("SIGINT", shutdown);
+  process.on("SIGTERM", shutdown);
 }
 
 main().catch((err) => {
-  console.error('Fatal error:', err)
-  process.exit(1)
-})
+  console.error("Fatal error:", err);
+  process.exit(1);
+});
diff --git a/apps/construct/src/memory.ts b/apps/construct/src/memory.ts
index 07885ff..a6f2da4 100644
--- a/apps/construct/src/memory.ts
+++ b/apps/construct/src/memory.ts
@@ -1,12 +1,19 @@
-import { MemoryManager, type ObserverOutput, type CairnMessage, estimateTokens, DEFAULT_OBSERVER_PROMPT, DEFAULT_REFLECTOR_PROMPT } from '@repo/cairn'
-import type { Observation } from '@repo/cairn'
-import { sql } from 'kysely'
-import { nanoid } from 'nanoid'
+import {
+  MemoryManager,
+  type ObserverOutput,
+  type CairnMessage,
+  estimateTokens,
+  DEFAULT_OBSERVER_PROMPT,
+  DEFAULT_REFLECTOR_PROMPT,
+} from "@repo/cairn";
+import type { Observation } from "@repo/cairn";
+import { sql } from "kysely";
+import { nanoid } from "nanoid";
 
 // --- Custom observer prompt with expires_at extraction ---
 
 export const CONSTRUCT_OBSERVER_PROMPT = DEFAULT_OBSERVER_PROMPT.replace(
-  '## Output Format',
+  "## Output Format",
   `## Temporal Expiry
 
 For time-bound tasks, events, or reminders, include expires_at (ISO datetime, local timezone, no offset):
@@ -30,23 +37,23 @@ Single-occurrence events must include the specific date in content AND set expir
       "observation_date": "2025-01-15",
       "expires_at": "2025-03-05T10:00:00"
     },`,
-)
+);
 
 // --- Custom reflector prompt with expires_at preservation ---
 
 export const CONSTRUCT_REFLECTOR_PROMPT = DEFAULT_REFLECTOR_PROMPT.replace(
-  '## Temporal Handling',
+  "## Temporal Handling",
   `## Temporal Expiry
 
 Preserve expires_at on observations that have one. When merging, keep the earliest expires_at.
 If an observation's expires_at is in the past, it may be dropped regardless of priority.
 
 ## Temporal Handling`,
-)
+);
 
 /** Message shape with construct-specific telegram_message_id. */
 export interface ConstructMessage extends CairnMessage {
-  telegram_message_id: number | null
+  telegram_message_id: number | null;
 }
 
 /**
@@ -54,16 +61,14 @@ export interface ConstructMessage extends CairnMessage {
  * and includes telegram_message_id in message queries.
  */
 export class ConstructMemoryManager extends MemoryManager {
-  override async getUnobservedMessages(
-    conversationId: string,
-  ): Promise<ConstructMessage[]> {
+  override async getUnobservedMessages(conversationId: string): Promise<ConstructMessage[]> {
     const conv = await this.db
-      .selectFrom('conversations')
-      .select(['observed_up_to_message_id', 'observation_token_count'])
-      .where('id', '=', conversationId)
-      .executeTakeFirst()
+      .selectFrom("conversations")
+      .select(["observed_up_to_message_id", "observation_token_count"])
+      .where("id", "=", conversationId)
+      .executeTakeFirst();
 
-    const watermarkId = conv?.observed_up_to_message_id
+    const watermarkId = conv?.observed_up_to_message_id;
 
     if (watermarkId) {
      const rows = await sql<ConstructMessage>`
@@ -72,8 +77,8 @@ export class ConstructMemoryManager extends MemoryManager {
         WHERE conversation_id = ${conversationId}
           AND rowid > (SELECT rowid FROM messages WHERE id = ${watermarkId})
         ORDER BY rowid ASC
-      `.execute(this.db)
-      return rows.rows
+      `.execute(this.db);
+      return rows.rows;
     }
 
    const rows = await sql<ConstructMessage>`
@@ -81,8 +86,8 @@ export class ConstructMemoryManager extends MemoryManager {
       FROM messages
       WHERE conversation_id = ${conversationId}
       ORDER BY created_at ASC
-    `.execute(this.db)
-    return rows.rows
+    `.execute(this.db);
+    return rows.rows;
   }
 
   override async getActiveObservations(conversationId: string): Promise<Observation[]> {
@@ -93,34 +98,34 @@ export class ConstructMemoryManager extends MemoryManager {
         AND superseded_at IS NULL
         AND (expires_at IS NULL OR expires_at > datetime('now'))
       ORDER BY created_at ASC
-    `.execute(this.db)
+    `.execute(this.db);
 
     return rows.rows.map((r) => ({
       id: r.id as string,
       conversation_id: r.conversation_id as string,
       content: r.content as string,
-      priority: r.priority as Observation['priority'],
+      priority: r.priority as Observation["priority"],
       observation_date: r.observation_date as string,
       source_message_ids: r.source_message_ids ? JSON.parse(r.source_message_ids as string) : [],
       token_count: (r.token_count as number) ?? 0,
       generation: (r.generation as number) ?? 0,
       superseded_at: r.superseded_at as string | null,
       created_at: r.created_at as string,
-    }))
+    }));
   }
 
   protected override async storeObservation(
     conversationId: string,
-    obs: ObserverOutput['observations'][0],
+    obs: ObserverOutput["observations"][0],
     messageIds: string[] | null,
     generation: number,
   ): Promise<void> {
     // Use raw SQL to write expires_at (construct-specific column)
-    const id = nanoid()
-    const expiresAt = (obs.expires_at as string | undefined) ?? null
+    const id = nanoid();
+    const expiresAt = (obs.expires_at as string | undefined) ?? null;
     await sql`
       INSERT INTO observations (id, conversation_id, content, priority, observation_date, source_message_ids, token_count, generation, expires_at)
       VALUES (${id}, ${conversationId}, ${obs.content}, ${obs.priority}, ${obs.observation_date}, ${messageIds ? JSON.stringify(messageIds) : null}, ${estimateTokens(obs.content)}, ${generation}, ${expiresAt})
-    `.execute(this.db)
+    `.execute(this.db);
   }
 }
diff --git a/apps/construct/src/scheduler/__tests__/scheduler.test.ts b/apps/construct/src/scheduler/__tests__/scheduler.test.ts
index e925b6a..38c0217 100644
--- a/apps/construct/src/scheduler/__tests__/scheduler.test.ts
+++ b/apps/construct/src/scheduler/__tests__/scheduler.test.ts
@@ -1,361 +1,373 @@
-import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
-import type { Kysely } from 'kysely'
-import type { Database, Schedule } from '../../db/schema.js'
-import { setupDb } from '../../__tests__/fixtures.js'
-import { registerJob, stopScheduler } from '../index.js'
-import { listSchedules, cancelSchedule, markScheduleRun } from '../../db/queries.js'
-import { nanoid } from 'nanoid'
-import type { AgentResponse } from '../../agent.js'
-
-const mockProcessMessage = vi.fn<(...args: unknown[]) => Promise<AgentResponse>>()
-vi.mock('../../agent.js', () => ({
+import { describe, it, expect, beforeEach, afterEach, vi } from "vitest";
+import type { Kysely } from "kysely";
+import type { Database, Schedule } from "../../db/schema.js";
+import { setupDb } from "../../__tests__/fixtures.js";
+import { registerJob, stopScheduler } from "../index.js";
+import { listSchedules, cancelSchedule, markScheduleRun } from "../../db/queries.js";
+import { nanoid } from "nanoid";
+import type { AgentResponse } from "../../agent.js";
+
+const mockProcessMessage = vi.fn<(...args: unknown[]) => Promise<AgentResponse>>();
+vi.mock("../../agent.js", () => ({
   processMessage: (...args: unknown[]) => mockProcessMessage(...args),
-}))
+}));
 
-let db: Kysely<Database>
+let db: Kysely<Database>;
 
 /** Insert a schedule directly into the DB. */
 async function insertSchedule(
-  db: Kysely<Database>,
-  overrides: Partial<Schedule> & Pick<Schedule, 'message' | 'chat_id'>,
+  database: Kysely<Database>,
+  overrides: Partial<Schedule> & Pick<Schedule, "message" | "chat_id">,
 ): Promise<Schedule> {
-  const id = overrides.id ?? nanoid()
-  await db
-    .insertInto('schedules')
+  const id = overrides.id ?? nanoid();
+  await database
+    .insertInto("schedules")
     .values({
       id,
-      description: overrides.description ?? 'test schedule',
+      description: overrides.description ?? "test schedule",
       cron_expression: overrides.cron_expression ?? null,
       run_at: overrides.run_at ?? null,
       message: overrides.message,
       prompt: overrides.prompt ?? null,
       chat_id: overrides.chat_id,
     })
-    .execute()
+    .execute();
 
-  return db
-    .selectFrom('schedules')
+  return database
+    .selectFrom("schedules")
     .selectAll()
-    .where('id', '=', id)
-    .executeTakeFirstOrThrow()
+    .where("id", "=", id)
+    .executeTakeFirstOrThrow();
 }
 
 /** Minimal mock bot with a spied sendMessage. */
 function makeMockBot() {
-  const sendMessage = vi.fn().mockResolvedValue({ message_id: 999 })
+  const sendMessage = vi.fn().mockResolvedValue({ message_id: 999 });
   return {
     api: { sendMessage },
     // registerJob only uses bot.api.sendMessage
-  } as any
+  } as any;
 }
 
-describe('scheduler', () => {
+describe("scheduler", () => {
   beforeEach(async () => {
-    db = await setupDb()
-    mockProcessMessage.mockReset()
-  })
+    db = await setupDb();
+    mockProcessMessage.mockReset();
+  });
 
   afterEach(async () => {
-    stopScheduler()
-    await db.destroy()
-  })
+    stopScheduler();
+    await db.destroy();
+  });
 
   // ── registerJob: past-due one-shot ─────────────────────────────
 
-  it('fires immediately for past-due one-shot schedule', async () => {
-    const bot = makeMockBot()
+  it("fires immediately for past-due one-shot schedule", async () => {
+    const bot = makeMockBot();
     mockProcessMessage.mockResolvedValueOnce({
-      text: 'overdue reminder',
+      text: "overdue reminder",
       toolCalls: [],
-    })
+    });
 
     const schedule = await insertSchedule(db, {
-      run_at: '2020-01-01T00:00:00Z',
-      message: 'overdue reminder',
-      chat_id: '12345',
-    })
+      run_at: "2020-01-01T00:00:00Z",
+      message: "overdue reminder",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, schedule, 'UTC')
+    registerJob(db, bot, schedule, "UTC");
 
     // Past-due path is fire-and-forget (.then()), wait for microtask flush
-    await new Promise((r) => setTimeout(r, 100))
+    await new Promise((r) => setTimeout(r, 100));
 
     // All schedules now go through the agent pipeline with framed instruction
     expect(mockProcessMessage).toHaveBeenCalledWith(
       db,
-      expect.stringContaining('overdue reminder'),
+      expect.stringContaining("overdue reminder"),
       expect.objectContaining({
-        source: 'scheduler',
-        chatId: '12345',
+        source: "scheduler",
+        chatId: "12345",
       }),
-    )
+    );
 
     // Schedule should be cancelled after firing
-    const schedules = await listSchedules(db, true)
-    const found = schedules.find((s) => s.id === schedule.id)
-    expect(found).toBeUndefined() // cancelled = active=0, so filtered out
-  })
+    const schedules = await listSchedules(db, true);
+    const found = schedules.find((s) => s.id === schedule.id);
+    expect(found).toBeUndefined(); // cancelled = active=0, so filtered out
+  });
 
-  it('marks last_run_at after firing past-due schedule', async () => {
-    const bot = makeMockBot()
+  it("marks last_run_at after firing past-due schedule", async () => {
+    const bot = makeMockBot();
     const schedule = await insertSchedule(db, {
-      run_at: '2020-01-01T00:00:00Z',
-      message: 'check last_run',
-      chat_id: '12345',
-    })
+      run_at: "2020-01-01T00:00:00Z",
+      message: "check last_run",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, schedule, 'UTC')
-    await new Promise((r) => setTimeout(r, 50))
+    registerJob(db, bot, schedule, "UTC");
+    await new Promise((r) => setTimeout(r, 50));
 
     const row = await db
-      .selectFrom('schedules')
-      .select('last_run_at')
-      .where('id', '=', schedule.id)
-      .executeTakeFirstOrThrow()
+      .selectFrom("schedules")
+      .select("last_run_at")
+      .where("id", "=", schedule.id)
+      .executeTakeFirstOrThrow();
 
-    expect(row.last_run_at).toBeTruthy()
-  })
+    expect(row.last_run_at).toBeTruthy();
+  });
 
   // ── registerJob: future one-shot ───────────────────────────────
 
-  it('does NOT fire for future one-shot schedule', async () => {
-    const bot = makeMockBot()
+  it("does NOT fire for future one-shot schedule", async () => {
+    const bot = makeMockBot();
     const schedule = await insertSchedule(db, {
-      run_at: '2099-12-31T23:59:59Z',
-      message: 'future reminder',
-      chat_id: '12345',
-    })
+      run_at: "2099-12-31T23:59:59Z",
+      message: "future reminder",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, schedule, 'UTC')
-    await new Promise((r) => setTimeout(r, 50))
+    registerJob(db, bot, schedule, "UTC");
+    await new Promise((r) => setTimeout(r, 50));
 
-    expect(bot.api.sendMessage).not.toHaveBeenCalled()
+    expect(bot.api.sendMessage).not.toHaveBeenCalled();
 
     // Schedule should still be active
-    const schedules = await listSchedules(db, true)
-    expect(schedules.find((s) => s.id === schedule.id)).toBeDefined()
-  })
+    const schedules = await listSchedules(db, true);
+    expect(schedules.find((s) => s.id === schedule.id)).toBeDefined();
+  });
 
   // ── registerJob: cron ──────────────────────────────────────────
 
-  it('registers cron job without immediate fire', async () => {
-    const bot = makeMockBot()
+  it("registers cron job without immediate fire", async () => {
+    const bot = makeMockBot();
     const schedule = await insertSchedule(db, {
-      cron_expression: '0 0 * * *', // daily at midnight
-      message: 'daily check',
-      chat_id: '12345',
-    })
+      cron_expression: "0 0 * * *", // daily at midnight
+      message: "daily check",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, schedule, 'UTC')
-    await new Promise((r) => setTimeout(r, 50))
+    registerJob(db, bot, schedule, "UTC");
+    await new Promise((r) => setTimeout(r, 50));
 
     // Cron shouldn't fire immediately
-    expect(bot.api.sendMessage).not.toHaveBeenCalled()
-  })
+    expect(bot.api.sendMessage).not.toHaveBeenCalled();
+  });
 
   // ── registerJob: deduplication ─────────────────────────────────
 
-  it('ignores duplicate registerJob calls', async () => {
-    const bot = makeMockBot()
+  it("ignores duplicate registerJob calls", async () => {
+    const bot = makeMockBot();
     const schedule = await insertSchedule(db, {
-      run_at: '2099-12-31T23:59:59Z',
-      message: 'dedup test',
-      chat_id: '12345',
-    })
+      run_at: "2099-12-31T23:59:59Z",
+      message: "dedup test",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, schedule, 'UTC')
-    registerJob(db, bot, schedule, 'UTC')
-    registerJob(db, bot, schedule, 'UTC')
+    registerJob(db, bot, schedule, "UTC");
+    registerJob(db, bot, schedule, "UTC");
+    registerJob(db, bot, schedule, "UTC");
 
     // No error, no duplicate side effects
-    expect(bot.api.sendMessage).not.toHaveBeenCalled()
-  })
+    expect(bot.api.sendMessage).not.toHaveBeenCalled();
+  });
 
   // ── stopScheduler cleanup ──────────────────────────────────────
 
-  it('cleans up all jobs on stopScheduler', async () => {
-    const bot = makeMockBot()
+  it("cleans up all jobs on stopScheduler", async () => {
+    const bot = makeMockBot();
 
     const s1 = await insertSchedule(db, {
-      cron_expression: '0 * * * *',
-      message: 'hourly',
-      chat_id: '12345',
-    })
+      cron_expression: "0 * * * *",
+      message: "hourly",
+      chat_id: "12345",
+    });
     const s2 = await insertSchedule(db, {
-      run_at: '2099-06-01T00:00:00Z',
-      message: 'future',
-      chat_id: '12345',
-    })
+      run_at: "2099-06-01T00:00:00Z",
+      message: "future",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, schedule(s1), 'UTC')
-    registerJob(db, bot, schedule(s2), 'UTC')
+    registerJob(db, bot, asSchedule(s1), "UTC");
+    registerJob(db, bot, asSchedule(s2), "UTC");
 
     // stopScheduler called in afterEach — verify it doesn't throw
-    stopScheduler()
+    stopScheduler();
 
     // After stop, registering the same ID should work again (map was cleared)
-    registerJob(db, bot, schedule(s1), 'UTC')
-  })
+    registerJob(db, bot, asSchedule(s1), "UTC");
+  });
 
   // ── DB query helpers ───────────────────────────────────────────
 
-  it('listSchedules returns only active schedules', async () => {
-    await insertSchedule(db, { message: 'active', chat_id: '1', run_at: '2099-01-01T00:00:00Z' })
-    const s2 = await insertSchedule(db, { message: 'cancelled', chat_id: '1', run_at: '2099-01-01T00:00:00Z' })
+  it("listSchedules returns only active schedules", async () => {
+    await insertSchedule(db, { message: "active", chat_id: "1", run_at: "2099-01-01T00:00:00Z" });
+    const s2 = await insertSchedule(db, {
+      message: "cancelled",
+      chat_id: "1",
+      run_at: "2099-01-01T00:00:00Z",
+    });
 
-    await cancelSchedule(db, s2.id)
+    await cancelSchedule(db, s2.id);
 
-    const active = await listSchedules(db, true)
-    expect(active).toHaveLength(1)
-    expect(active[0].message).toBe('active')
+    const active = await listSchedules(db, true);
+    expect(active).toHaveLength(1);
+    expect(active[0].message).toBe("active");
 
-    const all = await listSchedules(db, false)
-    expect(all).toHaveLength(2)
-  })
+    const all = await listSchedules(db, false);
+    expect(all).toHaveLength(2);
+  });
 
-  it('cancelSchedule is idempotent', async () => {
-    const s = await insertSchedule(db, { message: 'x', chat_id: '1', run_at: '2099-01-01T00:00:00Z' })
+  it("cancelSchedule is idempotent", async () => {
+    const s = await insertSchedule(db, {
+      message: "x",
+      chat_id: "1",
+      run_at: "2099-01-01T00:00:00Z",
+    });
 
-    const first = await cancelSchedule(db, s.id)
-    expect(first).toBe(true)
+    const first = await cancelSchedule(db, s.id);
+    expect(first).toBe(true);
 
-    const second = await cancelSchedule(db, s.id)
-    expect(second).toBe(false)
-  })
+    const second = await cancelSchedule(db, s.id);
+    expect(second).toBe(false);
+  });
 
-  it('markScheduleRun sets last_run_at', async () => {
-    const s = await insertSchedule(db, { message: 'x', chat_id: '1', run_at: '2099-01-01T00:00:00Z' })
-    expect(s.last_run_at).toBeNull()
+  it("markScheduleRun sets last_run_at", async () => {
+    const s = await insertSchedule(db, {
+      message: "x",
+      chat_id: "1",
+      run_at: "2099-01-01T00:00:00Z",
+    });
+    expect(s.last_run_at).toBeNull();
 
-    await markScheduleRun(db, s.id)
+    await markScheduleRun(db, s.id);
 
     const updated = await db
-      .selectFrom('schedules')
-      .select('last_run_at')
-      .where('id', '=', s.id)
-      .executeTakeFirstOrThrow()
+      .selectFrom("schedules")
+      .select("last_run_at")
+      .where("id", "=", s.id)
+      .executeTakeFirstOrThrow();
 
-    expect(updated.last_run_at).toBeTruthy()
-  })
+    expect(updated.last_run_at).toBeTruthy();
+  });
 
   // ── Agent-prompt schedules ──────────────────────────────────────
 
-  it('fires agent prompt for schedule with prompt column', async () => {
-    const bot = makeMockBot()
+  it("fires agent prompt for schedule with prompt column", async () => {
+    const bot = makeMockBot();
     mockProcessMessage.mockResolvedValueOnce({
-      text: 'Weather is cold, wear a jacket!',
+      text: "Weather is cold, wear a jacket!",
       toolCalls: [],
-    })
+    });
 
     const s = await insertSchedule(db, {
-      run_at: '2020-01-01T00:00:00Z',
-      message: 'Check weather',
-      prompt: 'Check the weather and alert if cold',
-      chat_id: '12345',
-    })
+      run_at: "2020-01-01T00:00:00Z",
+      message: "Check weather",
+      prompt: "Check the weather and alert if cold",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, s, 'UTC')
-    await new Promise((r) => setTimeout(r, 100))
+    registerJob(db, bot, s, "UTC");
+    await new Promise((r) => setTimeout(r, 100));
 
     // processMessage should have been called with the framed prompt
     expect(mockProcessMessage).toHaveBeenCalledWith(
       db,
-      expect.stringContaining('Check the weather and alert if cold'),
+      expect.stringContaining("Check the weather and alert if cold"),
       expect.objectContaining({
-        source: 'scheduler',
+        source: "scheduler",
         externalId: `schedule:${s.id}`,
-        chatId: '12345',
+        chatId: "12345",
       }),
-    )
+    );
 
     // Non-empty response should be sent to Telegram
-    expect(bot.api.sendMessage).toHaveBeenCalled()
-    const call = bot.api.sendMessage.mock.calls[0]
-    expect(call[0]).toBe('12345')
-    expect(call[1]).toContain('Weather is cold')
-  })
-
-  it('stays silent when agent returns empty response', async () => {
-    const bot = makeMockBot()
+    expect(bot.api.sendMessage).toHaveBeenCalled();
+    const call = bot.api.sendMessage.mock.calls[0];
+    expect(call[0]).toBe("12345");
+    expect(call[1]).toContain("Weather is cold");
+  });
+
+  it("stays silent when agent returns empty response", async () => {
+    const bot = makeMockBot();
     mockProcessMessage.mockResolvedValueOnce({
-      text: '   ',
+      text: "   ",
       toolCalls: [],
-    })
+    });
 
     const s = await insertSchedule(db, {
-      run_at: '2020-01-01T00:00:00Z',
-      message: 'Background task',
-      prompt: 'Run system updates silently',
-      chat_id: '12345',
-    })
+      run_at: "2020-01-01T00:00:00Z",
+      message: "Background task",
+      prompt: "Run system updates silently",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, s, 'UTC')
-    await new Promise((r) => setTimeout(r, 100))
+    registerJob(db, bot, s, "UTC");
+    await new Promise((r) => setTimeout(r, 100));
 
-    expect(mockProcessMessage).toHaveBeenCalled()
+    expect(mockProcessMessage).toHaveBeenCalled();
     // Empty response → no Telegram message
-    expect(bot.api.sendMessage).not.toHaveBeenCalled()
+    expect(bot.api.sendMessage).not.toHaveBeenCalled();
 
     // But last_run_at should still be set
     const row = await db
-      .selectFrom('schedules')
-      .select('last_run_at')
-      .where('id', '=', s.id)
-      .executeTakeFirstOrThrow()
-    expect(row.last_run_at).toBeTruthy()
-  })
+      .selectFrom("schedules")
+      .select("last_run_at")
+      .where("id", "=", s.id)
+      .executeTakeFirstOrThrow();
+    expect(row.last_run_at).toBeTruthy();
+  });
 
-  it('does not crash scheduler when agent throws', async () => {
-    const bot = makeMockBot()
-    mockProcessMessage.mockRejectedValueOnce(new Error('LLM API down'))
+  it("does not crash scheduler when agent throws", async () => {
+    const bot = makeMockBot();
+    mockProcessMessage.mockRejectedValueOnce(new Error("LLM API down"));
 
     const s = await insertSchedule(db, {
-      run_at: '2020-01-01T00:00:00Z',
-      message: 'Risky task',
-      prompt: 'Do something that fails',
-      chat_id: '12345',
-    })
+      run_at: "2020-01-01T00:00:00Z",
+      message: "Risky task",
+      prompt: "Do something that fails",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, s, 'UTC')
-    await new Promise((r) => setTimeout(r, 100))
+    registerJob(db, bot, s, "UTC");
+    await new Promise((r) => setTimeout(r, 100));
 
     // Should not throw, sendMessage should not be called
-    expect(bot.api.sendMessage).not.toHaveBeenCalled()
-  })
+    expect(bot.api.sendMessage).not.toHaveBeenCalled();
+  });
 
-  it('routes legacy schedules (no prompt) through agent using message as instruction', async () => {
-    const bot = makeMockBot()
+  it("routes legacy schedules (no prompt) through agent using message as instruction", async () => {
+    const bot = makeMockBot();
     mockProcessMessage.mockResolvedValueOnce({
-      text: 'Here is your plain reminder!',
+      text: "Here is your plain reminder!",
       toolCalls: [],
-    })
+    });
 
     const s = await insertSchedule(db, {
-      run_at: '2020-01-01T00:00:00Z',
-      message: 'plain reminder',
-      chat_id: '12345',
-    })
+      run_at: "2020-01-01T00:00:00Z",
+      message: "plain reminder",
+      chat_id: "12345",
+    });
 
-    registerJob(db, bot, s, 'UTC')
-    await new Promise((r) => setTimeout(r, 100))
+    registerJob(db, bot, s, "UTC");
+    await new Promise((r) => setTimeout(r, 100));
 
     // All schedules go through agent — legacy message becomes the framed instruction
     expect(mockProcessMessage).toHaveBeenCalledWith(
       db,
-      expect.stringContaining('plain reminder'),
+      expect.stringContaining("plain reminder"),
       expect.objectContaining({
-        source: 'scheduler',
-        chatId: '12345',
+        source: "scheduler",
+        chatId: "12345",
       }),
-    )
+    );
     // Agent response sent to Telegram
-    expect(bot.api.sendMessage).toHaveBeenCalled()
-    const call = bot.api.sendMessage.mock.calls[0]
-    expect(call[1]).toContain('plain reminder')
-  })
-})
+    expect(bot.api.sendMessage).toHaveBeenCalled();
+    const call = bot.api.sendMessage.mock.calls[0];
+    expect(call[1]).toContain("plain reminder");
+  });
+});
 
 /** Identity helper — makes the test read better. */
-function schedule(s: Schedule): Schedule {
-  return s
+function asSchedule(s: Schedule): Schedule {
+  return s;
 }
diff --git a/apps/construct/src/scheduler/index.ts b/apps/construct/src/scheduler/index.ts
index b101818..1eaad53 100644
--- a/apps/construct/src/scheduler/index.ts
+++ b/apps/construct/src/scheduler/index.ts
@@ -1,157 +1,148 @@
-import { Cron } from 'croner'
-import type { Kysely } from 'kysely'
-import type { Bot } from 'grammy'
-import { listSchedules, markScheduleRun, cancelSchedule, getOrCreateConversation, saveMessage } from '../db/queries.js'
-import { schedulerLog } from '../logger.js'
-import type { Database, Schedule } from '../db/schema.js'
-import { processMessage } from '../agent.js'
-import { markdownToTelegramHtml } from '../telegram/format.js'
-
-const activeJobs = new Map<string, Cron>()
-let syncInterval: ReturnType<typeof setInterval> | null = null
+import { Cron } from "croner";
+import type { Kysely } from "kysely";
+import type { Bot } from "grammy";
+import {
+  listSchedules,
+  markScheduleRun,
+  cancelSchedule,
+  getOrCreateConversation,
+  saveMessage,
+} from "../db/queries.js";
+import { schedulerLog } from "../logger.js";
+import type { Database, Schedule } from "../db/schema.js";
+import { processMessage } from "../agent.js";
+import { markdownToTelegramHtml } from "../telegram/format.js";
+
+const activeJobs = new Map<string, Cron>();
+let syncInterval: ReturnType<typeof setInterval> | null = null;
 
 export async function startScheduler(db: Kysely<Database>, bot: Bot, timezone: string) {
-  schedulerLog.info`Starting scheduler (timezone: ${timezone})`
+  schedulerLog.info`Starting scheduler (timezone: ${timezone})`;
 
-  const schedules = await listSchedules(db, true)
+  const schedules = await listSchedules(db, true);
   for (const schedule of schedules) {
-    registerJob(db, bot, schedule, timezone)
+    registerJob(db, bot, schedule, timezone);
   }
 
-  schedulerLog.info`Loaded ${schedules.length} active schedules`
+  schedulerLog.info`Loaded ${schedules.length} active schedules`;
 
   // Poll for new schedules every 30 seconds
   syncInterval = setInterval(async () => {
-    await syncSchedules(db, bot, timezone)
-  }, 30_000)
+    await syncSchedules(db, bot, timezone);
+  }, 30_000);
 }
 
-export function registerJob(
-  db: Kysely<Database>,
-  bot: Bot,
-  schedule: Schedule,
-  timezone: string,
-) {
-  if (activeJobs.has(schedule.id)) return
+export function registerJob(db: Kysely<Database>, bot: Bot, schedule: Schedule, timezone: string) {
+  if (activeJobs.has(schedule.id)) return;
 
   if (schedule.cron_expression) {
-    schedulerLog.info`Registering cron job [${schedule.id}]: ${schedule.description} (${schedule.cron_expression}) [${timezone}]`
+    schedulerLog.info`Registering cron job [${schedule.id}]: ${schedule.description} (${schedule.cron_expression}) [${timezone}]`;
     const job = new Cron(schedule.cron_expression, { timezone }, async () => {
-      await fireSchedule(db, bot, schedule)
-    })
-    activeJobs.set(schedule.id, job)
+      await fireSchedule(db, bot, schedule);
+    });
+    activeJobs.set(schedule.id, job);
   } else if (schedule.run_at) {
-    schedulerLog.info`Registering one-shot job [${schedule.id}]: ${schedule.description} at ${schedule.run_at} [${timezone}]`
+    schedulerLog.info`Registering one-shot job [${schedule.id}]: ${schedule.description} at ${schedule.run_at} [${timezone}]`;
     const job = new Cron(schedule.run_at, { timezone }, async () => {
-      await fireSchedule(db, bot, schedule)
-      await cancelSchedule(db, schedule.id)
-      activeJobs.delete(schedule.id)
-    })
+      await fireSchedule(db, bot, schedule);
+      await cancelSchedule(db, schedule.id);
+      activeJobs.delete(schedule.id);
+    });
 
     // If nextRun is null, the time is in the past — fire immediately
     if (job.nextRun() === null) {
-      job.stop()
-      schedulerLog.info`Schedule [${schedule.id}] is past due, firing immediately`
-      fireSchedule(db, bot, schedule).then(() =>
-        cancelSchedule(db, schedule.id),
-      )
-      return
+      job.stop();
+      schedulerLog.info`Schedule [${schedule.id}] is past due, firing immediately`;
+      fireSchedule(db, bot, schedule).then(() => cancelSchedule(db, schedule.id));
+      return;
     }
 
-    activeJobs.set(schedule.id, job)
+    activeJobs.set(schedule.id, job);
   }
 }
 
-async function fireSchedule(
-  db: Kysely<Database>,
-  bot: Bot,
-  schedule: Schedule,
-) {
-  await fireAgentSchedule(db, bot, schedule)
+async function fireSchedule(db: Kysely<Database>, bot: Bot, schedule: Schedule) {
+  await fireAgentSchedule(db, bot, schedule);
 }
 
-async function fireAgentSchedule(
-  db: Kysely<Database>,
-  bot: Bot,
-  schedule: Schedule,
-) {
-  const instruction = schedule.prompt ?? schedule.message
-  schedulerLog.info`Firing agent schedule [${schedule.id}]: ${schedule.description} → instruction: "${instruction}"`
+async function fireAgentSchedule(db: Kysely<Database>, bot: Bot, schedule: Schedule) {
+  const instruction = schedule.prompt ?? schedule.message;
+  schedulerLog.info`Firing agent schedule [${schedule.id}]: ${schedule.description} → instruction: "${instruction}"`;
   try {
     // Frame the instruction so the agent knows this schedule is firing NOW
     // and should execute/deliver it, not re-schedule it.
     const framedInstruction = [
       `[Scheduled task "${schedule.description}" is firing now. Execute the instruction — do not re-schedule it.]`,
       instruction,
-    ].join('\n')
+    ].join("\n");
 
     const response = await processMessage(db, framedInstruction, {
-      source: 'scheduler',
+      source: "scheduler",
       externalId: `schedule:${schedule.id}`,
       chatId: schedule.chat_id,
-    })
-    await markScheduleRun(db, schedule.id)
+    });
+    await markScheduleRun(db, schedule.id);
 
-    const text = response.text.trim()
+    const text = response.text.trim();
     if (text) {
       // Send agent response to Telegram
       try {
         await bot.api.sendMessage(schedule.chat_id, markdownToTelegramHtml(text), {
-          parse_mode: 'HTML',
-        })
+          parse_mode: "HTML",
+        });
       } catch {
         // HTML parse failed — send as plain text
-        await bot.api.sendMessage(schedule.chat_id, text)
+        await bot.api.sendMessage(schedule.chat_id, text);
       }
 
       // Mirror to user's telegram conversation history
       try {
-        const conversationId = await getOrCreateConversation(db, 'telegram', schedule.chat_id)
+        const conversationId = await getOrCreateConversation(db, "telegram", schedule.chat_id);
         await saveMessage(db, {
           conversation_id: conversationId,
-          role: 'assistant',
+          role: "assistant",
           content: `[Scheduled: ${schedule.description}] ${text}`,
-        })
+        });
       } catch (saveErr) {
-        schedulerLog.error`Failed to save agent schedule message to history [${schedule.id}]: ${saveErr}`
+        schedulerLog.error`Failed to save agent schedule message to history [${schedule.id}]: ${saveErr}`;
       }
 
-      schedulerLog.info`Agent schedule [${schedule.id}] fired, response delivered (${text.length} chars)`
+      schedulerLog.info`Agent schedule [${schedule.id}] fired, response delivered (${text.length} chars)`;
     } else {
-      schedulerLog.info`Agent schedule [${schedule.id}] fired silently (empty response)`
+      schedulerLog.info`Agent schedule [${schedule.id}] fired silently (empty response)`;
     }
   } catch (err) {
-    schedulerLog.error`Agent schedule [${schedule.id}] failed: ${err}`
+    schedulerLog.error`Agent schedule [${schedule.id}] failed: ${err}`;
   }
 }
 
 async function syncSchedules(db: Kysely<Database>, bot: Bot, timezone: string) {
-  const schedules = await listSchedules(db, true)
-  const activeIds = new Set(schedules.map((s) => s.id))
+  const schedules = await listSchedules(db, true);
+  const activeIds = new Set(schedules.map((s) => s.id));
 
   for (const schedule of schedules) {
     if (!activeJobs.has(schedule.id)) {
-      registerJob(db, bot, schedule, timezone)
+      registerJob(db, bot, schedule, timezone);
     }
   }
 
   for (const [id, job] of activeJobs) {
     if (!activeIds.has(id)) {
-      schedulerLog.info`Removing cancelled schedule [${id}]`
-      job.stop()
-      activeJobs.delete(id)
+      schedulerLog.info`Removing cancelled schedule [${id}]`;
+      job.stop();
+      activeJobs.delete(id);
     }
   }
 }
 
 export function stopScheduler() {
   if (syncInterval) {
-    clearInterval(syncInterval)
-    syncInterval = null
+    clearInterval(syncInterval);
+    syncInterval = null;
   }
   for (const [id, job] of activeJobs) {
-    job.stop()
-    activeJobs.delete(id)
+    job.stop();
+    activeJobs.delete(id);
   }
-  schedulerLog.info`Scheduler stopped`
+  schedulerLog.info`Scheduler stopped`;
 }
diff --git a/apps/construct/src/system-prompt.ts b/apps/construct/src/system-prompt.ts
index 1e6d3de..964faf2 100644
--- a/apps/construct/src/system-prompt.ts
+++ b/apps/construct/src/system-prompt.ts
@@ -1,4 +1,4 @@
-import type { Skill } from './extensions/types.js'
+import type { Skill } from "./extensions/types.js";
 
 /**
  * Base system prompt — static part that enables prompt caching.
@@ -56,22 +56,22 @@ SOUL.md, IDENTITY.md, and USER.md are living documents — update them as the re
 - Extensions are for integrations, experiments, and personal workflows
 - After creating or editing extension files, call extension_reload to activate changes.
 - Native source (src/) is for core capabilities needing deep system access
-`
+`;
 
 /** Identity files for system prompt injection */
 interface IdentityInput {
-  soul?: string | null
-  identity?: string | null
-  user?: string | null
+  soul?: string | null;
+  identity?: string | null;
+  user?: string | null;
 }
 
 /** Cached system prompt (base + identity files) */
-let cachedPrompt: string | null = null
-let cachedKey: string | null = null
+let cachedPrompt: string | null = null;
+let cachedKey: string | null = null;
 
 function identityCacheKey(id?: IdentityInput | null): string {
-  if (!id) return ''
-  return `${id.soul ?? ''}|${id.identity ?? ''}|${id.user ?? ''}`
+  if (!id) return "";
+  return `${id.soul ?? ""}|${id.identity ?? ""}|${id.user ?? ""}`;
 }
 
 /**
@@ -79,49 +79,49 @@ function identityCacheKey(id?: IdentityInput | null): string {
  * Caches the result until invalidated.
  */
 export function getSystemPrompt(identity?: IdentityInput | null): string {
-  const key = identityCacheKey(identity)
+  const key = identityCacheKey(identity);
   if (cachedPrompt !== null && key === cachedKey) {
-    return cachedPrompt
+    return cachedPrompt;
   }
 
-  cachedKey = key
-  let prompt = BASE_SYSTEM_PROMPT
+  cachedKey = key;
+  let prompt = BASE_SYSTEM_PROMPT;
 
   if (identity?.identity) {
-    prompt += `\n## Identity\n${identity.identity}\n`
+    prompt += `\n## Identity\n${identity.identity}\n`;
   }
   if (identity?.user) {
-    prompt += `\n## User\n${identity.user}\n`
+    prompt += `\n## User\n${identity.user}\n`;
   }
   if (identity?.soul) {
-    prompt += `\n## Soul\n${identity.soul}\n`
+    prompt += `\n## Soul\n${identity.soul}\n`;
   }
 
-  cachedPrompt = prompt
-  return cachedPrompt
+  cachedPrompt = prompt;
+  return cachedPrompt;
 }
 
 /** Invalidate the cached system prompt. Called on extension reload. */
 export function invalidateSystemPromptCache(): void {
-  cachedPrompt = null
-  cachedKey = null
+  cachedPrompt = null;
+  cachedKey = null;
 }
 
 /**
  * Format current date+time in the configured timezone using Intl (built-in, no deps).
  */
 export function formatNow(timezone: string): string {
-  const now = new Date()
-  return now.toLocaleString('en-US', {
+  const now = new Date();
+  return now.toLocaleString("en-US", {
     timeZone: timezone,
-    weekday: 'long',
-    year: 'numeric',
-    month: 'long',
-    day: 'numeric',
-    hour: 'numeric',
-    minute: '2-digit',
+    weekday: "long",
+    year: "numeric",
+    month: "long",
+    day: "numeric",
+    hour: "numeric",
+    minute: "2-digit",
     hour12: true,
-  })
+  });
 }
 
 /**
@@ -131,66 +131,68 @@ export function formatNow(timezone: string): string {
  * pattern recognition and selected skills.
  */
 export function buildContextPreamble(context: {
-  timezone: string
-  source: string
-  dev?: boolean
-  observations?: string
-  recentMemories?: Array<{ content: string; category: string; created_at: string }>
-  relevantMemories?: Array<{ content: string; category: string; score?: number }>
-  skills?: Skill[]
-  replyContext?: string
+  timezone: string;
+  source: string;
+  dev?: boolean;
+  observations?: string;
+  recentMemories?: Array<{ content: string; category: string; created_at: string }>;
+  relevantMemories?: Array<{ content: string; category: string; score?: number }>;
+  skills?: Skill[];
+  replyContext?: string;
 }): string {
-  const now = new Date()
-  const time = now.toLocaleString('en-US', {
+  const now = new Date();
+  const time = now.toLocaleString("en-US", {
     timeZone: context.timezone,
-    hour: 'numeric',
-    minute: '2-digit',
+    hour: "numeric",
+    minute: "2-digit",
     hour12: true,
-  })
-  const date = now.toLocaleString('en-US', {
+  });
+  const date = now.toLocaleString("en-US", {
     timeZone: context.timezone,
-    weekday: 'long',
-    year: 'numeric',
-    month: 'long',
-    day: 'numeric',
-  })
-  const envLabel = context.dev ? ' | DEV MODE' : ''
-  let preamble = `[Current time: ${time} | ${date} | ${context.timezone} | ${context.source}${envLabel}]\n`
+    weekday: "long",
+    year: "numeric",
+    month: "long",
+    day: "numeric",
+  });
+  const envLabel = context.dev ? " | DEV MODE" : "";
+  let preamble = `[Current time: ${time} | ${date} | ${context.timezone} | ${context.source}${envLabel}]\n`;
 
   if (context.dev) {
-    preamble += '[Running in development — hot reload is active, self_deploy is disabled]\n'
+    preamble += "[Running in development — hot reload is active, self_deploy is disabled]\n";
   }
 
   if (context.observations) {
-    preamble += '\n[Conversation observations — compressed context from earlier in this conversation]\n'
-    preamble += context.observations + '\n'
+    preamble +=
+      "\n[Conversation observations — compressed context from earlier in this conversation]\n";
+    preamble += context.observations + "\n";
   }
 
   if (context.recentMemories && context.recentMemories.length > 0) {
-    preamble += '\n[Recent memories — use these for context, pattern recognition, and continuity]\n'
+    preamble +=
+      "\n[Recent memories — use these for context, pattern recognition, and continuity]\n";
     for (const m of context.recentMemories) {
-      preamble += `- (${m.category}) ${m.content}\n`
+      preamble += `- (${m.category}) ${m.content}\n`;
     }
   }
 
   if (context.relevantMemories && context.relevantMemories.length > 0) {
-    preamble += '\n[Potentially relevant memories]\n'
+    preamble += "\n[Potentially relevant memories]\n";
     for (const m of context.relevantMemories) {
-      const score = m.score !== undefined ? ` (${(m.score * 100).toFixed(0)}% match)` : ''
-      preamble += `- (${m.category}) ${m.content}${score}\n`
+      const score = m.score !== undefined ? ` (${(m.score * 100).toFixed(0)}% match)` : "";
+      preamble += `- (${m.category}) ${m.content}${score}\n`;
     }
   }
 
   if (context.skills && context.skills.length > 0) {
-    preamble += '\n[Active skills — follow these instructions when relevant]\n'
+    preamble += "\n[Active skills — follow these instructions when relevant]\n";
     for (const skill of context.skills) {
-      preamble += `\n### ${skill.name}\n${skill.body}\n`
+      preamble += `\n### ${skill.name}\n${skill.body}\n`;
     }
   }
 
   if (context.replyContext) {
-    preamble += `\n[Replying to: "${context.replyContext.slice(0, 300)}"]\n`
+    preamble += `\n[Replying to: "${context.replyContext.slice(0, 300)}"]\n`;
   }
 
-  return preamble + '\n'
+  return preamble + "\n";
 }
diff --git a/apps/construct/src/telegram/__tests__/format.test.ts b/apps/construct/src/telegram/__tests__/format.test.ts
index 1585219..28e7441 100644
--- a/apps/construct/src/telegram/__tests__/format.test.ts
+++ b/apps/construct/src/telegram/__tests__/format.test.ts
@@ -1,155 +1,149 @@
-import { describe, it, expect } from 'vitest'
-import { escapeHtml, markdownToTelegramHtml } from '../format.js'
+import { describe, it, expect } from "vitest";
+import { escapeHtml, markdownToTelegramHtml } from "../format.js";
 
-describe('escapeHtml', () => {
-  it('escapes & < >', () => {
-    expect(escapeHtml('a < b & c > d')).toBe('a &lt; b &amp; c &gt; d')
-  })
+describe("escapeHtml", () => {
+  it("escapes & < >", () => {
+    expect(escapeHtml("a < b & c > d")).toBe("a &lt; b &amp; c &gt; d");
+  });
 
-  it('handles multiple occurrences', () => {
-    expect(escapeHtml('<<>>')).toBe('&lt;&lt;&gt;&gt;')
-  })
+  it("handles multiple occurrences", () => {
+    expect(escapeHtml("<<>>")).toBe("&lt;&lt;&gt;&gt;");
+  });
 
-  it('passes through text without special chars', () => {
-    expect(escapeHtml('hello world')).toBe('hello world')
-  })
-})
+  it("passes through text without special chars", () => {
+    expect(escapeHtml("hello world")).toBe("hello world");
+  });
+});
 
-describe('markdownToTelegramHtml', () => {
+describe("markdownToTelegramHtml", () => {
   // --- Bold ---
 
-  it('converts **bold**', () => {
-    expect(markdownToTelegramHtml('hello **world**')).toBe('hello <b>world</b>')
-  })
+  it("converts **bold**", () => {
+    expect(markdownToTelegramHtml("hello **world**")).toBe("hello <b>world</b>");
+  });
 
   // --- Italic ---
 
-  it('converts *italic*', () => {
-    expect(markdownToTelegramHtml('hello *world*')).toBe('hello <i>world</i>')
-  })
+  it("converts *italic*", () => {
+    expect(markdownToTelegramHtml("hello *world*")).toBe("hello <i>world</i>");
+  });
 
   // --- Bold-italic ---
 
-  it('converts ***bold-italic***', () => {
-    expect(markdownToTelegramHtml('***wow***')).toBe('<b><i>wow</i></b>')
-  })
+  it("converts ***bold-italic***", () => {
+    expect(markdownToTelegramHtml("***wow***")).toBe("<b><i>wow</i></b>");
+  });
 
   // --- Headers ---
 
-  it('converts # headers to bold', () => {
-    expect(markdownToTelegramHtml('# Title')).toBe('<b>Title</b>')
-    expect(markdownToTelegramHtml('## Subtitle')).toBe('<b>Subtitle</b>')
-    expect(markdownToTelegramHtml('### Deep')).toBe('<b>Deep</b>')
-  })
+  it("converts # headers to bold", () => {
+    expect(markdownToTelegramHtml("# Title")).toBe("<b>Title</b>");
+    expect(markdownToTelegramHtml("## Subtitle")).toBe("<b>Subtitle</b>");
+    expect(markdownToTelegramHtml("### Deep")).toBe("<b>Deep</b>");
+  });
 
-  it('strips ** inside headers', () => {
-    expect(markdownToTelegramHtml('## **Bold header**')).toBe('<b>Bold header</b>')
-  })
+  it("strips ** inside headers", () => {
+    expect(markdownToTelegramHtml("## **Bold header**")).toBe("<b>Bold header</b>");
+  });
 
   // --- Bullet points ---
 
-  it('converts * bullets to •', () => {
-    expect(markdownToTelegramHtml('* item one\n* item two')).toBe(
-      '• item one\n• item two',
-    )
-  })
+  it("converts * bullets to •", () => {
+    expect(markdownToTelegramHtml("* item one\n* item two")).toBe("• item one\n• item two");
+  });
 
-  it('converts - bullets to •', () => {
-    expect(markdownToTelegramHtml('- first\n- second')).toBe(
-      '• first\n• second',
-    )
-  })
+  it("converts - bullets to •", () => {
+    expect(markdownToTelegramHtml("- first\n- second")).toBe("• first\n• second");
+  });
 
   // --- Code blocks ---
 
-  it('protects fenced code blocks from formatting', () => {
-    const input = '```js\nconst x = **bold**;\n```'
-    const result = markdownToTelegramHtml(input)
+  it("protects fenced code blocks from formatting", () => {
+    const input = "```js\nconst x = **bold**;\n```";
+    const result = markdownToTelegramHtml(input);
     // Code should be in <pre>, not have <b> tags
-    expect(result).toContain('<pre>')
-    expect(result).not.toContain('<b>')
-    expect(result).toContain('const x = **bold**;')
-  })
-
-  it('escapes HTML inside code blocks', () => {
-    const input = '```\n<script>\n```'
-    const result = markdownToTelegramHtml(input)
-    expect(result).toContain('&lt;script&gt;')
-    expect(result).not.toContain('<script>')
+    expect(result).toContain("<pre>");
+    expect(result).not.toContain("<b>");
+    expect(result).toContain("const x = **bold**;");
+  });
+
+  it("escapes HTML inside code blocks", () => {
+    const input = "```\n<script>\n```";
+    const result = markdownToTelegramHtml(input);
+    expect(result).toContain("&lt;script&gt;");
+    expect(result).not.toContain("<script>");