[integrations] Enhanced MCP server with alpha tool suite#6

Open
alanshurafa wants to merge 34 commits into main from contrib/alanshurafa/enhanced-mcp

Conversation

@alanshurafa

Summary

  • Adds a production-grade remote MCP server in integrations/enhanced-mcp/, expanding the tool surface from 4 to 14 tools
  • Includes enhanced search (semantic + full-text), full CRUD, content dedup via SHA-256 fingerprinting, automatic LLM classification, sensitivity detection, and operational monitoring
  • Schema-backed tools (graph_search, entity_detail, ops_capture_status, ops_source_monitor) gracefully degrade when optional schemas (smart-ingest, knowledge-graph) are absent
  • Original server/ is untouched — this deploys as a second remote connector
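The SHA-256 content fingerprinting mentioned above can be sketched roughly as follows (a minimal illustration; the helper name and normalization rules are assumptions, not the PR's actual code):

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: normalize whitespace before hashing so trivially
// reformatted re-captures produce the same fingerprint and dedup cleanly.
function fingerprintContent(content: string): string {
  const normalized = content.trim().replace(/\s+/g, " ");
  return createHash("sha256").update(normalized, "utf8").digest("hex");
}
```

Two captures that differ only in spacing then share a fingerprint, which is what lets a server short-circuit duplicate captures before paying for enrichment.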

Dependencies

Files

integrations/enhanced-mcp/
  README.md              — Setup guide with 5 numbered steps
  metadata.json          — Passes .github/metadata.schema.json validation
  deno.json              — Import map matching server/ versions
  index.ts               — 14-tool MCP server (Hono + StreamableHTTPTransport)
  _shared/config.ts      — Constants, classifier prompt, sensitivity patterns
  _shared/helpers.ts     — embedText, extractMetadata, detectSensitivity, prepareThoughtPayload

Tool Surface (14 tools)

#      Tool(s)                                                        Schema
1-7    search_thoughts, list_thoughts, get_thought, update_thought,   Enhanced Thoughts
       delete_thought, capture_thought, thought_stats
8-10   search_thoughts_text, count_thoughts, related_thoughts         Enhanced Thoughts
11     ops_capture_status                                             Smart Ingest
12-13  graph_search, entity_detail                                    Knowledge Graph
14     ops_source_monitor                                             Ops Views

Test Plan

  • All 15 OB1 PR Gate checks pass
  • metadata.json validates against .github/metadata.schema.json
  • README links resolve (../../docs/01-getting-started.md, ../../docs/05-tool-audit.md)
  • No credentials, no local MCP patterns
  • Deploy to test Supabase project and smoke-test core tools

Generated with Claude Code


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7633681f7b


Comment on lines +114 to +116
if (!response.ok) {
throw new Error(`OpenRouter embedding failed (${response.status}): ${await response.text()}`);
}

P1 Badge Use OpenAI fallback when OpenRouter embeddings fail

The embedText helper currently throws as soon as OpenRouter returns a non-2xx response, which prevents execution from ever reaching the documented OpenAI fallback branch even when OPENAI_API_KEY is configured. In environments with both providers enabled, any transient OpenRouter failure will hard-fail embedding-dependent tools (search_thoughts semantic mode, capture_thought, update_thought) instead of degrading gracefully to OpenAI.
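The shape the reviewer is asking for might look like this (a hedged sketch with stubbed provider calls; `embedWithFallback` and the `EmbedFn` type are hypothetical names, not code from the PR):

```typescript
// Hypothetical sketch of the fallback ordering the review suggests:
// try OpenRouter first, and on ANY failure (thrown error or non-2xx,
// surfaced here as a rejection) fall through to OpenAI when it is
// configured, instead of throwing immediately.
type EmbedFn = (text: string) => Promise<number[]>;

async function embedWithFallback(
  text: string,
  openRouter: EmbedFn,
  openAI: EmbedFn | null, // null when OPENAI_API_KEY is unset
): Promise<number[]> {
  try {
    return await openRouter(text);
  } catch (err) {
    if (openAI === null) throw err; // no fallback configured: surface the error
    return await openAI(text);      // degrade gracefully to the second provider
  }
}
```

With this ordering, a transient OpenRouter failure degrades to OpenAI rather than hard-failing every embedding-dependent tool.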


supabase: { from: (name: string) => { select: (cols: string) => { limit: (n: number) => Promise<{ error: unknown }> } } },
tableName: string,
): Promise<boolean> {
const { error } = await supabase.from(tableName).select("id").limit(0);

P2 Badge Make table existence checks column-agnostic

The schema guard utility claims to test whether a relation exists, but it probes by selecting id, which incorrectly returns false for any valid table/view that does not expose an id column. Because this helper is used to gate optional tools, a deployed schema can be present yet reported unavailable purely due to column shape. The existence check should not assume a specific column name.


@github-actions (bot) added the documentation and recipe labels on Apr 6, 2026
justfinethanku and others added 25 commits on April 12, 2026 at 21:57
* [recipes] Add repo learning coach recipe

* [recipes] Harden repo learning coach sync and reads
…es-Projects#146)

* [dashboards] Add Workflow kanban board with drag-and-drop, mobile support, and MCP progress_task tool

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [dashboards] Mobile UX fixes: modal centering, landscape layout, touch drag-and-drop

- Fix modal positioning with createPortal to escape DnD transform context
- Add phone landscape CSS to hide sidebar and show mobile topbar
- Switch to MouseSensor + TouchSensor for proper mobile drag delay
- Add touchAction pan-y for scroll + drag coexistence
- Add allowedDevOrigins for mobile dev testing
- Add suppressHydrationWarning for browser extension compatibility

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [dashboards] Allow pinch-to-zoom on kanban cards

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [schemas] Add workflow status tracking columns for kanban board

Adds status and status_updated_at columns to the thoughts table,
enabling kanban-style workflow management for task and idea types.
Includes migration SQL, backfill for existing thoughts, and partial index.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [dashboards] Add Workflow kanban board with drag-and-drop and mobile support

Adds a full kanban board interface for managing task and idea thoughts:
- Drag-and-drop between status columns (New/Planning/Active/Review/Done)
- Touch-friendly with 200ms hold delay, pinch-to-zoom enabled
- Collapsible columns with localStorage persistence
- Inline edit modal for status, priority, type, and content
- Dashboard summary widget showing active workflow items
- Mobile-first responsive layout with full-screen edit on small screens
- @dnd-kit for accessible drag-and-drop (mouse + touch sensors)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [dashboards] Add delete button to kanban card edit modal

Adds a Delete button in the kanban card modal footer with a confirmation
banner before permanently deleting the thought. Wires up a new
/api/kanban/delete route and optimistic removal from the board.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [dashboards] Make delete confirmation a separate popup dialog

Replace the inline banner with a standalone centered dialog that
overlays on top of the edit modal, with clear title, description,
and Cancel/Delete buttons.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [dashboards] Fix deleteThought parsing empty response body

The REST API returns an empty body on DELETE, but apiFetch always
called res.json() causing a parse error. Inline the fetch so it
skips JSON parsing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Ivan <ivan@openbrain.dev>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…ng (NateBJones-Projects#141)

Syncs Claude Code's local memory saves to Open Brain via
mcp__open-brain__capture_thought so memories are accessible
from ChatGPT, Claude Desktop, Codex, and any MCP-connected client.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…fix skill divergence (NateBJones-Projects#135)

* [recipes] Update life-engine schema: user_id TEXT, add weekly_review/cron_state types

- Changed user_id from UUID to TEXT across all 5 tables (supports
  Telegram chat_id as identifier without UUID padding hacks)
- Added weekly_review and cron_state to briefing_type check constraint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [recipes] Clean up Life Engine: add state table, simplify loop timing, fix skill divergence

- Add life_engine_state key-value table for runtime state (cron job ID,
  sleep schedule) instead of overloading briefing log with cron_state type
- Remove cron_state from briefing_type CHECK constraint
- Simplify Dynamic Loop Timing from 6 tiers to 4 (15m/30m/60m/one-shot)
- Replace duplicate embedded skill in README with pointer to life-engine-skill.md
- Add user_responded update logic to Rule 7 for self-improvement engagement tracking
- Add timezone note to skill time windows
- Fix platform references to include Discord alongside Telegram
- Add RLS comment explaining why no row policies are needed
- Update metadata.json date

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [recipes] Harden Life Engine permissions: lead with settings.json allowlist, scope MCP tools

- Restructure Step 6 to recommend settings.json allowlist as default (Option A)
- Replace broad mcp__open-brain__* and mcp__supabase__* wildcards with
  specific tool names (search_thoughts, list_thoughts, execute_sql, etc.)
- Include CronCreate and CronDelete in the default allowlist
- Demote --dangerously-skip-permissions to Option D (testing only)
- Update Quick Setup and Step 7 launch commands to use settings.json approach
- Addresses HIGH finding from security audit

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [recipes] Add rain forecast to Life Engine morning briefing via Open-Meteo

- Add Weather section to skill with Open-Meteo API call (free, no API key)
- Include rain windows with time ranges and probability in morning briefing
- Default coordinates: Portland, OR (45.52, -122.68), configurable via life_engine_state
- Only show rain line when precipitation_probability >= 30%
- Update schema comment to document latitude/longitude state keys

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [recipes] Add Daily Capture, portable customizations, and manual sync rule to Life Engine

Backport portable customizations from installed SKILL.md into the recipe:
date anchor, database note, user identity, valid briefing types, proactive
chat_id, rules 9-14. Add Daily Capture prompt in evening window with
capture_thought integration. Add Rule 14 requiring manual sync between
recipe and installed skill files.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [recipes] Fix hallucinated column name: briefings table uses 'content' not 'summary'

Add explicit column reference note to prevent the LLM from hallucinating
a 'summary' column on life_engine_briefings — the correct column is 'content'.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [recipes] Address PR review: Discord support, migration steps, permission docs

Fixes all issues from PR NateBJones-Projects#135 review:
- P1: Add Bash(date/curl) and capture_thought to README allowlist examples
- P1: Make channel event handling platform-agnostic (Telegram + Discord)
  in skill Rules 7, 10, 11 and Channel Tools section
- P1: Add upgrade migration steps to schema.sql for user_id UUID→TEXT
- P2: Add CHECK constraint on delivered_via ('telegram', 'discord')
- P2: Add single-user assumption comment on life_engine_state table
- Bump version to 1.1.0, update date to 2026-04-01

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [recipes] Broaden Bash permission to Bash(*) — scoped patterns are fragile

Scoped Bash patterns like Bash(date *) and Bash(curl -s *api.open-meteo.com*)
break when the LLM varies its exact command syntax between runs, causing
silent permission blocks during unattended operation. Replace with Bash(*)
since Life Engine only uses benign read-only commands (date, curl) and
Rule 11 prevents dangerous execution from external triggers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…teBJones-Projects#125)

Replaces the empty stub with a working zero-infrastructure approach
using Claude Code scheduled tasks + Open Brain MCP + Gmail MCP.
Preserves the Edge Function approach as a planned future option.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…es-Projects#37)

* [recipes] Vercel + Neon + Telegram alternative architecture

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [fix] Replace local MCP pattern with custom connectors (PR review feedback)

Replace claude_desktop_config.json + mcp-remote bridge instructions with
Claude Desktop custom connectors UI approach in both Step 8 and the
Troubleshooting section, aligning with CONTRIBUTING.md Rule #14.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…BJones-Projects#171)

* [recipes] ChatGPT import v2: multi-thought knowledge extraction

Replace 1-3 sentence summarization with structured knowledge extraction
that produces 2-5 typed thoughts per conversation (decision, preference,
learning, context, brainstorm, reference) with enriched metadata.

Key changes to import-chatgpt.py:
- Branch resolution via current_node parent-pointer walk
- Content type dispatch for 14 export message formats (voice, reasoning, web search, code)
- Signal-based filtering replaces regex title matching
- Session boundary detection for multi-day conversations
- Semantic deduplication via match_thoughts RPC
- Re-import handling with update_time/content_hash detection
- Embed thought content, not [ChatGPT: title] prefix
- --store-conversations for optional conversation history with pyramid summaries
- --focus flag with presets (tech, strategy, personal, creative) and custom text
- --openrouter-model flag for model selection
- --max-words flag to skip oversized conversations (default: 50000)
- Robust JSON parsing for non-OpenAI models (Anthropic, Ollama)
- Accurate progress display with percentage and skip counts

New files:
- chatgpt_parser.py: parsing, content dispatch, filtering, session detection
- schema.sql: chatgpt_conversations table with pyramid summaries and indexes

All existing CLI flags preserved (--dry-run, --model ollama, --after/--before,
--limit, --report, --verbose, --raw, --ingest-endpoint).

* [recipes] Fix ChatGPT import filtering defaults

---------

Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
NateBJones-Projects#160)

* [recipes] Local Ollama embeddings — zero-cost alternative to OpenRouter

Generate embeddings locally via Ollama and insert into Supabase.
Keeps the existing OB1 architecture, only swaps the embedding provider.

Five models tested including gte-qwen2-1.5b (1536-dim) which is
drop-in compatible with the default Open Brain schema.

Includes quality benchmarks comparing discrimination power across
all five models.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix markdown lint errors in README

Add blank lines around fenced code blocks (MD031) and merge
consecutive blockquotes (MD028).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [recipes] Fix local Ollama env loading docs

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
…s-Projects#150)

* [docs] Fix MD028 blank line between blockquotes in getting-started guide

Removes blank line between WARNING and IMPORTANT blockquotes that was
failing markdownlint across all PRs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix claudeception recipe: convert multi-line YAML descriptions to single-line

Multi-line descriptions (description: |) break agent routing silently.
Nate's March 2026 Skills Standard requires single-line YAML descriptions
for reliable semantic matching. Fixed 3 instances: the recipe's own
description and 2 template examples.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [recipes] Clean up Claudeception docs formatting

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
…NateBJones-Projects#148)

* fix(professional-crm): remove Accept header patch causing SSE reconnect loop

The Accept: text/event-stream header patch forced StreamableHTTPTransport
into SSE mode on every request. Since Supabase edge functions are stateless,
the SSE stream terminates immediately after each response — causing the MCP
client to reconnect every ~2 seconds (~43k invocations/day).

StreamableHTTPTransport is request/response by design. Removing the patch
lets it respond with plain JSON, eliminating the reconnect loop entirely.

* fix(professional-crm): force JSON-only Accept header to prevent SSE reconnect loop

Removing text/event-stream from the Accept header before it reaches
StreamableHTTPTransport prevents it from opening SSE streams. MCP clients
send Accept: application/json, text/event-stream per spec; this is what
triggers SSE mode even without the original workaround.

JSON-only responses close cleanly, eliminating the boot/shutdown cycle.
…ateBJones-Projects#139)

* recipes: add adaptive capture classification with confidence gating

* recipes: address review — fix author, OB1 types, add TypeScript implementation

* recipes: incorporate GitHub edits to README, classifier prompt, and metadata

* [recipes] Tighten adaptive capture setup and threshold updates

---------

Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
…ateBJones-Projects#133)

* Add update_professional_contact tool to CRM extension

Adds the ability to update existing contact fields (name, company,
title, email, phone, tags, notes, follow_up_date, etc.) which was
proposed in NateBJones-Projects#93 but never implemented. Only provided fields are
updated, and the existing updated_at trigger handles timestamping.

* Allow clearing follow_up_date by passing null or empty string

Fixes the case where a follow-up date, once set, could never be
cleared — leaving contacts permanently stuck in get_follow_ups_due.

* [extensions] Document contact update tool

---------

Co-authored-by: Matt Hallett <matthallett@gmail.com>
Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
…es-Projects#161)

* Fix pre-existing markdownlint errors across 15 files

Add blank lines around headings (MD022), fenced code blocks (MD031),
and between adjacent blockquotes (MD028). Fix broken link fragment
(MD051) and remove extra blank line (MD012). No content changes.

These errors were blocking CI on all open PRs since the lint check
runs repo-wide.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [docs] Preserve README links during markdown cleanup

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
…raphics (NateBJones-Projects#85)

* [recipes] Infographic Generator: turn research docs into visual infographics

Second recipe from @jaredirish. Part of the Open Brain Flywheel
(capture-process-visualize loop, see Issue NateBJones-Projects#84).

Takes any markdown doc or Open Brain thought cluster and generates
professional infographic images via Gemini's free-tier API.
Auto-chunks content, writes verbose prompts (300+ words each),
generates PNGs with specific colors/layout/typography.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [recipes] Fix broken relative links in infographic-generator README

../brain-dump-processor/ → ../panning-for-gold/
../auto-capture-protocol/ → ../auto-capture/

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [recipes] Address review feedback on infographic generator

- Sync generate.py with working local version (cleaner error handling,
  fix --redo display counter bug)
- Fix auto-capture link: directory doesn't exist until PR NateBJones-Projects#42 merges,
  so link to the PR instead of a non-existent directory

Note: part.as_image() and gemini-2.5-flash-image are both valid per
the official google-genai SDK docs. Reviewer concerns on those were
based on outdated information.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [recipes] Fix infographic redo progress output

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
* [recipes] Add OB-Graph knowledge graph layer

Adds graph database functionality for Open Brain using PostgreSQL nodes + edges
with recursive CTE traversal. Includes schema, MCP server with 10 tools, and
setup documentation.

https://claude.ai/code/session_015Z8wCeokTMTdrVMthqzGKJ

* [recipes] Clarify OB-Graph deployment setup

---------

Co-authored-by: Claude <noreply@anthropic.com>
* [docs] Fix Cursor MCP connection — use native url field, not mcp-remote

mcp-remote@latest now attempts OAuth client registration before sending
custom headers, which breaks against Open Brain's simple key-based auth.
Cursor supports remote MCP servers natively via the url field, so
mcp-remote is unnecessary.

Changes:
- Add dedicated Cursor section to getting-started guide (7.5) and
  remote-mcp primitive with native url config
- Update mcp-remote examples to pass key via ?key= query parameter
  instead of --header to avoid OAuth discovery issues
- Clarify x-brain-key (core) vs x-access-key (extensions) in
  troubleshooting guides

Made-with: Cursor

* [primitives] Bring remote MCP docs in line with repo format

---------

Co-authored-by: Jonathan Edwards <justfinethanku@gmail.com>
* [skills] Add weekly signal diff skill pack

* [skills] Fix markdownlint numbering in weekly signal diff
…rojects#181)

* [recipes] Add Bring Your Own Context recipe

* [recipes] Fix markdownlint regression in activation README
* [repo] Sweep fix-now backlog issues

* [docs] Fix setup-guide markdownlint regression
…tchWithTimeout

Adds an AbortController-backed fetchWithTimeout helper to
_shared/helpers.ts and rewires all 5 outbound fetches (OpenRouter +
OpenAI embeddings; OpenRouter + OpenAI + Anthropic chat completions)
through it. Default 60s, override via FETCH_TIMEOUT_MS env.

Also widens isTransientError to match the new "fetch timeout" error
string plus "aborted" and 504, and adds a sibling isFatalProviderError
for 400/401/402/403 so BLOCKER-2 can fail-fast on hard auth/quota
errors instead of cascading to fallback providers.

Why: on upstream provider stall every caller was hanging until the
Supabase Edge Function runtime killed the connection (~150s). With
5 LLM calls per capture in the worst case, a single capture_thought
could pin an Edge Function for ~10+ minutes. This is the Wave-wide
timeout pattern applied consistently here.
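An AbortController-backed wrapper in the spirit of this commit might look like the following (a sketch only; the real helper lives in _shared/helpers.ts and reads FETCH_TIMEOUT_MS from the environment, which is elided here, and `init` is typed loosely for portability):

```typescript
// Hypothetical sketch: abort any outbound fetch that exceeds timeoutMs,
// surfacing a recognizable "fetch timeout" error instead of hanging
// until the runtime kills the connection.
async function fetchWithTimeout(
  url: string,
  init: any = {},
  timeoutMs = 60_000, // commit default: 60s
): Promise<any> {
  const controller = new AbortController();
  const timer = setTimeout(
    () => controller.abort(new Error("fetch timeout")),
    timeoutMs,
  );
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always clear the timer, success or failure
  }
}
```

Every caller then fails within a bounded window rather than pinning an Edge Function for the platform's full connection lifetime.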
Three layered safeguards so capture_thought cannot rack up unbounded
per-request LLM charges:

1. Fingerprint-first dedup in capture_thought — before we pay for any
   classification or embedding, hash the raw content and check if it
   already exists. Identical re-captures short-circuit with an
   action="deduplicated" result. upsert_thought dedups too, but by
   then we've already burned the enrichment cycle.

2. Global call budget ENHANCED_MCP_MAX_CALLS (default 10000). Edge
   Function instance tracks cumulative classifier invocations and
   returns fallback metadata once the cap is hit. Set to 0 to disable
   classification entirely for bulk imports.

3. Fail-fast on 400/401/402/403 via a new isFatalProviderError. The
   old path treated ALL errors as cascade-worthy, so a single 402
   (payment required) on OpenRouter would fire OpenAI *and* Anthropic
   in sequence — double-billing the user on the two providers that
   had nothing to do with the original failure. New path: fatal
   errors skip the fallback chain entirely. Also caps attempt 3 at
   exactly ONE fallback provider instead of iterating through all
   remaining providers.

Why: the original extractMetadata could fire up to 4 LLM calls per
capture (primary + retry + 2 fallback providers). A batch of 100
captures on a rate-limited primary would easily hit 300+ calls with
1.5s retry delays adding 150+ seconds of wall-clock, and any 402 on
OpenRouter would double-bill into OpenAI and Anthropic regardless of
whether either could plausibly fix the problem.
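The fatal-vs-transient split described here can be sketched as simple status classification over error strings (hypothetical regexes matching the behavior described above; the PR's actual helpers may differ):

```typescript
// Hypothetical sketch: 400/401/402/403 are fatal -- retrying or
// cascading to another provider cannot fix bad requests, bad auth,
// or exhausted quota -- while timeouts, 429, and 5xx stay transient
// and remain eligible for the fallback chain.
function isFatalProviderError(message: string): boolean {
  return /\b(400|401|402|403)\b/.test(message);
}

function isTransientError(message: string): boolean {
  return /fetch timeout|aborted|\b(429|500|502|503|504)\b/.test(message);
}
```

Under this split, a 402 on the primary provider skips the fallback chain entirely instead of double-billing into unrelated providers.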
…ding sensitivity

update_thought was writing detectSensitivity(content).tier directly to
the row, which meant editing a `personal` thought to remove the
sensitive phrasing silently relabeled it as `standard` — and any
restricted pattern in the new content was happily persisted to the
cloud even though capture_thought refuses that same content.

Fix:
1. Pre-flight reject if the NEW content trips any RESTRICTED_PATTERN.
   Matches capture_thought's behavior and returns the detection
   reason in the error so the caller knows why.
2. Use resolveSensitivityTier() with the EXISTING row's tier as the
   floor. Escalation-only semantics: personal -> standard is blocked,
   standard -> personal / personal -> restricted still work. This is
   the same helper prepareThoughtPayload already uses everywhere else.

Why: this was a real data-leak vector. A user captures "my salary is
$120k" as `personal`, later rephrases it to "my income situation is
comfortable", and the row goes back to `standard`. The next broad
list_thoughts exposes it to any connected client. Update paths must
maintain the escalation invariant that capture paths enforce.
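The escalation-only semantics can be sketched as a max over a tier ordering (a hypothetical implementation of the behavior described; the PR's resolveSensitivityTier may differ in detail):

```typescript
// Hypothetical sketch: an update may raise a thought's sensitivity
// tier but never lower it below the existing row's tier.
const TIER_ORDER = ["standard", "personal", "restricted"] as const;
type Tier = (typeof TIER_ORDER)[number];

function resolveSensitivityTier(detected: Tier, existing: Tier): Tier {
  const rank = (t: Tier) => TIER_ORDER.indexOf(t);
  // Escalation-only: keep whichever tier ranks higher.
  return rank(detected) >= rank(existing) ? detected : existing;
}
```

So rephrasing a `personal` thought into innocuous wording leaves it `personal`, while newly sensitive content still escalates.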
…release

Removes the delete_thought tool registration, the README table row,
and the 14 -> 13 tool count everywhere (README intro, Expected
Outcome, Tool Surface Area, metadata.json description). Renumbers
the remaining section comments in index.ts.

Adds an "Intentionally Excluded From This Release" section to the
README explaining why delete_thought will ship in a follow-up:
hard DELETE has no tombstone path on the enhanced-thoughts schema
today, and the maintainer's PR NateBJones-Projects#127 guidance was "depreciate and
version rather than delete." Shipping a safe soft-delete requires a
`deleted_at` column and a restore_thought sibling that don't exist
yet.

Why: the drafted implementation was hard DELETE on a row with no
deleted_at column, no audit trail, no restore path. Aligning with
PR NateBJones-Projects#127 posture is cheaper than trying to bolt on soft-delete here
without schema support — we'll land both tool and schema changes
together in a later PR.
…prefix

Renames the four tools that share names with server/index.ts so both
MCP servers can stay connected without the model seeing duplicate
tool entries:

- search_thoughts  -> brain_search_thoughts
- list_thoughts    -> brain_list_thoughts
- capture_thought  -> brain_capture_thought
- thought_stats    -> brain_thought_stats

Also updates the README What-It-Does, Step 4, Expected Outcome, and
Tool Reference sections to reflect the new names and explain the
collision-prevention intent, plus matching section comments and
internal error-log labels in index.ts for grep-ability.

Why: Claude Desktop and most MCP clients list connector tools in a
flat namespace. When the stock server and this server both expose
`capture_thought`, the model has to guess which one the user meant;
if it picks the stock one, there's no sensitivity pre-flight and
"restricted content stays local" silently breaks. `brain_` prefix is
a cheap one-pass rename that eliminates the footgun by design.
…e tableExists

Two related fixes:

1. The schema guard in ops_source_monitor was looking for
   `ops_source_volume_24h`, a view name that exists in neither this
   repo nor the brain-health-monitoring recipe. Result: once the user
   installed the recipe (which defines `ops_source_ingestion_24h`,
   `ops_source_errors_24h`, `ops_source_recent_failures`), the tool
   STILL returned "install required views" because the guard looked
   for a view nothing creates. Fix: check `ops_source_ingestion_24h`
   (one of the real views) and add a partial-install detection that
   returns a graceful "only partially installed" response if any one
   of the three views is missing.

2. `tableExists` previously required the target to have an `id`
   column (`select("id")`). That works for tables but not for views
   like `ops_source_errors_24h` which has only
   `(source, error_events_24h)`. Switched to
   `select("*", { head: true, count: "exact" }).limit(0)` which
   performs a HEAD request with no data transfer and no column-name
   dependency, so it works on any table or view.

Why: without this fix the tool never activates even when the user
installs exactly the recipe the README told them to install — a
pure dead-end UX. And `tableExists` was one unusual view schema away
from false-negatives on other operational tooling.
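The column-agnostic probe described in (2) might look like this (a sketch against a supabase-js-style client, typed loosely for brevity; the stubbed shape is an assumption):

```typescript
// Hypothetical sketch: a HEAD request via select("*", { head: true,
// count: "exact" }) with .limit(0) transfers no row data and assumes
// no particular column name, so it works for tables and views alike.
async function tableExists(supabase: any, tableName: string): Promise<boolean> {
  const { error } = await supabase
    .from(tableName)
    .select("*", { head: true, count: "exact" })
    .limit(0);
  return error === null;
}
```

A view like ops_source_errors_24h, which exposes only (source, error_events_24h), now passes the guard instead of false-negating on the missing `id` column.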
Adds an escapeLikePattern helper that escapes `\`, `%`, and `_` in a
user query before interpolating into an ILIKE pattern. graph_search
now runs `%${escapeLikePattern(query)}%` instead of `%${query}%`.

Why: a user searching for "100%" (e.g. "100% uptime") was producing
the ILIKE pattern `%100%%` which matches every entity whose
canonical_name contains "100" — effectively the whole graph for a
dense brain, capped only by LIMIT. And "a_b" was matching "aab",
"axb", etc. Not SQL injection (PostgREST parameterizes the value)
but a DoS-adjacent correctness bug that passes unit tests on
alphanumeric queries and falls over on real queries.
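A minimal version of the helper (assuming the same three metacharacters; escaping logic only, not the PR's exact code):

```typescript
// Hypothetical sketch: escape ILIKE metacharacters (\, %, _) in user
// input before interpolating it into a pattern. The single character
// class handles backslash alongside % and _ in one pass, so already-
// escaped input isn't double-processed.
function escapeLikePattern(query: string): string {
  return query.replace(/[\\%_]/g, (ch) => "\\" + ch);
}
```

"100%" then produces the pattern fragment `100\%`, matching a literal percent sign instead of turning into a wildcard.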
… fallback

Two auth hardening changes:

1. Replace `provided !== MCP_ACCESS_KEY` with a timingSafeEqualStrings
   helper that prefers crypto.subtle.timingSafeEqual and falls back to
   a manual XOR loop. Length mismatch short-circuits — acceptable for
   the fixed 32-char access key; a variable-length key would need a
   different pattern.
2. Drop the `?key=<access-key>` query-parameter fallback. Auth now
   requires `x-brain-key: <key>` OR `Authorization: Bearer <key>`
   only. Query strings end up in Supabase request logs, CDN logs, and
   any intermediate proxy logs — leaking the credential into places
   that don't get rotated with the secret itself. Also updated README
   Step 3 and Troubleshooting to document the header-only posture.

Why: timing-safe comparison is the Wave-wide review bar for any
bearer-equivalent token, and URL query credentials are a classic
"works today, leaks tomorrow" anti-pattern.
… search

Two-layer defense for the "top-N then post-filter" correctness bug:

1. Forward start_date / end_date into the match_thoughts RPC filter
   payload along with exclude_restricted. RPC versions that honour
   these filter keys will pre-filter at the SQL level before applying
   the similarity cutoff — making the behavior server-side correct.
   Older RPCs ignore unknown filter keys and we fall through to (2).

2. When a date filter is active, over-fetch 3x the requested limit
   (capped at 500) instead of limit + 50. The previous +50 slack was
   catastrophic on dense recent brains: a top-200 result set where
   all 200 matches were recent would silently return 0 rows for an
   old-date query even when relevant matches existed below rank 200.

Also documents the limitation in a new README "Known Limitations"
section so users running on the older RPC signature understand the
workaround (switch to mode: "text" or narrow the query).

Why: silent empty results are the worst class of search UX failure
because users assume "no matches" rather than "cutoff too tight." The
over-fetch cost is bounded at 500 rows, a few milliseconds on any
reasonable brain size.
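The over-fetch rule reduces to a one-line limit calculation (a sketch; `matchLimit` is a hypothetical name for illustration):

```typescript
// Hypothetical sketch: with a date filter active, over-fetch 3x the
// caller's limit (capped at 500) so post-filtering still leaves rows;
// without one, the requested limit is returned unchanged.
function matchLimit(requested: number, hasDateFilter: boolean): number {
  if (!hasDateFilter) return requested;
  return Math.min(requested * 3, 500);
}
```

A top-200 query with a date filter now pulls up to 500 candidates before pruning, instead of the old limit + 50 slack that could silently filter down to zero.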
…ture

Adds a new "Security" section to the README covering:

1. This server's own auth model (constant-time compare, header-only
   MCP_ACCESS_KEY, service_role under the hood as the sensitivity-
   filter boundary).
2. Companion schema risk: the enhanced-thoughts schema installs its
   three SECURITY DEFINER RPCs with service_role-only grants by
   default; granting anon/authenticated on those RPCs would be an
   RLS bypass because SECURITY DEFINER runs with the function
   owner's privileges. Combined with a publicly-reachable
   enhanced-mcp deployment that pattern would let anyone with the
   Supabase project URL + anon key read thought content directly,
   routing around this server's sensitivity filtering.

Why: the README previously advertised the RPC names (which makes
them discoverable) without warning that exposing them to anon
collapses the whole sensitivity story. A one-paragraph callout
costs nothing and is exactly the kind of "safe defaults" note the
upstream gate reviewers expect from integration docs.