[schemas] Add brain-stats-daily — daily stats rollup + heatmap filter#16
alanshurafa wants to merge 6 commits into main from
Conversation
Port of ExoCortex brain_stats_daily RPCs (migrations 6/9/10 consolidated into one file, final state only) for server-side daily-bucket heatmap aggregation. Includes JSONB variants that bypass PostgREST's default 1000-row cap so multi-year heatmap windows work end-to-end, plus a lifelog variant that buckets by metadata life-date fields. Optional dashboard-snippet ports HeatmapSourceFilter.tsx with wiring notes for open-brain-dashboard-next. Depends on enhanced-thoughts for source_type and sensitivity_tier columns.
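For orientation, the daily-bucket rollup these RPCs compute server-side amounts to grouping capture timestamps by UTC calendar day. A minimal client-side sketch of the same idea (the helper name and shape are illustrative, not part of the schema):

```typescript
// Hypothetical analog of the server-side daily rollup: group ISO
// timestamps into UTC calendar-day buckets with counts.
function bucketByUtcDay(timestamps: string[]): Record<string, number> {
  const buckets: Record<string, number> = {};
  for (const ts of timestamps) {
    const day = new Date(ts).toISOString().slice(0, 10); // "YYYY-MM-DD" in UTC
    buckets[day] = (buckets[day] ?? 0) + 1;
  }
  return buckets;
}
```

In the PR the aggregation happens inside Postgres, so the dashboard only transfers one row (or one JSONB entry) per day instead of every capture.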
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3acf3c566f
```typescript
const thoughtId = active.id as number;
const newStatus = over.id as string;
```
Derive drop status from container instead of over.id
Use the drop container ID (column status) rather than over.id directly here. In dnd-kit sortable lists, over.id is often the hovered card ID when dropping onto a populated column, so this code sends a numeric thought ID as status. That fails validation in /api/kanban/update and reverts, which makes drag-and-drop unreliable whenever users drop onto an existing card instead of empty column space.
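One way to implement the suggestion, sketched as a plain helper. The data path follows dnd-kit's sortable data shape, but treat the exact field paths and the example status values as assumptions to verify against the actual DndContext setup:

```typescript
// Hypothetical helper: prefer the sortable container ID (the column's
// status) over the raw over.id, which may be a card's numeric thought ID.
type OverLike = {
  id: string | number;
  data?: { current?: { sortable?: { containerId?: string | number } } };
};

// Example column statuses -- an assumption, not taken from the PR.
const VALID_STATUSES = new Set(["todo", "doing", "done"]);

function resolveDropStatus(over: OverLike): string | null {
  const containerId = over.data?.current?.sortable?.containerId;
  const candidate = containerId ?? over.id;
  // Only accept known column statuses; a numeric card ID never validates,
  // so a drop onto a populated column no longer sends a thought ID as status.
  return typeof candidate === "string" && VALID_STATUSES.has(candidate)
    ? candidate
    : null;
}
```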
```typescript
const results: Thought[] = [];
for (const thoughtType of ["task", "idea"]) {
  const sp = new URLSearchParams();
  sp.set("per_page", "100");
```
Paginate Kanban queries beyond first 100 rows
This hard-codes a single page of 100 records per type and never follows additional pages. Accounts with more than 100 task thoughts or 100 idea thoughts will silently miss items on the workflow board, so users cannot view or update a large part of their backlog from this UI.
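A hedged sketch of the fix: follow pages until a short page comes back. The fetchPage signature is hypothetical; adapt it to the actual API client:

```typescript
// Hypothetical paginated fetch: keep requesting pages of `perPage`
// rows until the API returns fewer rows than requested.
async function fetchAllThoughts<T>(
  fetchPage: (page: number, perPage: number) => Promise<T[]>,
  perPage = 100,
): Promise<T[]> {
  const all: T[] = [];
  for (let page = 1; ; page++) {
    const rows = await fetchPage(page, perPage);
    all.push(...rows);
    if (rows.length < perPage) break; // last (possibly partial) page
  }
  return all;
}
```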
```sql
when raw_date ~ '^\d{4}-\d{2}-\d{2}' then substring(raw_date from 1 for 10)::date
else null
```
Handle invalid metadata dates before casting to date
This cast can throw for malformed-but-matching values like 2024-13-01 or 2024-02-31, which aborts the whole RPC instead of skipping bad rows. Since these fields come from imported metadata, one bad raw_date can break daily lifelog aggregation for all results; the same pattern is repeated in the JSONB variant later in this file.
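The fix itself belongs in SQL (a helper that returns NULL instead of raising), but the validation logic can be illustrated in TypeScript. The regex-plus-round-trip check below is a sketch of the idea, not the shipped helper:

```typescript
// Hypothetical "try parse" analog: return the YYYY-MM-DD prefix only
// if it names a real calendar date, otherwise null instead of throwing.
function tryParseIsoDate(raw: string): string | null {
  const m = /^(\d{4})-(\d{2})-(\d{2})/.exec(raw);
  if (!m) return null;
  const [y, mo, d] = [Number(m[1]), Number(m[2]), Number(m[3])];
  const dt = new Date(Date.UTC(y, mo - 1, d));
  // Date.UTC silently rolls invalid dates over (2024-02-31 -> 2024-03-02),
  // so round-trip the components to reject malformed-but-matching values.
  const ok =
    dt.getUTCFullYear() === y &&
    dt.getUTCMonth() === mo - 1 &&
    dt.getUTCDate() === d;
  return ok ? `${m[1]}-${m[2]}-${m[3]}` : null;
}
```

The point for the RPC is the same: one bad row yields a null bucket date and gets skipped, rather than aborting the whole aggregation.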
Two-schema install with an enforced ordering dependency is not a beginner experience. Installing brain-stats-daily first errors out on missing source_type/sensitivity_tier columns, so catalogs sorting by difficulty should present this as intermediate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…urity

Apply four Round-1 review fixes to schema.sql (and docs):

1. P1-1 security: switch all RPCs from SECURITY DEFINER → SECURITY INVOKER and drop the anon grant. Definer + owner-is-postgres bypasses RLS on the thoughts table, leaking aggregate counts of private captures to unauthenticated callers. Invoker + no-anon makes the RPCs respect whatever policy the tenant set.
2. P1-2 safe date parsing: add a _brain_stats_try_parse_iso_date helper that returns NULL on bad input. One malformed metadata value (e.g. '2026-99-99T...') would otherwise raise date/time-field-out-of-range and abort the whole RPC. Now each candidate field is parsed in isolation, and a bad earlier field no longer hides a valid later fallback (P2-2, same fix).
3. P1-3 calendar-day window: replace the 'now() - interval' rolling cutoff with (now() at time zone 'UTC')::date - v_days + 1 so p_days=30 returns exactly 30 calendar-day buckets, not 31 partial ones. Also unifies the UTC convention between the setof and lifelog variants (P2-1).
4. P2-4 case-normalize restricted tier: lower(sensitivity_tier) is distinct from 'restricted', so mixed-case values don't leak through the default exclusion.

Add a 'Security Model' section to the schema README and an auth note to the dashboard-snippets README so users know anon isn't granted by default. Reword the prereq language so users understand the missing-column error surfaces at first RPC call, not at CREATE FUNCTION time.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
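The calendar-day window arithmetic from the P1-3 fix can be sketched as follows (TypeScript for illustration only; the shipped code is SQL, and the helper name here is hypothetical):

```typescript
// Illustration of the P1-3 window: for p_days = N, the start date is
// today's UTC date minus (N - 1) days, so the window spans exactly N
// calendar-day buckets including today -- not N+1 partial ones.
function windowStartUtc(todayIso: string, days: number): string {
  const t = new Date(todayIso + "T00:00:00Z");
  t.setUTCDate(t.getUTCDate() - (days - 1));
  return t.toISOString().slice(0, 10);
}
```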
…ote hardcoded values

- encodeURIComponent the source_type when interpolating into the Link href. Current values are safe, but the snippet explicitly invites customization, and an edited value with '&' or '#' would silently corrupt the URL.
- Add aria-current='page' to the active pill so screen readers get the same signal sighted users get from the color change.
- Comment above HEATMAP_SOURCE_OPTIONS flagging the values as examples and pointing users at a SQL query to list their actual source_types.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
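The first bullet amounts to building the href like this. A sketch only: the source_type query-param name follows the snippet's wiring, but the /heatmap path and the helper itself are hypothetical:

```typescript
// Hypothetical href builder: encode the customizable source_type so
// edited values containing '&' or '#' can't corrupt the query string.
function heatmapHref(sourceType: string | null): string {
  return sourceType === null
    ? "/heatmap"
    : `/heatmap?source_type=${encodeURIComponent(sourceType)}`;
}
```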
Refreshing checks after markdownlint cleanup merged into fork main.

Refreshing checks after fork markdownlint workflow fix.
Contribution Type

(/schemas)

What does this do?
Adds four Postgres RPC functions for server-side daily-bucket aggregation that power dashboard heatmaps: brain_stats_daily, brain_stats_daily_lifelog, and their _jsonb variants. The JSONB variants exist specifically to bypass PostgREST's default db-max-rows=1000 cap so multi-year heatmap windows (e.g., 10 years) return complete data in a single response. Also ships an optional HeatmapSourceFilter.tsx snippet for open-brain-dashboard-next with wiring notes.

Port of ExoCortex migrations 202604210006 (core), 202604210009 (broader lifelog source coverage), and 202604210010 (jsonb variants), consolidated into a single file with final state only. Rationale: OB1 schemas are installed by pasting SQL into the Supabase SQL Editor, not applied as a migration chain, so intermediate states would just confuse users.

Requirements
- thoughts table (standard Open Brain setup)
- enhanced-thoughts schema (adds the source_type and sensitivity_tier columns that these RPCs read). Linked as a prerequisite in the README.
- open-brain-dashboard-next for the HeatmapSourceFilter snippet

Checklist
- README.md with prerequisites, step-by-step instructions, and expected outcome
- metadata.json has all required fields (passes check-jsonschema against .github/metadata.schema.json)
- enhanced-thoughts dependency

Notes
- NateBJones-Projects/OB1. Planning to run cross-AI review (gsd-code-reviewer + codex exec) against this branch before pushing upstream.
- Uses CREATE OR REPLACE; no DROP TABLE, no TRUNCATE, no unqualified DELETE FROM, no core thoughts column modifications.