
[schemas] Add brain-stats-daily — daily stats rollup + heatmap filter#16

Open
alanshurafa wants to merge 6 commits into main from contrib/alanshurafa/brain-stats-daily

Conversation

@alanshurafa
Owner

Contribution Type

  • Schema (/schemas)

What does this do?

Adds four Postgres RPC functions for server-side daily-bucket aggregation that power dashboard heatmaps: brain_stats_daily, brain_stats_daily_lifelog, and their _jsonb variants. The JSONB variants exist specifically to bypass PostgREST's default db-max-rows=1000 cap so multi-year heatmap windows (e.g., 10 years) return complete data in a single response. Also ships an optional HeatmapSourceFilter.tsx snippet for open-brain-dashboard-next with wiring notes.

Port of ExoCortex migrations 202604210006 (core), 202604210009 (broader lifelog source coverage), and 202604210010 (jsonb variants) — consolidated into a single file with final state only. Rationale: OB1 schemas are installed by pasting SQL into the Supabase SQL Editor, not applied as a migration chain, so intermediate states would just confuse users.
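To make the JSONB-variant mechanics concrete, here is a minimal client-side sketch. The row shape (`day`, `count`) and the `p_days` argument are assumptions for illustration, not confirmed by the schema in this PR; only the function name `brain_stats_daily_jsonb` comes from the description above.

```typescript
// Hypothetical shape of one daily bucket inside the RPC's JSONB payload.
type DailyBucket = { day: string; count: number };

// Pure helper: unpack the single JSONB value (an array) into typed rows.
// Because the RPC returns one jsonb value rather than SETOF rows, PostgREST's
// db-max-rows cap counts the whole response as a single row, so a multi-year
// window survives intact instead of being truncated at 1000 rows.
function unpackDailyBuckets(payload: unknown): DailyBucket[] {
  if (!Array.isArray(payload)) return [];
  return payload
    .filter((r): r is DailyBucket =>
      typeof r === "object" && r !== null &&
      typeof (r as any).day === "string" &&
      typeof (r as any).count === "number")
    .map((r) => ({ day: r.day, count: r.count }));
}

// Usage sketch with supabase-js (not run here):
// const { data } = await supabase.rpc("brain_stats_daily_jsonb", { p_days: 3650 });
// const cells = unpackDailyBuckets(data);
```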

Requirements

  • Core thoughts table (standard Open Brain setup)
  • The enhanced-thoughts schema (adds source_type and sensitivity_tier columns that these RPCs read). Linked as a prerequisite in the README.
  • Optional: open-brain-dashboard-next for the HeatmapSourceFilter snippet

Checklist

  • I've read CONTRIBUTING.md
  • My contribution has a README.md with prerequisites, step-by-step instructions, and expected outcome
  • My metadata.json has all required fields (passes check-jsonschema against .github/metadata.schema.json)
  • README links to the enhanced-thoughts dependency
  • I tested this on my own Open Brain instance
  • No credentials, API keys, or secrets are included

Notes

  • Pre-review: this PR lives on the fork and precedes the upstream PR against NateBJones-Projects/OB1. Planning to run cross-AI review (gsd-code-reviewer + codex exec) against this branch before pushing upstream.
  • Markdownlint passes on all new files (whole-repo lint fails on pre-existing issues in other contributions — separate cleanup concern, not introduced here).
  • SQL is idempotent (CREATE OR REPLACE), no DROP TABLE, no TRUNCATE, no unqualified DELETE FROM, no core thoughts column modifications.

Port of ExoCortex brain_stats_daily RPCs (migrations 6/9/10 consolidated
into one file, final state only) for server-side daily-bucket heatmap
aggregation. Includes JSONB variants that bypass PostgREST's default
1000-row cap so multi-year heatmap windows work end-to-end, plus a
lifelog variant that buckets by metadata life-date fields. Optional
dashboard-snippet ports HeatmapSourceFilter.tsx with wiring notes for
open-brain-dashboard-next. Depends on enhanced-thoughts for source_type
and sensitivity_tier columns.

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3acf3c566f

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +115 to +117
const thoughtId = active.id as number;
const newStatus = over.id as string;



P1 Badge Derive drop status from container instead of over.id

Use the drop container ID (column status) rather than over.id directly here. In dnd-kit sortable lists, over.id is often the hovered card ID when dropping onto a populated column, so this code sends a numeric thought ID as status. That fails validation in /api/kanban/update and reverts, which makes drag-and-drop unreliable whenever users drop onto an existing card instead of empty column space.
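A minimal sketch of the fix the review suggests, with types simplified and column statuses invented for illustration (dnd-kit's real Over type is richer; the assumption here is that columns are droppables whose id is the status string and cards carry sortable data with a containerId):

```typescript
type Over = {
  id: string | number;
  data?: { current?: { sortable?: { containerId: string | number } } };
};

const COLUMN_IDS = new Set(["todo", "doing", "done"]); // hypothetical statuses

function resolveDropStatus(over: Over): string | null {
  // Dropped on empty column space: over.id is the column itself.
  if (typeof over.id === "string" && COLUMN_IDS.has(over.id)) return over.id;
  // Dropped on a card: climb to the card's container, i.e. its column.
  const containerId = over.data?.current?.sortable?.containerId;
  if (typeof containerId === "string" && COLUMN_IDS.has(containerId)) return containerId;
  return null; // numeric thought ID or unknown target — never send it as a status
}
```

Returning null for unresolvable targets lets the caller skip the /api/kanban/update call instead of sending a thought ID as a status and reverting.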


const results: Thought[] = [];
for (const thoughtType of ["task", "idea"]) {
const sp = new URLSearchParams();
sp.set("per_page", "100");


P2 Badge Paginate Kanban queries beyond first 100 rows

This hard-codes a single page of 100 records per type and never follows additional pages. Accounts with more than 100 task thoughts or 100 idea thoughts will silently miss items on the workflow board, so users cannot view or update a large part of their backlog from this UI.
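A sketch of the page-following loop the review asks for. `fetchPage` is a stand-in for the board's real per_page/page API call (the signature is an assumption); the loop stops on the first short page.

```typescript
type Thought = { id: number; thought_type: string };

async function fetchAllThoughts(
  thoughtType: string,
  fetchPage: (type: string, page: number, perPage: number) => Promise<Thought[]>,
  perPage = 100,
): Promise<Thought[]> {
  const all: Thought[] = [];
  for (let page = 1; ; page++) {
    const batch = await fetchPage(thoughtType, page, perPage);
    all.push(...batch);
    if (batch.length < perPage) break; // short page ⇒ no further pages
  }
  return all;
}
```

If the API exposes a total-count header, comparing against it would be a sturdier stop condition than the short-page heuristic.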


Comment thread on schemas/brain-stats-daily/schema.sql (outdated)
Comment on lines +128 to +129
when raw_date ~ '^\d{4}-\d{2}-\d{2}' then substring(raw_date from 1 for 10)::date
else null


P2 Badge Handle invalid metadata dates before casting to date

This cast can throw for malformed-but-matching values like 2024-13-01 or 2024-02-31, which aborts the whole RPC instead of skipping bad rows. Since these fields come from imported metadata, one bad raw_date can break daily lifelog aggregation for all results; the same pattern is repeated in the JSONB variant later in this file.
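The failure mode generalizes beyond SQL, so here is the same validate-before-trust pattern as a TypeScript sketch: a regex match alone accepts calendar-impossible strings like 2024-13-01, so round-trip the components through a Date and compare before accepting the value. (The SQL-side fix this PR later adopts is a try-parse helper that returns NULL instead of raising.)

```typescript
function tryParseIsoDate(raw: string): string | null {
  const m = /^(\d{4})-(\d{2})-(\d{2})/.exec(raw);
  if (!m) return null;
  const [y, mo, d] = [Number(m[1]), Number(m[2]), Number(m[3])];
  // Date.UTC silently rolls out-of-range fields over (month 13 → next January),
  // so a component mismatch after the round trip means the input was invalid.
  const dt = new Date(Date.UTC(y, mo - 1, d));
  const ok = dt.getUTCFullYear() === y &&
             dt.getUTCMonth() === mo - 1 &&
             dt.getUTCDate() === d;
  return ok ? m[0] : null;
}
```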


alanshurafa and others added 4 commits April 21, 2026 16:59
Two-schema install with an enforced ordering dependency is not a
beginner experience. Installing brain-stats-daily first errors out
on missing source_type/sensitivity_tier columns, so catalogs sorting
by difficulty should present this as intermediate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…urity

Apply four Round-1 review fixes to schema.sql (and docs):

1. P1-1 security: switch all RPCs from SECURITY DEFINER → SECURITY
   INVOKER and drop the anon grant. Definer + owner-is-postgres
   bypasses RLS on the thoughts table, leaking aggregate counts of
   private captures to unauthenticated callers. Invoker + no-anon
   makes the RPCs respect whatever policy the tenant set.

2. P1-2 safe date parsing: add a _brain_stats_try_parse_iso_date
   helper that returns NULL on bad input. One malformed metadata
   value (e.g. '2026-99-99T...') would otherwise raise
   date/time-field-out-of-range and abort the whole RPC. Now each
   candidate field is parsed in isolation and a bad earlier field
   no longer hides a valid later fallback (P2-2 same fix).

3. P1-3 calendar-day window: replace 'now() - interval' rolling
   cutoff with (now() at time zone 'UTC')::date - v_days + 1 so
   p_days=30 returns exactly 30 calendar-day buckets, not 31
   partial ones. Also unifies the UTC convention between the
   setof and lifelog variants (P2-1).

4. P2-4 case-normalize restricted tier: lower(sensitivity_tier)
   is distinct from 'restricted' so mixed-case values don't leak
   through the default exclusion.

Add a 'Security Model' section to the schema README and an auth
note to the dashboard-snippets README so users know anon isn't
granted by default. Reword the prereq language so users understand
the missing-column error surfaces at first RPC call, not at
CREATE FUNCTION time.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
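The calendar-day window arithmetic in fix 3 can be sketched in TypeScript for illustration (the real fix is SQL; this just shows the off-by-one the commit describes): the window start is (today in UTC) - days + 1, so days=30 yields exactly 30 date buckets including today, not 31 partial ones.

```typescript
function windowStartUtc(days: number, now: Date = new Date()): string {
  // Truncate "now" to a UTC calendar date, then step back days-1 whole days.
  const todayUtc = Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate());
  const start = new Date(todayUtc - (days - 1) * 86_400_000);
  return start.toISOString().slice(0, 10); // YYYY-MM-DD
}
```

A rolling `now() - interval '30 days'` cutoff instead straddles a partial bucket at each end, which is exactly what the commit replaces.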
…ote hardcoded values

- encodeURIComponent the source_type when interpolating into the
  Link href. Current values are safe, but the snippet explicitly
  invites customization and an edited value with '&' or '#' would
  silently corrupt the URL.
- Add aria-current='page' to the active pill so screen readers
  get the same signal sighted users get from the color change.
- Comment above HEATMAP_SOURCE_OPTIONS flagging the values as
  examples and pointing users at a SQL query to list their actual
  source_types.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
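The first bullet's hardening can be sketched like this; the route path and query-parameter name are assumptions based on the commit description, not the actual snippet:

```typescript
function heatmapHref(sourceType: string | null): string {
  // Encode user-customizable values so '&' or '#' cannot corrupt the URL.
  return sourceType === null
    ? "/heatmap"
    : `/heatmap?source_type=${encodeURIComponent(sourceType)}`;
}
```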
@alanshurafa
Owner Author

Refreshing checks after markdownlint cleanup merged into fork main.

@alanshurafa alanshurafa reopened this Apr 22, 2026
@alanshurafa
Owner Author

Refreshing checks after fork markdownlint workflow fix.

@alanshurafa alanshurafa reopened this Apr 22, 2026