Conversation
…hema comments and downgrade sequences
📝 Walkthrough

Adds PostgreSQL migrations to create reference, public, and audit schemas (lookup tables, business tables with partitioned heartbeats, immutable audit logs), associated downgrade scripts and a README; removes a placeholder init script; and adds an HMAC-SHA256 session-token hashing helper in backend security.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@migrations/02_reference.sql`:
- Line 126: The enum entry with identifier 'HEARTBEAT_ERROR' currently
references a non-existent table name "audit.heartbeats" in its action
description; update that literal in the tuple ('HEARTBEAT_ERROR',
'...audit.heartbeats...') to reference the actual persistence location used in
this schema (replace "audit.heartbeats" with the correct table name or
schema.table used for storing heartbeat/audit events) and verify the corrected
table exists in the migrations. Ensure the text retains the same meaning but
points to the real table used for appending heartbeat audit records.
In `@migrations/03_public.sql`:
- Around line 116-120: The schema currently omits CONTINUE heartbeat rows from
the heartbeats table so MAX(heartbeatAt) cannot reflect current liveness; update
the migration so that every accepted heartbeat (including events with type
CONTINUE) updates the session's lastHeartbeatAt column and keep the heartbeats
table as an append-only event log that only stores non-CONTINUE events if
desired. Concretely, ensure the session record field lastHeartbeatAt is
set/updated inside the same transaction in the heartbeat handling path
(references: lastHeartbeatAt, heartbeats, CONTINUE) and adjust any
constraint/trigger or application insert logic to write lastHeartbeatAt for all
heartbeat types while preserving the heartbeats table as the non-CONTINUE event
log or alternatively rewrite inserts to include CONTINUE rows if you prefer
MAX(heartbeatAt) semantics.
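A minimal sketch of the transactional write path this finding describes, using the column and table names quoted in the comment (`lastHeartbeatAt`, `heartbeats`, `CONTINUE`); it is illustrative, not the migration's actual code:

```sql
-- Sketch: update session liveness for EVERY heartbeat, log only
-- non-CONTINUE events, all within one transaction.
BEGIN;

UPDATE "sessions"
SET "lastHeartbeatAt" = NOW()
WHERE "id" = $1;  -- runs for all heartbeat types, including CONTINUE

INSERT INTO "heartbeats" ("sessionId", "heartbeatTypeCode", "heartbeatAt")
SELECT $1, $2, NOW()
WHERE $2 <> 'CONTINUE';  -- keep the event log non-CONTINUE only, if desired

COMMIT;
```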
- Around line 141-149: The partition block pre-creates fixed quarter partitions
(heartbeats_2026_q1, heartbeats_2026_q2, heartbeats_2026_q3, heartbeats_2026_q4,
heartbeats_2027_q1) for table "heartbeats" but stops at 2027_q1 so inserts
beyond that will fail; update the migration to either add a DEFAULT partition
for "heartbeats" or implement scheduled/automated partition creation (e.g., a
maintenance job or function triggered before each quarter boundary) that creates
new partitions for forthcoming quarters, and ensure the migration includes
creation of that DEFAULT partition or the helper function/cron job registration
so inserts never hit an uncovered range.
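One of the two options the comment suggests, the DEFAULT partition, can be sketched as follows (partition name assumed for illustration):

```sql
-- Sketch: a catch-all partition so inserts past the last pre-created
-- quarter never fail with "no partition of relation found for row".
CREATE TABLE IF NOT EXISTS "heartbeats_default"
    PARTITION OF "heartbeats" DEFAULT;
```

Rows landing in the DEFAULT partition lose partition-pruning benefits, so a periodic job that creates real quarterly partitions ahead of time is still worth having alongside it.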
- Around line 98-109: Change the sessions table to store a one-way hash of the
bearer token instead of the raw token: alter the "sessions" schema to replace
"sessionToken" TEXT NOT NULL UNIQUE with a column like "sessionTokenHash" TEXT
NOT NULL (optionally UNIQUE) and update the COMMENT on "sessions"."sessionToken"
accordingly; then update all code paths that create or validate sessions to hash
the provided token before INSERT/SELECT (e.g., functions that create sessions,
validate heartbeats, or look up sessions by token such as createSession,
findSessionByToken, validateHeartbeat) using a secure one-way hash (e.g.,
HMAC/SHA-256 with app-level secret/pepper) so comparisons are done against the
stored hash rather than raw tokens.
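The hashing helper this finding asks for might look like the following Python sketch. The function names (`hash_session_token`, `verify_session_token`) and the pepper handling are assumptions for illustration, not the actual `backend/app/core/security.py` API:

```python
import hashlib
import hmac

def hash_session_token(token: str, pepper: bytes) -> bytes:
    # HMAC-SHA256 with an app-level secret ("pepper") yields a fixed
    # 32-byte digest; only this digest would be stored in the DB.
    return hmac.new(pepper, token.encode("utf-8"), hashlib.sha256).digest()

def verify_session_token(token: str, pepper: bytes, stored_hash: bytes) -> bool:
    # Constant-time comparison avoids timing side channels on lookup
    # paths such as findSessionByToken / validateHeartbeat.
    return hmac.compare_digest(hash_session_token(token, pepper), stored_hash)
```

Because HMAC is deterministic for a given pepper, the stored digest can still carry a UNIQUE constraint and be used directly in indexed equality lookups.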
In `@migrations/04_audit.sql`:
- Around line 69-88: Add non-unique indexes on the foreign key columns to speed
lookups and FK checks: create an index on audit."auditLogLicenses" for
"licenseId" and an index on audit."auditLogSessions" for "sessionId" (use CREATE
INDEX ... ON ...("licenseId") and CREATE INDEX ... ON ...("sessionId"));
consider using CREATE INDEX CONCURRENTLY to avoid locking in production and
include meaningful index names (e.g., ix_auditLogLicenses_licenseId,
ix_auditLogSessions_sessionId).
- Around line 7-15: Add a DB-level immutability guard: create a trigger function
named prevent_audit_update_delete() that raises an exception, then attach a
BEFORE UPDATE OR DELETE trigger (e.g. prevent_audit_update_delete_tr) to every
table in the audit schema to block UPDATE/DELETE; also revoke UPDATE/DELETE on
the audit schema (REVOKE UPDATE, DELETE ON ALL TABLES IN SCHEMA audit FROM
PUBLIC) and explicitly GRANT only INSERT to the intended audit writer role (e.g.
audit_writer) so permissions and the trigger together enforce append-only
semantics. Ensure the migration defines the function, adds the triggers for
existing audit tables, and includes a note/event-trigger or procedure to
automatically attach the trigger to any future tables created in the audit
schema.
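A compact sketch of the guard described above, with names taken from the finding; treat it as illustrative rather than the migration's actual code:

```sql
-- Sketch: DB-level append-only enforcement for the audit schema.
CREATE OR REPLACE FUNCTION prevent_audit_update_delete()
RETURNS trigger AS $$
BEGIN
    RAISE EXCEPTION 'audit tables are append-only (% blocked)', TG_OP;
END;
$$ LANGUAGE plpgsql;

-- Attach per table (repeat for each existing audit table):
CREATE TRIGGER prevent_audit_update_delete_tr
    BEFORE UPDATE OR DELETE ON audit."auditLogs"
    FOR EACH ROW EXECUTE FUNCTION prevent_audit_update_delete();

-- Belt and suspenders: permissions mirror the trigger.
REVOKE UPDATE, DELETE ON ALL TABLES IN SCHEMA audit FROM PUBLIC;
GRANT INSERT ON ALL TABLES IN SCHEMA audit TO audit_writer;
```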
In `@migrations/down/01_schemas_down.sql`:
- Around line 10-12: Remove the line that drops the built-in public schema — do
not execute "DROP SCHEMA IF EXISTS public CASCADE;". Instead, only drop
migration-owned schemas ("DROP SCHEMA IF EXISTS audit CASCADE;" and "DROP SCHEMA
IF EXISTS reference CASCADE;") in the rollback. Locate the down migration that
contains the DROP SCHEMA statements and delete or comment out the "DROP SCHEMA
IF EXISTS public CASCADE;" entry so the rollback no longer removes the shared
public schema.
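The corrected rollback described above reduces to something like this sketch (only migration-owned schemas are dropped; `public` is left alone):

```sql
-- Sketch: rollback drops only the schemas this migration created.
DROP SCHEMA IF EXISTS audit CASCADE;
DROP SCHEMA IF EXISTS reference CASCADE;
-- Intentionally NO "DROP SCHEMA IF EXISTS public CASCADE;" -- public is
-- the built-in shared schema and must survive the rollback.
```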
In `@migrations/README.md`:
- Line 140: The ERD line currently enforces one-to-one between audit_auditLogs
and audit_auditLogVendorActors using "audit_auditLogs ||--||
audit_auditLogVendorActors : acts_as"; change the cardinality to indicate
zero-or-one vendor actor per log by replacing the right-side strict "||" with an
optional marker (e.g., "audit_auditLogs ||--o| audit_auditLogVendorActors :
acts_as"), so audit_auditLogs remains required and audit_auditLogVendorActors
becomes optional.
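In Mermaid ER notation, the before/after of that single line would look like this (a sketch of the one-line change, not the full diagram):

```mermaid
erDiagram
    %% before: strict one-to-one
    %% audit_auditLogs ||--|| audit_auditLogVendorActors : acts_as
    %% after: zero-or-one vendor actor per log
    audit_auditLogs ||--o| audit_auditLogVendorActors : acts_as
```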
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (10)
migrations/01_init.sql, migrations/01_schemas.sql, migrations/02_reference.sql, migrations/03_public.sql, migrations/04_audit.sql, migrations/README.md, migrations/down/01_schemas_down.sql, migrations/down/02_reference_down.sql, migrations/down/03_public_down.sql, migrations/down/04_audit_down.sql
💤 Files with no reviewable changes (1)
- migrations/01_init.sql
… token hashing, and refined comments
Actionable comments posted: 5
♻️ Duplicate comments (1)
migrations/04_audit.sql (1)
69-88: ⚠️ Potential issue | 🟠 Major

Add FK indexes for `licenseId` and `sessionId` in audit junction tables. These columns will be common lookup paths and participate in FK checks; indexing them avoids avoidable degradation as audit rows grow.

Proposed fix

```diff
 CREATE TABLE IF NOT EXISTS audit."auditLogLicenses" (
 @@
 );
+CREATE INDEX IF NOT EXISTS "auditLogLicenses_licenseId_idx"
+  ON audit."auditLogLicenses" ("licenseId");
 @@
 CREATE TABLE IF NOT EXISTS audit."auditLogSessions" (
 @@
 );
+CREATE INDEX IF NOT EXISTS "auditLogSessions_sessionId_idx"
+  ON audit."auditLogSessions" ("sessionId");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@migrations/04_audit.sql` around lines 69 - 88, Add non-unique indexes on the foreign key columns to speed lookups and FK checks: create an index on audit."auditLogLicenses"("licenseId") and another on audit."auditLogSessions"("sessionId"); ensure index names are unique (e.g., ix_auditLogLicenses_licenseId and ix_auditLogSessions_sessionId) and add them after the CREATE TABLE statements so they apply when the tables exist.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@migrations/03_public.sql`:
- Around line 141-152: Add automated partition maintenance: implement a database
function (e.g., create_heartbeats_partitions()) that computes upcoming quarter
boundaries and issues CREATE TABLE IF NOT EXISTS "heartbeats_YYYY_qN" PARTITION
OF "heartbeats" FOR VALUES FROM (...) TO (...) for the next N quarters (matching
the existing "heartbeats_2026_q1".. "heartbeats_2027_q1" pattern), plus a
backfill function (e.g., backfill_heartbeats_default()) that moves rows from
"heartbeats_default" into their correct partitioned tables (using INSERT INTO
... SELECT ... WHERE date BETWEEN ... then DELETE or equivalent partition-aware
move) to preserve partition pruning; register both functions with a scheduler
(pg_cron/pgagent) to run periodically (create partitions ahead of quarter
boundaries and backfill regularly) and ensure idempotency by using CREATE TABLE
IF NOT EXISTS and safe row-moving logic.
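A hedged sketch of the maintenance function described above (the function name and lookahead count come from the finding and are assumptions, not existing code):

```sql
-- Sketch: idempotently create the next few quarterly partitions.
CREATE OR REPLACE FUNCTION create_heartbeats_partitions(lookahead INT DEFAULT 4)
RETURNS void AS $$
DECLARE
    q_start DATE := date_trunc('quarter', now())::date;
    q_end   DATE;
    part    TEXT;
BEGIN
    FOR i IN 0..lookahead LOOP
        q_end := (q_start + INTERVAL '3 months')::date;
        part  := format('heartbeats_%s_q%s',
                        EXTRACT(YEAR FROM q_start)::int,
                        EXTRACT(QUARTER FROM q_start)::int);
        EXECUTE format(
            'CREATE TABLE IF NOT EXISTS %I PARTITION OF "heartbeats" '
            'FOR VALUES FROM (%L) TO (%L)',
            part, q_start, q_end);
        q_start := q_end;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Scheduled via pg_cron, e.g.:
-- SELECT cron.schedule('heartbeat-partitions', '0 0 1 * *',
--                      $$SELECT create_heartbeats_partitions()$$);
```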
- Line 39: Update the column comment for "vendors"."passwordHash" to reflect
that stored values may be bcrypt or Argon2 outputs (and that
parameters/rounds/cost vary by algorithm) and keep the note that raw passwords
are never persisted; locate the COMMENT ON COLUMN "vendors"."passwordHash"
statement in the migration and replace the bcrypt-only text with a neutral
description mentioning both bcrypt and Argon2 and variable cost/parameters.
- Around line 124-130: Add a DB-level CHECK constraint on the "heartbeats" table
to enforce consistency between heartbeatRespStatusCode and errorCode: when
heartbeatRespStatusCode = 'ERROR' then errorCode must be NOT NULL, and when
heartbeatRespStatusCode is anything else errorCode must be NULL. Update the
CREATE TABLE "heartbeats" (or ALTER TABLE) to include a named constraint (e.g.,
chk_heartbeats_status_errorcode_consistency) that implements the boolean
expression tying "heartbeatRespStatusCode" to "errorCode" so impossible states
(ERROR with NULL errorCode, or non-ERROR with non-NULL errorCode) are rejected
by the DB.
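The consistency rule above translates into a CHECK like the following sketch (constraint name taken from the finding; column names as referenced in the review):

```sql
-- Sketch: ERROR rows must carry an errorCode; non-ERROR rows must not.
ALTER TABLE "heartbeats"
    ADD CONSTRAINT chk_heartbeats_status_errorcode_consistency
    CHECK (
        ("heartbeatRespStatusCode" = 'ERROR'  AND "errorCode" IS NOT NULL)
        OR
        ("heartbeatRespStatusCode" <> 'ERROR' AND "errorCode" IS NULL)
    );
```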
In `@migrations/04_audit.sql`:
- Around line 141-144: The trigger creation currently concatenates schema_name
and object_identity (prevent_audit_update_delete_tr using format with %I.%I and
v_obj.schema_name / v_obj.object_identity), which double‑qualifies relations;
instead use the event trigger row's objid cast to regclass and interpolate it
with %s so Postgres will emit a correct, safely quoted relation name. Replace
the format call that builds the CREATE TRIGGER with one that uses format('CREATE
TRIGGER ... ON %s FOR EACH ROW ...', (v_obj.objid::regclass)) and keep the same
trigger/function names (prevent_audit_update_delete_tr,
audit.prevent_audit_update_delete()) so the relation is targeted safely without
schema/object string concatenation.
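The before/after of that format call might look like this sketch (variable names as quoted in the finding, inside the event-trigger function body):

```sql
-- Before (double-qualified; breaks when object_identity already
-- contains the schema):
--   format('... ON %I.%I ...', v_obj.schema_name, v_obj.object_identity)
-- After (objid::regclass renders a correctly quoted, schema-qualified name):
EXECUTE format(
    'CREATE TRIGGER prevent_audit_update_delete_tr '
    'BEFORE UPDATE OR DELETE ON %s '
    'FOR EACH ROW EXECUTE FUNCTION audit.prevent_audit_update_delete()',
    v_obj.objid::regclass);
```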
In `@migrations/down/04_audit_down.sql`:
- Around line 14-33: The downgrade currently drops the trigger function
audit.prevent_audit_update_delete() before removing dependent triggers and
tables and also omits dropping the audit tables; update the script to first drop
any table-level triggers that call audit.prevent_audit_update_delete(), then
drop the audit tables (e.g., audit.audit_log, audit.audit_log_vendor_actors or
any tables created in this migration) and the index
audit."auditLogVendorActors_vendorId_idx", and only after those drops revoke
schema privileges and drop the audit_writer role and finally DROP FUNCTION IF
EXISTS audit.prevent_audit_update_delete(); ensure you reference the trigger
function name audit.prevent_audit_update_delete, the index
audit."auditLogVendorActors_vendorId_idx", and the audit_writer role when
ordering these operations so no objects remain that depend on the function
before it is removed.
---
Duplicate comments:
In `@migrations/04_audit.sql`:
- Around line 69-88: Add non-unique indexes on the foreign key columns to speed
lookups and FK checks: create an index on audit."auditLogLicenses"("licenseId")
and another on audit."auditLogSessions"("sessionId"); ensure index names are
unique (e.g., ix_auditLogLicenses_licenseId and ix_auditLogSessions_sessionId)
and add them after the CREATE TABLE statements so they apply when the tables
exist.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (8)
backend/app/core/security.py, migrations/02_reference.sql, migrations/03_public.sql, migrations/04_audit.sql, migrations/README.md, migrations/down/01_schemas_down.sql, migrations/down/03_public_down.sql, migrations/down/04_audit_down.sql
migrations/down/04_audit_down.sql
Outdated
```sql
-- Drop immutability trigger function (triggers are dropped implicitly with tables, but cleanup for clarity).
DROP FUNCTION IF EXISTS audit.prevent_audit_update_delete();

-- Revoke audit_writer role permissions.
ALTER DEFAULT PRIVILEGES IN SCHEMA audit REVOKE INSERT ON TABLES FROM audit_writer;
REVOKE INSERT ON ALL TABLES IN SCHEMA audit FROM audit_writer;
REVOKE USAGE ON SCHEMA audit FROM audit_writer;

-- Drop audit_writer role if it exists.
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'audit_writer') THEN
        DROP ROLE IF EXISTS audit_writer;
    END IF;
END $$;

-- Drop index.
DROP INDEX IF EXISTS audit."auditLogVendorActors_vendorId_idx";

-- Drop tables (triggers are dropped implicitly with cascading).
```
Downgrade is currently broken: dependent triggers/tables are not removed before dropping the trigger function.
At Line 15, DROP FUNCTION audit.prevent_audit_update_delete() will fail while table triggers still depend on it. Also, this script never drops the audit tables it says it removes.
Proposed fix

```diff
 -- Drop immutability trigger function (triggers are dropped implicitly with tables, but cleanup for clarity).
-DROP FUNCTION IF EXISTS audit.prevent_audit_update_delete();
+DROP TABLE IF EXISTS audit."auditLogVendorActors" CASCADE;
+DROP TABLE IF EXISTS audit."auditLogLicenses" CASCADE;
+DROP TABLE IF EXISTS audit."auditLogSessions" CASCADE;
+DROP TABLE IF EXISTS audit."auditLogs" CASCADE;
+DROP FUNCTION IF EXISTS audit.prevent_audit_update_delete();
@@
--- Drop tables (triggers are dropped implicitly with cascading).
+-- Audit tables dropped above.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sql
-- Drop immutability trigger function (triggers are dropped implicitly with tables, but cleanup for clarity).
DROP TABLE IF EXISTS audit."auditLogVendorActors" CASCADE;
DROP TABLE IF EXISTS audit."auditLogLicenses" CASCADE;
DROP TABLE IF EXISTS audit."auditLogSessions" CASCADE;
DROP TABLE IF EXISTS audit."auditLogs" CASCADE;
DROP FUNCTION IF EXISTS audit.prevent_audit_update_delete();

-- Revoke audit_writer role permissions.
ALTER DEFAULT PRIVILEGES IN SCHEMA audit REVOKE INSERT ON TABLES FROM audit_writer;
REVOKE INSERT ON ALL TABLES IN SCHEMA audit FROM audit_writer;
REVOKE USAGE ON SCHEMA audit FROM audit_writer;

-- Drop audit_writer role if it exists.
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'audit_writer') THEN
        DROP ROLE IF EXISTS audit_writer;
    END IF;
END $$;

-- Drop index.
DROP INDEX IF EXISTS audit."auditLogVendorActors_vendorId_idx";

-- Audit tables dropped above.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@migrations/down/04_audit_down.sql` around lines 14 - 33, The downgrade
currently drops the trigger function audit.prevent_audit_update_delete() before
removing dependent triggers and tables and also omits dropping the audit tables;
update the script to first drop any table-level triggers that call
audit.prevent_audit_update_delete(), then drop the audit tables (e.g.,
audit.audit_log, audit.audit_log_vendor_actors or any tables created in this
migration) and the index audit."auditLogVendorActors_vendorId_idx", and only
after those drops revoke schema privileges and drop the audit_writer role and
finally DROP FUNCTION IF EXISTS audit.prevent_audit_update_delete(); ensure you
reference the trigger function name audit.prevent_audit_update_delete, the index
audit."auditLogVendorActors_vendorId_idx", and the audit_writer role when
ordering these operations so no objects remain that depend on the function
before it is removed.
Please review whether this is still applicable; otherwise, resolve it.
@laraibg786 The issue is still applicable. The downgrade script will fail when executed because:

1. **Line 15 drops the function too early:** `DROP FUNCTION IF EXISTS audit.prevent_audit_update_delete();` attempts to remove the function while triggers on the audit tables still depend on it. PostgreSQL will reject this with a dependency error.
2. **Missing table drops:** The script has a comment on line 33 saying "Drop tables (triggers are dropped implicitly with cascading)" but there are no actual `DROP TABLE` statements anywhere in the file. The audit tables (`auditLogs`, `auditLogVendorActors`, `auditLogLicenses`, `auditLogSessions`) are never removed.
The correct order should be:
- Drop the audit tables (which CASCADE drops their triggers)
- Drop the index
- Drop the trigger function
- Revoke permissions and drop the role
My original suggestion still applies and will fix both issues.
…, including enhanced comments, constraint adjustments, and proper cleanup in downgrade sequences.
Actionable comments posted: 3
♻️ Duplicate comments (1)
migrations/04_audit.sql (1)
69-73: ⚠️ Potential issue | 🟠 Major

Add indexes for `licenseId` and `sessionId` in audit resource tables. These FK columns are likely lookup/filter paths and participate in FK checks; leaving them unindexed will degrade with audit growth.

🛠️ Proposed fix

```diff
+CREATE INDEX IF NOT EXISTS "auditLogLicenses_licenseId_idx"
+ON audit."auditLogLicenses" ("licenseId");
+
+CREATE INDEX IF NOT EXISTS "auditLogSessions_sessionId_idx"
+ON audit."auditLogSessions" ("sessionId");
```

Also applies to: 84-88
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@migrations/04_audit.sql` around lines 69 - 73, Add B-tree indexes on the foreign-key columns used for lookups in the audit resource tables: create indexes for "licenseId" in audit."auditLogLicenses" and for "sessionId" in the corresponding audit table (the one defined around lines 84-88, e.g., audit."auditLogSessions"); ensure the index names are unique (e.g., audit_auditLogLicenses_licenseId_idx and audit_auditLogSessions_sessionId_idx) and use CREATE INDEX IF NOT EXISTS so migrations are idempotent to avoid FK lookup slowdowns as audit volume grows.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@migrations/03_public.sql`:
- Around line 153-157: Partition range bounds use untyped date strings which are
interpreted using the session TimeZone; update each CREATE TABLE ... PARTITION
OF "heartbeats" FOR VALUES FROM (...) TO (...) (e.g., "heartbeats_2026_q1",
"heartbeats_2026_q2", "heartbeats_2026_q3", "heartbeats_2026_q4",
"heartbeats_2027_q1") to use explicit UTC timestamptz bounds such as TIMESTAMPTZ
'2026-01-01 00:00:00+00' (or the equivalent '2026-01-01
00:00:00+00'::timestamptz) for both FROM and TO so partition boundaries are
fixed to UTC and do not shift with session TimeZone.
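Applied to the first partition, the suggested explicit-UTC bounds would read like this sketch:

```sql
-- Sketch: pin partition boundaries to UTC so they do not shift with
-- the session TimeZone setting.
CREATE TABLE IF NOT EXISTS "heartbeats_2026_q1"
    PARTITION OF "heartbeats"
    FOR VALUES FROM (TIMESTAMPTZ '2026-01-01 00:00:00+00')
                 TO (TIMESTAMPTZ '2026-04-01 00:00:00+00');
```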
In `@migrations/down/04_audit_down.sql`:
- Around line 29-31: Wrap the three REVOKE statements (the ALTER DEFAULT
PRIVILEGES IN SCHEMA audit REVOKE INSERT ON TABLES FROM audit_writer; REVOKE
INSERT ON ALL TABLES IN SCHEMA audit FROM audit_writer; REVOKE USAGE ON SCHEMA
audit FROM audit_writer;) in a conditional that checks for the existence of the
role audit_writer before executing them (e.g., a DO block or IF EXISTS check
against pg_roles where rolname='audit_writer'), so the revokes are only run if
the role exists and the downgrade becomes idempotent.
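A DO-block sketch of the idempotent revoke described above:

```sql
-- Sketch: only revoke if the role actually exists, so the downgrade
-- is safe on databases that never created audit_writer.
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'audit_writer') THEN
        EXECUTE 'ALTER DEFAULT PRIVILEGES IN SCHEMA audit REVOKE INSERT ON TABLES FROM audit_writer';
        EXECUTE 'REVOKE INSERT ON ALL TABLES IN SCHEMA audit FROM audit_writer';
        EXECUTE 'REVOKE USAGE ON SCHEMA audit FROM audit_writer';
    END IF;
END $$;
```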
---
Duplicate comments:
In `@migrations/04_audit.sql`:
- Around line 69-73: Add B-tree indexes on the foreign-key columns used for
lookups in the audit resource tables: create indexes for "licenseId" in
audit."auditLogLicenses" and for "sessionId" in the corresponding audit table
(the one defined around lines 84-88, e.g., audit."auditLogSessions"); ensure the
index names are unique (e.g., audit_auditLogLicenses_licenseId_idx and
audit_auditLogSessions_sessionId_idx) and use CREATE INDEX IF NOT EXISTS so
migrations are idempotent to avoid FK lookup slowdowns as audit volume grows.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
migrations/03_public.sql, migrations/04_audit.sql, migrations/down/04_audit_down.sql
```sql
CREATE EVENT TRIGGER audit_immutability_on_create_table ON ddl_command_end
WHEN TAG IN ('CREATE TABLE')
EXECUTE FUNCTION audit.attach_immutability_trigger();
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -name "04_audit.sql" -type f
```

Repository: eggzec/permit
Length of output: 82

🏁 Script executed:

```shell
cat -n ./migrations/04_audit.sql
```

Repository: eggzec/permit
Length of output: 11715
🌐 Web query:
PostgreSQL CREATE EVENT TRIGGER superuser requirement
💡 Result:
In core PostgreSQL, CREATE EVENT TRIGGER is superuser-only—there is no GRANT-able privilege (e.g., no GRANT CREATE EVENT TRIGGER ...) that lets a normal role create event triggers. The official docs state: “Only superusers can create event triggers.” [1]
This restriction exists because event triggers can fire on (and potentially interfere with) broad database-wide DDL events, which creates serious security/privilege-escalation and availability (DoS) risk if untrusted roles can define them (this is why there are ongoing community discussions/patch proposals about safely relaxing the restriction, but it’s not generally available as a standard privilege today). [2]
On some managed Postgres platforms, a provider-specific “admin/master” role may be allowed to create event triggers even though it’s not full superuser (platform-dependent), but that’s outside standard Postgres behavior. [2]
Sources: [1] [2]
🌐 Web query:
PostgreSQL CREATE ROLE privilege requirement CREATEROLE superuser
💡 Result:
In PostgreSQL, to run CREATE ROLE you must be either:

- a superuser, or
- a role that has the `CREATEROLE` attribute/privilege. [1][2]

Important limits:

- To create a role with `SUPERUSER`, you must already be a superuser; `CREATEROLE` alone is not enough. [1][3]
- In current PostgreSQL, `CREATEROLE` does not allow creating/granting special attributes like `REPLICATION` or `BYPASSRLS` either; those require superuser. [3][4]

Sources: PostgreSQL docs for CREATE ROLE and role attributes. [1][2][3][4]
🌐 Web query:
PostgreSQL "postgres" role non-portable managed database environments
💡 Result:
In “vanilla” PostgreSQL installs, postgres is typically the local bootstrap superuser role. In managed/PostgreSQL-as-a-service environments, relying on a hard-coded postgres role (or assuming it is a true SUPERUSER) is non-portable because providers deliberately change/restrict that model:
- Amazon RDS for PostgreSQL: the
postgreslogin exists, but it is explicitly not a PostgreSQLSUPERUSER; it instead has an RDS-managed privilege set viards_superuser, while internal management is performed by roles likerdsadminthat customers can’t use as a real superuser. [1] - Azure Database for PostgreSQL: true superuser capability is reserved for the service; customers get an “administrator” login that is a member of a restricted pseudo-admin role (
azure_pg_admin), while a separate superuser-like role (azuresu) is for Microsoft control-plane operations. [2] - Google Cloud SQL for PostgreSQL: users (including
postgres) are grantedcloudsqlsuperuser, which providesCREATEROLE,CREATEDB,LOGIN, but notSUPERUSER(and replication/some other attributes are restricted). [3]
Practical portability guidance
- Don’t assume the admin role is named
postgres, and don’t write automation that requiresSUPERUSER. - Parameterize the “admin”/migration user and only require the minimal privileges (often
CREATEDB,CREATEROLE, and specificGRANTs). - For tasks like
CREATE EXTENSION, design for provider allow-lists/managed workflows rather than superuser-required behavior.
Sources: Amazon RDS rds_superuser / postgres not superuser [1]; Azure PostgreSQL access control / restricted superuser model [2]; Cloud SQL cloudsqlsuperuser privileges and no SUPERUSER [3].
This migration requires superuser or cluster-level privileges that are unavailable in least-privilege and managed PostgreSQL environments.
CREATE EVENT TRIGGER (line 152) is superuser-only with no grantable privilege. CREATE ROLE (line 172) requires CREATEROLE or superuser. REVOKE ... FROM postgres (line 166) fails in managed services (RDS, Azure, Cloud SQL) where the postgres role either lacks superuser privileges or doesn't exist. This blocks deployment in restricted environments.
🛠️ Proposed hardening direction

```diff
+-- Option A (recommended): move CREATE EVENT TRIGGER / CREATE ROLE / admin REVOKE
+-- statements into a privileged DBA bootstrap migration run as superuser.
+
+-- Option B: guard environment assumptions explicitly and fail with a clear message.
+DO $$
+BEGIN
+  IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = current_user AND rolsuper) THEN
+    RAISE EXCEPTION '04_audit.sql requires superuser for CREATE EVENT TRIGGER';
+  END IF;
+END $$;
```

… and add indexes for audit log tables
Actionable comments posted: 1
♻️ Duplicate comments (1)
migrations/04_audit.sql (1)
157-159: ⚠️ Potential issue | 🟠 Major

Privileged DDL here can block migration in managed or least-privilege environments.

At Line 157, `CREATE EVENT TRIGGER` typically requires superuser. At Line 177, `CREATE ROLE` requires `CREATEROLE` (or superuser). At Line 171, the hard-coded `REVOKE ... FROM postgres` can fail where that role is absent. Consider moving these statements to a privileged bootstrap migration, or guard them explicitly.

Possible guard pattern

```diff
+DO $$
+BEGIN
+  IF NOT EXISTS (
+    SELECT 1 FROM pg_roles
+    WHERE rolname = current_user
+      AND rolsuper
+  ) THEN
+    RAISE EXCEPTION '04_audit.sql requires SUPERUSER for CREATE EVENT TRIGGER';
+  END IF;
+END $$;
+
 REVOKE UPDATE, DELETE ON ALL TABLES IN SCHEMA audit FROM PUBLIC;
-REVOKE UPDATE, DELETE ON ALL TABLES IN SCHEMA audit FROM postgres;
+DO $$
+BEGIN
+  IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'postgres') THEN
+    EXECUTE 'REVOKE UPDATE, DELETE ON ALL TABLES IN SCHEMA audit FROM postgres';
+  END IF;
+END $$;
```

Verification (read-only): confirm privileged statements in code and required capabilities in the target DB.

```shell
#!/bin/bash
set -euo pipefail
rg -n "CREATE EVENT TRIGGER|CREATE ROLE audit_writer|REVOKE UPDATE, DELETE ON ALL TABLES IN SCHEMA audit FROM postgres" migrations/04_audit.sql
cat <<'SQL'
-- Run on target database:
SELECT current_user, rolsuper, rolcreaterole FROM pg_roles WHERE rolname = current_user;
SELECT EXISTS (
  SELECT 1 FROM pg_roles WHERE rolname = 'postgres'
) AS has_postgres_role;
SQL
```

Also applies to: 171-171, 177-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@migrations/04_audit.sql` around lines 157 - 159, The migration contains privileged statements—CREATE EVENT TRIGGER audit_immutability_on_create_table, CREATE ROLE audit_writer, and the hard-coded REVOKE ... FROM postgres—that will fail under least-privilege/managed DBs; either move these to a separate privileged bootstrap migration, or guard them in this script by checking capabilities and role existence before running (e.g., verify current_user has rolsuper or rolcreaterole for CREATE ROLE, and check existence of role 'postgres' before REVOKE), and skip or emit a safe notice when not permitted so the rest of the migration can proceed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@migrations/03_public.sql`:
- Around line 94-102: The sessions table stores HMAC-SHA256 digests in the
sessionTokenHash BYTEA column but lacks a DB-level length check; update the
CREATE TABLE "sessions" definition (and any other similar CREATEs) to add a
CHECK constraint on "sessionTokenHash" that enforces
octet_length("sessionTokenHash") = 32 (i.e., exactly 32 bytes) so
malformed/incorrect-length hashes are rejected at the DB layer.
---
Duplicate comments:
In `@migrations/04_audit.sql`:
- Around line 157-159: The migration contains privileged statements—CREATE EVENT
TRIGGER audit_immutability_on_create_table, CREATE ROLE audit_writer, and the
hard-coded REVOKE ... FROM postgres—that will fail under least-privilege/managed
DBs; either move these to a separate privileged bootstrap migration, or guard
them in this script by checking capabilities and role existence before running
(e.g., verify current_user has rolsuper or rolcreaterole for CREATE ROLE, and
check existence of role 'postgres' before REVOKE), and skip or emit a safe
notice when not permitted so the rest of the migration can proceed.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
migrations/03_public.sql, migrations/04_audit.sql, migrations/down/04_audit_down.sql
```sql
CREATE TABLE IF NOT EXISTS "sessions" (
    "id" UUID PRIMARY KEY DEFAULT uuidv7(),
    "licenseId" UUID NOT NULL REFERENCES "licenses"("id") ON DELETE RESTRICT,
    "sessionStatusCode" TEXT NOT NULL REFERENCES reference."sessionStatuses"("code") ON DELETE RESTRICT,
    "sessionTokenHash" BYTEA NOT NULL UNIQUE,
    "deviceFingerprintHash" TEXT NOT NULL,
    "createdAt" TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    "metadata" JSONB
);
```
Enforce fixed digest length for sessionTokenHash.
At Line 98 and Line 108, the schema documents/stores HMAC-SHA256 token hashes but does not enforce the expected 32-byte length at the DB layer. Add a CHECK to prevent malformed hash values.
Suggested hardening

```diff
 CREATE TABLE IF NOT EXISTS "sessions" (
     "id" UUID PRIMARY KEY DEFAULT uuidv7(),
     "licenseId" UUID NOT NULL REFERENCES "licenses"("id") ON DELETE RESTRICT,
     "sessionStatusCode" TEXT NOT NULL REFERENCES reference."sessionStatuses"("code") ON DELETE RESTRICT,
     "sessionTokenHash" BYTEA NOT NULL UNIQUE,
     "deviceFingerprintHash" TEXT NOT NULL,
     "createdAt" TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-    "metadata" JSONB
+    "metadata" JSONB,
+    CONSTRAINT "sessions_sessionTokenHash_sha256_len"
+        CHECK (octet_length("sessionTokenHash") = 32)
 );
```

Also applies to: 108-108
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@migrations/03_public.sql` around lines 94 - 102, The sessions table stores
HMAC-SHA256 digests in the sessionTokenHash BYTEA column but lacks a DB-level
length check; update the CREATE TABLE "sessions" definition (and any other
similar CREATEs) to add a CHECK constraint on "sessionTokenHash" that enforces
octet_length("sessionTokenHash") = 32 (i.e., exactly 32 bytes) so
malformed/incorrect-length hashes are rejected at the DB layer.