diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 27b4e41..f429536 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -22,6 +22,8 @@ - [ ] All tests pass - [ ] TypeScript / Python types are valid (no new type errors) - [ ] Documentation updated if behavior changed +- [ ] If feature/status labels changed, I updated `status/feature-readiness.json`, regenerated `status/generated/feature-table.md`, and synchronized `README.md`, `ROADMAP.md`, and affected `kitty-specs/*/meta.json` +- [ ] If status changes impact private planning docs, I logged the required `joyus-ai-internal` sync action in this PR (or marked N/A) - [ ] No secrets, credentials, or client-specific content introduced - [ ] Follows the Client Abstraction rule (§2.10): no real names, client names, or domain-specific jargon - [ ] PR title is descriptive and follows conventional commit style if applicable diff --git a/.github/workflows/secret-scan.yml b/.github/workflows/secret-scan.yml index a04c8a0..2faa31e 100644 --- a/.github/workflows/secret-scan.yml +++ b/.github/workflows/secret-scan.yml @@ -20,8 +20,18 @@ jobs: with: fetch-depth: 0 - - uses: gitleaks/gitleaks-action@v2 - with: - args: --config=.gitleaks.toml --redact - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + - name: Install gitleaks CLI + run: | + VERSION="8.28.0" + curl -sSL "https://github.com/gitleaks/gitleaks/releases/download/v${VERSION}/gitleaks_${VERSION}_linux_x64.tar.gz" \ + | tar -xz gitleaks + sudo mv gitleaks /usr/local/bin/gitleaks + gitleaks version + + - name: Run gitleaks scan + run: | + if [ -f ".gitleaks.toml" ]; then + gitleaks git --redact --config ".gitleaks.toml" + else + gitleaks git --redact + fi diff --git a/.github/workflows/status-consistency.yml b/.github/workflows/status-consistency.yml new file mode 100644 index 0000000..2432652 --- /dev/null +++ b/.github/workflows/status-consistency.yml @@ -0,0 +1,27 @@ +name: Status Consistency + +on: 
+ pull_request: + push: + branches: + - main + +jobs: + status-consistency: + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + + - name: Setup Python + uses: actions/setup-python@v5 + with: + python-version: '3.12' + + - name: Validate canonical status consistency + run: | + python scripts/verify-status-consistency.py + + - name: Verify generated status snippets are up to date + run: | + python scripts/generate-status-snippets.py --check diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 0978d5a..2f4ebfb 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -111,6 +111,18 @@ Keep the summary line under 72 characters. Use the body to explain _why_, not _w - The project constitution at `spec/constitution.md` defines hard constraints. Read it before making architectural decisions. - Client-specific content belongs in private deployment repos, not here. +## Status Synchronization Policy + +When a PR changes feature or readiness status, keep all status surfaces aligned in the same PR: + +- `status/feature-readiness.json` (canonical source) +- `status/generated/feature-table.md` (generated artifact; run `python scripts/generate-status-snippets.py`) +- `README.md` status section +- `ROADMAP.md` lifecycle sections +- Any affected `kitty-specs/*/meta.json` lifecycle fields + +If private planning artifacts also need updates (for example in `joyus-ai-internal`), record that required sync action in the PR description before merge. + ## Questions Open a [GitHub Discussion](https://github.com/Priivacy-ai/joyus-ai/discussions) for design questions or ideas that aren't yet a concrete issue. diff --git a/README.md b/README.md index 3ad867f..6af356b 100644 --- a/README.md +++ b/README.md @@ -87,17 +87,23 @@ Production deployment configuration is maintained in a separate private reposito This project uses [Spec Kitty](https://github.com/Priivacy-ai/spec-kitty) for spec-driven development. Feature specifications live in `kitty-specs/`. 
-Current status snapshot (source: `python scripts/pride-status.py` on 2026-02-23): +Current status snapshot (canonical source: `status/feature-readiness.json`; generated via `python scripts/generate-status-snippets.py`): | Spec | Description | Status | |------|-------------|--------| -| `001` | MCP Server AWS Deployment | Complete | -| `002` | Session Context Management | Complete | -| `003` | Platform Architecture Overview | Spec-Only | -| `004` | Workflow Enforcement | Complete | -| `005` | Content Intelligence (Profile Engine) | Complete (Phases A–C, WP01–WP14) | -| `006` | Content Infrastructure | Complete (WP01–WP12) | -| `007` | Org-Scale Agentic Governance | Planning | +| `001` | MCP Server AWS Deployment | Lifecycle: execution, implementation: integrated, readiness: not_ready | +| `002` | Session Context Management | Lifecycle: done, implementation: validated, readiness: pilot_ready | +| `003` | Platform Architecture Overview | Lifecycle: spec-only, implementation: none, readiness: not_ready | +| `004` | Workflow Enforcement | Lifecycle: done, implementation: validated, readiness: pilot_ready | +| `005` | Content Intelligence (Profile Engine) | Lifecycle: done, implementation: validated, readiness: pilot_ready | +| `006` | Content Infrastructure | Lifecycle: done, implementation: integrated, readiness: not_ready | +| `007` | Org-Scale Agentic Governance | Lifecycle: planning, implementation: scaffolded, readiness: not_ready | +| `008` | Profile Isolation and Scale | Lifecycle: execution, implementation: integrated, readiness: not_ready | +| `009` | Automated Pipelines Framework | Lifecycle: execution, implementation: integrated, readiness: not_ready | +| `010` | Multi-Location Operations Module | Lifecycle: planning, implementation: none, readiness: not_ready | +| `011` | Compliance Policy Modules | Lifecycle: planning, implementation: none, readiness: not_ready | + +Generated status artifact: `status/generated/feature-table.md`. 
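The canonical-source/generated-artifact split above implies a simple drift check: re-render the table from `status/feature-readiness.json` and compare it byte-for-byte with the committed `status/generated/feature-table.md`. A minimal sketch, assuming a hypothetical flat JSON layout keyed by spec number (the real schema is whatever `scripts/generate-status-snippets.py` defines, which is not shown in this diff):

```python
# Hypothetical layout for status/feature-readiness.json; the actual schema
# is owned by scripts/generate-status-snippets.py and may differ.
def render_row(spec_id: str, entry: dict) -> str:
    """Render one markdown table row in the Lifecycle/implementation/readiness style."""
    status = (
        f"Lifecycle: {entry['lifecycle']}, "
        f"implementation: {entry['implementation']}, "
        f"readiness: {entry['readiness']}"
    )
    return f"| `{spec_id}` | {entry['description']} | {status} |"


def render_table(readiness: dict) -> str:
    """Render the full feature table, sorted by spec number for determinism."""
    header = "| Spec | Description | Status |\n|------|-------------|--------|"
    rows = [render_row(spec_id, entry) for spec_id, entry in sorted(readiness.items())]
    return "\n".join([header, *rows])


def check_in_sync(readiness: dict, generated_markdown: str) -> bool:
    # The committed artifact must match a fresh render exactly; any drift
    # is a signal to rerun the generator and commit the result.
    return render_table(readiness) == generated_markdown.strip()
```

A `--check` mode like the one invoked in the workflow would exit nonzero whenever `check_in_sync` returns `False`.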
Project-level architecture decisions, implementation plan, and constitution are in `spec/`. diff --git a/ROADMAP.md b/ROADMAP.md index 900dcc8..3d95023 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -2,6 +2,8 @@ An open-source, multi-tenant AI agent platform that encodes organizational knowledge as testable, enforceable skills. +Canonical readiness source: `status/feature-readiness.json` (rendered summary: `status/generated/feature-table.md`). + --- ## Shipped @@ -12,10 +14,12 @@ An open-source, multi-tenant AI agent platform that encodes organizational knowl - **Web Chat UI** - Browser-based chat interface with Claude Desktop configuration support. - **Content Intelligence** - Corpus analysis, stylometric extraction, structured writing profiles, fidelity verification, drift monitoring, and repair. -## In Development +## In Development / Hardening -- **Content Infrastructure** — Corpus connector interface, search abstraction layer, content state management, access level mapping, AI-optimized content API for bot mediation. +- **Content Infrastructure** — Lifecycle: done, implementation: integrated, production_readiness: not_ready (staging/data-schema validation and soak/rollback gates pending). - **Org-Scale Agentic Governance** — Maturity scoring, spec lifecycle enforcement, CI-integrated governance gates, remediation tracking. +- **Profile Isolation and Scale** — Lifecycle: execution, implementation: integrated (WP01/WP02 enforcement + WP03 queue/backpressure primitives), production_readiness: not_ready. +- **Automated Pipelines Framework** — Lifecycle: execution, implementation: integrated (core stage contract + orchestrator), production_readiness: not_ready. 
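The lifecycle/implementation/readiness triples used in the sections above compose naturally into a promotion gate between roadmap sections. A sketch of one plausible gating rule, purely illustrative (the actual promotion criteria are not defined anywhere in this diff):

```python
# Illustrative gate over the status triples used in ROADMAP.md above.
# The specific rule (done + validated + pilot_ready/production_ready)
# is an assumption for illustration, not stated project policy.
SHIPPABLE_READINESS = {"pilot_ready", "production_ready"}


def may_move_to_shipped(lifecycle: str, implementation: str, readiness: str) -> bool:
    """Return True if a feature's triple would qualify it for the Shipped section."""
    return (
        lifecycle == "done"
        and implementation == "validated"
        and readiness in SHIPPABLE_READINESS
    )
```

Under this rule, features like Content Infrastructure (done/integrated/not_ready) stay in "In Development / Hardening" until validation and readiness gates clear.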
## Planned diff --git a/deploy/scripts/feature-006-search-vector-check.sh b/deploy/scripts/feature-006-search-vector-check.sh new file mode 100755 index 0000000..59ad953 --- /dev/null +++ b/deploy/scripts/feature-006-search-vector-check.sh @@ -0,0 +1,97 @@ +#!/usr/bin/env bash +# Feature 006: verify content.items search_vector readiness + query plan. +# +# Required env: +# DATABASE_URL PostgreSQL DSN +# Optional env: +# TEST_QUERY Search phrase (default: "policy") +# TEST_SOURCE_ID Explicit source_id filter +# PG_PSQL_CONTAINER default: joyus-ai-mcp-server-db-1 (docker fallback) +set -euo pipefail + +if [ -z "${DATABASE_URL:-}" ]; then + echo "ERROR: DATABASE_URL is required." >&2 + exit 1 +fi + +TEST_QUERY="${TEST_QUERY:-policy}" +TEST_SOURCE_ID="${TEST_SOURCE_ID:-}" +PG_PSQL_CONTAINER="${PG_PSQL_CONTAINER:-joyus-ai-mcp-server-db-1}" + +PSQL_MODE="host" +if ! command -v psql >/dev/null 2>&1; then + if command -v docker >/dev/null 2>&1 && docker inspect "$PG_PSQL_CONTAINER" >/dev/null 2>&1; then + PSQL_MODE="docker" + else + echo "ERROR: psql not available and docker fallback container not found." >&2 + exit 1 + fi +fi + +DB_NAME="$(echo "$DATABASE_URL" | sed -E 's|.*/([^/?]+).*|\1|')" + +run_sql() { + local sql="$1" + if [ "$PSQL_MODE" = "host" ]; then + psql "$DATABASE_URL" -v ON_ERROR_STOP=1 -X -Atc "$sql" + else + docker exec "$PG_PSQL_CONTAINER" sh -lc "PGPASSWORD=postgres psql -U postgres -d '$DB_NAME' -v ON_ERROR_STOP=1 -X -Atc \"$sql\"" + fi +} + +run_sql_script() { + local sql="$1" + if [ "$PSQL_MODE" = "host" ]; then + psql "$DATABASE_URL" -v ON_ERROR_STOP=1 -X <&2 + exit 1 +fi + +column_exists="$(run_sql "select exists (select 1 from information_schema.columns where table_schema='content' and table_name='items' and column_name='search_vector');")" +if [ "$column_exists" != "t" ]; then + echo "ERROR: content.items.search_vector column is missing." 
>&2 + exit 1 +fi + +gin_count="$(run_sql "select count(*) from pg_indexes where schemaname='content' and tablename='items' and indexdef ilike '%using gin%' and indexdef ilike '%search_vector%';")" +if [ "$gin_count" -eq 0 ]; then + echo "ERROR: no GIN index found for content.items.search_vector." >&2 + exit 1 +fi + +if [ -z "$TEST_SOURCE_ID" ]; then + TEST_SOURCE_ID="$(run_sql "select source_id from content.items where source_id is not null limit 1;")" +fi + +echo "search_vector column: OK" +echo "search_vector GIN indexes: $gin_count" +echo "test query: $TEST_QUERY" +echo "test source_id: ${TEST_SOURCE_ID:-}" + +echo +echo "== Query plan (EXPLAIN ANALYZE) ==" +run_sql_script " +EXPLAIN (ANALYZE, BUFFERS) +SELECT id, source_id, title +FROM content.items +WHERE ('$TEST_SOURCE_ID' = '' OR source_id = '$TEST_SOURCE_ID') + AND search_vector @@ plainto_tsquery('english', '$TEST_QUERY') +ORDER BY ts_rank(search_vector, plainto_tsquery('english', '$TEST_QUERY')) DESC +LIMIT 10; +" + +echo +echo "Feature 006 search_vector validation completed." diff --git a/deploy/scripts/feature-006-smoke.sh b/deploy/scripts/feature-006-smoke.sh new file mode 100755 index 0000000..87a28cd --- /dev/null +++ b/deploy/scripts/feature-006-smoke.sh @@ -0,0 +1,200 @@ +#!/usr/bin/env bash +# Feature 006 mediation smoke checks. +# +# Optional env: +# BASE_URL API base URL (default: http://localhost:3000) +# MEDIATION_API_KEY Required for token-negative + happy path checks +# MEDIATION_BEARER_TOKEN Required for happy path checks +# MEDIATION_PROFILE_ID Optional profile for session create +# MEDIATION_TEST_MESSAGE Message for happy path (default provided) +# MEDIATION_TEST_MAX_SOURCES Max sources (default: 3) +# REQUIRE_NON_PLACEHOLDER Require response to not include placeholder sentinel (default: true) +# REQUIRE_CITATIONS Require at least 1 citation in happy-path response (default: true) +set -euo pipefail + +if ! command -v curl >/dev/null 2>&1; then + echo "ERROR: curl is required." 
>&2 + exit 1 +fi + +BASE_URL="${BASE_URL:-http://localhost:3000}" +MEDIATION_API_KEY="${MEDIATION_API_KEY:-}" +MEDIATION_BEARER_TOKEN="${MEDIATION_BEARER_TOKEN:-}" +MEDIATION_PROFILE_ID="${MEDIATION_PROFILE_ID:-}" +MEDIATION_TEST_MESSAGE="${MEDIATION_TEST_MESSAGE:-policy status update}" +MEDIATION_TEST_MAX_SOURCES="${MEDIATION_TEST_MAX_SOURCES:-3}" +REQUIRE_NON_PLACEHOLDER="${REQUIRE_NON_PLACEHOLDER:-true}" +REQUIRE_CITATIONS="${REQUIRE_CITATIONS:-true}" + +TMP_DIR="$(mktemp -d)" +trap 'rm -rf "$TMP_DIR"' EXIT + +FAILURES=0 + +pass() { echo "PASS: $*"; } +fail() { echo "FAIL: $*"; FAILURES=$((FAILURES + 1)); } + +request() { + local method="$1" + local url="$2" + local body_file="$3" + shift 3 + local out_file="$TMP_DIR/response.json" + local code + + if [ -n "$body_file" ] && [ -f "$body_file" ]; then + if ! code="$(curl -sS -o "$out_file" -w "%{http_code}" -X "$method" "$url" -H 'Content-Type: application/json' "$@" --data @"$body_file" 2>/dev/null)"; then + code="000" + fi + else + if ! code="$(curl -sS -o "$out_file" -w "%{http_code}" -X "$method" "$url" "$@" 2>/dev/null)"; then + code="000" + fi + fi + + if [ ! -f "$out_file" ] || [ ! 
-s "$out_file" ]; then + echo '{"error":"network_error","message":"request failed before receiving an HTTP response"}' > "$out_file" + fi + + echo "$code" > "$TMP_DIR/status.code" +} + +status_code() { + cat "$TMP_DIR/status.code" +} + +response_contains() { + local needle="$1" + grep -Fq "$needle" "$TMP_DIR/response.json" +} + +require_status() { + local expected="$1" + local context="$2" + local got + got="$(status_code)" + if [ "$got" = "$expected" ]; then + pass "$context (HTTP $got)" + else + fail "$context (expected HTTP $expected, got $got)" + echo "Response:" && cat "$TMP_DIR/response.json" + fi +} + +json_get() { + local expr="$1" + if command -v jq >/dev/null 2>&1; then + jq -r "$expr" "$TMP_DIR/response.json" + else + echo "" + fi +} + +echo "== Feature 006 smoke ==" +echo "Target: $BASE_URL" + +echo +echo "-- Health / metrics --" +request GET "$BASE_URL/api/content/health" "" +require_status 200 "content health endpoint" + +request GET "$BASE_URL/api/content/metrics" "" +require_status 200 "content metrics endpoint" + +request GET "$BASE_URL/api/mediation/health" "" +require_status 200 "mediation health endpoint" + +echo +echo "-- Negative auth paths --" +NO_BODY="$TMP_DIR/no-body.json" +echo '{}' > "$NO_BODY" + +request POST "$BASE_URL/api/mediation/sessions" "$NO_BODY" +require_status 401 "session create without api key" +if response_contains '"error":"missing_api_key"'; then + pass "missing_api_key error code" +else + fail "expected missing_api_key error code" +fi + +if [ -n "$MEDIATION_API_KEY" ]; then + request POST "$BASE_URL/api/mediation/sessions" "$NO_BODY" -H "X-API-Key: $MEDIATION_API_KEY" + require_status 401 "session create without bearer token" + if response_contains '"error":"missing_user_token"'; then + pass "missing_user_token error code" + else + fail "expected missing_user_token error code" + fi +else + echo "SKIP: token-negative check requires MEDIATION_API_KEY" +fi + +echo +echo "-- Happy path (optional) --" +if [ -z 
"$MEDIATION_API_KEY" ] || [ -z "$MEDIATION_BEARER_TOKEN" ]; then + echo "SKIP: happy path requires MEDIATION_API_KEY and MEDIATION_BEARER_TOKEN" +else + CREATE_BODY="$TMP_DIR/create-session.json" + if [ -n "$MEDIATION_PROFILE_ID" ]; then + printf '{"profileId":"%s"}\n' "$MEDIATION_PROFILE_ID" > "$CREATE_BODY" + else + echo '{}' > "$CREATE_BODY" + fi + + request POST "$BASE_URL/api/mediation/sessions" "$CREATE_BODY" \ + -H "X-API-Key: $MEDIATION_API_KEY" \ + -H "Authorization: Bearer $MEDIATION_BEARER_TOKEN" + require_status 201 "session create with api key + bearer" + + SESSION_ID="$(json_get '.sessionId // .id // empty')" + if [ -z "$SESSION_ID" ]; then + fail "could not parse sessionId from create response (jq required for happy path)" + else + pass "session created: $SESSION_ID" + + MESSAGE_BODY="$TMP_DIR/message.json" + printf '{"message":"%s","maxSources":%s}\n' "$MEDIATION_TEST_MESSAGE" "$MEDIATION_TEST_MAX_SOURCES" > "$MESSAGE_BODY" + + request POST "$BASE_URL/api/mediation/sessions/$SESSION_ID/messages" "$MESSAGE_BODY" \ + -H "X-API-Key: $MEDIATION_API_KEY" \ + -H "Authorization: Bearer $MEDIATION_BEARER_TOKEN" + require_status 200 "session message send" + + MESSAGE_TEXT="$(json_get '.message // empty')" + if [ -n "$MESSAGE_TEXT" ] && [ "$MESSAGE_TEXT" != "null" ]; then + pass "message response present" + else + fail "message response missing" + fi + + if [ "$REQUIRE_NON_PLACEHOLDER" = "true" ]; then + if echo "$MESSAGE_TEXT" | grep -Fq '[Generation not configured]'; then + fail "message is placeholder sentinel (generation provider not configured)" + else + pass "message is non-placeholder" + fi + fi + + if [ "$REQUIRE_CITATIONS" = "true" ]; then + CITATION_COUNT="$(json_get '(.citations // []) | length')" + if [ "${CITATION_COUNT:-0}" -gt 0 ]; then + pass "citations present ($CITATION_COUNT)" + else + fail "expected citations in response" + fi + fi + + request DELETE "$BASE_URL/api/mediation/sessions/$SESSION_ID" "" \ + -H "X-API-Key: $MEDIATION_API_KEY" \ 
+ -H "Authorization: Bearer $MEDIATION_BEARER_TOKEN" + require_status 204 "session close" + fi +fi + +echo +if [ "$FAILURES" -gt 0 ]; then + echo "Feature 006 smoke FAILED ($FAILURES failure(s))." + exit 1 +fi + +echo "Feature 006 smoke PASSED." diff --git a/deploy/scripts/feature-006-staging-rehearsal.sh b/deploy/scripts/feature-006-staging-rehearsal.sh new file mode 100755 index 0000000..98630a9 --- /dev/null +++ b/deploy/scripts/feature-006-staging-rehearsal.sh @@ -0,0 +1,101 @@ +#!/usr/bin/env bash +# Feature 006 staging rehearsal: +# 1) run schema migration +# 2) validate search_vector/index/query plan +# 3) optionally run mediation smoke +# 4) optionally rollback by restoring pre-migration dump +# +# Required env: +# DATABASE_URL +# Optional env: +# REPO_ROOT default: auto-detect from this script path +# RUN_SMOKE default: false +# DO_ROLLBACK default: false +# RUN_DB_PUSH default: false (use only for ad-hoc schema reconciliation) +# PG_DUMP_PATH default: auto temp file +# PG_DUMP_CONTAINER default: joyus-ai-mcp-server-db-1 (docker fallback) +# BASE_URL used by feature-006-smoke.sh if RUN_SMOKE=true +# MEDIATION_API_KEY used by feature-006-smoke.sh if RUN_SMOKE=true +# MEDIATION_BEARER_TOKEN used by feature-006-smoke.sh if RUN_SMOKE=true +set -euo pipefail + +if [ -z "${DATABASE_URL:-}" ]; then + echo "ERROR: DATABASE_URL is required." >&2 + exit 1 +fi + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${REPO_ROOT:-$(cd "$SCRIPT_DIR/../.." && pwd)}" +RUN_SMOKE="${RUN_SMOKE:-false}" +DO_ROLLBACK="${DO_ROLLBACK:-false}" +RUN_DB_PUSH="${RUN_DB_PUSH:-false}" +PG_DUMP_PATH="${PG_DUMP_PATH:-$(mktemp -t feature006-staging-XXXXXX.dump)}" +PG_DUMP_CONTAINER="${PG_DUMP_CONTAINER:-joyus-ai-mcp-server-db-1}" + +DUMP_MODE="host" +if ! command -v pg_dump >/dev/null 2>&1 || ! 
command -v pg_restore >/dev/null 2>&1; then + if command -v docker >/dev/null 2>&1 && docker inspect "$PG_DUMP_CONTAINER" >/dev/null 2>&1; then + DUMP_MODE="docker" + else + echo "ERROR: pg_dump/pg_restore not available and docker fallback container not found." >&2 + exit 1 + fi +fi + +cleanup_dump() { + if [ -f "$PG_DUMP_PATH" ]; then + rm -f "$PG_DUMP_PATH" + fi +} + +if [ "$DO_ROLLBACK" != "true" ]; then + trap cleanup_dump EXIT +fi + +echo "== Feature 006 staging rehearsal ==" +echo "Repo root: $REPO_ROOT" +echo "Database: ${DATABASE_URL%%\?*}" +echo "Dump mode: $DUMP_MODE" +echo + +echo "-- Step 0: backup pre-migration database --" +if [ "$DUMP_MODE" = "host" ]; then + pg_dump --format=custom --file="$PG_DUMP_PATH" "$DATABASE_URL" +else + DB_NAME="$(echo "$DATABASE_URL" | sed -E 's|.*/([^/?]+).*|\1|')" + docker exec "$PG_DUMP_CONTAINER" sh -lc "PGPASSWORD=postgres pg_dump -U postgres -d '$DB_NAME' -Fc" > "$PG_DUMP_PATH" +fi +echo "Backup created at: $PG_DUMP_PATH" + +echo +echo "-- Step 1: run migrations --" +npm --prefix "$REPO_ROOT/joyus-ai-mcp-server" run db:migrate +if [ "$RUN_DB_PUSH" = "true" ]; then + echo "RUN_DB_PUSH=true -> running db:push reconciliation" + npm --prefix "$REPO_ROOT/joyus-ai-mcp-server" run db:push +fi + +echo +echo "-- Step 2: validate search_vector readiness and query plan --" +"$REPO_ROOT/deploy/scripts/feature-006-search-vector-check.sh" + +if [ "$RUN_SMOKE" = "true" ]; then + echo + echo "-- Step 3: run mediation smoke --" + "$REPO_ROOT/deploy/scripts/feature-006-smoke.sh" +fi + +if [ "$DO_ROLLBACK" = "true" ]; then + echo + echo "-- Step 4: rollback rehearsal (restore backup) --" + if [ "$DUMP_MODE" = "host" ]; then + pg_restore --clean --if-exists --no-owner --no-privileges --dbname="$DATABASE_URL" "$PG_DUMP_PATH" + else + DB_NAME="$(echo "$DATABASE_URL" | sed -E 's|.*/([^/?]+).*|\1|')" + cat "$PG_DUMP_PATH" | docker exec -i "$PG_DUMP_CONTAINER" sh -lc "PGPASSWORD=postgres pg_restore --clean --if-exists --no-owner 
--no-privileges -U postgres -d '$DB_NAME'" + fi + echo "Rollback restore complete." +fi + +echo +echo "Feature 006 staging rehearsal completed." diff --git a/joyus-ai-mcp-server/drizzle.config.ts b/joyus-ai-mcp-server/drizzle.config.ts index b8db31e..5ec75f1 100644 --- a/joyus-ai-mcp-server/drizzle.config.ts +++ b/joyus-ai-mcp-server/drizzle.config.ts @@ -1,7 +1,7 @@ import { defineConfig } from 'drizzle-kit'; export default defineConfig({ - schema: './src/db/schema.ts', + schema: ['./src/db/schema.ts', './src/content/schema.ts'], out: './drizzle/migrations', dialect: 'postgresql', dbCredentials: { diff --git a/joyus-ai-mcp-server/drizzle/migrations/0001_fast_shadowcat.sql b/joyus-ai-mcp-server/drizzle/migrations/0001_fast_shadowcat.sql new file mode 100644 index 0000000..55465ac --- /dev/null +++ b/joyus-ai-mcp-server/drizzle/migrations/0001_fast_shadowcat.sql @@ -0,0 +1,213 @@ +CREATE SCHEMA IF NOT EXISTS "content"; +--> statement-breakpoint +CREATE TYPE "content"."content_source_status" AS ENUM('active', 'syncing', 'error', 'disconnected');--> statement-breakpoint +CREATE TYPE "content"."content_source_type" AS ENUM('relational-database', 'rest-api');--> statement-breakpoint +CREATE TYPE "content"."content_sync_run_status" AS ENUM('pending', 'running', 'completed', 'failed');--> statement-breakpoint +CREATE TYPE "content"."content_sync_strategy" AS ENUM('mirror', 'pass-through', 'hybrid');--> statement-breakpoint +CREATE TYPE "content"."content_sync_trigger" AS ENUM('scheduled', 'manual');--> statement-breakpoint +CREATE TABLE "content"."api_keys" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "key_hash" text NOT NULL, + "key_prefix" text NOT NULL, + "integration_name" text NOT NULL, + "jwks_uri" text, + "issuer" text, + "audience" text, + "is_active" boolean DEFAULT true NOT NULL, + "last_used_at" timestamp, + "created_at" timestamp DEFAULT now() NOT NULL, + CONSTRAINT "api_keys_key_hash_unique" UNIQUE("key_hash") +); +--> 
statement-breakpoint +CREATE TABLE "content"."drift_reports" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "profile_id" text NOT NULL, + "window_start" timestamp NOT NULL, + "window_end" timestamp NOT NULL, + "generations_evaluated" integer NOT NULL, + "overall_drift_score" real NOT NULL, + "dimension_scores" jsonb NOT NULL, + "recommendations" jsonb NOT NULL, + "created_at" timestamp DEFAULT now() NOT NULL +); +--> statement-breakpoint +CREATE TABLE "content"."entitlements" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "user_id" text NOT NULL, + "session_id" text NOT NULL, + "product_id" text NOT NULL, + "resolved_from" text NOT NULL, + "resolved_at" timestamp NOT NULL, + "expires_at" timestamp NOT NULL +); +--> statement-breakpoint +CREATE TABLE "content"."generation_logs" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "user_id" text NOT NULL, + "session_id" text, + "profile_id" text, + "query" text NOT NULL, + "sources_used" jsonb NOT NULL, + "citation_count" integer DEFAULT 0 NOT NULL, + "response_length" integer NOT NULL, + "drift_score" real, + "created_at" timestamp DEFAULT now() NOT NULL +); +--> statement-breakpoint +CREATE TABLE "content"."items" ( + "id" text PRIMARY KEY NOT NULL, + "source_id" text NOT NULL, + "source_ref" text NOT NULL, + "title" text NOT NULL, + "body" text, + "content_type" text DEFAULT 'text' NOT NULL, + "metadata" jsonb NOT NULL, + "data_tier" integer DEFAULT 1 NOT NULL, + "search_vector" "tsvector", + "last_synced_at" timestamp NOT NULL, + "is_stale" boolean DEFAULT false NOT NULL, + "created_at" timestamp DEFAULT now() NOT NULL, + "updated_at" timestamp DEFAULT now() NOT NULL +); +--> statement-breakpoint +CREATE TABLE "content"."mediation_sessions" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "api_key_id" text NOT NULL, + "user_id" text NOT NULL, + "active_profile_id" text, + "message_count" integer DEFAULT 0 NOT NULL, + "started_at" timestamp 
DEFAULT now() NOT NULL, + "last_activity_at" timestamp DEFAULT now() NOT NULL, + "ended_at" timestamp +); +--> statement-breakpoint +CREATE TABLE "content"."operation_logs" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "operation" text NOT NULL, + "source_id" text, + "user_id" text, + "duration_ms" integer NOT NULL, + "success" boolean NOT NULL, + "metadata" jsonb NOT NULL, + "created_at" timestamp DEFAULT now() NOT NULL +); +--> statement-breakpoint +CREATE TABLE "content"."product_profiles" ( + "product_id" text NOT NULL, + "profile_id" text NOT NULL, + CONSTRAINT "product_profiles_product_id_profile_id_pk" PRIMARY KEY("product_id","profile_id") +); +--> statement-breakpoint +CREATE TABLE "content"."product_sources" ( + "product_id" text NOT NULL, + "source_id" text NOT NULL, + CONSTRAINT "product_sources_product_id_source_id_pk" PRIMARY KEY("product_id","source_id") +); +--> statement-breakpoint +CREATE TABLE "content"."products" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "name" text NOT NULL, + "description" text, + "is_active" boolean DEFAULT true NOT NULL, + "created_at" timestamp DEFAULT now() NOT NULL, + "updated_at" timestamp DEFAULT now() NOT NULL +); +--> statement-breakpoint +CREATE TABLE "content"."sources" ( + "id" text PRIMARY KEY NOT NULL, + "tenant_id" text NOT NULL, + "name" text NOT NULL, + "type" "content"."content_source_type" NOT NULL, + "sync_strategy" "content"."content_sync_strategy" NOT NULL, + "connection_config" jsonb NOT NULL, + "freshness_window_minutes" integer DEFAULT 1440 NOT NULL, + "status" "content"."content_source_status" DEFAULT 'active' NOT NULL, + "item_count" integer DEFAULT 0 NOT NULL, + "last_sync_at" timestamp, + "last_sync_error" text, + "schema_version" text, + "created_at" timestamp DEFAULT now() NOT NULL, + "updated_at" timestamp DEFAULT now() NOT NULL +); +--> statement-breakpoint +CREATE TABLE "content"."sync_runs" ( + "id" text PRIMARY KEY NOT NULL, + "source_id" text 
NOT NULL, + "status" "content"."content_sync_run_status" NOT NULL, + "trigger" "content"."content_sync_trigger" NOT NULL, + "items_discovered" integer DEFAULT 0 NOT NULL, + "items_created" integer DEFAULT 0 NOT NULL, + "items_updated" integer DEFAULT 0 NOT NULL, + "items_removed" integer DEFAULT 0 NOT NULL, + "cursor" text, + "error" text, + "started_at" timestamp DEFAULT now() NOT NULL, + "completed_at" timestamp +); +--> statement-breakpoint +ALTER TABLE "content"."entitlements" ADD CONSTRAINT "entitlements_product_id_products_id_fk" FOREIGN KEY ("product_id") REFERENCES "content"."products"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint +ALTER TABLE "content"."items" ADD CONSTRAINT "items_source_id_sources_id_fk" FOREIGN KEY ("source_id") REFERENCES "content"."sources"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint +ALTER TABLE "content"."mediation_sessions" ADD CONSTRAINT "mediation_sessions_api_key_id_api_keys_id_fk" FOREIGN KEY ("api_key_id") REFERENCES "content"."api_keys"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint +ALTER TABLE "content"."product_profiles" ADD CONSTRAINT "product_profiles_product_id_products_id_fk" FOREIGN KEY ("product_id") REFERENCES "content"."products"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint +ALTER TABLE "content"."product_sources" ADD CONSTRAINT "product_sources_product_id_products_id_fk" FOREIGN KEY ("product_id") REFERENCES "content"."products"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint +ALTER TABLE "content"."product_sources" ADD CONSTRAINT "product_sources_source_id_sources_id_fk" FOREIGN KEY ("source_id") REFERENCES "content"."sources"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint +ALTER TABLE "content"."sync_runs" ADD CONSTRAINT "sync_runs_source_id_sources_id_fk" FOREIGN KEY ("source_id") REFERENCES "content"."sources"("id") ON DELETE cascade ON UPDATE no action;--> 
statement-breakpoint
+CREATE INDEX "content_api_keys_tenant_id_idx" ON "content"."api_keys" USING btree ("tenant_id");--> statement-breakpoint
+CREATE INDEX "content_api_keys_tenant_active_idx" ON "content"."api_keys" USING btree ("tenant_id","is_active");--> statement-breakpoint
+CREATE INDEX "content_drift_tenant_profile_window_idx" ON "content"."drift_reports" USING btree ("tenant_id","profile_id","window_end");--> statement-breakpoint
+CREATE INDEX "content_drift_profile_created_idx" ON "content"."drift_reports" USING btree ("profile_id","created_at");--> statement-breakpoint
+CREATE INDEX "content_entitlements_session_user_idx" ON "content"."entitlements" USING btree ("session_id","user_id");--> statement-breakpoint
+CREATE INDEX "content_entitlements_user_product_idx" ON "content"."entitlements" USING btree ("user_id","product_id");--> statement-breakpoint
+CREATE INDEX "content_entitlements_tenant_id_idx" ON "content"."entitlements" USING btree ("tenant_id");--> statement-breakpoint
+CREATE INDEX "content_gen_logs_tenant_created_idx" ON "content"."generation_logs" USING btree ("tenant_id","created_at");--> statement-breakpoint
+CREATE INDEX "content_gen_logs_tenant_user_idx" ON "content"."generation_logs" USING btree ("tenant_id","user_id");--> statement-breakpoint
+CREATE INDEX "content_gen_logs_profile_created_idx" ON "content"."generation_logs" USING btree ("profile_id","created_at");--> statement-breakpoint
+CREATE INDEX "content_items_source_id_idx" ON "content"."items" USING btree ("source_id");--> statement-breakpoint
+CREATE UNIQUE INDEX "content_items_source_ref_unique" ON "content"."items" USING btree ("source_id","source_ref");--> statement-breakpoint
+CREATE INDEX "content_items_source_stale_idx" ON "content"."items" USING btree ("source_id","is_stale");--> statement-breakpoint
+CREATE INDEX "content_items_search_vector_gin_idx" ON "content"."items" USING gin ("search_vector");--> statement-breakpoint
+CREATE INDEX "content_sessions_tenant_user_idx" ON "content"."mediation_sessions" USING btree ("tenant_id","user_id");--> statement-breakpoint
+CREATE INDEX "content_sessions_api_key_id_idx" ON "content"."mediation_sessions" USING btree ("api_key_id");--> statement-breakpoint
+CREATE INDEX "content_sessions_tenant_activity_idx" ON "content"."mediation_sessions" USING btree ("tenant_id","last_activity_at");--> statement-breakpoint
+CREATE INDEX "content_op_logs_tenant_op_created_idx" ON "content"."operation_logs" USING btree ("tenant_id","operation","created_at");--> statement-breakpoint
+CREATE INDEX "content_op_logs_tenant_created_idx" ON "content"."operation_logs" USING btree ("tenant_id","created_at");--> statement-breakpoint
+CREATE INDEX "content_products_tenant_id_idx" ON "content"."products" USING btree ("tenant_id");--> statement-breakpoint
+CREATE UNIQUE INDEX "content_products_tenant_name_unique" ON "content"."products" USING btree ("tenant_id","name");--> statement-breakpoint
+CREATE INDEX "content_sources_tenant_id_idx" ON "content"."sources" USING btree ("tenant_id");--> statement-breakpoint
+CREATE INDEX "content_sources_tenant_type_idx" ON "content"."sources" USING btree ("tenant_id","type");--> statement-breakpoint
+CREATE INDEX "content_sources_tenant_status_idx" ON "content"."sources" USING btree ("tenant_id","status");--> statement-breakpoint
+CREATE INDEX "content_sync_runs_source_started_idx" ON "content"."sync_runs" USING btree ("source_id","started_at");--> statement-breakpoint
+CREATE INDEX "content_sync_runs_status_idx" ON "content"."sync_runs" USING btree ("status");
+--> statement-breakpoint
+CREATE OR REPLACE FUNCTION "content"."content_items_search_vector_tgr_fn"()
+RETURNS trigger
+LANGUAGE plpgsql
+AS $$
+BEGIN
+ NEW.search_vector := to_tsvector(
+ 'english',
+ concat_ws(' ', coalesce(NEW.title, ''), coalesce(NEW.body, ''))
+ );
+ RETURN NEW;
+END;
+$$;
+--> statement-breakpoint
+CREATE TRIGGER "content_items_search_vector_tgr"
+BEFORE INSERT OR UPDATE OF "title", "body"
+ON "content"."items"
+FOR EACH ROW
+EXECUTE FUNCTION "content"."content_items_search_vector_tgr_fn"();
+--> statement-breakpoint
+UPDATE "content"."items"
+SET "search_vector" = to_tsvector(
+ 'english',
+ concat_ws(' ', coalesce("title", ''), coalesce("body", ''))
+)
+WHERE "search_vector" IS NULL;
diff --git a/joyus-ai-mcp-server/drizzle/migrations/meta/0001_snapshot.json b/joyus-ai-mcp-server/drizzle/migrations/meta/0001_snapshot.json
new file mode 100644
index 0000000..30b8ca4
--- /dev/null
+++ b/joyus-ai-mcp-server/drizzle/migrations/meta/0001_snapshot.json
@@ -0,0 +1,2301 @@
+{
+ "id": "cebdefeb-b20e-4943-91f3-681652cdef02",
+ "prevId": "8c1f6c66-df92-446b-83c6-857121216841",
+ "version": "7",
+ "dialect": "postgresql",
+ "tables": {
+ "public.audit_logs": {
+ "name": "audit_logs",
+ "schema": "",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "tool": {
+ "name": "tool",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "input": {
+ "name": "input",
+ "type": "json",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "success": {
+ "name": "success",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "error": {
+ "name": "error",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "duration": {
+ "name": "duration",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "audit_logs_user_created_idx": {
+ "name": "audit_logs_user_created_idx",
+ "columns": [
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "created_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "audit_logs_tool_created_idx": {
+ "name": "audit_logs_tool_created_idx",
+ "columns": [
+ {
+ "expression": "tool",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "created_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {
+ "audit_logs_user_id_users_id_fk": {
+ "name": "audit_logs_user_id_users_id_fk",
+ "tableFrom": "audit_logs",
+ "tableTo": "users",
+ "columnsFrom": [
+ "user_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "public.connections": {
+ "name": "connections",
+ "schema": "",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "service": {
+ "name": "service",
+ "type": "service",
+ "typeSchema": "public",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "access_token": {
+ "name": "access_token",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "refresh_token": {
+ "name": "refresh_token",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "expires_at": {
+ "name": "expires_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "scope": {
+ "name": "scope",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "metadata": {
+ "name": "metadata",
+ "type": "json",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "updated_at": {
+ "name": "updated_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "connections_user_service_unique": {
+ "name": "connections_user_service_unique",
+ "columns": [
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "service",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": true,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "connections_user_id_idx": {
+ "name": "connections_user_id_idx",
+ "columns": [
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {
+ "connections_user_id_users_id_fk": {
+ "name": "connections_user_id_users_id_fk",
+ "tableFrom": "connections",
+ "tableTo": "users",
+ "columnsFrom": [
+ "user_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "public.oauth_states": {
+ "name": "oauth_states",
+ "schema": "",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "state": {
+ "name": "state",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "service": {
+ "name": "service",
+ "type": "service",
+ "typeSchema": "public",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "expires_at": {
+ "name": "expires_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true
+ }
+ },
+ "indexes": {
+ "oauth_states_state_idx": {
+ "name": "oauth_states_state_idx",
+ "columns": [
+ {
+ "expression": "state",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {
+ "oauth_states_state_unique": {
+ "name": "oauth_states_state_unique",
+ "nullsNotDistinct": false,
+ "columns": [
+ "state"
+ ]
+ }
+ },
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "public.scheduled_tasks": {
+ "name": "scheduled_tasks",
+ "schema": "",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "name": {
+ "name": "name",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "description": {
+ "name": "description",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "schedule": {
+ "name": "schedule",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "timezone": {
+ "name": "timezone",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "'America/New_York'"
+ },
+ "task_type": {
+ "name": "task_type",
+ "type": "task_type",
+ "typeSchema": "public",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "config": {
+ "name": "config",
+ "type": "json",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "notify_slack": {
+ "name": "notify_slack",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "notify_email": {
+ "name": "notify_email",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "notify_on_error": {
+ "name": "notify_on_error",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true,
+ "default": true
+ },
+ "notify_on_success": {
+ "name": "notify_on_success",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true,
+ "default": false
+ },
+ "enabled": {
+ "name": "enabled",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true,
+ "default": true
+ },
+ "last_run_at": {
+ "name": "last_run_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "next_run_at": {
+ "name": "next_run_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "updated_at": {
+ "name": "updated_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "scheduled_tasks_user_id_idx": {
+ "name": "scheduled_tasks_user_id_idx",
+ "columns": [
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "scheduled_tasks_enabled_next_run_idx": {
+ "name": "scheduled_tasks_enabled_next_run_idx",
+ "columns": [
+ {
+ "expression": "enabled",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "next_run_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {
+ "scheduled_tasks_user_id_users_id_fk": {
+ "name": "scheduled_tasks_user_id_users_id_fk",
+ "tableFrom": "scheduled_tasks",
+ "tableTo": "users",
+ "columnsFrom": [
+ "user_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "public.task_runs": {
+ "name": "task_runs",
+ "schema": "",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "task_id": {
+ "name": "task_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "status": {
+ "name": "status",
+ "type": "task_run_status",
+ "typeSchema": "public",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "started_at": {
+ "name": "started_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "completed_at": {
+ "name": "completed_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "duration": {
+ "name": "duration",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "output": {
+ "name": "output",
+ "type": "json",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "error": {
+ "name": "error",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "notified": {
+ "name": "notified",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true,
+ "default": false
+ },
+ "notified_at": {
+ "name": "notified_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ }
+ },
+ "indexes": {
+ "task_runs_task_started_idx": {
+ "name": "task_runs_task_started_idx",
+ "columns": [
+ {
+ "expression": "task_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "started_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "task_runs_user_started_idx": {
+ "name": "task_runs_user_started_idx",
+ "columns": [
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "started_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "task_runs_status_idx": {
+ "name": "task_runs_status_idx",
+ "columns": [
+ {
+ "expression": "status",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {
+ "task_runs_task_id_scheduled_tasks_id_fk": {
+ "name": "task_runs_task_id_scheduled_tasks_id_fk",
+ "tableFrom": "task_runs",
+ "tableTo": "scheduled_tasks",
+ "columnsFrom": [
+ "task_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ },
+ "task_runs_user_id_users_id_fk": {
+ "name": "task_runs_user_id_users_id_fk",
+ "tableFrom": "task_runs",
+ "tableTo": "users",
+ "columnsFrom": [
+ "user_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "public.users": {
+ "name": "users",
+ "schema": "",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "email": {
+ "name": "email",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "name": {
+ "name": "name",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "mcp_token": {
+ "name": "mcp_token",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "updated_at": {
+ "name": "updated_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {},
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {
+ "users_email_unique": {
+ "name": "users_email_unique",
+ "nullsNotDistinct": false,
+ "columns": [
+ "email"
+ ]
+ },
+ "users_mcp_token_unique": {
+ "name": "users_mcp_token_unique",
+ "nullsNotDistinct": false,
+ "columns": [
+ "mcp_token"
+ ]
+ }
+ },
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.api_keys": {
+ "name": "api_keys",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "key_hash": {
+ "name": "key_hash",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "key_prefix": {
+ "name": "key_prefix",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "integration_name": {
+ "name": "integration_name",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "jwks_uri": {
+ "name": "jwks_uri",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "issuer": {
+ "name": "issuer",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "audience": {
+ "name": "audience",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "is_active": {
+ "name": "is_active",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true,
+ "default": true
+ },
+ "last_used_at": {
+ "name": "last_used_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "content_api_keys_tenant_id_idx": {
+ "name": "content_api_keys_tenant_id_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_api_keys_tenant_active_idx": {
+ "name": "content_api_keys_tenant_active_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "is_active",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {
+ "api_keys_key_hash_unique": {
+ "name": "api_keys_key_hash_unique",
+ "nullsNotDistinct": false,
+ "columns": [
+ "key_hash"
+ ]
+ }
+ },
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.drift_reports": {
+ "name": "drift_reports",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "profile_id": {
+ "name": "profile_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "window_start": {
+ "name": "window_start",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "window_end": {
+ "name": "window_end",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "generations_evaluated": {
+ "name": "generations_evaluated",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "overall_drift_score": {
+ "name": "overall_drift_score",
+ "type": "real",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "dimension_scores": {
+ "name": "dimension_scores",
+ "type": "jsonb",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "recommendations": {
+ "name": "recommendations",
+ "type": "jsonb",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "content_drift_tenant_profile_window_idx": {
+ "name": "content_drift_tenant_profile_window_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "profile_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "window_end",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_drift_profile_created_idx": {
+ "name": "content_drift_profile_created_idx",
+ "columns": [
+ {
+ "expression": "profile_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "created_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.entitlements": {
+ "name": "entitlements",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "session_id": {
+ "name": "session_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "product_id": {
+ "name": "product_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "resolved_from": {
+ "name": "resolved_from",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "resolved_at": {
+ "name": "resolved_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "expires_at": {
+ "name": "expires_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true
+ }
+ },
+ "indexes": {
+ "content_entitlements_session_user_idx": {
+ "name": "content_entitlements_session_user_idx",
+ "columns": [
+ {
+ "expression": "session_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_entitlements_user_product_idx": {
+ "name": "content_entitlements_user_product_idx",
+ "columns": [
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "product_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_entitlements_tenant_id_idx": {
+ "name": "content_entitlements_tenant_id_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {
+ "entitlements_product_id_products_id_fk": {
+ "name": "entitlements_product_id_products_id_fk",
+ "tableFrom": "entitlements",
+ "tableTo": "products",
+ "schemaTo": "content",
+ "columnsFrom": [
+ "product_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.generation_logs": {
+ "name": "generation_logs",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "session_id": {
+ "name": "session_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "profile_id": {
+ "name": "profile_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "query": {
+ "name": "query",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "sources_used": {
+ "name": "sources_used",
+ "type": "jsonb",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "citation_count": {
+ "name": "citation_count",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true,
+ "default": 0
+ },
+ "response_length": {
+ "name": "response_length",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "drift_score": {
+ "name": "drift_score",
+ "type": "real",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "content_gen_logs_tenant_created_idx": {
+ "name": "content_gen_logs_tenant_created_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "created_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_gen_logs_tenant_user_idx": {
+ "name": "content_gen_logs_tenant_user_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_gen_logs_profile_created_idx": {
+ "name": "content_gen_logs_profile_created_idx",
+ "columns": [
+ {
+ "expression": "profile_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "created_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.items": {
+ "name": "items",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "source_id": {
+ "name": "source_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "source_ref": {
+ "name": "source_ref",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "title": {
+ "name": "title",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "body": {
+ "name": "body",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "content_type": {
+ "name": "content_type",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "'text'"
+ },
+ "metadata": {
+ "name": "metadata",
+ "type": "jsonb",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "data_tier": {
+ "name": "data_tier",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true,
+ "default": 1
+ },
+ "search_vector": {
+ "name": "search_vector",
+ "type": "tsvector",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "last_synced_at": {
+ "name": "last_synced_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "is_stale": {
+ "name": "is_stale",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true,
+ "default": false
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "updated_at": {
+ "name": "updated_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "content_items_source_id_idx": {
+ "name": "content_items_source_id_idx",
+ "columns": [
+ {
+ "expression": "source_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_items_source_ref_unique": {
+ "name": "content_items_source_ref_unique",
+ "columns": [
+ {
+ "expression": "source_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "source_ref",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": true,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_items_source_stale_idx": {
+ "name": "content_items_source_stale_idx",
+ "columns": [
+ {
+ "expression": "source_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "is_stale",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {
+ "items_source_id_sources_id_fk": {
+ "name": "items_source_id_sources_id_fk",
+ "tableFrom": "items",
+ "tableTo": "sources",
+ "schemaTo": "content",
+ "columnsFrom": [
+ "source_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.mediation_sessions": {
+ "name": "mediation_sessions",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "api_key_id": {
+ "name": "api_key_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "active_profile_id": {
+ "name": "active_profile_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "message_count": {
+ "name": "message_count",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true,
+ "default": 0
+ },
+ "started_at": {
+ "name": "started_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "last_activity_at": {
+ "name": "last_activity_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "ended_at": {
+ "name": "ended_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ }
+ },
+ "indexes": {
+ "content_sessions_tenant_user_idx": {
+ "name": "content_sessions_tenant_user_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "user_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_sessions_api_key_id_idx": {
+ "name": "content_sessions_api_key_id_idx",
+ "columns": [
+ {
+ "expression": "api_key_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_sessions_tenant_activity_idx": {
+ "name": "content_sessions_tenant_activity_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "last_activity_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {
+ "mediation_sessions_api_key_id_api_keys_id_fk": {
+ "name": "mediation_sessions_api_key_id_api_keys_id_fk",
+ "tableFrom": "mediation_sessions",
+ "tableTo": "api_keys",
+ "schemaTo": "content",
+ "columnsFrom": [
+ "api_key_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "no action",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.operation_logs": {
+ "name": "operation_logs",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "operation": {
+ "name": "operation",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "source_id": {
+ "name": "source_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "user_id": {
+ "name": "user_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "duration_ms": {
+ "name": "duration_ms",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "success": {
+ "name": "success",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "metadata": {
+ "name": "metadata",
+ "type": "jsonb",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "content_op_logs_tenant_op_created_idx": {
+ "name": "content_op_logs_tenant_op_created_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "operation",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "created_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_op_logs_tenant_created_idx": {
+ "name": "content_op_logs_tenant_created_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "created_at",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.product_profiles": {
+ "name": "product_profiles",
+ "schema": "content",
+ "columns": {
+ "product_id": {
+ "name": "product_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "profile_id": {
+ "name": "profile_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ }
+ },
+ "indexes": {},
+ "foreignKeys": {
+ "product_profiles_product_id_products_id_fk": {
+ "name": "product_profiles_product_id_products_id_fk",
+ "tableFrom": "product_profiles",
+ "tableTo": "products",
+ "schemaTo": "content",
+ "columnsFrom": [
+ "product_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {
+ "product_profiles_product_id_profile_id_pk": {
+ "name": "product_profiles_product_id_profile_id_pk",
+ "columns": [
+ "product_id",
+ "profile_id"
+ ]
+ }
+ },
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.product_sources": {
+ "name": "product_sources",
+ "schema": "content",
+ "columns": {
+ "product_id": {
+ "name": "product_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "source_id": {
+ "name": "source_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ }
+ },
+ "indexes": {},
+ "foreignKeys": {
+ "product_sources_product_id_products_id_fk": {
+ "name": "product_sources_product_id_products_id_fk",
+ "tableFrom": "product_sources",
+ "tableTo": "products",
+ "schemaTo": "content",
+ "columnsFrom": [
+ "product_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ },
+ "product_sources_source_id_sources_id_fk": {
+ "name": "product_sources_source_id_sources_id_fk",
+ "tableFrom": "product_sources",
+ "tableTo": "sources",
+ "schemaTo": "content",
+ "columnsFrom": [
+ "source_id"
+ ],
+ "columnsTo": [
+ "id"
+ ],
+ "onDelete": "cascade",
+ "onUpdate": "no action"
+ }
+ },
+ "compositePrimaryKeys": {
+ "product_sources_product_id_source_id_pk": {
+ "name": "product_sources_product_id_source_id_pk",
+ "columns": [
+ "product_id",
+ "source_id"
+ ]
+ }
+ },
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.products": {
+ "name": "products",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "name": {
+ "name": "name",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "description": {
+ "name": "description",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "is_active": {
+ "name": "is_active",
+ "type": "boolean",
+ "primaryKey": false,
+ "notNull": true,
+ "default": true
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "updated_at": {
+ "name": "updated_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "content_products_tenant_id_idx": {
+ "name": "content_products_tenant_id_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_products_tenant_name_unique": {
+ "name": "content_products_tenant_name_unique",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "name",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": true,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ }
+ },
+ "foreignKeys": {},
+ "compositePrimaryKeys": {},
+ "uniqueConstraints": {},
+ "policies": {},
+ "checkConstraints": {},
+ "isRLSEnabled": false
+ },
+ "content.sources": {
+ "name": "sources",
+ "schema": "content",
+ "columns": {
+ "id": {
+ "name": "id",
+ "type": "text",
+ "primaryKey": true,
+ "notNull": true
+ },
+ "tenant_id": {
+ "name": "tenant_id",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "name": {
+ "name": "name",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "type": {
+ "name": "type",
+ "type": "content_source_type",
+ "typeSchema": "content",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "sync_strategy": {
+ "name": "sync_strategy",
+ "type": "content_sync_strategy",
+ "typeSchema": "content",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "connection_config": {
+ "name": "connection_config",
+ "type": "jsonb",
+ "primaryKey": false,
+ "notNull": true
+ },
+ "freshness_window_minutes": {
+ "name": "freshness_window_minutes",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true,
+ "default": 1440
+ },
+ "status": {
+ "name": "status",
+ "type": "content_source_status",
+ "typeSchema": "content",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "'active'"
+ },
+ "item_count": {
+ "name": "item_count",
+ "type": "integer",
+ "primaryKey": false,
+ "notNull": true,
+ "default": 0
+ },
+ "last_sync_at": {
+ "name": "last_sync_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "last_sync_error": {
+ "name": "last_sync_error",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "schema_version": {
+ "name": "schema_version",
+ "type": "text",
+ "primaryKey": false,
+ "notNull": false
+ },
+ "created_at": {
+ "name": "created_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ },
+ "updated_at": {
+ "name": "updated_at",
+ "type": "timestamp",
+ "primaryKey": false,
+ "notNull": true,
+ "default": "now()"
+ }
+ },
+ "indexes": {
+ "content_sources_tenant_id_idx": {
+ "name": "content_sources_tenant_id_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_sources_tenant_type_idx": {
+ "name": "content_sources_tenant_type_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ },
+ {
+ "expression": "type",
+ "isExpression": false,
+ "asc": true,
+ "nulls": "last"
+ }
+ ],
+ "isUnique": false,
+ "concurrently": false,
+ "method": "btree",
+ "with": {}
+ },
+ "content_sources_tenant_status_idx": {
+ "name": "content_sources_tenant_status_idx",
+ "columns": [
+ {
+ "expression": "tenant_id",
+ "isExpression": false,
+ "asc": true,
+
"nulls": "last" + }, + { + "expression": "status", + "isExpression": false, + "asc": true, + "nulls": "last" + } + ], + "isUnique": false, + "concurrently": false, + "method": "btree", + "with": {} + } + }, + "foreignKeys": {}, + "compositePrimaryKeys": {}, + "uniqueConstraints": {}, + "policies": {}, + "checkConstraints": {}, + "isRLSEnabled": false + }, + "content.sync_runs": { + "name": "sync_runs", + "schema": "content", + "columns": { + "id": { + "name": "id", + "type": "text", + "primaryKey": true, + "notNull": true + }, + "source_id": { + "name": "source_id", + "type": "text", + "primaryKey": false, + "notNull": true + }, + "status": { + "name": "status", + "type": "content_sync_run_status", + "typeSchema": "content", + "primaryKey": false, + "notNull": true + }, + "trigger": { + "name": "trigger", + "type": "content_sync_trigger", + "typeSchema": "content", + "primaryKey": false, + "notNull": true + }, + "items_discovered": { + "name": "items_discovered", + "type": "integer", + "primaryKey": false, + "notNull": true, + "default": 0 + }, + "items_created": { + "name": "items_created", + "type": "integer", + "primaryKey": false, + "notNull": true, + "default": 0 + }, + "items_updated": { + "name": "items_updated", + "type": "integer", + "primaryKey": false, + "notNull": true, + "default": 0 + }, + "items_removed": { + "name": "items_removed", + "type": "integer", + "primaryKey": false, + "notNull": true, + "default": 0 + }, + "cursor": { + "name": "cursor", + "type": "text", + "primaryKey": false, + "notNull": false + }, + "error": { + "name": "error", + "type": "text", + "primaryKey": false, + "notNull": false + }, + "started_at": { + "name": "started_at", + "type": "timestamp", + "primaryKey": false, + "notNull": true, + "default": "now()" + }, + "completed_at": { + "name": "completed_at", + "type": "timestamp", + "primaryKey": false, + "notNull": false + } + }, + "indexes": { + "content_sync_runs_source_started_idx": { + "name": 
"content_sync_runs_source_started_idx", + "columns": [ + { + "expression": "source_id", + "isExpression": false, + "asc": true, + "nulls": "last" + }, + { + "expression": "started_at", + "isExpression": false, + "asc": true, + "nulls": "last" + } + ], + "isUnique": false, + "concurrently": false, + "method": "btree", + "with": {} + }, + "content_sync_runs_status_idx": { + "name": "content_sync_runs_status_idx", + "columns": [ + { + "expression": "status", + "isExpression": false, + "asc": true, + "nulls": "last" + } + ], + "isUnique": false, + "concurrently": false, + "method": "btree", + "with": {} + } + }, + "foreignKeys": { + "sync_runs_source_id_sources_id_fk": { + "name": "sync_runs_source_id_sources_id_fk", + "tableFrom": "sync_runs", + "tableTo": "sources", + "schemaTo": "content", + "columnsFrom": [ + "source_id" + ], + "columnsTo": [ + "id" + ], + "onDelete": "cascade", + "onUpdate": "no action" + } + }, + "compositePrimaryKeys": {}, + "uniqueConstraints": {}, + "policies": {}, + "checkConstraints": {}, + "isRLSEnabled": false + } + }, + "enums": { + "public.service": { + "name": "service", + "schema": "public", + "values": [ + "JIRA", + "SLACK", + "GITHUB", + "GOOGLE" + ] + }, + "public.task_run_status": { + "name": "task_run_status", + "schema": "public", + "values": [ + "PENDING", + "RUNNING", + "COMPLETED", + "FAILED", + "SKIPPED" + ] + }, + "public.task_type": { + "name": "task_type", + "schema": "public", + "values": [ + "JIRA_STANDUP_SUMMARY", + "JIRA_OVERDUE_ALERT", + "JIRA_SPRINT_REPORT", + "SLACK_CHANNEL_DIGEST", + "SLACK_MENTIONS_SUMMARY", + "GITHUB_PR_REMINDER", + "GITHUB_STALE_PR_ALERT", + "GITHUB_RELEASE_NOTES", + "GMAIL_DIGEST", + "WEEKLY_STATUS_REPORT", + "CUSTOM_TOOL_SEQUENCE" + ] + }, + "content.content_source_status": { + "name": "content_source_status", + "schema": "content", + "values": [ + "active", + "syncing", + "error", + "disconnected" + ] + }, + "content.content_source_type": { + "name": "content_source_type", + "schema": 
"content", + "values": [ + "relational-database", + "rest-api" + ] + }, + "content.content_sync_run_status": { + "name": "content_sync_run_status", + "schema": "content", + "values": [ + "pending", + "running", + "completed", + "failed" + ] + }, + "content.content_sync_strategy": { + "name": "content_sync_strategy", + "schema": "content", + "values": [ + "mirror", + "pass-through", + "hybrid" + ] + }, + "content.content_sync_trigger": { + "name": "content_sync_trigger", + "schema": "content", + "values": [ + "scheduled", + "manual" + ] + } + }, + "schemas": { + "content": "content" + }, + "sequences": {}, + "roles": {}, + "policies": {}, + "views": {}, + "_meta": { + "columns": {}, + "schemas": {}, + "tables": {} + } +} \ No newline at end of file diff --git a/joyus-ai-mcp-server/drizzle/migrations/meta/_journal.json b/joyus-ai-mcp-server/drizzle/migrations/meta/_journal.json index 22e589f..2d27da2 100644 --- a/joyus-ai-mcp-server/drizzle/migrations/meta/_journal.json +++ b/joyus-ai-mcp-server/drizzle/migrations/meta/_journal.json @@ -8,6 +8,13 @@ "when": 1769805467445, "tag": "0000_aromatic_queen_noir", "breakpoints": true + }, + { + "idx": 1, + "version": "7", + "when": 1772748522434, + "tag": "0001_fast_shadowcat", + "breakpoints": true } ] } \ No newline at end of file diff --git a/joyus-ai-mcp-server/src/content/generation/index.ts b/joyus-ai-mcp-server/src/content/generation/index.ts index b1cc99f..70274d2 100644 --- a/joyus-ai-mcp-server/src/content/generation/index.ts +++ b/joyus-ai-mcp-server/src/content/generation/index.ts @@ -7,6 +7,7 @@ import { drizzle } from 'drizzle-orm/node-postgres'; import { createId } from '@paralleldrive/cuid2'; import { contentGenerationLogs, contentOperationLogs } from '../schema.js'; import type { ResolvedEntitlements, GenerationResult } from '../types.js'; +import { assertProfileAccessOrAudit } from '../profiles/access.js'; import { ContentRetriever, type SearchService, type RetrievalResult, type RetrievedItem } from 
'./retriever.js'; import { ContentGenerator, @@ -15,6 +16,7 @@ import { type GenerationOutput, } from './generator.js'; import { CitationManager, type CitationResult } from './citations.js'; +import { HttpGenerationProvider, type HttpGenerationProviderConfig } from './providers.js'; type DrizzleClient = ReturnType<typeof drizzle>; @@ -49,6 +51,14 @@ export class GenerationService { ): Promise<GenerationResult> { const startMs = Date.now(); + await assertProfileAccessOrAudit(this.db, { + profileId: options?.profileId, + tenantId, + userId, + entitlements, + sessionId: options?.sessionId, + }); + // 1. Retrieve relevant content const retrieval = await this.retriever.retrieve(query, entitlements, { sourceIds: options?.sourceIds, @@ -120,4 +130,5 @@ export { type GenerationProvider, type GenerationOutput, } from './generator.js'; +export { HttpGenerationProvider, type HttpGenerationProviderConfig } from './providers.js'; export { CitationManager, type CitationResult } from './citations.js'; diff --git a/joyus-ai-mcp-server/src/content/generation/providers.ts b/joyus-ai-mcp-server/src/content/generation/providers.ts new file mode 100644 index 0000000..067b168 --- /dev/null +++ b/joyus-ai-mcp-server/src/content/generation/providers.ts @@ -0,0 +1,51 @@ +import axios from 'axios'; + +import type { GenerationProvider } from './generator.js'; + +export interface HttpGenerationProviderConfig { + url: string; + timeoutMs?: number; + apiKey?: string; +} + +/** + * HTTP-backed generation provider. + * + * Expected response body: + * { "text": "..." } + * or plain string body. + */ +export class HttpGenerationProvider implements GenerationProvider { + private readonly timeoutMs: number; + + constructor(private readonly config: HttpGenerationProviderConfig) { + this.timeoutMs = config.timeoutMs ??
10000; + } + + async generate(prompt: string, systemPrompt: string): Promise<string> { + const headers: Record<string, string> = { + 'Content-Type': 'application/json', + }; + + if (this.config.apiKey) { + headers.Authorization = `Bearer ${this.config.apiKey}`; + } + + const response = await axios.post( + this.config.url, + { prompt, systemPrompt }, + { headers, timeout: this.timeoutMs }, + ); + + if (typeof response.data === 'string') { + return response.data; + } + + if (response.data && typeof response.data.text === 'string') { + return response.data.text; + } + + throw new Error('Invalid generation provider response shape'); + } +} + diff --git a/joyus-ai-mcp-server/src/content/generation/retriever.ts b/joyus-ai-mcp-server/src/content/generation/retriever.ts index 2ae197f..b8fbb26 100644 --- a/joyus-ai-mcp-server/src/content/generation/retriever.ts +++ b/joyus-ai-mcp-server/src/content/generation/retriever.ts @@ -55,10 +55,11 @@ export class ContentRetriever { const results = await this.searchService.search(query, accessibleSourceIds, { limit: maxSources, }); + const scopedResults = results.filter(result => accessibleSourceIds.includes(result.sourceId)); // 3.
Fetch full content for each result const items: RetrievedItem[] = []; - for (const result of results) { + for (const result of scopedResults) { const rows = await this.db .select() .from(contentItems) @@ -82,6 +83,6 @@ export class ContentRetriever { .map((item, i) => `[Source ${i + 1}: "${item.title}"] ${item.body}`) .join('\n\n'); - return { items, contextText, totalSearchResults: results.length }; + return { items, contextText, totalSearchResults: scopedResults.length }; } } diff --git a/joyus-ai-mcp-server/src/content/index.ts b/joyus-ai-mcp-server/src/content/index.ts index dde5fa3..c181275 100644 --- a/joyus-ai-mcp-server/src/content/index.ts +++ b/joyus-ai-mcp-server/src/content/index.ts @@ -11,19 +11,103 @@ import type { DrizzleClient } from './types.js'; import { connectorRegistry } from './connectors/index.js'; import { PgFtsProvider, SearchService } from './search/index.js'; import { EntitlementCache, EntitlementService, HttpEntitlementResolver } from './entitlements/index.js'; -import { GenerationService, PlaceholderGenerationProvider, type SearchService as GenSearchService } from './generation/index.js'; +import { + GenerationService, + PlaceholderGenerationProvider, + HttpGenerationProvider, + type SearchService as GenSearchService, +} from './generation/index.js'; import { SyncEngine, initializeSyncScheduler } from './sync/index.js'; import { HealthChecker } from './monitoring/health.js'; import { MetricsCollector } from './monitoring/metrics.js'; import { DriftMonitor } from './monitoring/drift.js'; -import { StubVoiceAnalyzer } from './monitoring/voice-analyzer.js'; +import { StubVoiceAnalyzer, HttpVoiceAnalyzer } from './monitoring/voice-analyzer.js'; import { createMonitoringRouter } from './monitoring/routes.js'; import { createMediationRouter } from './mediation/router.js'; +import { ProfileIngestionQueue } from './profiles/ingestion-queue.js'; export interface ContentModuleConfig { db: DrizzleClient; } +type ProviderWiringStatus = { + 
generationProvider: 'real' | 'placeholder'; + voiceAnalyzer: 'real' | 'stub'; +}; + +function createGenerationProvider() { + const mode = process.env.CONTENT_GENERATION_PROVIDER + ?? (process.env.CONTENT_GENERATION_PROVIDER_URL ? 'http' : 'placeholder'); + + if (mode === 'http') { + const url = process.env.CONTENT_GENERATION_PROVIDER_URL; + if (!url) { + throw new Error( + 'CONTENT_GENERATION_PROVIDER=http requires CONTENT_GENERATION_PROVIDER_URL', + ); + } + return { + provider: new HttpGenerationProvider({ + url, + timeoutMs: Number(process.env.CONTENT_GENERATION_PROVIDER_TIMEOUT_MS ?? 10000), + apiKey: process.env.CONTENT_GENERATION_PROVIDER_API_KEY, + }), + status: 'real' as const, + }; + } + + return { + provider: new PlaceholderGenerationProvider(), + status: 'placeholder' as const, + }; +} + +function createVoiceAnalyzer() { + const mode = process.env.CONTENT_VOICE_ANALYZER_PROVIDER + ?? (process.env.CONTENT_VOICE_ANALYZER_URL ? 'http' : 'stub'); + + if (mode === 'http') { + const url = process.env.CONTENT_VOICE_ANALYZER_URL; + if (!url) { + throw new Error( + 'CONTENT_VOICE_ANALYZER_PROVIDER=http requires CONTENT_VOICE_ANALYZER_URL', + ); + } + return { + analyzer: new HttpVoiceAnalyzer({ + url, + timeoutMs: Number(process.env.CONTENT_VOICE_ANALYZER_TIMEOUT_MS ?? 
10000), + apiKey: process.env.CONTENT_VOICE_ANALYZER_API_KEY, + }), + status: 'real' as const, + }; + } + + return { + analyzer: new StubVoiceAnalyzer(), + status: 'stub' as const, + }; +} + +function enforceProviderSafety(wiring: ProviderWiringStatus): void { + const strictMode = + process.env.NODE_ENV === 'production' + || process.env.CONTENT_STRICT_INIT === 'true'; + const driftEnabled = process.env.CONTENT_DRIFT_ENABLED === 'true'; + + if (strictMode && wiring.generationProvider === 'placeholder') { + throw new Error( + 'Content module startup blocked: production/strict mode requires a real generation provider', + ); + } + + if (driftEnabled && wiring.voiceAnalyzer === 'stub') { + throw new Error( + 'Content module startup blocked: CONTENT_DRIFT_ENABLED=true requires a real voice analyzer', + ); + } +} + export async function initializeContentModule( app: Express, config: ContentModuleConfig, @@ -50,23 +134,57 @@ export async function initializeContentModule( }); const entitlementService = new EntitlementService(entitlementResolver, entitlementCache, db); - // 3. Generation (bridge search service to generation's expected interface) - const generationProvider = new PlaceholderGenerationProvider(); + // 3. Generation + drift analyzer providers + const generationProviderConfig = createGenerationProvider(); + const voiceAnalyzerConfig = createVoiceAnalyzer(); + const providerWiring: ProviderWiringStatus = { + generationProvider: generationProviderConfig.status, + voiceAnalyzer: voiceAnalyzerConfig.status, + }; + enforceProviderSafety(providerWiring); + + // 4. Generation (explicitly bridge to retriever contract: query + sourceIds) + const generationSearchAdapter: GenSearchService = { + search: async (query, accessibleSourceIds, options) => { + if (accessibleSourceIds.length === 0) { + return []; + } + return searchProvider.search(query, accessibleSourceIds, { + limit: options?.limit ?? 
20, + offset: 0, + }); + }, + }; const generationService = new GenerationService( - searchService as unknown as GenSearchService, - generationProvider, + generationSearchAdapter, + generationProviderConfig.provider, db, ); - // 4. Sync + // 5. Sync const syncEngine = new SyncEngine(db, connectorRegistry); - // 5. Monitoring (use module-level db from db/client.js) - const healthChecker = new HealthChecker(); + // 6. Monitoring (use module-level db from db/client.js) + const profileIngestionQueue = new ProfileIngestionQueue( + async () => { + // Queue skeleton for WP03: actual profile-engine processing is injected in later milestones. + }, + { + maxQueueDepth: Number(process.env.CONTENT_PROFILE_QUEUE_MAX_DEPTH ?? 500), + concurrency: Number(process.env.CONTENT_PROFILE_QUEUE_CONCURRENCY ?? 2), + maxRetries: Number(process.env.CONTENT_PROFILE_QUEUE_MAX_RETRIES ?? 2), + retryDelayMs: Number(process.env.CONTENT_PROFILE_QUEUE_RETRY_DELAY_MS ?? 250), + }, + ); + + const healthChecker = new HealthChecker( + providerWiring, + () => profileIngestionQueue.getMetrics(), + ); const metricsCollector = new MetricsCollector(); - const driftMonitor = new DriftMonitor(new StubVoiceAnalyzer(), db); + const driftMonitor = new DriftMonitor(voiceAnalyzerConfig.analyzer, db); - // 6. Background jobs (gated on env vars) + // 7. Background jobs (gated on env vars) if (process.env.CONTENT_SYNC_ENABLED === 'true') { initializeSyncScheduler(syncEngine); } @@ -74,16 +192,25 @@ export async function initializeContentModule( driftMonitor.start(); } - // 7. Mount routes - app.use('/api/content', createMonitoringRouter(healthChecker, metricsCollector)); + // 8. 
Mount routes + app.use( + '/api/content', + createMonitoringRouter( + healthChecker, + metricsCollector, + () => profileIngestionQueue.getMetrics(), + ), + ); app.use( '/api/mediation', createMediationRouter({ db, entitlementService, generationService, entitlementCache }), ); - console.log('[content] Module initialized successfully'); + console.log( + `[content] Module initialized successfully (generation=${providerWiring.generationProvider}, analyzer=${providerWiring.voiceAnalyzer})`, + ); } catch (err) { - // Log but do not crash — content module failure must not take down the server console.error('[content] Module initialization failed:', err); + throw err; } } diff --git a/joyus-ai-mcp-server/src/content/mediation/router.ts b/joyus-ai-mcp-server/src/content/mediation/router.ts index 5a305a5..6472090 100644 --- a/joyus-ai-mcp-server/src/content/mediation/router.ts +++ b/joyus-ai-mcp-server/src/content/mediation/router.ts @@ -17,6 +17,7 @@ import { Router, type Request, type Response } from 'express'; import { drizzle } from 'drizzle-orm/node-postgres'; import { createAuthMiddleware } from './auth.js'; import { MediationSessionService } from './session.js'; +import { ProfileAccessDeniedError } from '../profiles/access.js'; import type { GenerationService } from '../generation/index.js'; import type { EntitlementService } from '../entitlements/index.js'; import type { EntitlementCache } from '../entitlements/cache.js'; @@ -30,6 +31,17 @@ export interface MediationDependencies { entitlementCache: EntitlementCache; } +type SessionRecord = Awaited<ReturnType<MediationSessionService['getSession']>>; + +export function isSessionAccessible( + session: SessionRecord, + userId: string, + tenantId: string, +): session is NonNullable<SessionRecord> { + if (!session || session.endedAt) return false; + return session.userId === userId && session.tenantId === tenantId; +} + export function createMediationRouter(deps: MediationDependencies): Router { const router = Router(); const { db, generationService, entitlementService, entitlementCache
} = deps; @@ -74,14 +86,10 @@ export function createMediationRouter(deps: MediationDependencies): Router { // Validate session exists, belongs to this user, and is not closed const session = await sessionService.getSession(sessionId); - if (!session || session.endedAt) { + if (!isSessionAccessible(session, req.userId!, req.tenantId!)) { res.status(404).json({ error: 'session_not_found', message: 'Session not found or already closed' }); return; } - if (session.userId !== req.userId) { - res.status(404).json({ error: 'session_not_found', message: 'Session not found' }); - return; - } // Resolve entitlements for this session const entitlements = await entitlementService.resolve( @@ -117,6 +125,14 @@ export function createMediationRouter(deps: MediationDependencies): Router { }, }); } catch (err) { + if (err instanceof ProfileAccessDeniedError) { + res.status(403).json({ + error: 'profile_access_denied', + message: 'Active profile is not accessible for this tenant/session', + }); + return; + } + console.error('[mediation] message processing failed', err); res.status(500).json({ error: 'internal_error', message: 'Failed to process message' }); } }); @@ -125,7 +141,7 @@ export function createMediationRouter(deps: MediationDependencies): Router { router.get('/sessions/:sessionId', async (req: Request, res: Response): Promise<void> => { try { const session = await sessionService.getSession(req.params.sessionId); - if (!session || session.userId !== req.userId) { + if (!isSessionAccessible(session, req.userId!, req.tenantId!)) { res.status(404).json({ error: 'session_not_found', message: 'Session not found' }); return; } @@ -139,7 +155,7 @@ export function createMediationRouter(deps: MediationDependencies): Router { router.delete('/sessions/:sessionId', async (req: Request, res: Response): Promise<void> => { try { const session = await sessionService.getSession(req.params.sessionId); - if (!session || session.userId !== req.userId) { + if (!isSessionAccessible(session, req.userId!,
req.tenantId!)) { res.status(404).json({ error: 'session_not_found', message: 'Session not found' }); return; } diff --git a/joyus-ai-mcp-server/src/content/monitoring/health.ts b/joyus-ai-mcp-server/src/content/monitoring/health.ts index f994349..7ada668 100644 --- a/joyus-ai-mcp-server/src/content/monitoring/health.ts +++ b/joyus-ai-mcp-server/src/content/monitoring/health.ts @@ -12,6 +12,7 @@ import { sql } from 'drizzle-orm'; import { db, contentSources } from '../../db/client.js'; +import type { ProfileQueueMetrics } from '../profiles/ingestion-queue.js'; // ============================================================ // TYPES // ============================================================ @@ -30,6 +31,11 @@ export interface HealthReport { timestamp: string; } +export interface ProviderWiringStatus { + generationProvider: 'real' | 'placeholder'; + voiceAnalyzer: 'real' | 'stub'; +} + // ============================================================ // HEALTH CHECKER // ============================================================ @@ -49,13 +55,19 @@ function withTimeout( } export class HealthChecker { + constructor( + private readonly providerWiring?: ProviderWiringStatus, + private readonly profileQueueMetricsProvider?: () => ProfileQueueMetrics, + ) {} + async check(): Promise<HealthReport> { - const [database, connectors, searchProvider, entitlementResolver] = + const [database, connectors, searchProvider, entitlementResolver, profileQueue] = await Promise.all([ withTimeout(this.checkDatabase(), { status: 'unhealthy' as HealthStatus, detail: 'timeout' }), withTimeout(this.checkConnectors(), { status: 'unhealthy' as HealthStatus, detail: 'timeout' }), withTimeout(this.checkSearchProvider(), { status: 'degraded' as HealthStatus, detail: 'timeout' }), withTimeout(this.checkEntitlementResolver(), { status: 'degraded' as HealthStatus, detail: 'timeout' }), + withTimeout(this.checkProfileQueue(), { status: 'degraded' as HealthStatus, detail: 'timeout' }), ]); const components: Record<string, ComponentHealth> = { @@ -63,8 +75,23 @@ export class HealthChecker {
connectors, searchProvider, entitlementResolver, + profileQueue, }; + if (this.providerWiring) { + const hasPlaceholder = this.providerWiring.generationProvider === 'placeholder'; + const hasStubAnalyzer = this.providerWiring.voiceAnalyzer === 'stub'; + components.providerWiring = hasPlaceholder || hasStubAnalyzer + ? { + status: 'degraded', + detail: `generation=${this.providerWiring.generationProvider}, voiceAnalyzer=${this.providerWiring.voiceAnalyzer}`, + } + : { + status: 'healthy', + detail: 'generation=real, voiceAnalyzer=real', + }; + } + const statuses = Object.values(components).map((c) => c.status); const overall: HealthStatus = statuses.includes('unhealthy') ? 'unhealthy' @@ -144,4 +171,35 @@ export class HealthChecker { return { status: 'unhealthy', detail }; } } + + private async checkProfileQueue(): Promise<ComponentHealth> { + if (!this.profileQueueMetricsProvider) { + return { status: 'healthy', detail: 'not configured' }; + } + + try { + const metrics = this.profileQueueMetricsProvider(); + if (metrics.saturation >= 1) { + return { + status: 'unhealthy', + detail: `queue saturated depth=${metrics.depth} inFlight=${metrics.inFlight}`, + }; + } + + if (metrics.saturation >= 0.8 || metrics.rejected > 0) { + return { + status: 'degraded', + detail: `queue pressure depth=${metrics.depth} rejected=${metrics.rejected}`, + }; + } + + return { + status: 'healthy', + detail: `depth=${metrics.depth} inFlight=${metrics.inFlight}`, + }; + } catch (err) { + const detail = err instanceof Error ?
err.message : 'unknown error'; + return { status: 'degraded', detail }; + } + } } diff --git a/joyus-ai-mcp-server/src/content/monitoring/index.ts b/joyus-ai-mcp-server/src/content/monitoring/index.ts index 885e74c..e665d06 100644 --- a/joyus-ai-mcp-server/src/content/monitoring/index.ts +++ b/joyus-ai-mcp-server/src/content/monitoring/index.ts @@ -23,7 +23,7 @@ export type { export { createMonitoringRouter } from './routes.js'; -export type { DriftAnalysis, VoiceAnalyzer } from './voice-analyzer.js'; -export { StubVoiceAnalyzer } from './voice-analyzer.js'; +export type { DriftAnalysis, VoiceAnalyzer, HttpVoiceAnalyzerConfig } from './voice-analyzer.js'; +export { StubVoiceAnalyzer, HttpVoiceAnalyzer } from './voice-analyzer.js'; export { DriftMonitor, getLatestDriftReport, getDriftSummary } from './drift.js'; diff --git a/joyus-ai-mcp-server/src/content/monitoring/routes.ts b/joyus-ai-mcp-server/src/content/monitoring/routes.ts index d073dac..79b5318 100644 --- a/joyus-ai-mcp-server/src/content/monitoring/routes.ts +++ b/joyus-ai-mcp-server/src/content/monitoring/routes.ts @@ -14,10 +14,12 @@ import { Router, Request, Response } from 'express'; import { HealthChecker } from './health.js'; import { MetricsCollector } from './metrics.js'; +import type { ProfileQueueMetrics } from '../profiles/ingestion-queue.js'; export function createMonitoringRouter( healthChecker: HealthChecker, metricsCollector: MetricsCollector, + getProfileQueueMetrics?: () => ProfileQueueMetrics, ): Router { const router = Router(); @@ -46,5 +48,16 @@ export function createMonitoringRouter( } }); + router.get('/profiles/queue/metrics', (_req: Request, res: Response) => { + if (!getProfileQueueMetrics) { + res.status(404).json({ error: 'profile_queue_not_configured' }); + return; + } + res.json({ + queue: getProfileQueueMetrics(), + collectedAt: new Date().toISOString(), + }); + }); + return router; } diff --git a/joyus-ai-mcp-server/src/content/monitoring/voice-analyzer.ts 
b/joyus-ai-mcp-server/src/content/monitoring/voice-analyzer.ts index dd08d76..da673d1 100644 --- a/joyus-ai-mcp-server/src/content/monitoring/voice-analyzer.ts +++ b/joyus-ai-mcp-server/src/content/monitoring/voice-analyzer.ts @@ -5,6 +5,8 @@ * downstream packages and injected into DriftMonitor at startup. */ +import axios from 'axios'; + export interface DriftAnalysis { /** 0.0 = perfect match, 1.0 = maximum drift */ overallScore: number; @@ -40,3 +42,73 @@ export class StubVoiceAnalyzer implements VoiceAnalyzer { }; } } + +export interface HttpVoiceAnalyzerConfig { + url: string; + timeoutMs?: number; + apiKey?: string; +} + +/** + * HTTP-backed voice analyzer. + * + * Expected response body: + * { + * overallScore: number, + * dimensionScores: Record<string, number>, + * sampleSize: number, + * recommendations: string[] + * } + */ +export class HttpVoiceAnalyzer implements VoiceAnalyzer { + private readonly timeoutMs: number; + + constructor(private readonly config: HttpVoiceAnalyzerConfig) { + this.timeoutMs = config.timeoutMs ??
10000; + } + + async analyze(content: string, profileId: string, tenantId: string): Promise<DriftAnalysis> { + const headers: Record<string, string> = { + 'Content-Type': 'application/json', + }; + + if (this.config.apiKey) { + headers.Authorization = `Bearer ${this.config.apiKey}`; + } + + const response = await axios.post( + this.config.url, + { content, profileId, tenantId }, + { headers, timeout: this.timeoutMs }, + ); + + const data = response.data; + if (!data || typeof data !== 'object') { + throw new Error('Invalid voice analyzer response shape'); + } + + const overallScore = Number((data as { overallScore?: unknown }).overallScore); + const sampleSize = Number((data as { sampleSize?: unknown }).sampleSize); + const dimensionScores = (data as { dimensionScores?: unknown }).dimensionScores; + const recommendations = (data as { recommendations?: unknown }).recommendations; + + if (!Number.isFinite(overallScore) || !Number.isFinite(sampleSize)) { + throw new Error('Invalid voice analyzer response metrics'); + } + + if (!dimensionScores || typeof dimensionScores !== 'object') { + throw new Error('Invalid voice analyzer dimensionScores'); + } + + if (!Array.isArray(recommendations) || !recommendations.every((r) => typeof r === 'string')) { + throw new Error('Invalid voice analyzer recommendations'); + } + + return { + overallScore, + sampleSize, + dimensionScores: dimensionScores as Record<string, number>, + recommendations, + }; + } +} diff --git a/joyus-ai-mcp-server/src/content/profiles/access.ts b/joyus-ai-mcp-server/src/content/profiles/access.ts new file mode 100644 index 0000000..aa1cf55 --- /dev/null +++ b/joyus-ai-mcp-server/src/content/profiles/access.ts @@ -0,0 +1,70 @@ +/** + * Tenant profile access contract for Feature 008. + * + * Defines lifecycle semantics and fail-closed profile access checks used by + * generation and mediation paths.
+ */ + +import { createId } from '@paralleldrive/cuid2'; +import { drizzle } from 'drizzle-orm/node-postgres'; +import { contentOperationLogs } from '../schema.js'; +import type { ResolvedEntitlements } from '../types.js'; + +type DrizzleClient = ReturnType<typeof drizzle>; + +export const PROFILE_LIFECYCLE_STATES = ['draft', 'active', 'deprecated', 'archived'] as const; +export type ProfileLifecycleState = (typeof PROFILE_LIFECYCLE_STATES)[number]; + +export class ProfileAccessDeniedError extends Error { + constructor( + public readonly profileId: string, + public readonly tenantId: string, + public readonly userId: string, + ) { + super(`Profile ${profileId} is not accessible for tenant ${tenantId}`); + this.name = 'ProfileAccessDeniedError'; + } +} + +export function isProfileAccessible( + profileId: string | undefined, + entitlements: ResolvedEntitlements, +): boolean { + if (!profileId) return true; + return entitlements.profileIds.includes(profileId); +} + +export async function assertProfileAccessOrAudit( + db: DrizzleClient, + args: { + profileId?: string; + tenantId: string; + userId: string; + entitlements: ResolvedEntitlements; + sessionId?: string; + }, +): Promise<void> { + const { profileId, tenantId, userId, entitlements, sessionId } = args; + if (!profileId || isProfileAccessible(profileId, entitlements)) { + return; + } + + await db.insert(contentOperationLogs).values({ + id: createId(), + tenantId, + operation: 'profile_access_denied', + userId, + durationMs: 0, + success: false, + metadata: { + eventType: 'tenant_profile_access', + decision: 'deny', + profileId, + allowedProfileIds: entitlements.profileIds, + sessionId: sessionId ??
null, + resolvedFrom: entitlements.resolvedFrom, + }, + }); + + throw new ProfileAccessDeniedError(profileId, tenantId, userId); +} diff --git a/joyus-ai-mcp-server/src/content/profiles/index.ts b/joyus-ai-mcp-server/src/content/profiles/index.ts new file mode 100644 index 0000000..a4a7d2e --- /dev/null +++ b/joyus-ai-mcp-server/src/content/profiles/index.ts @@ -0,0 +1,2 @@ +export * from './access.js'; +export * from './ingestion-queue.js'; diff --git a/joyus-ai-mcp-server/src/content/profiles/ingestion-queue.ts b/joyus-ai-mcp-server/src/content/profiles/ingestion-queue.ts new file mode 100644 index 0000000..80b02fd --- /dev/null +++ b/joyus-ai-mcp-server/src/content/profiles/ingestion-queue.ts @@ -0,0 +1,202 @@ +import { createId } from '@paralleldrive/cuid2'; + +export interface ProfileIngestionRequest { + tenantId: string; + profileId: string; + profileVersionId?: string; + payload: Record<string, unknown>; +} + +export interface ProfileIngestionJob extends ProfileIngestionRequest { + id: string; + enqueuedAt: Date; + attempt: number; +} + +export interface ProfileIngestionQueueConfig { + maxQueueDepth?: number; + concurrency?: number; + maxRetries?: number; + retryDelayMs?: number; +} + +export interface ProfileQueueMetrics { + depth: number; + inFlight: number; + maxQueueDepth: number; + saturation: number; + enqueued: number; + completed: number; + failed: number; + retried: number; + rejected: number; + avgWaitMs: number; +} + +export class ProfileQueueBackpressureError extends Error { + constructor(public readonly maxQueueDepth: number) { + super(`Profile ingestion queue is saturated (max depth: ${maxQueueDepth})`); + this.name = 'ProfileQueueBackpressureError'; + } +} + +type IngestionHandler = (job: ProfileIngestionJob, signal: AbortSignal) => Promise<void>; + +interface InternalQueuedJob extends ProfileIngestionJob { + attemptsRemaining: number; +} + +const DEFAULT_CONFIG: Required<ProfileIngestionQueueConfig> = { + maxQueueDepth: 500, + concurrency: 2, + maxRetries: 2, + retryDelayMs: 250, +}; + +export
class ProfileIngestionQueue { + private readonly config: Required; + private readonly pending: InternalQueuedJob[] = []; + private readonly controllers = new Map(); + private inFlight = 0; + private accepting = true; + + private enqueued = 0; + private completed = 0; + private failed = 0; + private retried = 0; + private rejected = 0; + private totalWaitMs = 0; + + constructor( + private readonly handler: IngestionHandler, + config?: ProfileIngestionQueueConfig, + ) { + this.config = { ...DEFAULT_CONFIG, ...(config ?? {}) }; + } + + enqueue(request: ProfileIngestionRequest): string { + if (!this.accepting) { + throw new Error('Profile ingestion queue is not accepting new jobs'); + } + + if (this.depth() >= this.config.maxQueueDepth) { + this.rejected += 1; + throw new ProfileQueueBackpressureError(this.config.maxQueueDepth); + } + + const job: InternalQueuedJob = { + id: createId(), + tenantId: request.tenantId, + profileId: request.profileId, + profileVersionId: request.profileVersionId, + payload: request.payload, + enqueuedAt: new Date(), + attempt: 0, + attemptsRemaining: this.config.maxRetries + 1, + }; + + this.pending.push(job); + this.enqueued += 1; + this.pump(); + return job.id; + } + + enqueueBatch(requests: ProfileIngestionRequest[]): { + accepted: string[]; + rejected: number; + } { + const accepted: string[] = []; + let rejected = 0; + + for (const req of requests) { + try { + accepted.push(this.enqueue(req)); + } catch (err) { + if (err instanceof ProfileQueueBackpressureError) { + rejected += 1; + continue; + } + throw err; + } + } + + return { accepted, rejected }; + } + + cancel(jobId: string): boolean { + const controller = this.controllers.get(jobId); + if (!controller) return false; + controller.abort(); + return true; + } + + stopAccepting(): void { + this.accepting = false; + } + + startAccepting(): void { + this.accepting = true; + } + + getMetrics(): ProfileQueueMetrics { + const saturation = + this.config.maxQueueDepth > 0 ? 
this.depth() / this.config.maxQueueDepth : 0; + return { + depth: this.depth(), + inFlight: this.inFlight, + maxQueueDepth: this.config.maxQueueDepth, + saturation: Math.min(1, saturation), + enqueued: this.enqueued, + completed: this.completed, + failed: this.failed, + retried: this.retried, + rejected: this.rejected, + avgWaitMs: this.completed > 0 ? Math.round(this.totalWaitMs / this.completed) : 0, + }; + } + + private depth(): number { + return this.pending.length + this.inFlight; + } + + private pump(): void { + while (this.inFlight < this.config.concurrency && this.pending.length > 0) { + const job = this.pending.shift(); + if (!job) return; + this.execute(job).catch(() => { + // Failures are handled in execute; swallow to avoid unhandled promise. + }); + } + } + + private async execute(job: InternalQueuedJob): Promise { + this.inFlight += 1; + const controller = new AbortController(); + this.controllers.set(job.id, controller); + + const startedAt = Date.now(); + job.attempt += 1; + job.attemptsRemaining -= 1; + + try { + await this.handler(job, controller.signal); + this.completed += 1; + this.totalWaitMs += Math.max(0, startedAt - job.enqueuedAt.getTime()); + } catch { + const shouldRetry = job.attemptsRemaining > 0 && !controller.signal.aborted; + if (shouldRetry) { + this.retried += 1; + setTimeout(() => { + this.pending.push(job); + this.pump(); + }, this.config.retryDelayMs); + } else { + this.failed += 1; + } + } finally { + this.controllers.delete(job.id); + this.inFlight -= 1; + this.pump(); + } + } +} diff --git a/joyus-ai-mcp-server/src/content/search/pg-fts-provider.ts b/joyus-ai-mcp-server/src/content/search/pg-fts-provider.ts index 1c1c97b..e6a21fb 100644 --- a/joyus-ai-mcp-server/src/content/search/pg-fts-provider.ts +++ b/joyus-ai-mcp-server/src/content/search/pg-fts-provider.ts @@ -69,7 +69,7 @@ export class PgFtsProvider implements SearchProvider { metadata, is_stale FROM content.items - WHERE source_id = ANY(${sourceIds}) + WHERE 
source_id IN (${sql.join(sourceIds.map((id) => sql`${id}`), sql`, `)}) AND search_vector @@ plainto_tsquery('english', ${query}) ORDER BY score DESC LIMIT ${options.limit} diff --git a/joyus-ai-mcp-server/src/content/types.ts b/joyus-ai-mcp-server/src/content/types.ts index 8673820..0f573ac 100644 --- a/joyus-ai-mcp-server/src/content/types.ts +++ b/joyus-ai-mcp-server/src/content/types.ts @@ -18,7 +18,13 @@ export type SyncRunStatus = 'pending' | 'running' | 'completed' | 'failed'; export type SyncTrigger = 'scheduled' | 'manual'; -export type ContentOperationType = 'sync' | 'search' | 'resolve' | 'generate' | 'mediate'; +export type ContentOperationType = + | 'sync' + | 'search' + | 'resolve' + | 'generate' + | 'mediate' + | 'profile_access_denied'; // ============================================================ // CONNECTOR CONFIGURATION TYPES diff --git a/joyus-ai-mcp-server/src/index.ts b/joyus-ai-mcp-server/src/index.ts index a647b44..e6bcce5 100644 --- a/joyus-ai-mcp-server/src/index.ts +++ b/joyus-ai-mcp-server/src/index.ts @@ -23,6 +23,7 @@ import { authRouter } from './auth/routes.js'; import { requireBearerToken } from './auth/middleware.js'; import { db, auditLogs } from './db/client.js'; import { initializeContentModule } from './content/index.js'; +import { createPipelineRouter } from './pipelines/routes.js'; import { initializeScheduler } from './scheduler/index.js'; import { taskRouter } from './scheduler/routes.js'; import { executeTool } from './tools/executor.js'; @@ -161,6 +162,9 @@ app.use('/auth', authRouter); // Task management routes (scheduled tasks) app.use('/tasks', taskRouter); +// Automated pipeline pilot routes (auth required) +app.use('/pipelines', requireBearerToken, createPipelineRouter()); + // MCP endpoint with Bearer token auth app.post('/mcp', requireBearerToken, async (req: Request, res: Response) => { const user = req.mcpUser!; diff --git a/joyus-ai-mcp-server/src/pipelines/engine.ts 
b/joyus-ai-mcp-server/src/pipelines/engine.ts new file mode 100644 index 0000000..739f76e --- /dev/null +++ b/joyus-ai-mcp-server/src/pipelines/engine.ts @@ -0,0 +1,231 @@ +import { createId } from '@paralleldrive/cuid2'; + +import type { + PipelineDefinition, + PipelineRunOptions, + PipelineRunReport, + PipelineRunStatus, + PipelineStageDefinition, + PipelineStageReport, + PipelineStageResult, + StagePolicyGate, +} from './types.js'; + +const DEFAULT_STAGE_TIMEOUT_MS = 30_000; + +function errorMessage(err: unknown): string { + if (err instanceof Error) return err.message; + return String(err); +} + +function withTimeout<T>(promise: Promise<T>, timeoutMs: number, signal: AbortSignal): Promise<T> { + return new Promise<T>((resolve, reject) => { + const timer = setTimeout(() => reject(new Error(`timeout after ${timeoutMs}ms`)), timeoutMs); + + const abortHandler = () => reject(new Error('aborted')); + signal.addEventListener('abort', abortHandler, { once: true }); + + promise + .then((value) => resolve(value)) + .catch((err) => reject(err)) + .finally(() => { + clearTimeout(timer); + signal.removeEventListener('abort', abortHandler); + }); + }); +} + +export class PipelineEngine { + constructor(private readonly stagePolicyGate?: StagePolicyGate) {} + + async run( + definition: PipelineDefinition, + input: Record<string, unknown>, + options?: PipelineRunOptions, + ): Promise<PipelineRunReport> { + const runId = createId(); + const mode = options?.mode ?? 'dry-run'; + const signal = options?.signal ?? new AbortController().signal; + const startedAtDate = new Date(); + const startedAt = startedAtDate.toISOString(); + const state: Record<string, unknown> = {}; + const stages: PipelineStageReport[] = []; + let status: PipelineRunStatus = 'completed'; + + for (const stage of definition.stages) { + if (signal.aborted) { + status = 'canceled'; + stages.push({ + id: stage.id, + status: 'canceled', + attempts: 0, + }); + break; + } + + const report = await this.runStage(runId, mode, stage, input, state, signal); + stages.push(report); + + if (report.status === 'failed') { + status = 'failed'; + break; + } + if (report.status === 'canceled') { + status = 'canceled'; + break; + } + } + + const endedAtDate = new Date(); + return { + runId, + pipelineId: definition.id, + mode, + status, + startedAt, + endedAt: endedAtDate.toISOString(), + totalDurationMs: endedAtDate.getTime() - startedAtDate.getTime(), + stages, + state, + }; + } + + private async runStage( + runId: string, + mode: 'dry-run' | 'apply', + stage: PipelineStageDefinition, + input: Record<string, unknown>, + state: Record<string, unknown>, + signal: AbortSignal, + ): Promise<PipelineStageReport> { + const maxAttempts = Math.max(1, (stage.maxRetries ?? 0) + 1); + const timeoutMs = stage.timeoutMs ?? DEFAULT_STAGE_TIMEOUT_MS; + const startedAt = new Date(); + + let policyDecision; + if (stage.privileged && this.stagePolicyGate) { + policyDecision = await this.stagePolicyGate({ + runId, + mode, + stage, + input, + }); + if (!policyDecision.allow) { + return { + id: stage.id, + status: 'failed', + attempts: 0, + startedAt: startedAt.toISOString(), + endedAt: new Date().toISOString(), + durationMs: Date.now() - startedAt.getTime(), + error: policyDecision.reason ?? 
'stage policy denied', + policyDecision, + }; + } + } + + let attempts = 0; + while (attempts < maxAttempts) { + attempts += 1; + const attemptStart = Date.now(); + try { + const result = await withTimeout( + stage.handler({ + runId, + stageId: stage.id, + mode, + input, + state, + signal, + attempt: attempts, + }), + timeoutMs, + signal, + ); + + if (result.output !== undefined) { + state[stage.id] = result.output; + } + + return this.successReport(stage.id, attempts, startedAt, policyDecision, result); + } catch (err) { + const aborted = errorMessage(err) === 'aborted'; + if (aborted) { + return { + id: stage.id, + status: 'canceled', + attempts, + startedAt: startedAt.toISOString(), + endedAt: new Date().toISOString(), + durationMs: Date.now() - startedAt.getTime(), + error: 'canceled', + policyDecision, + }; + } + + const isLastAttempt = attempts >= maxAttempts; + if (isLastAttempt) { + return { + id: stage.id, + status: 'failed', + attempts, + startedAt: startedAt.toISOString(), + endedAt: new Date().toISOString(), + durationMs: Date.now() - startedAt.getTime(), + error: errorMessage(err), + policyDecision, + }; + } + + const elapsed = Date.now() - attemptStart; + if (elapsed < 10) { + await new Promise((resolve) => setTimeout(resolve, 10)); + } + } + } + + return { + id: stage.id, + status: 'failed', + attempts, + startedAt: startedAt.toISOString(), + endedAt: new Date().toISOString(), + durationMs: Date.now() - startedAt.getTime(), + error: 'stage failed unexpectedly', + policyDecision, + }; + } + + private successReport( + stageId: string, + attempts: number, + startedAt: Date, + policyDecision: PipelineStageReport['policyDecision'], + result: PipelineStageResult, + ): PipelineStageReport { + const endedAt = new Date(); + if (result.skip) { + return { + id: stageId, + status: 'skipped', + attempts, + startedAt: startedAt.toISOString(), + endedAt: endedAt.toISOString(), + durationMs: endedAt.getTime() - startedAt.getTime(), + policyDecision, + evidence: 
result.evidence, + }; + } + + return { + id: stageId, + status: 'completed', + attempts, + startedAt: startedAt.toISOString(), + endedAt: endedAt.toISOString(), + durationMs: endedAt.getTime() - startedAt.getTime(), + policyDecision, + evidence: result.evidence, + }; + } +} diff --git a/joyus-ai-mcp-server/src/pipelines/index.ts b/joyus-ai-mcp-server/src/pipelines/index.ts new file mode 100644 index 0000000..c69200d --- /dev/null +++ b/joyus-ai-mcp-server/src/pipelines/index.ts @@ -0,0 +1,4 @@ +export * from './types.js'; +export * from './engine.js'; +export * from './runner.js'; +export * from './triggers.js'; diff --git a/joyus-ai-mcp-server/src/pipelines/routes.ts b/joyus-ai-mcp-server/src/pipelines/routes.ts new file mode 100644 index 0000000..9ed6a12 --- /dev/null +++ b/joyus-ai-mcp-server/src/pipelines/routes.ts @@ -0,0 +1,161 @@ +import { Router, type Request, type Response } from 'express'; + +import { PipelineEngine } from './engine.js'; +import { + PipelineQueueBackpressureError, + PipelineRunner, + type PipelineQueueMetrics, +} from './runner.js'; +import type { PipelineDefinition, PipelineMode, StagePolicyGate } from './types.js'; + +function estimateSeverity(text: string): 'low' | 'medium' | 'high' | 'critical' { + const t = text.toLowerCase(); + if (/(outage|data loss|breach|security|payments down|production down)/.test(t)) return 'critical'; + if (/(error|fails|broken|regression|timeout|incident)/.test(t)) return 'high'; + if (/(warning|slow|degraded|intermittent)/.test(t)) return 'medium'; + return 'low'; +} + +function createBugTriagePipeline(): PipelineDefinition { + return { + id: 'bug-triage-v1', + name: 'Bug Triage Pilot', + stages: [ + { + id: 'trigger', + async handler(ctx) { + return { + output: { + issueTitle: String(ctx.input.issueTitle ?? ''), + issueBody: String(ctx.input.issueBody ?? 
''), + }, + evidence: { source: 'manual-api-trigger' }, + }; + }, + }, + { + id: 'enrich', + async handler(ctx) { + const issueTitle = String(ctx.state.trigger && (ctx.state.trigger as Record<string, unknown>).issueTitle || ''); + const issueBody = String(ctx.state.trigger && (ctx.state.trigger as Record<string, unknown>).issueBody || ''); + const combined = `${issueTitle} ${issueBody}`.trim(); + const keywords = combined + .toLowerCase() + .split(/[^a-z0-9]+/) + .filter((w) => w.length > 3) + .slice(0, 12); + return { + output: { combined, keywords }, + evidence: { keywordCount: keywords.length }, + }; + }, + }, + { + id: 'analyze', + async handler(ctx) { + const enriched = (ctx.state.enrich as Record<string, unknown>) ?? {}; + const combined = String(enriched.combined ?? ''); + const severity = estimateSeverity(combined); + return { + output: { severity, priority: severity === 'critical' ? 'P0' : severity === 'high' ? 'P1' : 'P2' }, + evidence: { model: 'heuristic-v1' }, + }; + }, + }, + { + id: 'act', + privileged: true, + async handler(ctx) { + const analysis = (ctx.state.analyze as Record<string, unknown>) ?? {}; + const severity = String(analysis.severity ?? 'low'); + return { + output: { + suggestedOwnerTeam: severity === 'critical' ? 'platform-oncall' : 'feature-team', + suggestedAction: severity === 'critical' ? 'page-oncall-and-open-incident' : 'open-triage-ticket', + }, + evidence: { mode: ctx.mode }, + }; + }, + }, + { + id: 'deliver', + async handler(ctx) { + const analysis = (ctx.state.analyze as Record<string, unknown>) ?? {}; + const action = (ctx.state.act as Record<string, unknown>) ?? {}; + return { + output: { + summary: `Severity=${analysis.severity ?? 'unknown'} priority=${analysis.priority ?? 'unknown'}`, + recommendedAction: action.suggestedAction ?? 'manual-review', + ownerTeam: action.suggestedOwnerTeam ?? 
'unassigned', + }, + evidence: { deliveredAt: new Date().toISOString() }, + }; + }, + }, + ], + }; +} + +export function createPipelineRouter(): Router { + const router = Router(); + + const policyGate: StagePolicyGate = async ({ mode, stage }) => { + if (stage.privileged && mode === 'apply' && process.env.PIPELINE_APPLY_ENABLED !== 'true') { + return { + allow: false, + reason: 'apply mode is disabled for privileged stages', + evidenceRef: 'policy:pipeline-apply-disabled', + }; + } + return { allow: true }; + }; + + const engine = new PipelineEngine(policyGate); + const runner = new PipelineRunner(engine, { + concurrency: Number(process.env.PIPELINE_CONCURRENCY ?? 2), + maxQueueDepth: Number(process.env.PIPELINE_QUEUE_MAX_DEPTH ?? 200), + }); + + router.get('/health', (_req: Request, res: Response) => { + res.json({ status: 'ok' }); + }); + + router.get('/metrics', (_req: Request, res: Response) => { + const metrics: PipelineQueueMetrics = runner.getMetrics(); + res.json({ queue: metrics, collectedAt: new Date().toISOString() }); + }); + + router.post('/bug-triage/run', async (req: Request, res: Response) => { + try { + const mode: PipelineMode = req.body?.mode === 'apply' ? 'apply' : 'dry-run'; + const issueTitle = String(req.body?.issueTitle ?? ''); + const issueBody = String(req.body?.issueBody ?? ''); + + if (!issueTitle) { + res.status(400).json({ error: 'missing_issue_title', message: 'issueTitle is required' }); + return; + } + + const report = await runner.enqueue( + createBugTriagePipeline(), + { issueTitle, issueBody }, + { mode }, + ); + res.json({ report }); + } catch (err) { + if (err instanceof PipelineQueueBackpressureError) { + res.status(429).json({ + error: 'pipeline_queue_saturated', + message: err.message, + }); + return; + } + res.status(500).json({ + error: 'pipeline_execution_failed', + message: err instanceof Error ? 
err.message : 'internal error', + }); + } + }); + + return router; +} diff --git a/joyus-ai-mcp-server/src/pipelines/runner.ts b/joyus-ai-mcp-server/src/pipelines/runner.ts new file mode 100644 index 0000000..9fd80dc --- /dev/null +++ b/joyus-ai-mcp-server/src/pipelines/runner.ts @@ -0,0 +1,129 @@ +import { createId } from '@paralleldrive/cuid2'; + +import { PipelineEngine } from './engine.js'; +import type { PipelineDefinition, PipelineRunOptions, PipelineRunReport } from './types.js'; + +export interface PipelineRunnerConfig { + concurrency?: number; + maxQueueDepth?: number; +} + +export interface PipelineQueueMetrics { + depth: number; + inFlight: number; + completed: number; + failed: number; + rejected: number; + maxQueueDepth: number; + saturation: number; +} + +export class PipelineQueueBackpressureError extends Error { + constructor(public readonly maxQueueDepth: number) { + super(`Pipeline queue is saturated (max depth: ${maxQueueDepth})`); + this.name = 'PipelineQueueBackpressureError'; + } +} + +interface QueuedPipelineRun { + queueId: string; + pipeline: PipelineDefinition; + input: Record<string, unknown>; + options?: PipelineRunOptions; + resolve: (value: PipelineRunReport) => void; + reject: (reason?: unknown) => void; +} + +const DEFAULT_CONFIG = { + concurrency: 2, + maxQueueDepth: 200, +}; + +export class PipelineRunner { + private readonly config: Required<PipelineRunnerConfig>; + private readonly pending: QueuedPipelineRun[] = []; + private inFlight = 0; + private completed = 0; + private failed = 0; + private rejected = 0; + + constructor( + private readonly engine: PipelineEngine, + config?: PipelineRunnerConfig, + ) { + this.config = { + concurrency: config?.concurrency ?? DEFAULT_CONFIG.concurrency, + maxQueueDepth: config?.maxQueueDepth ?? DEFAULT_CONFIG.maxQueueDepth, + }; + } + + enqueue( + pipeline: PipelineDefinition, + input: Record<string, unknown>, + options?: PipelineRunOptions, + ): Promise<PipelineRunReport> { + if (this.depth() >= this.config.maxQueueDepth) { + this.rejected += 1; + throw new PipelineQueueBackpressureError(this.config.maxQueueDepth); + } + + return new Promise((resolve, reject) => { + this.pending.push({ + queueId: createId(), + pipeline, + input, + options, + resolve, + reject, + }); + this.pump(); + }); + } + + getMetrics(): PipelineQueueMetrics { + const saturation = + this.config.maxQueueDepth > 0 ? this.depth() / this.config.maxQueueDepth : 0; + return { + depth: this.depth(), + inFlight: this.inFlight, + completed: this.completed, + failed: this.failed, + rejected: this.rejected, + maxQueueDepth: this.config.maxQueueDepth, + saturation: Math.min(1, saturation), + }; + } + + private depth(): number { + return this.pending.length + this.inFlight; + } + + private pump(): void { + while (this.inFlight < this.config.concurrency && this.pending.length > 0) { + const job = this.pending.shift(); + if (!job) return; + this.execute(job).catch(() => { + // Execution errors are handled by promise reject in execute. + }); + } + } + + private async execute(job: QueuedPipelineRun): Promise<void> { + this.inFlight += 1; + try { + const report = await this.engine.run(job.pipeline, job.input, job.options); + if (report.status === 'failed') { + this.failed += 1; + } else { + this.completed += 1; + } + job.resolve(report); + } catch (err) { + this.failed += 1; + job.reject(err); + } finally { + this.inFlight -= 1; + this.pump(); + } + } +} diff --git a/joyus-ai-mcp-server/src/pipelines/triggers.ts b/joyus-ai-mcp-server/src/pipelines/triggers.ts new file mode 100644 index 0000000..22b2f8f --- /dev/null +++ b/joyus-ai-mcp-server/src/pipelines/triggers.ts @@ -0,0 +1,40 @@ +export interface TriggerEvent { + type: string; + payload: Record<string, unknown>; + occurredAt: string; +} + +export type TriggerEventHandler = (event: TriggerEvent) => Promise<void> | void; + +export interface TriggerAdapter { + name: string; + start(handler: TriggerEventHandler): Promise<void>; + stop(): Promise<void>; +} + +/** + * Minimal in-memory trigger adapter for pilot workflows and integration tests. + * Production adapters can wrap webhooks, queues, or schedulers. + */ +export class InMemoryTriggerAdapter implements TriggerAdapter { + public readonly name = 'in-memory-trigger'; + private handler: TriggerEventHandler | null = null; + + async start(handler: TriggerEventHandler): Promise<void> { + this.handler = handler; + } + + async stop(): Promise<void> { + this.handler = null; + } + + async emit(event: Omit<TriggerEvent, 'occurredAt'>): Promise<void> { + if (!this.handler) { + throw new Error('Trigger adapter has not been started'); + } + await this.handler({ + ...event, + occurredAt: new Date().toISOString(), + }); + } +} diff --git a/joyus-ai-mcp-server/src/pipelines/types.ts b/joyus-ai-mcp-server/src/pipelines/types.ts new file mode 100644 index 0000000..1c99985 --- /dev/null +++ b/joyus-ai-mcp-server/src/pipelines/types.ts @@ -0,0 +1,87 @@ +export type PipelineMode = 'dry-run' | 'apply'; + +export type PipelineRunStatus = 'completed' | 'failed' | 'canceled'; + +export type PipelineStageStatus = + | 'pending' + | 'running' + | 'completed' + | 'failed' + | 'skipped' + | 'canceled'; + +export interface PipelineStageResult { + output?: unknown; + evidence?: Record<string, unknown>; + skip?: boolean; +} + +export interface PipelineStageContext { + runId: string; + stageId: string; + mode: PipelineMode; + input: Record<string, unknown>; + state: Record<string, unknown>; + signal: AbortSignal; + attempt: number; +} + +export interface PipelineStageDefinition { + id: string; + privileged?: boolean; + timeoutMs?: number; + maxRetries?: number; + handler: (ctx: PipelineStageContext) => Promise<PipelineStageResult>; +} + +export interface PipelineDefinition { + id: string; + name: string; + stages: PipelineStageDefinition[]; +} + +export interface StagePolicyInput { + runId: string; + mode: PipelineMode; + stage: PipelineStageDefinition; + input: Record<string, unknown>; +} + +export interface StagePolicyDecision { + allow: boolean; + reason?: string; + evidenceRef?: string; +} + +export type StagePolicyGate = ( + input: StagePolicyInput, +) => Promise<StagePolicyDecision> | StagePolicyDecision; + +export interface PipelineStageReport { + id: string; + status: PipelineStageStatus; + attempts: number; + startedAt?: string; + endedAt?: string; + durationMs?: number; + error?: string; + policyDecision?: StagePolicyDecision; + evidence?: Record<string, unknown>; +} + +export interface PipelineRunReport { + runId: string; + pipelineId: string; + mode: PipelineMode; + status: PipelineRunStatus; + startedAt: string; + endedAt: string; + totalDurationMs: number; + stages: PipelineStageReport[]; + state: Record<string, unknown>; +} + +export interface PipelineRunOptions { + mode?: PipelineMode; + signal?: AbortSignal; +} diff --git a/joyus-ai-mcp-server/src/scheduler/index.ts b/joyus-ai-mcp-server/src/scheduler/index.ts index ac49f03..f7beebf 100644 --- a/joyus-ai-mcp-server/src/scheduler/index.ts +++ b/joyus-ai-mcp-server/src/scheduler/index.ts @@ -8,7 +8,7 @@ * - Logs all executions */ -import { parseExpression } from 'cron-parser'; +import cronParser from 'cron-parser'; import { eq } from 'drizzle-orm'; import cron from 'node-cron'; @@ -208,7 +208,7 @@ export async function runTask(taskId: string): Promise { */ async function updateTaskNextRun(taskId: string, schedule: string, timezone: string): Promise { try { - const interval = parseExpression(schedule, { + const interval = cronParser.parseExpression(schedule, { tz: timezone || 'America/New_York' }); const nextRun = interval.next().toDate(); diff --git a/joyus-ai-mcp-server/tests/content/integration/module-wiring.test.ts b/joyus-ai-mcp-server/tests/content/integration/module-wiring.test.ts new file mode 100644 index 0000000..16245d9 --- /dev/null +++ b/joyus-ai-mcp-server/tests/content/integration/module-wiring.test.ts @@ -0,0 +1,63 @@ +import { describe, expect, it, vi, beforeEach, afterEach } from 'vitest'; + +import { initializeContentModule } from '../../../src/content/index.js'; + +const ORIGINAL_ENV = { ...process.env }; + +describe('Content module provider wiring', () => { + beforeEach(() => { + vi.restoreAllMocks(); + process.env = { ...ORIGINAL_ENV }; + }); + + afterEach(() => { + process.env = { ...ORIGINAL_ENV }; + }); 
+ + it('fails closed in strict mode when generation provider is placeholder', async () => { + process.env.NODE_ENV = 'production'; + process.env.CONTENT_GENERATION_PROVIDER = 'placeholder'; + process.env.CONTENT_DRIFT_ENABLED = 'false'; + + const app = { use: vi.fn() } as never; + const db = {} as never; + + await expect(initializeContentModule(app, { db })).rejects.toThrow( + /requires a real generation provider/i, + ); + expect(app.use).not.toHaveBeenCalled(); + }); + + it('fails closed when drift monitoring is enabled with stub analyzer', async () => { + process.env.NODE_ENV = 'development'; + process.env.CONTENT_STRICT_INIT = 'true'; + process.env.CONTENT_GENERATION_PROVIDER = 'http'; + process.env.CONTENT_GENERATION_PROVIDER_URL = 'http://localhost:9999/generate'; + process.env.CONTENT_DRIFT_ENABLED = 'true'; + process.env.CONTENT_VOICE_ANALYZER_PROVIDER = 'stub'; + + const app = { use: vi.fn() } as never; + const db = {} as never; + + await expect(initializeContentModule(app, { db })).rejects.toThrow( + /requires a real voice analyzer/i, + ); + expect(app.use).not.toHaveBeenCalled(); + }); + + it('initializes and mounts routers when real providers are configured', async () => { + process.env.NODE_ENV = 'production'; + process.env.CONTENT_STRICT_INIT = 'true'; + process.env.CONTENT_GENERATION_PROVIDER = 'http'; + process.env.CONTENT_GENERATION_PROVIDER_URL = 'http://localhost:9999/generate'; + process.env.CONTENT_DRIFT_ENABLED = 'false'; + process.env.CONTENT_VOICE_ANALYZER_PROVIDER = 'http'; + process.env.CONTENT_VOICE_ANALYZER_URL = 'http://localhost:9999/analyze'; + + const app = { use: vi.fn() } as never; + const db = {} as never; + + await expect(initializeContentModule(app, { db })).resolves.toBeUndefined(); + expect(app.use).toHaveBeenCalledTimes(2); + }); +}); diff --git a/joyus-ai-mcp-server/tests/content/integration/production-provider.test.ts b/joyus-ai-mcp-server/tests/content/integration/production-provider.test.ts new file mode 100644 index 
0000000..0900a86 --- /dev/null +++ b/joyus-ai-mcp-server/tests/content/integration/production-provider.test.ts @@ -0,0 +1,44 @@ +import { describe, expect, it, vi } from 'vitest'; + +import { ContentGenerator, PlaceholderGenerationProvider } from '../../../src/content/generation/index.js'; +import { StubVoiceAnalyzer } from '../../../src/content/monitoring/index.js'; + +describe('Content provider wiring baseline', () => { + it('placeholder provider returns sentinel output', async () => { + const provider = new PlaceholderGenerationProvider(); + const result = await provider.generate('What changed?', 'system prompt'); + + expect(result).toContain('[Generation not configured]'); + }); + + it('content generator passes query/system prompt to provider', async () => { + const provider = { + generate: vi.fn().mockResolvedValue('Provider response text'), + }; + + const generator = new ContentGenerator(provider); + const output = await generator.generate( + 'Explain policy', + { + items: [], + contextText: '[Source 1: "Doc"] Body', + totalSearchResults: 1, + }, + 'profile-1', + ); + + expect(provider.generate).toHaveBeenCalledOnce(); + expect(output.text).toBe('Provider response text'); + expect(output.profileUsed).toBe('profile-1'); + }); + + it('stub voice analyzer returns deterministic zero-drift baseline', async () => { + const analyzer = new StubVoiceAnalyzer(); + const result = await analyzer.analyze('Generated content', 'profile-1', 'tenant-1'); + + expect(result.overallScore).toBe(0); + expect(result.sampleSize).toBe(0); + expect(result.dimensionScores).toEqual({}); + expect(result.recommendations.length).toBeGreaterThan(0); + }); +}); diff --git a/joyus-ai-mcp-server/tests/content/integration/profile-ingestion-queue.test.ts b/joyus-ai-mcp-server/tests/content/integration/profile-ingestion-queue.test.ts new file mode 100644 index 0000000..47c68b1 --- /dev/null +++ b/joyus-ai-mcp-server/tests/content/integration/profile-ingestion-queue.test.ts @@ -0,0 +1,71 @@ 
+import { describe, expect, it, vi } from 'vitest'; + +import { + ProfileIngestionQueue, + ProfileQueueBackpressureError, +} from '../../../src/content/profiles/ingestion-queue.js'; + +describe('Profile ingestion queue', () => { + it('applies backpressure when queue depth is saturated', async () => { + let release: (() => void) | undefined; + const handler = vi.fn( + async () => + await new Promise((resolve) => { + release = resolve; + }), + ); + + const queue = new ProfileIngestionQueue(handler, { + maxQueueDepth: 1, + concurrency: 1, + maxRetries: 0, + }); + + queue.enqueue({ + tenantId: 'tenant-1', + profileId: 'profile-1', + payload: {}, + }); + + expect(() => + queue.enqueue({ + tenantId: 'tenant-1', + profileId: 'profile-2', + payload: {}, + }), + ).toThrow(ProfileQueueBackpressureError); + + release?.(); + }); + + it('retries failed jobs and eventually completes', async () => { + let attempts = 0; + const handler = vi.fn(async () => { + attempts += 1; + if (attempts < 2) { + throw new Error('transient failure'); + } + }); + + const queue = new ProfileIngestionQueue(handler, { + maxQueueDepth: 5, + concurrency: 1, + maxRetries: 2, + retryDelayMs: 1, + }); + + queue.enqueue({ + tenantId: 'tenant-1', + profileId: 'profile-1', + payload: {}, + }); + + await new Promise((resolve) => setTimeout(resolve, 30)); + + const metrics = queue.getMetrics(); + expect(metrics.completed).toBe(1); + expect(metrics.retried).toBe(1); + expect(metrics.failed).toBe(0); + expect(handler).toHaveBeenCalledTimes(2); + }); +}); diff --git a/joyus-ai-mcp-server/tests/content/integration/profile-isolation.test.ts b/joyus-ai-mcp-server/tests/content/integration/profile-isolation.test.ts new file mode 100644 index 0000000..d3a7509 --- /dev/null +++ b/joyus-ai-mcp-server/tests/content/integration/profile-isolation.test.ts @@ -0,0 +1,65 @@ +import { describe, expect, it, vi } from 'vitest'; + +import { + assertProfileAccessOrAudit, + isProfileAccessible, + ProfileAccessDeniedError, +} from 
'../../../src/content/profiles/access.js'; +import type { ResolvedEntitlements } from '../../../src/content/types.js'; + +function makeEntitlements(profileIds: string[]): ResolvedEntitlements { + return { + productIds: ['prod-1'], + sourceIds: ['source-1'], + profileIds, + resolvedFrom: 'test', + resolvedAt: new Date(), + }; +} + +describe('Profile isolation contract', () => { + it('treats missing profileId as accessible', () => { + expect(isProfileAccessible(undefined, makeEntitlements([]))).toBe(true); + }); + + it('allows access when profileId is in entitlement scope', async () => { + const values = vi.fn(async () => undefined); + const insert = vi.fn(() => ({ values })); + const db = { insert } as never; + + await expect( + assertProfileAccessOrAudit(db, { + profileId: 'profile-1', + tenantId: 'tenant-1', + userId: 'user-1', + entitlements: makeEntitlements(['profile-1']), + sessionId: 'session-1', + }), + ).resolves.toBeUndefined(); + + expect(insert).not.toHaveBeenCalled(); + }); + + it('denies cross-tenant profile access and writes audit event', async () => { + const values = vi.fn(async () => undefined); + const insert = vi.fn(() => ({ values })); + const db = { insert } as never; + + await expect( + assertProfileAccessOrAudit(db, { + profileId: 'profile-denied', + tenantId: 'tenant-1', + userId: 'user-1', + entitlements: makeEntitlements(['profile-allowed']), + sessionId: 'session-1', + }), + ).rejects.toBeInstanceOf(ProfileAccessDeniedError); + + expect(insert).toHaveBeenCalledOnce(); + expect(values).toHaveBeenCalledOnce(); + const auditRecord = values.mock.calls[0]?.[0] as Record<string, unknown>; + expect(auditRecord.operation).toBe('profile_access_denied'); + expect(auditRecord.success).toBe(false); + expect((auditRecord.metadata as Record<string, unknown>).profileId).toBe('profile-denied'); + }); +}); diff --git a/joyus-ai-mcp-server/tests/content/integration/tenant-isolation.test.ts b/joyus-ai-mcp-server/tests/content/integration/tenant-isolation.test.ts new file mode 100644 index
0000000..703dc01 --- /dev/null +++ b/joyus-ai-mcp-server/tests/content/integration/tenant-isolation.test.ts @@ -0,0 +1,155 @@ +import { describe, expect, it, vi } from 'vitest'; + +import { ContentRetriever } from '../../../src/content/generation/retriever.js'; +import { isSessionAccessible } from '../../../src/content/mediation/router.js'; +import type { ResolvedEntitlements, SearchResult } from '../../../src/content/types.js'; + +function makeEntitlements(sourceIds: string[]): ResolvedEntitlements { + return { + productIds: ['prod-1'], + sourceIds, + profileIds: [], + resolvedFrom: 'test', + resolvedAt: new Date(), + }; +} + +function makeSearchResult(overrides: Partial<SearchResult> = {}): SearchResult { + return { + itemId: 'item-1', + sourceId: 'source-allowed', + title: 'Allowed Content', + excerpt: 'excerpt', + score: 1, + metadata: {}, + isStale: false, + ...overrides, + }; +} + +function makeDb(rowsQueue: Array<{ id: string; sourceId: string; title: string; body: string; metadata: Record<string, unknown> }>) { + const limit = vi.fn(async () => { + const row = rowsQueue.shift(); + return row ?
[row] : []; + }); + const where = vi.fn(() => ({ limit })); + const from = vi.fn(() => ({ where })); + const select = vi.fn(() => ({ from })); + + return { + db: { select } as never, + limit, + }; +} + +describe('Tenant isolation', () => { + it('passes only entitled source IDs to search when sourceIds are requested', async () => { + const search = vi.fn().mockResolvedValue([ + makeSearchResult({ itemId: 'item-allow', sourceId: 'source-allowed' }), + ]); + const searchService = { search }; + + const { db } = makeDb([ + { + id: 'item-allow', + sourceId: 'source-allowed', + title: 'Allowed Content', + body: 'Allowed body', + metadata: {}, + }, + ]); + + const retriever = new ContentRetriever(searchService, db); + const entitlements = makeEntitlements(['source-allowed']); + + const result = await retriever.retrieve('query', entitlements, { + sourceIds: ['source-allowed', 'source-denied'], + maxSources: 5, + }); + + expect(search).toHaveBeenCalledWith('query', ['source-allowed'], { limit: 5 }); + expect(result.items).toHaveLength(1); + expect(result.items[0]?.sourceId).toBe('source-allowed'); + }); + + it('drops provider results that are outside the entitlement scope', async () => { + const search = vi.fn().mockResolvedValue([ + makeSearchResult({ itemId: 'item-allow', sourceId: 'source-allowed' }), + makeSearchResult({ itemId: 'item-denied', sourceId: 'source-denied' }), + ]); + const searchService = { search }; + + const { db, limit } = makeDb([ + { + id: 'item-allow', + sourceId: 'source-allowed', + title: 'Allowed Content', + body: 'Allowed body', + metadata: {}, + }, + { + id: 'item-denied', + sourceId: 'source-denied', + title: 'Denied Content', + body: 'Denied body', + metadata: {}, + }, + ]); + + const retriever = new ContentRetriever(searchService, db); + const entitlements = makeEntitlements(['source-allowed']); + + const result = await retriever.retrieve('query', entitlements, { + sourceIds: ['source-allowed'], + maxSources: 5, + }); + + 
expect(search).toHaveBeenCalledWith('query', ['source-allowed'], { limit: 5 }); + expect(limit).toHaveBeenCalledTimes(1); + expect(result.items).toHaveLength(1); + expect(result.items[0]?.itemId).toBe('item-allow'); + expect(result.totalSearchResults).toBe(1); + }); + + it('returns no items when entitlement scope is empty', async () => { + const search = vi.fn().mockResolvedValue([]); + const searchService = { search }; + const { db, limit } = makeDb([]); + + const retriever = new ContentRetriever(searchService, db); + const entitlements = makeEntitlements([]); + + const result = await retriever.retrieve('query', entitlements, { + sourceIds: ['source-denied'], + maxSources: 3, + }); + + expect(search).toHaveBeenCalledWith('query', [], { limit: 3 }); + expect(limit).not.toHaveBeenCalled(); + expect(result.items).toHaveLength(0); + expect(result.totalSearchResults).toBe(0); + }); + + it('denies mediation session access when tenant does not match', () => { + const session = { + id: 'session-1', + tenantId: 'tenant-allowed', + userId: 'user-1', + endedAt: null, + }; + + expect(isSessionAccessible(session as never, 'user-1', 'tenant-other')).toBe(false); + expect(isSessionAccessible(session as never, 'user-1', 'tenant-allowed')).toBe(true); + }); + + it('denies mediation session access when session is closed', () => { + const session = { + id: 'session-1', + tenantId: 'tenant-allowed', + userId: 'user-1', + endedAt: new Date(), + }; + + expect(isSessionAccessible(session as never, 'user-1', 'tenant-allowed')).toBe(false); + }); +}); diff --git a/joyus-ai-mcp-server/tests/pipelines/engine.test.ts b/joyus-ai-mcp-server/tests/pipelines/engine.test.ts new file mode 100644 index 0000000..550beec --- /dev/null +++ b/joyus-ai-mcp-server/tests/pipelines/engine.test.ts @@ -0,0 +1,111 @@ +import { describe, expect, it, vi } from 'vitest'; + +import { PipelineEngine } from '../../src/pipelines/engine.js'; +import type { PipelineDefinition } from '../../src/pipelines/types.js'; + 
+describe('PipelineEngine', () => { + it('retries a stage before succeeding', async () => { + let attempts = 0; + const engine = new PipelineEngine(); + const pipeline: PipelineDefinition = { + id: 'retry-pipeline', + name: 'Retry Pipeline', + stages: [ + { + id: 'analyze', + maxRetries: 2, + async handler() { + attempts += 1; + if (attempts < 2) { + throw new Error('temporary'); + } + return { output: { ok: true }, evidence: { attempts } }; + }, + }, + ], + }; + + const report = await engine.run(pipeline, {}); + expect(report.status).toBe('completed'); + expect(report.stages[0]?.attempts).toBe(2); + expect(report.state.analyze).toEqual({ ok: true }); + }); + + it('fails privileged stage when policy denies execution', async () => { + const policyGate = vi.fn().mockResolvedValue({ + allow: false, + reason: 'policy denied', + evidenceRef: 'policy/decision/123', + }); + + const engine = new PipelineEngine(policyGate); + const handler = vi.fn().mockResolvedValue({ output: { shouldNotRun: true } }); + const pipeline: PipelineDefinition = { + id: 'policy-pipeline', + name: 'Policy Pipeline', + stages: [ + { + id: 'act', + privileged: true, + handler, + }, + ], + }; + + const report = await engine.run(pipeline, {}, { mode: 'apply' }); + expect(report.status).toBe('failed'); + expect(report.stages[0]?.error).toMatch(/policy denied/i); + expect(handler).not.toHaveBeenCalled(); + expect(policyGate).toHaveBeenCalledOnce(); + }); + + it('marks stage as failed on timeout', async () => { + const engine = new PipelineEngine(); + const pipeline: PipelineDefinition = { + id: 'timeout-pipeline', + name: 'Timeout Pipeline', + stages: [ + { + id: 'enrich', + timeoutMs: 5, + async handler() { + await new Promise((resolve) => setTimeout(resolve, 30)); + return { output: { ok: true } }; + }, + }, + ], + }; + + const report = await engine.run(pipeline, {}); + expect(report.status).toBe('failed'); + expect(report.stages[0]?.error).toMatch(/timeout/i); + }); + + it('cancels run when abort 
signal is triggered', async () => { + const controller = new AbortController(); + const engine = new PipelineEngine(); + const pipeline: PipelineDefinition = { + id: 'cancel-pipeline', + name: 'Cancel Pipeline', + stages: [ + { + id: 'trigger', + async handler() { + controller.abort(); + return { output: { started: true } }; + }, + }, + { + id: 'deliver', + async handler() { + return { output: { shouldNotRun: true } }; + }, + }, + ], + }; + + const report = await engine.run(pipeline, {}, { signal: controller.signal }); + expect(report.status).toBe('canceled'); + expect(report.stages[1]?.status).toBe('canceled'); + }); +}); diff --git a/joyus-ai-mcp-server/tests/pipelines/runner.test.ts b/joyus-ai-mcp-server/tests/pipelines/runner.test.ts new file mode 100644 index 0000000..fa7be5b --- /dev/null +++ b/joyus-ai-mcp-server/tests/pipelines/runner.test.ts @@ -0,0 +1,55 @@ +import { describe, expect, it, vi } from 'vitest'; + +import { PipelineEngine } from '../../src/pipelines/engine.js'; +import { PipelineQueueBackpressureError, PipelineRunner } from '../../src/pipelines/runner.js'; +import type { PipelineDefinition } from '../../src/pipelines/types.js'; + +describe('PipelineRunner', () => { + it('rejects enqueue when queue is saturated', async () => { + const engine = new PipelineEngine(); + const runner = new PipelineRunner(engine, { concurrency: 1, maxQueueDepth: 1 }); + + const pipeline: PipelineDefinition = { + id: 'queue-saturation', + name: 'Queue Saturation', + stages: [ + { + id: 'trigger', + async handler() { + await new Promise((resolve) => setTimeout(resolve, 25)); + return { output: { ok: true } }; + }, + }, + ], + }; + + const p1 = runner.enqueue(pipeline, {}); + expect(() => runner.enqueue(pipeline, {})).toThrow(PipelineQueueBackpressureError); + await p1; + }); + + it('tracks run outcomes in metrics', async () => { + const policyGate = vi.fn().mockResolvedValue({ allow: false, reason: 'nope' }); + const engine = new PipelineEngine(policyGate); + const 
runner = new PipelineRunner(engine, { concurrency: 1, maxQueueDepth: 5 }); + + const failingPipeline: PipelineDefinition = { + id: 'failing', + name: 'Failing', + stages: [ + { + id: 'act', + privileged: true, + async handler() { + return { output: { shouldNotRun: true } }; + }, + }, + ], + }; + + await runner.enqueue(failingPipeline, {}); + const metrics = runner.getMetrics(); + expect(metrics.failed).toBe(1); + expect(metrics.completed).toBe(0); + }); +}); diff --git a/kitty-specs/001-mcp-server-aws-deployment/tasks.md b/kitty-specs/001-mcp-server-aws-deployment/tasks.md index f370778..63a8902 100644 --- a/kitty-specs/001-mcp-server-aws-deployment/tasks.md +++ b/kitty-specs/001-mcp-server-aws-deployment/tasks.md @@ -71,7 +71,7 @@ **Parallel opportunities**: T003 and T004 can be built independently of T002. **Success criteria**: `docker compose build` succeeds. `docker compose up -d` starts all 3 containers. Containers can communicate on internal network. **Risks**: Platform container image may be large (~2GB) due to multi-runtime. Monitor build times. -**Prompt file**: [tasks/WP01-docker-compose-containers.md](tasks/WP01-docker-compose-containers.md) +**Prompt file**: `tasks/WP01-docker-compose-containers.md` --- @@ -92,7 +92,7 @@ **Parallel opportunities**: T009 (firewall) independent of T007 (nginx). **Success criteria**: Fresh Ubuntu 24.04 instance can be provisioned from scratch by running `setup-ec2.sh`. Nginx routes requests correctly. TLS works on `ai.example.com`. **Risks**: DNS propagation delay. Certbot requires domain to resolve to EC2 IP first. -**Prompt file**: [tasks/WP02-ec2-provisioning-nginx.md](tasks/WP02-ec2-provisioning-nginx.md) +**Prompt file**: `tasks/WP02-ec2-provisioning-nginx.md` --- @@ -113,7 +113,7 @@ **Parallel opportunities**: T015 (Slack notification) independent of T011-T014. **Success criteria**: Push to main triggers automated build + deploy. New version live within 10 minutes. Failed deploy triggers rollback and Slack alert. 
**Risks**: GitHub Actions needs `workflow` scope on gh auth. EC2 SSH key must be in GitHub secrets. -**Prompt file**: [tasks/WP03-cicd-pipeline.md](tasks/WP03-cicd-pipeline.md) +**Prompt file**: `tasks/WP03-cicd-pipeline.md` --- @@ -135,7 +135,7 @@ **Parallel opportunities**: T017, T018, T019, T039 all independent of each other. **Success criteria**: `/health` endpoint returns correct service status. Logs rotate automatically. Slack alert fires on simulated downtime. **Risks**: Health check must not create excessive load. Use lightweight checks (TCP for DB, HTTP 200 for services). -**Prompt file**: [tasks/WP04-monitoring-health.md](tasks/WP04-monitoring-health.md) +**Prompt file**: `tasks/WP04-monitoring-health.md` --- @@ -158,7 +158,7 @@ **Parallel opportunities**: T022-T027 all independent of each other (different services/runtimes). **Success criteria**: All MCP endpoints respond to tool calls. All Python packages importable. All CLI tools executable. Memory persists across restart. **Risks**: OAuth tokens may need re-auth for production environment. Playwright may need display config (Xvfb or headless flag). -**Prompt file**: [tasks/WP05-mcp-skill-verification.md](tasks/WP05-mcp-skill-verification.md) +**Prompt file**: `tasks/WP05-mcp-skill-verification.md` --- @@ -179,7 +179,7 @@ **Parallel opportunities**: T032 (Claude Desktop config) independent of T028-T031. **Success criteria**: Chat UI loads on mobile browser. Can send message and receive Claude response with tool call results. Auth prevents unauthorized access. Claude Desktop connects and lists all MCP tools. **Risks**: Claude API streaming may need CORS configuration in nginx. Mobile keyboard handling may need viewport meta tag. -**Prompt file**: [tasks/WP06-web-chat-claude-desktop.md](tasks/WP06-web-chat-claude-desktop.md) +**Prompt file**: `tasks/WP06-web-chat-claude-desktop.md` --- @@ -203,7 +203,7 @@ **Parallel opportunities**: T034 (DNS) can start while T033 (EC2) provisions. 
**Success criteria**: `ai.example.com` serves HTTPS. All health checks green. 2+ team members connected. Web chat works from phone. Monthly cost under $35. **Risks**: DNS propagation (up to 48h worst case, usually <1h). May need t3.medium if OOM under load. -**Prompt file**: [tasks/WP07-production-launch.md](tasks/WP07-production-launch.md) +**Prompt file**: `tasks/WP07-production-launch.md` --- diff --git a/kitty-specs/005-content-intelligence/tasks.md b/kitty-specs/005-content-intelligence/tasks.md index f595992..ad01ac3 100644 --- a/kitty-specs/005-content-intelligence/tasks.md +++ b/kitty-specs/005-content-intelligence/tasks.md @@ -96,7 +96,7 @@ **Dependencies**: None **Subtasks**: T001-T007 (7 subtasks) **Estimated prompt**: ~450 lines -**Prompt file**: [tasks/WP01-package-foundation.md](tasks/WP01-package-foundation.md) +**Prompt file**: `tasks/WP01-package-foundation.md` Initialize the `joyus-profile-engine/` Python package with all Pydantic data models, domain templates, and test infrastructure. This WP produces the type foundation that all subsequent WPs build on. @@ -120,7 +120,7 @@ Initialize the `joyus-profile-engine/` Python package with all Pydantic data mod **Dependencies**: WP01 **Subtasks**: T008-T011 (4 subtasks) **Estimated prompt**: ~350 lines -**Prompt file**: [tasks/WP02-corpus-ingestion.md](tasks/WP02-corpus-ingestion.md) +**Prompt file**: `tasks/WP02-corpus-ingestion.md` Build the document ingestion pipeline: load from multiple sources and formats, preprocess into normalized chunks ready for feature extraction. @@ -137,7 +137,7 @@ Build the document ingestion pipeline: load from multiple sources and formats, p **Dependencies**: WP02 **Subtasks**: T012-T018 (7 subtasks) **Estimated prompt**: ~500 lines -**Prompt file**: [tasks/WP03-feature-extraction.md](tasks/WP03-feature-extraction.md) +**Prompt file**: `tasks/WP03-feature-extraction.md` Implement all six analyzers that extract the 129-feature stylometric vector from processed corpora. 
The StylometricAnalyzer wraps faststylometry; others use spaCy and custom NLP. @@ -157,7 +157,7 @@ Implement all six analyzers that extract the 129-feature stylometric vector from **Dependencies**: WP03 **Subtasks**: T019-T025 (7 subtasks) **Estimated prompt**: ~450 lines -**Prompt file**: [tasks/WP04-profile-generation.md](tasks/WP04-profile-generation.md) +**Prompt file**: `tasks/WP04-profile-generation.md` Transform extracted features into structured 12-section AuthorProfiles and emit platform-consumable skill files (SKILL.md + markers.json + stylometrics.json). @@ -177,7 +177,7 @@ Transform extracted features into structured 12-section AuthorProfiles and emit **Dependencies**: WP04 **Subtasks**: T026-T031 (6 subtasks) **Estimated prompt**: ~400 lines -**Prompt file**: [tasks/WP05-verification-cli.md](tasks/WP05-verification-cli.md) +**Prompt file**: `tasks/WP05-verification-cli.md` Implement the two-tier verification system (Tier 1 inline <500ms, Tier 2 deep analysis) and CLI commands for building profiles and verifying content. @@ -196,7 +196,7 @@ Implement the two-tier verification system (Tier 1 inline <500ms, Tier 2 deep an **Dependencies**: WP05 **Subtasks**: T032-T035 (4 subtasks) **Estimated prompt**: ~350 lines -**Prompt file**: [tasks/WP06-mcp-server.md](tasks/WP06-mcp-server.md) +**Prompt file**: `tasks/WP06-mcp-server.md` Expose the profile engine as MCP tools using the official Python `mcp` SDK. Implements build_profile, get_profile, compare_profiles, verify_content, and check_fidelity. @@ -213,7 +213,7 @@ Expose the profile engine as MCP tools using the official Python `mcp` SDK. Impl **Dependencies**: WP06 **Subtasks**: T036-T038 (3 subtasks) **Estimated prompt**: ~300 lines -**Prompt file**: [tasks/WP07-phase-a-testing.md](tasks/WP07-phase-a-testing.md) +**Prompt file**: `tasks/WP07-phase-a-testing.md` Validate Phase A end-to-end: port PoC accuracy tests, run full corpus-to-verification pipeline, measure performance against targets. 
@@ -233,7 +233,7 @@ Validate Phase A end-to-end: port PoC accuracy tests, run full corpus-to-verific **Dependencies**: WP04 **Subtasks**: T039-T042 (4 subtasks) **Estimated prompt**: ~350 lines -**Prompt file**: [tasks/WP08-composite-builder.md](tasks/WP08-composite-builder.md) +**Prompt file**: `tasks/WP08-composite-builder.md` Build department-level and org-level composite profiles from member profiles using corpus-size weighted mean aggregation. @@ -250,7 +250,7 @@ Build department-level and org-level composite profiles from member profiles usi **Dependencies**: WP08 **Subtasks**: T043-T046 (4 subtasks) **Estimated prompt**: ~350 lines -**Prompt file**: [tasks/WP09-hierarchy-management.md](tasks/WP09-hierarchy-management.md) +**Prompt file**: `tasks/WP09-hierarchy-management.md` CRUD operations for the full profile hierarchy, cascade propagation, diffing, and multi-level skill file emission. @@ -266,7 +266,7 @@ CRUD operations for the full profile hierarchy, cascade propagation, diffing, an **Dependencies**: WP09 **Subtasks**: T047-T050 (4 subtasks) **Estimated prompt**: ~400 lines -**Prompt file**: [tasks/WP10-cascade-attribution.md](tasks/WP10-cascade-attribution.md) +**Prompt file**: `tasks/WP10-cascade-attribution.md` Multi-level attribution engine: person → department → organization → outsider cascade with ranked candidate lists and MCP tool exposure. @@ -282,7 +282,7 @@ Multi-level attribution engine: person → department → organization → outsi **Dependencies**: WP10 **Subtasks**: T051-T056 (6 subtasks) **Estimated prompt**: ~450 lines -**Prompt file**: [tasks/WP11-voice-context-testing.md](tasks/WP11-voice-context-testing.md) +**Prompt file**: `tasks/WP11-voice-context-testing.md` VoiceContext resolution (3-layer opt-in), access control, and comprehensive Phase B integration testing including hierarchy build and attribution accuracy. 
@@ -304,7 +304,7 @@ VoiceContext resolution (3-layer opt-in), access control, and comprehensive Phas **Dependencies**: WP07, WP11 **Subtasks**: T057-T062 (6 subtasks) **Estimated prompt**: ~400 lines -**Prompt file**: [tasks/WP12-monitoring-drift.md](tasks/WP12-monitoring-drift.md) +**Prompt file**: `tasks/WP12-monitoring-drift.md` Continuous monitoring pipeline with Tier 2 analysis queue, JSON-based score storage, trend aggregation, and five drift detection signals. @@ -322,7 +322,7 @@ Continuous monitoring pipeline with Tier 2 analysis queue, JSON-based score stor **Dependencies**: WP12 **Subtasks**: T063-T068 (6 subtasks) **Estimated prompt**: ~400 lines -**Prompt file**: [tasks/WP13-diagnosis-repair.md](tasks/WP13-diagnosis-repair.md) +**Prompt file**: `tasks/WP13-diagnosis-repair.md` Diagnosis engine that identifies what drifted and why, plus repair action framework with 6 repair types, verification, and revert capability. @@ -340,7 +340,7 @@ Diagnosis engine that identifies what drifted and why, plus repair action framew **Dependencies**: WP13 **Subtasks**: T069-T073 (5 subtasks) **Estimated prompt**: ~350 lines -**Prompt file**: [tasks/WP14-monitoring-mcp-testing.md](tasks/WP14-monitoring-mcp-testing.md) +**Prompt file**: `tasks/WP14-monitoring-mcp-testing.md` Expose monitoring as MCP tools, integrate with Langfuse, and run simulated drift + repair verification scenarios. diff --git a/kitty-specs/007-org-scale-agentic-governance/plan.md b/kitty-specs/007-org-scale-agentic-governance/plan.md index df15e05..3465cbe 100644 --- a/kitty-specs/007-org-scale-agentic-governance/plan.md +++ b/kitty-specs/007-org-scale-agentic-governance/plan.md @@ -1,30 +1,82 @@ # Implementation Plan: Org-Scale Agentic Governance ## Summary -Implement governance controls in four waves: baseline, backlog conversion, remediation rollout, and automated gate enforcement. 
+Execute governance in six concrete workstreams over four weeks: baseline scoring, remediation backlog conversion, status-canonicalization enforcement, lifecycle contract hardening, CI policy gates, and autonomy-level safeguards. ## Technical Context -- Primary artifacts are markdown governance docs and Python validation tooling. -- Existing spec lifecycle remains in `kitty-specs/` with metadata in `meta.json`. -- Validation output must support both terminal use and CI workflows. - -## Constitution Check -| Principle | Status | Notes | -|---|---|---| -| Multi-tenant from day one | PASS | Governance applies across tenant contexts | -| Monitor everything | PASS | Adds explicit instrumentation and review cadence | -| Feedback loops are first-class | PASS | Weekly and monthly review loops codified | -| Spec-driven development | PASS | Workflow rules updated at source command templates | - -## Phase Breakdown -1. Baseline and scoring matrix publication. -2. P0/P1 backlog conversion with ownership and due dates. -3. Governance doc and metadata remediations. -4. Automated checks and CI enforcement. +- Primary artifacts are governance markdown docs, feature metadata, and CI validation scripts. +- `kitty-specs/*/meta.json` remains the lifecycle source for per-feature state. +- Human-facing status text must be derived from machine-readable status records. + +## Execution Scope +1. Convert governance from narrative guidance to enforceable checks. +2. Remove status drift across roadmap and planning surfaces. +3. Establish objective progression rules for higher-autonomy workflows. + +## Workstream Plan + +### WS01 - Baseline and Maturity Scoring (Week 1) +- Publish a five-level maturity rubric with explicit evidence requirements. +- Score current feature/governance posture against rubric. +- Record P0/P1/P2 gaps with objective severity criteria. + +Exit criteria: +- Baseline matrix published. +- Every identified gap labeled by severity and mapped to an owner role. 
+ +### WS02 - Backlog Conversion and Ownership (Week 1) +- Convert P0/P1 governance gaps into remediation backlog entries. +- Attach owner role, due date, acceptance check, and linked evidence path. + +Exit criteria: +- No P0/P1 gap remains uncaptured in remediation backlog. + +### WS03 - Status Canonicalization and Drift Removal (Week 1-2) +- Introduce canonical status registry and schema validation. +- Add consistency checks to fail CI on lifecycle drift. +- Update roadmap/readme status surfaces to use canonical status language. + +Exit criteria: +- CI fails on intentional status mismatch. +- Status values are synchronized across canonical and human-facing surfaces. + +### WS04 - Spec Lifecycle Contract Hardening (Week 2) +- Define required feature artifact contract (spec, plan, tasks, meta fields). +- Enforce lifecycle transition prerequisites (planning -> execution -> done). +- Validate required metadata fields for every feature. + +Exit criteria: +- Missing required artifacts/metadata causes validation failure. +- Lifecycle transition rules are documented and checked. + +### WS05 - Governance CI and Reporting (Week 2-3) +- Add governance verification workflow in CI. +- Produce machine-readable + human-readable governance report artifact per run. +- Wire report links into PR review expectations. + +Exit criteria: +- Governance workflow runs on PR and main. +- Verification report is generated and archived for each run. + +### WS06 - Autonomy Leveling and Holdout Policy (Week 3-4) +- Define level advancement criteria (L1-L5) with mandatory holdout scenarios. +- Require simulation/digital-twin evidence for high-autonomy promotion. +- Add fail-closed rule for incomplete evidence at L4/L5 gates. + +Exit criteria: +- Leveling policy is published and referenced by governance checks. +- L4/L5 promotions are blocked without holdout+simulation evidence. ## Deliverables -- Baseline matrix document. -- Remediation backlog document. 
-- Updated governance rules and command templates. -- Validation scripts and CI workflow. -- Final governance verification report. +- Governance maturity rubric and baseline matrix. +- Remediation backlog with ownership and acceptance checks. +- Canonical status schema + registry + CI enforcement. +- Feature lifecycle contract and transition policy. +- Governance verification workflow and generated reports. +- Autonomy-level and holdout-scenario policy. + +## Definition of Done +1. Governance checks run automatically in CI and block policy violations. +2. Status drift is machine-detectable and fails fast. +3. Remediation backlog exists for all P0/P1 gaps with owners. +4. Autonomy progression policy is objective and enforceable. diff --git a/kitty-specs/007-org-scale-agentic-governance/tasks.md b/kitty-specs/007-org-scale-agentic-governance/tasks.md index 35b9f65..ed8bdd7 100644 --- a/kitty-specs/007-org-scale-agentic-governance/tasks.md +++ b/kitty-specs/007-org-scale-agentic-governance/tasks.md @@ -1,33 +1,59 @@ # Tasks: Org-Scale Agentic Governance +## Objective +Implement enforceable governance controls that remove status drift, formalize lifecycle gates, and block unsafe autonomy progression. 
+ +## Subtask Index + +| ID | Description | WP | Parallel | +|----|-------------|----|----------| +| T001 | Define five-level governance maturity rubric with scoring criteria | WP01 | | +| T002 | Produce baseline matrix across active features and governance surfaces | WP01 | | +| T003 | Classify findings into P0/P1/P2 with severity rationale | WP01 | [P] | +| T004 | Convert P0/P1 findings into remediation backlog records | WP02 | | +| T005 | Add owner role, due date, and acceptance check per remediation item | WP02 | [P] | +| T006 | Define canonical status registry schema and required fields | WP03 | | +| T007 | Implement status consistency validator (meta lifecycle vs canonical registry) | WP03 | | +| T008 | Add CI workflow for status and governance validation | WP03 | [P] | +| T009 | Replace hand-maintained status language with canonical terms in public docs | WP03 | [P] | +| T010 | Define feature lifecycle contract (required artifacts + metadata) | WP04 | | +| T011 | Implement lifecycle transition guards (planning->execution->done) | WP04 | | +| T012 | Add artifact-completeness validator (spec/plan/tasks/meta checks) | WP04 | [P] | +| T013 | Generate governance verification report artifact in CI | WP05 | | +| T014 | Add PR review guidance that references governance report outputs | WP05 | [P] | +| T015 | Define autonomy-level advancement policy with objective prerequisites | WP06 | | +| T016 | Define holdout-scenario requirements for L4/L5 readiness | WP06 | [P] | +| T017 | Define simulation/digital-twin evidence expectations for high-autonomy changes | WP06 | [P] | +| T018 | Implement fail-closed policy gate for missing L4/L5 evidence | WP06 | | + ## Work Packages -### WP01 - Baseline and Scoring -- Produce maturity rubric and baseline matrix. -- Tag all identified gaps as P0/P1/P2. - -### WP02 - Backlog and Ownership -- Convert P0/P1 gaps to remediation items. -- Assign owner role, due date, acceptance criteria. 
- -### WP03 - Governance Remediations -- Align constitutions. -- Resolve broken references. -- Update README and roadmap consistency. -- Fill known feature artifact gaps. - -### WP04 - Workflow and Metadata Contracts -- Update spec generation rules. -- Add required metadata fields to features. -- Update governance policy document. - -### WP05 - Automated Checks and CI -- Implement governance validation script. -- Extend pride-status integrity reporting. -- Add CI workflow and publish verification report. - -### WP06 - Autonomy Leveling and Scenario Policy -- Define and publish five-level operating maturity classification process. -- Add holdout-scenario policy requirements for Level 4/5 workflows. -- Define digital twin/simulation expectation for high-autonomy integration testing. -- Document legacy migration staging and talent pipeline safeguards. +### WP01 - Baseline and Scoring (Week 1) +- T001, T002, T003 +- Output: governance baseline matrix and severity-tagged findings. + +### WP02 - Backlog and Ownership (Week 1) +- T004, T005 +- Output: remediation backlog with ownership and acceptance checks. + +### WP03 - Status Canonicalization and CI Drift Gates (Week 1-2) +- T006, T007, T008, T009 +- Output: canonical status contract + CI mismatch failure path. + +### WP04 - Lifecycle Contract Hardening (Week 2) +- T010, T011, T012 +- Output: enforceable feature artifact and lifecycle-transition policy. + +### WP05 - Governance Reporting and PR Integration (Week 2-3) +- T013, T014 +- Output: machine-generated governance report consumed in PR review. + +### WP06 - Autonomy Leveling and Holdout Policy (Week 3-4) +- T015, T016, T017, T018 +- Output: objective autonomy progression framework with fail-closed high-autonomy gating. + +## Completion Criteria +1. CI fails on status drift, missing required artifacts, and invalid lifecycle transitions. +2. Every P0/P1 governance gap has a backlog item with owner and acceptance check. +3. 
Governance report is generated and attached to every validation run. +4. L4/L5 promotions are blocked unless holdout and simulation evidence are present. diff --git a/kitty-specs/008-profile-isolation-and-scale/checklists/requirements.md b/kitty-specs/008-profile-isolation-and-scale/checklists/requirements.md new file mode 100644 index 0000000..4b58b83 --- /dev/null +++ b/kitty-specs/008-profile-isolation-and-scale/checklists/requirements.md @@ -0,0 +1,8 @@ +# Requirements Checklist + +- [x] Problem statement is explicit and scoped. +- [x] User stories are present and testable. +- [x] Functional requirements are listed with stable identifiers. +- [x] Dependencies are declared. +- [x] Success criteria are measurable. +- [x] Implementation plan and tasks are present. diff --git a/kitty-specs/008-profile-isolation-and-scale/meta.json b/kitty-specs/008-profile-isolation-and-scale/meta.json new file mode 100644 index 0000000..3ff9e51 --- /dev/null +++ b/kitty-specs/008-profile-isolation-and-scale/meta.json @@ -0,0 +1,13 @@ +{ + "feature_number": "008", + "slug": "008-profile-isolation-and-scale", + "friendly_name": "Profile Isolation and Scale", + "mission": "software-dev", + "source_description": "Tenant-scoped profile storage, access controls, auditability, and throughput scaling for profile ingestion and verification.", + "created_at": "2026-03-05T00:00:00Z", + "vcs": "git", + "measurement_owner": "Content Intelligence", + "review_cadence": "weekly", + "risk_class": "critical", + "lifecycle_state": "execution" +} diff --git a/kitty-specs/008-profile-isolation-and-scale/plan.md b/kitty-specs/008-profile-isolation-and-scale/plan.md new file mode 100644 index 0000000..948e5cf --- /dev/null +++ b/kitty-specs/008-profile-isolation-and-scale/plan.md @@ -0,0 +1,13 @@ +# Implementation Plan: Profile Isolation and Scale + +## Phases +1. Isolation contract and data model. +2. Tenant enforcement and audit logging. +3. Batch processing and performance controls. +4. 
Observability and SLO validation. + +## Deliverables +- Tenant profile access contract. +- Isolation enforcement tests. +- Batch ingestion queueing strategy. +- Metrics and alerting definitions. diff --git a/kitty-specs/008-profile-isolation-and-scale/research.md b/kitty-specs/008-profile-isolation-and-scale/research.md new file mode 100644 index 0000000..643a281 --- /dev/null +++ b/kitty-specs/008-profile-isolation-and-scale/research.md @@ -0,0 +1,18 @@ +# Research Notes + +## Summary +Initial scope was reviewed against roadmap commitments and current architecture constraints. + +## Inputs Considered +- `ROADMAP.md` planned and under-evaluation items +- `kitty-specs/003-platform-architecture-overview/spec.md` domain and dependency inventory +- `kitty-specs/006-content-infrastructure/spec.md` and `kitty-specs/007-org-scale-agentic-governance/spec.md` + +## Key Decisions +- Keep v1 scope narrow and execution-oriented. +- Reuse existing governance and policy contracts where possible. +- Require explicit tenant isolation and auditability for all privileged paths. + +## Open Research Follow-ups +- Benchmark and load profile assumptions should be validated with pilot data. +- External integration contracts should be hardened before production-readiness promotion. diff --git a/kitty-specs/008-profile-isolation-and-scale/spec.md b/kitty-specs/008-profile-isolation-and-scale/spec.md new file mode 100644 index 0000000..ec34d2b --- /dev/null +++ b/kitty-specs/008-profile-isolation-and-scale/spec.md @@ -0,0 +1,54 @@ +# Feature Specification: Profile Isolation and Scale + +**Feature Branch**: `008-profile-isolation-and-scale` +**Created**: 2026-03-05 +**Status**: Draft + +## Summary +Define tenant-isolated profile storage and high-throughput profile operations so profile-driven generation and verification remain secure and performant at multi-tenant scale. 
+
+## Scope Lock (2026-03-05)
+- In scope (v1): tenant-scoped profile CRUD enforcement, profile access audit events, batch ingestion backpressure, and verification latency observability.
+- Out of scope (v1): cross-provider profile portability and advanced profile graph analytics.
+
+## User Stories
+1. As a platform operator, I can guarantee profile reads/writes are tenant-scoped and auditable.
+2. As a content team, I can ingest and verify large profile batches without queue collapse.
+3. As a governance owner, I can prove no cross-tenant profile leakage occurred.
+
+## Functional Requirements
+- FR-001: System MUST enforce tenant scoping for all profile create/read/update/delete operations.
+- FR-002: System MUST deny cross-tenant profile access by default and emit audit events on denials.
+- FR-003: System MUST support batch ingestion with backpressure and retry semantics.
+- FR-004: System MUST expose profile verification latency and queue depth metrics.
+- FR-005: System MUST define profile lifecycle states and transition rules.
+
+## Key Entities
+- TenantProfile
+- ProfileVersion
+- VerificationJob
+- IsolationAuditEvent
+
+## Success Criteria
+- SC-001: Cross-tenant profile access attempts are blocked 100% in integration tests.
+- SC-002: Batch ingestion throughput supports target pilot volume without SLO breach.
+- SC-003: p95 verification latency stays within defined SLO under representative load.
+
+## Dependencies
+- Feature 005 (Content Intelligence)
+- Feature 006 (Content Infrastructure)
+- Feature 007 (Governance)
+
+## Adoption Plan
+- Internal pilot first for one tenant with medium corpus size.
+- Expand to two additional tenants after isolation and latency SLO checks pass.
+
+## ROI Metrics
+- Reduction in cross-tenant access incidents (target: zero).
+- Profile ingestion throughput improvement at target load.
+- Reduction in manual profile triage time.
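The backpressure and retry semantics in FR-003 can be illustrated with a bounded queue: capacity caps in-flight work, and overflow items are deferred back to the caller rather than dropped. This is a minimal sketch; the function name and return shape are assumptions, not the real ingestion interface.

```python
import queue


def ingest_batch(profiles, q):
    """Accept items into a bounded queue; overflow is deferred, not lost (FR-003 sketch)."""
    accepted, deferred = [], []
    for item in profiles:
        try:
            q.put_nowait(item)
            accepted.append(item)
        except queue.Full:
            # Queue at capacity: defer the item for a later retry pass
            # instead of blocking or silently discarding it.
            deferred.append(item)
    return accepted, deferred


q = queue.Queue(maxsize=2)  # backpressure: bounded capacity caps in-flight work
accepted, deferred = ingest_batch(["p1", "p2", "p3"], q)
assert accepted == ["p1", "p2"]
assert deferred == ["p3"]
```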
+ +## Security + MCP Governance +- Tenant boundary checks are mandatory at every profile access path. +- Governance evidence for isolation tests is required before readiness promotion. +- Failure to enforce tenant scope is a release-blocking condition. diff --git a/kitty-specs/008-profile-isolation-and-scale/tasks.md b/kitty-specs/008-profile-isolation-and-scale/tasks.md new file mode 100644 index 0000000..4442b87 --- /dev/null +++ b/kitty-specs/008-profile-isolation-and-scale/tasks.md @@ -0,0 +1,18 @@ +# Tasks: Profile Isolation and Scale + +## Work Packages +### WP01 - Isolation Contract +- Define tenant-scoped profile APIs and storage boundaries. +- Define lifecycle states for profile and profile version. + +### WP02 - Enforcement +- Implement deny-by-default cross-tenant checks. +- Add profile access audit events. + +### WP03 - Scale Path +- Implement batch ingestion queueing and retries. +- Add backpressure and overload protections. + +### WP04 - Validation +- Add integration tests for tenant isolation. +- Add load tests for ingestion and verification latency. diff --git a/kitty-specs/009-automated-pipelines-framework/checklists/requirements.md b/kitty-specs/009-automated-pipelines-framework/checklists/requirements.md new file mode 100644 index 0000000..4b58b83 --- /dev/null +++ b/kitty-specs/009-automated-pipelines-framework/checklists/requirements.md @@ -0,0 +1,8 @@ +# Requirements Checklist + +- [x] Problem statement is explicit and scoped. +- [x] User stories are present and testable. +- [x] Functional requirements are listed with stable identifiers. +- [x] Dependencies are declared. +- [x] Success criteria are measurable. +- [x] Implementation plan and tasks are present. 
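The deny-by-default enforcement in WP02 (spec FR-001/FR-002: tenant-scoped access, audit events on denials) could be sketched like this. `ProfileStore`, `AuditLog`, and the event shape are illustrative assumptions, not the real interfaces.

```python
from dataclasses import dataclass, field


@dataclass
class AuditLog:
    events: list = field(default_factory=list)

    def emit(self, event: str, **fields) -> None:
        self.events.append({"event": event, **fields})


@dataclass
class ProfileStore:
    """Tenant-scoped profile reads: deny by default, audit every denial."""

    profiles: dict  # keyed by (tenant_id, profile_id)
    audit: AuditLog

    def read(self, tenant_id: str, profile_id: str) -> dict:
        key = (tenant_id, profile_id)
        if key not in self.profiles:
            # Unknown or cross-tenant key: deny by default and emit an audit event (FR-002).
            self.audit.emit("profile.access.denied", tenant_id=tenant_id, profile_id=profile_id)
            raise PermissionError(f"profile {profile_id} not accessible for tenant {tenant_id}")
        return self.profiles[key]


store = ProfileStore(profiles={("tenant-a", "p1"): {"name": "demo"}}, audit=AuditLog())
assert store.read("tenant-a", "p1") == {"name": "demo"}
try:
    store.read("tenant-b", "p1")  # same profile id, wrong tenant: denied
except PermissionError:
    pass
```

Keying storage by the `(tenant_id, profile_id)` tuple makes cross-tenant reads structurally impossible rather than relying on a filter clause.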
diff --git a/kitty-specs/009-automated-pipelines-framework/meta.json b/kitty-specs/009-automated-pipelines-framework/meta.json new file mode 100644 index 0000000..ab1f8be --- /dev/null +++ b/kitty-specs/009-automated-pipelines-framework/meta.json @@ -0,0 +1,13 @@ +{ + "feature_number": "009", + "slug": "009-automated-pipelines-framework", + "friendly_name": "Automated Pipelines Framework", + "mission": "software-dev", + "source_description": "Event-driven automation framework for governed enrichment, diagnosis, and delivery workflows.", + "created_at": "2026-03-05T00:00:00Z", + "vcs": "git", + "measurement_owner": "Engineering Operations", + "review_cadence": "weekly", + "risk_class": "platform", + "lifecycle_state": "execution" +} diff --git a/kitty-specs/009-automated-pipelines-framework/plan.md b/kitty-specs/009-automated-pipelines-framework/plan.md new file mode 100644 index 0000000..7dd5d12 --- /dev/null +++ b/kitty-specs/009-automated-pipelines-framework/plan.md @@ -0,0 +1,13 @@ +# Implementation Plan: Automated Pipelines Framework + +## Phases +1. Pipeline model and stage contracts. +2. Trigger adapters and job orchestration. +3. Governance + quality gate integration. +4. Pilot workflows and operational runbooks. + +## Deliverables +- Pipeline stage schema. +- Trigger adapter interface. +- Stage-level audit and policy hooks. +- Pilot workflow definitions and failure-path documentation. diff --git a/kitty-specs/009-automated-pipelines-framework/research.md b/kitty-specs/009-automated-pipelines-framework/research.md new file mode 100644 index 0000000..643a281 --- /dev/null +++ b/kitty-specs/009-automated-pipelines-framework/research.md @@ -0,0 +1,18 @@ +# Research Notes + +## Summary +Initial scope was reviewed against roadmap commitments and current architecture constraints. 
+
+## Inputs Considered
+- `ROADMAP.md` planned and under-evaluation items
+- `kitty-specs/003-platform-architecture-overview/spec.md` domain and dependency inventory
+- `kitty-specs/006-content-infrastructure/spec.md` and `kitty-specs/007-org-scale-agentic-governance/spec.md`
+
+## Key Decisions
+- Keep v1 scope narrow and execution-oriented.
+- Reuse existing governance and policy contracts where possible.
+- Require explicit tenant isolation and auditability for all privileged paths.
+
+## Open Research Follow-ups
+- Benchmark and load profile assumptions should be validated with pilot data.
+- External integration contracts should be hardened before production-readiness promotion.
diff --git a/kitty-specs/009-automated-pipelines-framework/spec.md b/kitty-specs/009-automated-pipelines-framework/spec.md
new file mode 100644
index 0000000..7e17261
--- /dev/null
+++ b/kitty-specs/009-automated-pipelines-framework/spec.md
@@ -0,0 +1,48 @@
+# Feature Specification: Automated Pipelines Framework
+
+**Feature Branch**: `009-automated-pipelines-framework`
+**Created**: 2026-03-05
+**Status**: Draft
+
+## Summary
+Create a governed event-driven pipeline framework so workflows (for example bug triage and regulatory updates) execute with the same policy, skill, and audit controls as interactive sessions.
+
+## Scope Lock (2026-03-05)
+- In scope (v1): stage contract, trigger adapters, policy gate integration, and two pilot flows (bug triage + regulatory change).
+- Out of scope (v1): fully autonomous production write actions without human review gates.
+
+## User Stories
+1. As an operator, I can trigger multi-stage workflows from external events.
+2. As a reviewer, I can inspect stage-by-stage evidence and failure causes.
+3. As a governance lead, I can enforce quality gates before pipeline delivery actions.
+
+## Functional Requirements
+- FR-001: System MUST support trigger -> enrich -> analyze -> act -> deliver stage flow.
+- FR-002: System MUST enforce policy checks at each privileged stage. +- FR-003: System MUST emit structured audit artifacts for every stage transition. +- FR-004: System MUST support retry/timeout/failover semantics per stage. +- FR-005: System MUST support dry-run and apply modes for delivery stages. + +## Success Criteria +- SC-001: One bug-triage pipeline and one regulatory-change pipeline run end-to-end in pilot mode. +- SC-002: Failed stage diagnostics are sufficient for human takeover without re-triage. +- SC-003: No privileged stage executes without policy decision evidence. + +## Dependencies +- Feature 004 (Workflow Enforcement) +- Feature 006 (Content Infrastructure) +- Feature 007 (Governance) + +## Adoption Plan +- Launch with two pilot pipeline families: bug triage and regulatory change detection. +- Keep delivery stages in dry-run/guarded mode until governance evidence stabilizes. + +## ROI Metrics +- Mean time to triage reduction for pilot issue classes. +- Percentage of pipeline runs requiring no manual re-triage. +- Stage failure diagnostics completeness rate. + +## Security + MCP Governance +- Each privileged stage requires policy decision evidence before execution. +- Pipeline audit trail retention follows governance policy for traceability. +- Fail-closed behavior is mandatory on policy uncertainty. diff --git a/kitty-specs/009-automated-pipelines-framework/tasks.md b/kitty-specs/009-automated-pipelines-framework/tasks.md new file mode 100644 index 0000000..99af8f4 --- /dev/null +++ b/kitty-specs/009-automated-pipelines-framework/tasks.md @@ -0,0 +1,18 @@ +# Tasks: Automated Pipelines Framework + +## Work Packages +### WP01 - Pipeline Contract +- Define stage schema and stage state machine. +- Define evidence payload contract per stage. + +### WP02 - Orchestration Core +- Build trigger adapter interface and job execution loop. +- Add timeout, retry, and cancellation controls. 
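The timeout, retry, and cancellation controls in WP02 (spec FR-004) might look like this minimal sketch. The signature and result shape are assumptions; a real implementation would cancel mid-flight work on timeout, whereas here the deadline is checked after each attempt for simplicity.

```python
import time


def run_stage(stage_fn, retries=2, timeout_s=5.0):
    """Run one pipeline stage with bounded retries and a per-attempt deadline check."""
    last_error = None
    for attempt in range(retries + 1):
        start = time.monotonic()
        try:
            result = stage_fn()
            if time.monotonic() - start > timeout_s:
                raise TimeoutError("stage exceeded per-attempt deadline")
            return {"status": "ok", "attempts": attempt + 1, "result": result}
        except Exception as exc:
            last_error = exc  # retained so a failed run carries diagnostic context
    return {"status": "failed", "attempts": retries + 1, "error": str(last_error)}


calls = {"n": 0}


def flaky_enrich():
    # Fails once, then succeeds -- a stand-in for a transient upstream error.
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient upstream error")
    return "enriched"


outcome = run_stage(flaky_enrich)
assert outcome == {"status": "ok", "attempts": 2, "result": "enriched"}
```

Returning the attempt count and final error (rather than raising) keeps the failure diagnostics available to the stage audit artifacts required by FR-003.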
+ +### WP03 - Governance Integration +- Enforce policy checks and quality gates per stage. +- Emit structured stage audit events. + +### WP04 - Pilot Pipelines +- Implement bug triage pilot pipeline. +- Implement regulatory-change pilot pipeline. diff --git a/kitty-specs/010-multi-location-operations-module/checklists/requirements.md b/kitty-specs/010-multi-location-operations-module/checklists/requirements.md new file mode 100644 index 0000000..4b58b83 --- /dev/null +++ b/kitty-specs/010-multi-location-operations-module/checklists/requirements.md @@ -0,0 +1,8 @@ +# Requirements Checklist + +- [x] Problem statement is explicit and scoped. +- [x] User stories are present and testable. +- [x] Functional requirements are listed with stable identifiers. +- [x] Dependencies are declared. +- [x] Success criteria are measurable. +- [x] Implementation plan and tasks are present. diff --git a/kitty-specs/010-multi-location-operations-module/meta.json b/kitty-specs/010-multi-location-operations-module/meta.json new file mode 100644 index 0000000..d14c584 --- /dev/null +++ b/kitty-specs/010-multi-location-operations-module/meta.json @@ -0,0 +1,13 @@ +{ + "feature_number": "010", + "slug": "010-multi-location-operations-module", + "friendly_name": "Multi-Location Operations Module", + "mission": "software-dev", + "source_description": "Tenant-isolated staffing and operational planning module with manager approval and controlled publish workflows.", + "created_at": "2026-03-05T00:00:00Z", + "vcs": "git", + "measurement_owner": "Product Operations", + "review_cadence": "biweekly", + "risk_class": "platform", + "lifecycle_state": "planning" +} diff --git a/kitty-specs/010-multi-location-operations-module/plan.md b/kitty-specs/010-multi-location-operations-module/plan.md new file mode 100644 index 0000000..d549241 --- /dev/null +++ b/kitty-specs/010-multi-location-operations-module/plan.md @@ -0,0 +1,13 @@ +# Implementation Plan: Multi-Location Operations Module + +## Phases +1. 
Domain model and approval workflow contract. +2. Dry-run/apply publish pipeline. +3. Audit and rollback controls. +4. Pilot validation on representative multi-location dataset. + +## Deliverables +- Location plan data contract. +- Approval gate workflow. +- Publish/rollback runbook. +- Pilot acceptance report. diff --git a/kitty-specs/010-multi-location-operations-module/research.md b/kitty-specs/010-multi-location-operations-module/research.md new file mode 100644 index 0000000..643a281 --- /dev/null +++ b/kitty-specs/010-multi-location-operations-module/research.md @@ -0,0 +1,18 @@ +# Research Notes + +## Summary +Initial scope was reviewed against roadmap commitments and current architecture constraints. + +## Inputs Considered +- `ROADMAP.md` planned and under-evaluation items +- `kitty-specs/003-platform-architecture-overview/spec.md` domain and dependency inventory +- `kitty-specs/006-content-infrastructure/spec.md` and `kitty-specs/007-org-scale-agentic-governance/spec.md` + +## Key Decisions +- Keep v1 scope narrow and execution-oriented. +- Reuse existing governance and policy contracts where possible. +- Require explicit tenant isolation and auditability for all privileged paths. + +## Open Research Follow-ups +- Benchmark and load profile assumptions should be validated with pilot data. +- External integration contracts should be hardened before production-readiness promotion. diff --git a/kitty-specs/010-multi-location-operations-module/spec.md b/kitty-specs/010-multi-location-operations-module/spec.md new file mode 100644 index 0000000..1e38732 --- /dev/null +++ b/kitty-specs/010-multi-location-operations-module/spec.md @@ -0,0 +1,44 @@ +# Feature Specification: Multi-Location Operations Module + +**Feature Branch**: `010-multi-location-operations-module` +**Created**: 2026-03-05 +**Status**: Draft + +## Summary +Define staffing and publish workflows for multi-location operators with explicit approval gates, dry-run safety, and complete audit history. 
+ +## User Stories +1. As an operator, I can manage staffing plans across multiple locations with shared constraints. +2. As a manager, I can approve or reject publish actions with visible diffs and risk indicators. +3. As an auditor, I can trace who approved and published each operational change. + +## Functional Requirements +- FR-001: System MUST model location-scoped plans under tenant isolation. +- FR-002: System MUST require manager approval before apply/publish actions. +- FR-003: System MUST support dry-run previews and apply confirmations. +- FR-004: System MUST log approval, rejection, and publish actions with actor attribution. +- FR-005: System MUST support rollback of latest publish action. + +## Success Criteria +- SC-001: End-to-end dry-run -> approval -> apply flow works for at least two locations. +- SC-002: Unauthorized publish attempts are blocked and logged. +- SC-003: Rollback path restores last known-good plan in pilot tests. + +## Dependencies +- Feature 004 (Workflow Enforcement) +- Feature 007 (Governance) +- Feature 009 (Automated Pipelines Framework) + +## Adoption Plan +- Start with one tenant and two locations. +- Expand after publish/rollback and approval-audit flows pass pilot acceptance checks. + +## ROI Metrics +- Planning-cycle time reduction across locations. +- Approval-to-publish lead time. +- Publish rollback recovery time. + +## Security + MCP Governance +- Publish/apply actions require explicit manager approval evidence. +- Unauthorized publish attempts must fail and be logged. +- Tenant-isolated audit history is required for all operational changes. 
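The approval-gated publish flow above (FR-002/FR-003: manager approval before apply, dry-run previews, actor-attributed logging) could be sketched as follows. `PublishGate` and the audit record shape are illustrative assumptions, not the real interfaces.

```python
from dataclasses import dataclass, field


@dataclass
class PublishGate:
    """Dry-run/apply publish flow: apply requires recorded manager approval."""

    approvals: set = field(default_factory=set)  # plan ids approved by a manager
    audit: list = field(default_factory=list)

    def approve(self, plan_id: str, manager: str) -> None:
        self.approvals.add(plan_id)
        self.audit.append({"action": "approve", "plan_id": plan_id, "actor": manager})

    def publish(self, plan_id: str, actor: str, dry_run: bool = True) -> str:
        if dry_run:
            # Preview only: no state change, so no audit entry in this sketch.
            return "dry-run: diff preview only, no changes applied"
        if plan_id not in self.approvals:
            # Unauthorized publish attempts are blocked and logged (SC-002).
            self.audit.append({"action": "publish_blocked", "plan_id": plan_id, "actor": actor})
            raise PermissionError("publish requires manager approval")
        self.audit.append({"action": "publish", "plan_id": plan_id, "actor": actor})
        return "applied"


gate = PublishGate()
assert gate.publish("plan-1", "operator").startswith("dry-run")
try:
    gate.publish("plan-1", "operator", dry_run=False)  # blocked: no approval yet
except PermissionError:
    pass
gate.approve("plan-1", "manager-7")
assert gate.publish("plan-1", "operator", dry_run=False) == "applied"
```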
diff --git a/kitty-specs/010-multi-location-operations-module/tasks.md b/kitty-specs/010-multi-location-operations-module/tasks.md new file mode 100644 index 0000000..c2d871f --- /dev/null +++ b/kitty-specs/010-multi-location-operations-module/tasks.md @@ -0,0 +1,18 @@ +# Tasks: Multi-Location Operations Module + +## Work Packages +### WP01 - Domain Model +- Define location plan schema and approval state transitions. +- Define dry-run diff output format. + +### WP02 - Approval and Publish +- Implement manager approval workflow. +- Implement dry-run and apply publish stages. + +### WP03 - Audit and Safety +- Add actor-attributed audit logging for all publish actions. +- Add rollback mechanism and verification checks. + +### WP04 - Pilot Validation +- Run two-location pilot scenario. +- Validate unauthorized action blocking and rollback behavior. diff --git a/kitty-specs/011-compliance-policy-modules/checklists/requirements.md b/kitty-specs/011-compliance-policy-modules/checklists/requirements.md new file mode 100644 index 0000000..4b58b83 --- /dev/null +++ b/kitty-specs/011-compliance-policy-modules/checklists/requirements.md @@ -0,0 +1,8 @@ +# Requirements Checklist + +- [x] Problem statement is explicit and scoped. +- [x] User stories are present and testable. +- [x] Functional requirements are listed with stable identifiers. +- [x] Dependencies are declared. +- [x] Success criteria are measurable. +- [x] Implementation plan and tasks are present. 
diff --git a/kitty-specs/011-compliance-policy-modules/meta.json b/kitty-specs/011-compliance-policy-modules/meta.json new file mode 100644 index 0000000..662ea5a --- /dev/null +++ b/kitty-specs/011-compliance-policy-modules/meta.json @@ -0,0 +1,13 @@ +{ + "feature_number": "011", + "slug": "011-compliance-policy-modules", + "friendly_name": "Compliance Policy Modules", + "mission": "software-dev", + "source_description": "Tenant-declared compliance modules with hard-fail policy enforcement and auditable controls.", + "created_at": "2026-03-05T00:00:00Z", + "vcs": "git", + "measurement_owner": "Security and Compliance", + "review_cadence": "weekly", + "risk_class": "critical", + "lifecycle_state": "planning" +} diff --git a/kitty-specs/011-compliance-policy-modules/plan.md b/kitty-specs/011-compliance-policy-modules/plan.md new file mode 100644 index 0000000..33d26b6 --- /dev/null +++ b/kitty-specs/011-compliance-policy-modules/plan.md @@ -0,0 +1,13 @@ +# Implementation Plan: Compliance Policy Modules + +## Phases +1. Policy declaration and module contract. +2. Pre-action compliance decision engine. +3. Fail-closed behavior and auditing. +4. Pilot module rollout and readiness checks. + +## Deliverables +- Compliance declaration schema. +- Policy decision interface. +- Fail-closed enforcement tests. +- Module onboarding checklist and pilot report. diff --git a/kitty-specs/011-compliance-policy-modules/research.md b/kitty-specs/011-compliance-policy-modules/research.md new file mode 100644 index 0000000..643a281 --- /dev/null +++ b/kitty-specs/011-compliance-policy-modules/research.md @@ -0,0 +1,18 @@ +# Research Notes + +## Summary +Initial scope was reviewed against roadmap commitments and current architecture constraints. 
+ +## Inputs Considered +- `ROADMAP.md` planned and under-evaluation items +- `kitty-specs/003-platform-architecture-overview/spec.md` domain and dependency inventory +- `kitty-specs/006-content-infrastructure/spec.md` and `kitty-specs/007-org-scale-agentic-governance/spec.md` + +## Key Decisions +- Keep v1 scope narrow and execution-oriented. +- Reuse existing governance and policy contracts where possible. +- Require explicit tenant isolation and auditability for all privileged paths. + +## Open Research Follow-ups +- Benchmark and load profile assumptions should be validated with pilot data. +- External integration contracts should be hardened before production-readiness promotion. diff --git a/kitty-specs/011-compliance-policy-modules/spec.md b/kitty-specs/011-compliance-policy-modules/spec.md new file mode 100644 index 0000000..0d1e086 --- /dev/null +++ b/kitty-specs/011-compliance-policy-modules/spec.md @@ -0,0 +1,44 @@ +# Feature Specification: Compliance Policy Modules + +**Feature Branch**: `011-compliance-policy-modules` +**Created**: 2026-03-05 +**Status**: Draft + +## Summary +Establish a compliance policy framework where tenants declare required modules and platform actions fail closed when controls are missing or violated. + +## User Stories +1. As a tenant admin, I can declare required compliance modules for my workspace. +2. As a system, I block non-compliant actions before they execute. +3. As an auditor, I can verify compliance decisions and evidence trails. + +## Functional Requirements +- FR-001: System MUST support tenant-level declaration of active compliance modules. +- FR-002: System MUST evaluate policy controls before privileged actions. +- FR-003: System MUST fail closed on unresolved compliance checks. +- FR-004: System MUST log policy decision inputs, outputs, and evidence references. +- FR-005: System MUST provide module-level readiness checks for onboarding. 
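A minimal sketch of the fail-closed evaluation in FR-002/FR-003: any active module whose check is missing, failed, or unresolved blocks the action, and the decision record preserves inputs and reasons as FR-004 requires. The function name and decision shape are assumptions.

```python
def evaluate_compliance(action, active_modules, check_results):
    """Pre-action compliance decision: fail closed on any non-passing check.

    check_results maps module name -> True (pass), False (fail), or None (unresolved).
    """
    decision = {"action": action, "inputs": dict(check_results), "allowed": True, "reasons": []}
    for module in active_modules:
        result = check_results.get(module)
        if result is not True:
            # Missing, failed, and unresolved checks all block the action (FR-003).
            decision["allowed"] = False
            decision["reasons"].append(
                f"{module}: {'failed' if result is False else 'unresolved'}"
            )
    return decision


d = evaluate_compliance(
    "export_report",
    active_modules=["data-retention", "access-review"],
    check_results={"data-retention": True, "access-review": None},
)
assert d["allowed"] is False
assert d["reasons"] == ["access-review: unresolved"]
```

Treating `None` and a missing entry the same as failure is what makes the gate fail closed on uncertainty rather than defaulting open.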
+ +## Success Criteria +- SC-001: Non-compliant privileged actions are blocked 100% in policy integration tests. +- SC-002: Compliance decision logs are queryable by tenant, action, and module. +- SC-003: Pilot tenants can enable/disable modules through declared policy configuration. + +## Dependencies +- Feature 007 (Governance) +- Feature 009 (Automated Pipelines Framework) +- Feature 010 (Multi-Location Operations Module) + +## Adoption Plan +- Roll out with a baseline compliance declaration flow for pilot tenants. +- Introduce additional modules incrementally with readiness checklists. + +## ROI Metrics +- Reduction in non-compliant privileged actions reaching execution. +- Time to onboard a tenant to required compliance module set. +- Audit retrieval time for compliance decision evidence. + +## Security + MCP Governance +- Compliance checks run before privileged actions and fail closed on uncertainty. +- Decision logs must preserve inputs, outputs, and evidence references. +- Module readiness checks are required before tenant activation. diff --git a/kitty-specs/011-compliance-policy-modules/tasks.md b/kitty-specs/011-compliance-policy-modules/tasks.md new file mode 100644 index 0000000..8592a54 --- /dev/null +++ b/kitty-specs/011-compliance-policy-modules/tasks.md @@ -0,0 +1,18 @@ +# Tasks: Compliance Policy Modules + +## Work Packages +### WP01 - Compliance Contract +- Define compliance module declaration schema. +- Define per-module control requirements. + +### WP02 - Decision Engine +- Implement pre-action compliance evaluation flow. +- Integrate decision outputs with policy enforcement points. + +### WP03 - Fail-Closed + Audit +- Block privileged actions on unresolved/failed checks. +- Emit compliance decision audit records with evidence references. + +### WP04 - Pilot Rollout +- Configure pilot module set for selected tenants. +- Validate onboarding readiness checks and operational runbook. 
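The generator and validator scripts that follow assume a registry shaped roughly like this (values are illustrative, chosen to satisfy the enum checks and to match feature 008's `lifecycle_state` in its `meta.json`):

```json
{
  "updated_at": "2026-03-05T00:00:00Z",
  "features": {
    "008": {
      "lifecycle_state": "execution",
      "implementation_state": "scaffolded",
      "production_readiness": "not_ready",
      "notes": "Isolation contract defined; enforcement tests pending.",
      "provider_readiness": {
        "generation_provider": "configured",
        "voice_analyzer": "n/a"
      }
    }
  }
}
```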
diff --git a/scripts/generate-status-snippets.py b/scripts/generate-status-snippets.py new file mode 100755 index 0000000..814cf7b --- /dev/null +++ b/scripts/generate-status-snippets.py @@ -0,0 +1,77 @@ +#!/usr/bin/env python3 +"""Generate status markdown snippets from status/feature-readiness.json. + +Usage: + python scripts/generate-status-snippets.py # write files + python scripts/generate-status-snippets.py --check # fail if out of date +""" + +from __future__ import annotations + +import argparse +import json +import sys +from pathlib import Path + +ROOT = Path(__file__).resolve().parent.parent +STATUS_FILE = ROOT / "status" / "feature-readiness.json" +OUTPUT_FILE = ROOT / "status" / "generated" / "feature-table.md" + + +def load_status() -> dict: + return json.loads(STATUS_FILE.read_text()) + + +def render_table(status: dict) -> str: + rows = [] + features: dict[str, dict] = status["features"] + for fid in sorted(features.keys()): + item = features[fid] + rows.append( + f"| {fid} | {item['lifecycle_state']} | {item['implementation_state']} | " + f"{item['production_readiness']} | {item['notes']} |" + ) + + header = [ + "# Generated Feature Readiness", + "", + f"Source: `status/feature-readiness.json` (updated_at: {status['updated_at']})", + "", + "| Feature | Lifecycle | Implementation | Readiness | Notes |", + "|---|---|---|---|---|", + ] + + return "\n".join(header + rows) + "\n" + + +def main() -> int: + parser = argparse.ArgumentParser() + parser.add_argument("--check", action="store_true", help="verify generated files are up to date") + args = parser.parse_args() + + if not STATUS_FILE.exists(): + print(f"ERROR: missing status file: {STATUS_FILE}", file=sys.stderr) + return 1 + + status = load_status() + content = render_table(status) + + if args.check: + if not OUTPUT_FILE.exists(): + print(f"ERROR: missing generated file: {OUTPUT_FILE}", file=sys.stderr) + return 1 + current = OUTPUT_FILE.read_text() + if current != content: + print("ERROR: generated 
status snippets are out of date", file=sys.stderr) + return 1 + print("Generated status snippets are up to date") + return 0 + + OUTPUT_FILE.parent.mkdir(parents=True, exist_ok=True) + OUTPUT_FILE.write_text(content) + print(f"Wrote {OUTPUT_FILE}") + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/scripts/verify-status-consistency.py b/scripts/verify-status-consistency.py new file mode 100755 index 0000000..13fefec --- /dev/null +++ b/scripts/verify-status-consistency.py @@ -0,0 +1,166 @@ +#!/usr/bin/env python3 +"""Validate canonical feature readiness and lifecycle consistency. + +Checks: +1. status/feature-readiness.json structure and enum values +2. feature entries exist for every kitty-specs meta.json +3. lifecycle_state in status matches kitty-specs/*/meta.json +4. production_ready cannot coexist with placeholder/stub provider readiness +""" + +from __future__ import annotations + +import json +import re +import sys +from datetime import datetime +from pathlib import Path + +ROOT = Path(__file__).resolve().parent.parent +STATUS_FILE = ROOT / "status" / "feature-readiness.json" + +LIFECYCLE_VALUES = {"spec-only", "planning", "execution", "done", "blocked", "deprecated"} +IMPL_VALUES = {"none", "scaffolded", "integrated", "validated"} +READINESS_VALUES = {"not_ready", "pilot_ready", "production_ready"} +GEN_PROVIDER_VALUES = {"n/a", "placeholder", "configured", "validated"} +VOICE_ANALYZER_VALUES = {"n/a", "stub", "configured", "validated"} + + +def parse_json(path: Path) -> dict: + try: + return json.loads(path.read_text()) + except Exception as exc: # pragma: no cover + raise RuntimeError(f"invalid JSON at {path}: {exc}") from exc + + +def parse_iso(ts: str) -> bool: + try: + datetime.fromisoformat(ts.replace("Z", "+00:00")) + return True + except Exception: + return False + + +def collect_meta_states() -> dict[str, str]: + states: dict[str, str] = {} + for meta_path in sorted((ROOT / "kitty-specs").glob("*/meta.json")): + data 
= parse_json(meta_path) + fid = str(data.get("feature_number", "")).zfill(3) + lifecycle = str(data.get("lifecycle_state", "")) + if not re.fullmatch(r"\d{3}", fid): + raise RuntimeError(f"invalid or missing feature_number in {meta_path}") + if lifecycle not in LIFECYCLE_VALUES: + raise RuntimeError( + f"invalid lifecycle_state '{lifecycle}' in {meta_path}; expected one of {sorted(LIFECYCLE_VALUES)}" + ) + states[fid] = lifecycle + return states + + +def validate_status(status: dict, meta_states: dict[str, str]) -> list[str]: + errors: list[str] = [] + + if not isinstance(status, dict): + return ["status file root must be an object"] + + updated_at = status.get("updated_at") + features = status.get("features") + + if not isinstance(updated_at, str) or not parse_iso(updated_at): + errors.append("updated_at must be a valid ISO 8601 timestamp") + + if not isinstance(features, dict): + return errors + ["features must be an object"] + + for fid, entry in features.items(): + if not re.fullmatch(r"\d{3}", str(fid)): + errors.append(f"feature key '{fid}' is not NNN format") + continue + if not isinstance(entry, dict): + errors.append(f"feature {fid} entry must be an object") + continue + + for key in ["lifecycle_state", "implementation_state", "production_readiness", "notes"]: + if key not in entry: + errors.append(f"feature {fid} missing required key '{key}'") + + lifecycle = entry.get("lifecycle_state") + impl = entry.get("implementation_state") + readiness = entry.get("production_readiness") + notes = entry.get("notes") + + if lifecycle not in LIFECYCLE_VALUES: + errors.append(f"feature {fid} invalid lifecycle_state '{lifecycle}'") + if impl not in IMPL_VALUES: + errors.append(f"feature {fid} invalid implementation_state '{impl}'") + if readiness not in READINESS_VALUES: + errors.append(f"feature {fid} invalid production_readiness '{readiness}'") + if not isinstance(notes, str) or not notes.strip(): + errors.append(f"feature {fid} notes must be a non-empty string") + + 
provider = entry.get("provider_readiness") + if provider is not None: + if not isinstance(provider, dict): + errors.append(f"feature {fid} provider_readiness must be an object") + else: + gen = provider.get("generation_provider") + voice = provider.get("voice_analyzer") + if gen is not None and gen not in GEN_PROVIDER_VALUES: + errors.append(f"feature {fid} invalid generation_provider '{gen}'") + if voice is not None and voice not in VOICE_ANALYZER_VALUES: + errors.append(f"feature {fid} invalid voice_analyzer '{voice}'") + if readiness == "production_ready" and gen == "placeholder": + errors.append( + f"feature {fid} cannot be production_ready with generation_provider=placeholder" + ) + if readiness == "production_ready" and voice == "stub": + errors.append( + f"feature {fid} cannot be production_ready with voice_analyzer=stub" + ) + + status_ids = set(features.keys()) + meta_ids = set(meta_states.keys()) + + missing_from_status = sorted(meta_ids - status_ids) + for fid in missing_from_status: + errors.append(f"feature {fid} exists in kitty-specs meta but missing in status registry") + + missing_from_meta = sorted(status_ids - meta_ids) + for fid in missing_from_meta: + errors.append(f"feature {fid} exists in status registry but has no kitty-specs meta.json") + + for fid in sorted(status_ids & meta_ids): + status_lifecycle = features[fid].get("lifecycle_state") + if status_lifecycle != meta_states[fid]: + errors.append( + f"feature {fid} lifecycle mismatch: status={status_lifecycle} meta={meta_states[fid]}" + ) + + return errors + + +def main() -> int: + if not STATUS_FILE.exists(): + print(f"ERROR: missing status file: {STATUS_FILE}", file=sys.stderr) + return 1 + + try: + status = parse_json(STATUS_FILE) + meta_states = collect_meta_states() + except Exception as exc: + print(f"ERROR: {exc}", file=sys.stderr) + return 1 + + errors = validate_status(status, meta_states) + if errors: + print("Status consistency checks FAILED:", file=sys.stderr) + for err in 
errors: + print(f"- {err}", file=sys.stderr) + return 1 + + print("Status consistency checks PASSED") + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/spec/operations/feature-006-alerts.md b/spec/operations/feature-006-alerts.md new file mode 100644 index 0000000..81e1957 --- /dev/null +++ b/spec/operations/feature-006-alerts.md @@ -0,0 +1,46 @@ +# Feature 006 Alert Definitions + +**Scope:** `/api/mediation/*` and `/api/content/*` +**Source metrics:** `GET /api/content/metrics`, `content.operation_logs` + +## Alert Rules + +1. **Mediation 5xx Spike (critical)** +- Condition: 5xx rate > 2% over 5 minutes for `/api/mediation/sessions/:id/messages` +- Signal: `content.operation_logs.operation = 'generate'` with `success = false` +- Action: page on-call immediately + +2. **Entitlement Resolver Failures (high)** +- Condition: entitlement failure rate > 5% over 10 minutes +- Signal: `entitlements.failureRate` from `/api/content/metrics` +- Action: switch to degraded mode notice and investigate resolver upstream + +3. **Generation Latency Regression (high)** +- Condition: `generation.p95DurationMs > 5000` for 15 minutes +- Action: inspect provider latency and retrieval fan-out (`maxSources`) + +4. **Search Latency Regression (medium)** +- Condition: `search.p95DurationMs > 1500` for 15 minutes +- Action: inspect `search_vector` index health and recent sync activity + +5. **Citation Collapse (medium)** +- Condition: `generation.avgCitationCount < 1` for 30 minutes +- Action: inspect entitlement source scope and retriever output quality + +6. **Drift Risk Spike (medium)** +- Condition: `drift.profilesAboveThreshold > 0` for 2 consecutive windows +- Action: review latest drift reports and profile-specific recommendations + +## Dashboard Panels + +1. Mediation request volume and 5xx rate (5m, 1h) +2. Generation `avgDurationMs` and `p95DurationMs` +3. Search `avgDurationMs` and `p95DurationMs` +4. 
Entitlement `failureRate` and `cacheHitRate` +5. Average citation count and response length +6. Drift: monitored profiles, average drift score, profiles above threshold + +## Ownership + +- Primary: Platform on-call +- Secondary: Content infrastructure owner diff --git a/spec/runbooks/feature-006-mediation-runbook.md b/spec/runbooks/feature-006-mediation-runbook.md new file mode 100644 index 0000000..dffe649 --- /dev/null +++ b/spec/runbooks/feature-006-mediation-runbook.md @@ -0,0 +1,79 @@ +# Feature 006 Mediation Runbook + +**Applies to:** Content Infrastructure mediation flow (Feature 006) +**Routes:** `/api/mediation/*`, `/api/content/health`, `/api/content/metrics` + +## 1. Triage Checklist (first 10 minutes) + +1. Confirm endpoint health: +```bash +curl -sS "$BASE_URL/api/content/health" | jq +curl -sS "$BASE_URL/api/content/metrics" | jq +``` +2. Confirm provider wiring is safe: +- `generationProvider` must not be `placeholder` in production. +- if `driftMonitoringEnabled=true`, `voiceAnalyzer` must not be `stub`. +3. Confirm auth failure pattern: +- spikes in `missing_api_key`, `invalid_api_key`, `missing_user_token`, `invalid_user_token` indicate integration/client issue. +4. Confirm entitlement degradation: +- check entitlement failure rate from `/api/content/metrics`. + +## 2. Structured Log Fields to Inspect + +Every mediation event should include: +- `requestId` +- `tenantId` +- `sessionId` +- `profileId` +- `userId` +- `event` +- `timestamp` + +Use these to trace one failing request end-to-end. + +## 3. Common Failure Signatures + +1. `mediation.message.session_not_found` +- Cause: closed/invalid session id or cross-tenant/user/key access. +- Response: verify caller tenant/user/API key tuple matches session owner. + +2. `mediation.message.failed` with entitlement errors +- Cause: resolver outage or timeout. +- Response: verify resolver URL/key; system should fail closed to restricted entitlements. + +3. 
`mediation.message.failed` with generation provider errors +- Cause: model provider latency/outage. +- Response: verify provider URL, auth, timeout budget; reduce source fan-out if needed. + +## 4. DB Diagnostic Queries + +```sql +-- Recent failed generate operations +select created_at, tenant_id, operation, success, duration_ms, metadata +from content.operation_logs +where operation = 'generate' + and created_at > now() - interval '1 hour' +order by created_at desc; + +-- Entitlement resolver failures +select created_at, tenant_id, success, duration_ms, metadata +from content.operation_logs +where operation = 'resolve' + and created_at > now() - interval '1 hour' +order by created_at desc; +``` + +## 5. Mitigation and Rollback + +1. If drift service is unstable: set `CONTENT_DRIFT_ENABLED=false` and redeploy. +2. If generation provider is unstable: fail over provider endpoint/config and redeploy. +3. If resolver is unstable: keep fail-closed posture, communicate degraded response quality. +4. If index degradation is detected: execute index maintenance plan before re-enabling full load. + +## 6. Exit Criteria + +Incident can be closed when: +1. Mediation 5xx rate is below threshold for 30 minutes. +2. Entitlement failure rate is back within SLO. +3. Generation/search p95 latencies are within baseline. +4. No unresolved critical alerts remain. diff --git a/spec/runbooks/feature-006-staging-cutover.md b/spec/runbooks/feature-006-staging-cutover.md new file mode 100644 index 0000000..8644ecb --- /dev/null +++ b/spec/runbooks/feature-006-staging-cutover.md @@ -0,0 +1,90 @@ +# Feature 006 Staging Cutover Command Sheet + +**Scope:** staging validation + rollback rehearsal for Feature 006 (`content` paths) +**Precondition:** PR #2 and PR #3 are merged to `main`. + +## 1. Preflight + +1. Select release SHA and export context: +```bash +export RELEASE_SHA="" +export BASE_URL="https://" +export DATABASE_URL="postgresql://:@:5432/" +``` +2. 
Ensure required tools exist on runner/jumpbox: `git`, `npm`, `psql`, `curl`. +3. Optional for full mediation happy-path smoke: +```bash +export MEDIATION_API_KEY="" +export MEDIATION_BEARER_TOKEN="" +export MEDIATION_PROFILE_ID="" +``` + +## 2. Deploy Candidate + +```bash +git fetch origin +git checkout "$RELEASE_SHA" +cd joyus-ai-mcp-server +npm ci +npm run build +``` + +Deploy using your staging release mechanism (container rollout/systemd/etc). + +## 3. Apply/Verify Schema State + +1. Run schema sync for current model: +```bash +cd joyus-ai-mcp-server +npm run db:push +``` +2. Confirm content schema objects + query plan: +```bash +cd .. +./deploy/scripts/feature-006-search-vector-check.sh +``` +3. Optional single-command rehearsal (migration + search_vector check + optional smoke + optional rollback): +```bash +./deploy/scripts/feature-006-staging-rehearsal.sh +``` + +## 4. Smoke Test Staging + +```bash +./deploy/scripts/feature-006-smoke.sh +``` + +Minimum pass conditions: +- `/api/content/health` returns 200 +- `/api/content/metrics` returns 200 +- `/api/mediation/health` returns 200 +- auth negative-path checks return expected 401 codes +- if credentials provided: create session, send message, close session all succeed + +## 5. Rollback Rehearsal + +1. Capture current stable SHA before release: +```bash +export PREVIOUS_SHA="" +``` +2. Trigger rollback to previous SHA using same staging deploy mechanism. +3. Re-run smoke: +```bash +./deploy/scripts/feature-006-smoke.sh +``` +4. Record rollback time-to-recovery and any manual actions required. + +## 6. Evidence to Attach to Release Record + +- Output from `feature-006-search-vector-check.sh` +- Output from `feature-006-smoke.sh` +- Release SHA + rollback SHA +- Any query-plan anomalies and remediation notes + +## 7. Exit Criteria + +Feature 006 staging cutover is complete when: +1. Schema sync succeeded without manual hotfix SQL. +2. Search-vector check passes with valid plan output. +3. Smoke script passes. 
+4. Rollback rehearsal succeeds and service health is restored. diff --git a/status/feature-readiness.json b/status/feature-readiness.json new file mode 100644 index 0000000..767837a --- /dev/null +++ b/status/feature-readiness.json @@ -0,0 +1,75 @@ +{ + "updated_at": "2026-03-06T00:55:00Z", + "features": { + "001": { + "lifecycle_state": "execution", + "implementation_state": "integrated", + "production_readiness": "not_ready", + "notes": "MCP deployment is active work; deployment hardening and rollout evidence still required." + }, + "002": { + "lifecycle_state": "done", + "implementation_state": "validated", + "production_readiness": "pilot_ready", + "notes": "Session/context management is shipped with completed work packages and integration tests." + }, + "003": { + "lifecycle_state": "spec-only", + "implementation_state": "none", + "production_readiness": "not_ready", + "notes": "Umbrella architecture spec; decomposed execution is carried by downstream features." + }, + "004": { + "lifecycle_state": "done", + "implementation_state": "validated", + "production_readiness": "pilot_ready", + "notes": "Workflow enforcement shipped and is consumable by human and automated pipeline paths." + }, + "005": { + "lifecycle_state": "done", + "implementation_state": "validated", + "production_readiness": "pilot_ready", + "notes": "Content intelligence foundation shipped with profile and fidelity capabilities." + }, + "006": { + "lifecycle_state": "done", + "implementation_state": "integrated", + "production_readiness": "not_ready", + "provider_readiness": { + "generation_provider": "placeholder", + "voice_analyzer": "stub" + }, + "notes": "Content infrastructure has deterministic content-schema migration, local rollback rehearsal, and local real-provider smoke evidence; named staging migration/smoke records and staging soak evidence are still required before pilot_ready promotion." 
+ }, + "007": { + "lifecycle_state": "planning", + "implementation_state": "scaffolded", + "production_readiness": "not_ready", + "notes": "Governance plan/tasks are now execution-grade and moving into CI enforcement rollout." + }, + "008": { + "lifecycle_state": "execution", + "implementation_state": "integrated", + "production_readiness": "not_ready", + "notes": "WP01/WP02 enforcement is active and WP03 has started with profile ingestion queueing, retries, and backpressure metrics; production load validation is still pending." + }, + "009": { + "lifecycle_state": "execution", + "implementation_state": "integrated", + "production_readiness": "not_ready", + "notes": "Core pipeline stage contract and orchestration engine (policy checks, retry, timeout, cancel, queue backpressure) are implemented; pilot workflows and governance wiring remain." + }, + "010": { + "lifecycle_state": "planning", + "implementation_state": "none", + "production_readiness": "not_ready", + "notes": "Multi-location operations module scope established at spec/plan/tasks level." + }, + "011": { + "lifecycle_state": "planning", + "implementation_state": "none", + "production_readiness": "not_ready", + "notes": "Compliance policy modules scope established at spec/plan/tasks level." 
+ } + } +} diff --git a/status/feature-readiness.schema.json b/status/feature-readiness.schema.json new file mode 100644 index 0000000..1130e20 --- /dev/null +++ b/status/feature-readiness.schema.json @@ -0,0 +1,67 @@ +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "type": "object", + "required": ["updated_at", "features"], + "properties": { + "updated_at": { + "type": "string", + "format": "date-time" + }, + "features": { + "type": "object", + "patternProperties": { + "^[0-9]{3}$": { + "type": "object", + "required": [ + "lifecycle_state", + "implementation_state", + "production_readiness", + "notes" + ], + "properties": { + "lifecycle_state": { + "type": "string", + "enum": [ + "spec-only", + "planning", + "execution", + "done", + "blocked", + "deprecated" + ] + }, + "implementation_state": { + "type": "string", + "enum": ["none", "scaffolded", "integrated", "validated"] + }, + "production_readiness": { + "type": "string", + "enum": ["not_ready", "pilot_ready", "production_ready"] + }, + "provider_readiness": { + "type": "object", + "properties": { + "generation_provider": { + "type": "string", + "enum": ["n/a", "placeholder", "configured", "validated"] + }, + "voice_analyzer": { + "type": "string", + "enum": ["n/a", "stub", "configured", "validated"] + } + }, + "additionalProperties": false + }, + "notes": { + "type": "string", + "minLength": 1 + } + }, + "additionalProperties": false + } + }, + "additionalProperties": false + } + }, + "additionalProperties": false +} diff --git a/status/generated/feature-table.md b/status/generated/feature-table.md new file mode 100644 index 0000000..cdafe2c --- /dev/null +++ b/status/generated/feature-table.md @@ -0,0 +1,17 @@ +# Generated Feature Readiness + +Source: `status/feature-readiness.json` (updated_at: 2026-03-06T00:55:00Z) + +| Feature | Lifecycle | Implementation | Readiness | Notes | +|---|---|---|---|---| +| 001 | execution | integrated | not_ready | MCP deployment is active work; deployment 
hardening and rollout evidence still required. | +| 002 | done | validated | pilot_ready | Session/context management is shipped with completed work packages and integration tests. | +| 003 | spec-only | none | not_ready | Umbrella architecture spec; decomposed execution is carried by downstream features. | +| 004 | done | validated | pilot_ready | Workflow enforcement shipped and is consumable by human and automated pipeline paths. | +| 005 | done | validated | pilot_ready | Content intelligence foundation shipped with profile and fidelity capabilities. | +| 006 | done | integrated | not_ready | Content infrastructure has deterministic content-schema migration, local rollback rehearsal, and local real-provider smoke evidence; named staging migration/smoke records and staging soak evidence are still required before pilot_ready promotion. | +| 007 | planning | scaffolded | not_ready | Governance plan/tasks are now execution-grade and moving into CI enforcement rollout. | +| 008 | execution | integrated | not_ready | WP01/WP02 enforcement is active and WP03 has started with profile ingestion queueing, retries, and backpressure metrics; production load validation is still pending. | +| 009 | execution | integrated | not_ready | Core pipeline stage contract and orchestration engine (policy checks, retry, timeout, cancel, queue backpressure) are implemented; pilot workflows and governance wiring remain. | +| 010 | planning | none | not_ready | Multi-location operations module scope established at spec/plan/tasks level. | +| 011 | planning | none | not_ready | Compliance policy modules scope established at spec/plan/tasks level. |
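
The generated table above is a mechanical projection of `status/feature-readiness.json`, which the PR checklist requires regenerating whenever the registry changes. A minimal generator sketch follows — hypothetical, since the repository's actual regeneration command is not shown in this diff; only the two file paths and the table layout are taken from the table header itself:

```python
#!/usr/bin/env python3
"""Sketch: regenerate status/generated/feature-table.md from the registry.

Assumption: this is an illustrative stand-in, not the repo's canonical
generator script (which is not part of this diff).
"""
import json
from pathlib import Path

STATUS_FILE = Path("status/feature-readiness.json")
OUTPUT_FILE = Path("status/generated/feature-table.md")


def render_table(status: dict) -> str:
    # Header mirrors the committed status/generated/feature-table.md layout.
    lines = [
        "# Generated Feature Readiness",
        "",
        f"Source: `status/feature-readiness.json` (updated_at: {status['updated_at']})",
        "",
        "| Feature | Lifecycle | Implementation | Readiness | Notes |",
        "|---|---|---|---|---|",
    ]
    # Emit one row per feature, in NNN key order.
    for fid in sorted(status["features"]):
        entry = status["features"][fid]
        lines.append(
            f"| {fid} | {entry['lifecycle_state']} | {entry['implementation_state']} "
            f"| {entry['production_readiness']} | {entry['notes']} |"
        )
    return "\n".join(lines) + "\n"


def main() -> None:
    status = json.loads(STATUS_FILE.read_text(encoding="utf-8"))
    OUTPUT_FILE.parent.mkdir(parents=True, exist_ok=True)
    OUTPUT_FILE.write_text(render_table(status), encoding="utf-8")


if __name__ == "__main__":
    # Guarded so the sketch is a no-op outside a checkout with the registry.
    if STATUS_FILE.exists():
        main()
```

Because the table is fully derived, the Status Consistency workflow could also diff a fresh render against the committed file and fail the build on any drift, which keeps the checklist item enforceable rather than honor-system.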