SuperMarks now targets a Cloudflare-plus-Render stack with a local-first development loop.
- Phase 1 (active): run local backend + local frontend for iteration and verification.
- Phase 2: use Render-hosted backend checks for release verification when needed.
- Phase 3 (current hosted target): host the frontend on Cloudflare Pages, run the backend on Render, use a Cloudflare Worker for the D1 bridge, use Cloudflare D1 for metadata, and use Cloudflare R2 for uploaded files.
Today, normal development still runs locally first. The hosted direction is Cloudflare for the frontend/data edge and Render for FastAPI compute.
- `backend/`: FastAPI + SQLModel API with local-first development and a Render + Cloudflare D1 bridge hosted target.
- `frontend/`: Vite + React SPA hosted statically on Cloudflare Pages.
This repository is locked to Strategy B: direct backend API calls only.
- The frontend must call the backend using `VITE_API_BASE_URL`:
  - `/api` for local Vite dev (proxied to `http://127.0.0.1:8000`)
  - `https://<backend>/api` for the hosted backend
- No frontend `/api` proxy functions.
- Do not add frontend `/api` rewrites.
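The actual frontend is TypeScript; the Python sketch below only illustrates the Strategy B contract: the same client code works with both base values because the base URL, not the call sites, changes between dev and hosted. The `backend.example` domain is a placeholder, not a real deployment.

```python
def api_url(base: str, path: str) -> str:
    """Join a VITE_API_BASE_URL-style base with an endpoint path.

    Works for both a relative dev base ("/api", proxied by Vite)
    and an absolute hosted base ("https://<backend>/api").
    """
    return base.rstrip("/") + "/" + path.lstrip("/")

# Local dev: Vite proxies this relative URL to http://127.0.0.1:8000
print(api_url("/api", "exams/42/export.csv"))
# -> /api/exams/42/export.csv

# Hosted: the same call site hits the backend directly
print(api_url("https://backend.example/api", "/exams/42/export.csv"))
# -> https://backend.example/api/exams/42/export.csv
```

This is why no frontend proxy functions or rewrites are needed: swapping the env value is the entire switch.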
See docs/ARCHITECTURE.md for guardrails and docs/EXPERIMENTATION.md for the future A/B testing reference.
- `VITE_API_BASE_URL=/api` in local development.
- `VITE_API_BASE_URL=https://<backend-domain>/api` in Cloudflare Pages deployments.
- `VITE_BACKEND_API_KEY=<backend-api-key>` (optional, only for API-key admin/testing mode)
- `VITE_APP_VERSION=<git-sha-or-release-tag>` (optional, shown in UI diagnostics)
- `BACKEND_API_KEY=<backend-api-key>` (used for API-key admin mode and bridge auth)
- `SUPERMARKS_CORS_ALLOW_ORIGINS=https://<cloudflare-pages-frontend-domain>`
- `APP_VERSION=<git-sha-or-build-id>` (optional, served by `GET /version`)
- `SUPERMARKS_REPOSITORY_BACKEND=d1-bridge` (hosted Cloudflare default)
- `SUPERMARKS_STORAGE_BACKEND=s3`
- `SUPERMARKS_S3_ENDPOINT_URL=https://<account-id>.r2.cloudflarestorage.com`
- `SUPERMARKS_S3_BUCKET=<r2-bucket-name>`
- `SUPERMARKS_S3_ACCESS_KEY_ID=<r2-access-key-id>`
- `SUPERMARKS_S3_SECRET_ACCESS_KEY=<r2-secret-access-key>`
- `SUPERMARKS_S3_REGION=auto`
- `SUPERMARKS_S3_PUBLIC_BASE_URL=https://<public-r2-domain>` (optional)
- `SUPERMARKS_AUTH_SESSION_SECRET=<strong-random-secret>`
- `SUPERMARKS_AUTH_ALLOWED_RETURN_ORIGINS=https://<cloudflare-pages-frontend-domain>,http://localhost:5173`
- `SUPERMARKS_MAGIC_LINK_LOGIN_ENABLED=1`
- `SUPERMARKS_EMAIL_PROVIDER=log` or `resend`
- `SUPERMARKS_DEV_LOGIN_ENABLED=1` (optional hidden browser-testing login)
- `SUPERMARKS_DEV_LOGIN_KEY=<developer-testing-passphrase>` (optional hidden browser-testing login)
- `SUPERMARKS_ALLOW_PRODUCTION_SQLITE=1` (supported for self-hosted low-cost production on your own machine)
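Two of these values (`SUPERMARKS_CORS_ALLOW_ORIGINS` and `SUPERMARKS_AUTH_ALLOWED_RETURN_ORIGINS`) are comma-separated origin lists. The backend's real parser isn't shown here; this is only a sketch of the shape such parsing presumably takes, with a hypothetical Pages domain:

```python
import os

def parse_allowed_origins(raw: str) -> list[str]:
    """Split a comma-separated origins env value into a clean list,
    dropping empty entries, surrounding whitespace, and trailing slashes."""
    return [o.strip().rstrip("/") for o in raw.split(",") if o.strip()]

# Hypothetical value; use your real Pages domain in production.
os.environ["SUPERMARKS_CORS_ALLOW_ORIGINS"] = (
    "https://supermarks.pages.dev, http://localhost:5173"
)
origins = parse_allowed_origins(os.environ["SUPERMARKS_CORS_ALLOW_ORIGINS"])
print(origins)
# -> ['https://supermarks.pages.dev', 'http://localhost:5173']
```

The practical point: a stray space or trailing slash in the env value is a common cause of CORS failures, so normalize (or avoid) both when setting these values.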
Backend (uv-based, preferred on this machine):

```
./scripts/dev-backend.sh
```

Frontend:

```
./scripts/dev-frontend.sh
```

Verification:

```
./scripts/verify-local.sh
```

- Backend: `http://localhost:8000`
- Frontend: `http://localhost:5173`
Set the frontend `.env` with backend values:

```
VITE_API_BASE_URL=/api
VITE_BACKEND_API_KEY=<your-local-key>
```

There are now two local hosting modes:
Use this only for active iteration:

```
./scripts/host-supermarks.sh
```

Notes:
- The frontend stays on Vite, so frontend code changes hot-reload automatically.
- The backend stays on `uvicorn --reload`, so backend code changes restart automatically.
- `frontend/.env.local` can stay on `VITE_API_BASE_URL=/api` because Vite proxies `/api` to the backend.
Use this for reboot-safe public hosting from your machine:

```
./scripts/prepare-local-prod.sh
./scripts/install-supermarks-service.sh
./scripts/verify-local-prod.sh
```

To make this system-level (restarts automatically on reboot), run this one command:

```
sudo /home/graham/repos/SuperMarks/scripts/install-supermarks-service.sh --system
```

After that, future reconnects are one short command. The shortest alias is:

```
smarks
```

The readable alias:

```
supermarks-reconnect
```

What this does:
- builds the frontend once to `frontend/dist`
- runs one backend service on port `8000`
- serves the built SPA directly from the backend
- re-applies Tailscale Funnel to the backend on boot
- avoids `vite dev`, `npm install`, and `uvicorn --reload` in the runtime boot path
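For orientation only, a reboot-safe setup of this kind typically boils down to a systemd unit along these lines. This is a hypothetical shape, not the repo's actual unit: `scripts/install-supermarks-service.sh` is canonical, and the `app.main:app` module path and `.venv` location are assumptions.

```ini
# Hypothetical sketch only — install-supermarks-service.sh is canonical.
[Unit]
Description=SuperMarks backend (serves built SPA from frontend/dist)
After=network-online.target
Wants=network-online.target

[Service]
WorkingDirectory=/home/graham/repos/SuperMarks/backend
# Note: no vite dev, npm install, or uvicorn --reload in the boot path.
ExecStart=/home/graham/repos/SuperMarks/backend/.venv/bin/uvicorn app.main:app --host 127.0.0.1 --port 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The design point the unit illustrates: boot runs one pre-built service, so a reboot never depends on package installs or dev-mode watchers succeeding.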
Use this only when you intentionally need local public verification. It is not the canonical hosted path or the normal rollback path.
Backend setup:
- copy `.env.hosted-local.example` to `backend/.env.hosted-local`
- fill in:
  - the D1 bridge URL/token
  - R2 credentials
  - `BACKEND_API_KEY`
  - the production Pages origin in `SUPERMARKS_CORS_ALLOW_ORIGINS`
- install or refresh the reboot-safe backend service:

  ```
  sudo /home/graham/repos/SuperMarks/scripts/install-supermarks-service.sh --system
  ```

- verify the local API runtime:

  ```
  ./scripts/verify-local-prod.sh http://127.0.0.1:8000
  ```

Frontend setup:
- point the Cloudflare Pages production `VITE_API_BASE_URL` at the machine's public Tailscale Funnel URL, ending in `/api`
- keep preview frontends unsupported in this temporary mode
- Frontend build passes locally
- Backend tests run locally under `uv`
- Current backend failures are concentrated in blob-mock behavior and answer-key parser dependency injection, not broad repo instability
This is Phase 3 and should only be used once the backend is deployed on Render.
- Host `frontend/` as a static SPA on Cloudflare Pages.
- Build command: `npm run build`
- Output directory: `dist`
- Keep `VITE_API_BASE_URL` pointed at the Render backend URL, ending in `/api`.
- Hosted previews are only usable when that backend URL is reachable and allowed by backend CORS.
- SPA fallback routing is provided by `frontend/public/_redirects`.
- `frontend/wrangler.toml` is the canonical hosted frontend config.
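The repo's `frontend/public/_redirects` is the source of truth; its exact contents aren't reproduced here. For reference, the conventional Cloudflare Pages SPA fallback is a single catch-all rule:

```
/*  /index.html  200
```

This serves `index.html` with a 200 status for any path that doesn't match a static asset, so client-side routes survive a hard refresh or a direct link.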
- Normal hosted sign-in uses magic link.
- Current hosted email delivery is `log` mode, so links are emitted to Render logs instead of sent by email.
- Google/Apple OIDC are not enabled in the current hosted configuration.
- A hidden developer login exists for browser automation/testing when `SUPERMARKS_DEV_LOGIN_ENABLED=1`.
Cloudflare Pages is the canonical hosted frontend; Render is the canonical hosted FastAPI backend; Cloudflare Workers host the D1 bridge; Cloudflare D1 is the canonical hosted metadata store; and Cloudflare R2 is the canonical durable object store. When you are ready to ship:
- deploy the D1 bridge Worker from `backend/` with Wrangler
- deploy the FastAPI backend on Render from `render.yaml` with `SUPERMARKS_D1_BRIDGE_URL` pointed at that Worker
- set backend secrets for `BACKEND_API_KEY`, `SUPERMARKS_D1_BRIDGE_TOKEN`, CORS, and R2 credentials
- point the frontend `VITE_API_BASE_URL` at the Render backend URL or custom domain
- Start the backend: `./scripts/dev-backend.sh`
- Start the frontend: `./scripts/dev-frontend.sh`
- Verify the local stack: `./scripts/verify-local.sh`
- Confirm in DevTools that browser calls go to `http://localhost:8000/api` (or `/api` with the Vite proxy).
For day-to-day product polish, do not use production deploys as the feedback loop.
- Run the frontend locally with Vite for fast UI iteration.
- Run the backend locally for normal development, or point the local frontend at the hosted backend only when you specifically need to verify hosted behavior.
- Use the local/Tailscale-hosted flow for browser and mobile checks while iterating.
- Batch related UI fixes together, then do one production deploy after the slice is actually ready.
Practical rule:
- local build + browser smoke first
- production redeploy only for release-ready batches, not for each small tweak
For easier deploy verification, set `VITE_APP_VERSION` on the Cloudflare Pages deployment.
Cloudflare-side D1 bridge hosting now lives under `backend/wrangler.toml` and `backend/cloudflare/index.js`. The canonical hosted backend deployment spec is `render.yaml`.
One-time setup:
```
cd backend
npm install
wrangler login
```

Set Worker secrets and deploy the D1 bridge:

```
wrangler secret put SUPERMARKS_D1_BRIDGE_TOKEN
./scripts/deploy-cloudflare-d1-bridge.sh
```

Non-secret defaults and local Wrangler examples live in `backend/.dev.vars.example`.
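For orientation, a D1 bridge Worker config generally has this shape. This is an illustrative sketch, not the repo's actual file (`backend/wrangler.toml` is canonical); the Worker name, database name, and compatibility date here are assumptions.

```toml
# Illustrative sketch only — backend/wrangler.toml is canonical.
name = "supermarks-d1-bridge"        # hypothetical Worker name
main = "cloudflare/index.js"
compatibility_date = "2024-01-01"    # placeholder date

# D1 binding: the database is exposed to the Worker as env.DB
[[d1_databases]]
binding = "DB"
database_name = "supermarks"         # hypothetical database name
database_id = "<d1-database-id>"
```

The `[[d1_databases]]` binding is what lets the Worker-side bridge query D1 on behalf of the Render-hosted backend.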
Render backend setup:

```
render blueprints validate
```

Then create or sync the `supermarks-backend` web service from `render.yaml` and load the hosted env values from `backend/.env.render.example`.
- Cloudflare R2 stores uploaded files/binaries in the hosted direction.
- Cloudflare D1 stores hosted metadata (exams, questions, key files, submissions, pages, parse jobs) through the Worker-side bridge.
- Hosted production should use R2 plus D1.
- Self-hosted low-cost production can use local files plus SQLite on disk instead.
For 0–10 users, the cheapest practical path is:
- frontend and/or backend hosted from your own machine
- backend on SQLite
- backend file storage on local disk
- public reachability through the existing Tailscale-hosted flow
Recommended env shape for that mode:

```
SUPERMARKS_ENV=production
SUPERMARKS_ALLOW_PRODUCTION_SQLITE=1
SUPERMARKS_STORAGE_BACKEND=local
SUPERMARKS_SERVE_FRONTEND=1
SUPERMARKS_DATA_DIR=/absolute/path/to/supermarks-data
SUPERMARKS_SQLITE_PATH=/absolute/path/to/supermarks-data/supermarks.db
```

Backup helper:

```
./scripts/backup-supermarks.sh
```

Reboot-safe verification:

```
./scripts/verify-local-prod.sh
```

The active product wedge now runs:
- parse answer key
- review and confirm parsed data, including direct flagged-page drilldown and real single-page parse retry from the exam workspace without reprocessing already-good pages
- prepare a submission for marking, with stale-vs-missing asset detection after template changes
- mark inside the teacher workspace, with auto-recovery blocked when teacher manual marking has already started on questions that would need rebuilt assets
- monitor exam-level marking progress from the exam dashboard
- download or share the class results table
Each submission now exposes a dashboard workflow status:
- `blocked`: preparation is missing or cannot proceed automatically
- `ready`: prepared and ready for teacher marking
- `in_progress`: the teacher has started manual marking but has not finished all questions
- `complete`: every question has a teacher-entered mark
The dashboard also shows the flagged-question count, running total, per-objective totals, and a teacher-facing class results/reporting table that makes export readiness clear per student. Objective rows surface incomplete blockers and the weakest complete result together, with explicit open-first/then-review wording, so teachers can see what to open first when both kinds of follow-up exist.
Current teacher-facing exports include:
- `GET /api/exams/{exam_id}/export.csv` — full-detail class CSV with totals, objectives, and per-question marks
- `GET /api/exams/{exam_id}/export-summary.csv` — one row per student with export posture, next return point, and reporting attention
- `GET /api/exams/{exam_id}/export-objectives-summary.csv` — one row per objective with class coverage and strongest/weakest complete results
- `GET /api/exams/{exam_id}/export-student-summaries.zip` — zip package with a top-level `README.txt`, a class `manifest.csv`, and, per student, a teacher-readable `summary.txt`, a printable `summary.html`, and question-level evidence files (`evidence/README.txt`, `evidence/manifest.csv`, answer crops, and transcription text/JSON when available)
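The exact columns of `export-summary.csv` aren't specified in this document. As a sketch of the one-row-per-student shape using the column ideas named above, where every column name and row value here is hypothetical:

```python
import csv
import io

# Hypothetical rows; the real endpoint derives these from marking state.
students = [
    {"student": "A. Bell", "export_posture": "ready",
     "next_return_point": "-", "reporting_attention": "none"},
    {"student": "C. Dana", "export_posture": "blocked",
     "next_return_point": "q3 manual mark", "reporting_attention": "flagged"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(students[0]))
writer.writeheader()
writer.writerows(students)
print(buf.getvalue())
```

The design intent this mirrors: the summary export answers "can I report this student yet, and if not, where do I return to?" in one row, leaving per-question detail to `export.csv`.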
In the current frontend UI, the export card exposes:

- Download table
- Share table

The CSV endpoints still exist on the backend, but the primary teacher-facing UI is now table download/share instead of a visible CSV-vs-Excel picker.