End-to-end self-hosting guide. The whole stack — Postgres, PostgREST, the FastAPI backend, the static frontend, and a reverse proxy with TLS — runs on a single Docker host. Anything that runs Docker works: a small VPS, a home server, a spare laptop.
If you only want a high-level pitch, see README.md. This file is the operational walkthrough.
- Prerequisites
- Clone the repo
- Generate secrets and write the env files
- Bring up the stack
- Build and serve the frontend
- Put it on a public domain (optional, required for hosted Claude clients)
- Connect an MCP client
- Day-2 operations
- Troubleshooting
## Prerequisites

On the box that will run the stack:
- Docker ≥ 24 with Compose v2.30+ (we rely on the `format: raw` env-file flag)
- Bash + standard GNU tools (`openssl`, `curl`, `sed`, `grep`)
On a workstation (which can be the same box) for building the frontend:
- Node 20+ and pnpm (`corepack enable && corepack prepare pnpm@latest --activate`)
That's it. No Python on the host — the backend builds inside Docker.
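Since the Compose version floor matters (`format: raw` only exists in newer Compose), a quick pre-flight check can save a confusing deploy. A sketch, using GNU `sort -V` for the version comparison (the `version_ge` helper is ours, not part of the repo):

```shell
# version_ge A B: succeeds when version A >= version B (GNU sort -V compares).
version_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

# docker compose version --short prints e.g. "2.31.0" (sometimes "v2.31.0").
have=$(docker compose version --short 2>/dev/null | sed 's/^v//')
have=${have:-0}
if version_ge "$have" 2.30.0; then
  echo "compose $have is new enough"
else
  echo "compose $have is too old; need >= 2.30" >&2
fi
```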
## Clone the repo

```shell
git clone https://github.com/openstudy-dev/OpenStudy /opt/openstudy
cd /opt/openstudy
```

`/opt/openstudy` is the conventional install path used in the rest of this guide. Anywhere else works; just adjust the paths below.
## Generate secrets and write the env files

OpenStudy reads two env files, both kept out of git:

- `.env` — application secrets used by the FastAPI container (login password hash, session secret, optional integrations).
- `.env.docker` — Postgres credentials only, used to spin up the database and PostgREST containers.
Create them:
```shell
# .env — copy the template, then fill in the placeholders
cp .env.example .env

# Argon2id-hash a login password you'll use to sign in
# (-i keeps stdin open so input() can read the password)
docker run --rm -i python:3.12-slim sh -c \
  'pip install -q argon2-cffi && python -c "from argon2 import PasswordHasher; print(PasswordHasher().hash(input(\"password: \")))"'
# → paste the resulting hash as APP_PASSWORD_HASH in .env

# Generate a session-cookie signing secret
python3 -c 'import secrets; print(secrets.token_urlsafe(48))'
# → paste as SESSION_SECRET in .env

# Generate a strong Postgres password and write the docker-only env file
cat > .env.docker <<EOF
POSTGRES_USER=openstudy
POSTGRES_PASSWORD=$(openssl rand -hex 24)
POSTGRES_DB=openstudy
EOF
```
```shell
chmod 600 .env .env.docker
```

The other variables in `.env` are optional and document themselves — Telegram bot credentials, the internal API secret used by webhooks, the public URL the app advertises in OAuth flows.
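A blank placeholder is easy to miss before the first deploy. A small ad-hoc check (key names taken from this guide; `require_keys` is our helper, not part of deploy.sh) flags absent or empty values:

```shell
# require_keys FILE KEY... — print any key that is absent or empty in FILE.
require_keys() {
  file=$1; shift
  for k in "$@"; do
    grep -q "^${k}=." "$file" 2>/dev/null || echo "missing $k in $file"
  done
}

require_keys .env APP_PASSWORD_HASH SESSION_SECRET
require_keys .env.docker POSTGRES_USER POSTGRES_PASSWORD POSTGRES_DB
```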
## Bring up the stack

```shell
./deploy.sh
```

`deploy.sh` does, in order:
1. Pre-flight (compose validation, disk space, env-file presence).
2. Tags the currently running image as `openstudy:previous` so it can roll back.
3. Builds a fresh `openstudy:latest` image from the `Dockerfile`.
4. Starts the Postgres container, waits for `pg_isready`.
5. Runs `scripts/run_migrations.py` against Postgres (idempotent — re-applies nothing that's already been recorded in the `_migrations` table).
6. Starts the PostgREST container, then the openstudy container.
7. Polls `GET http://127.0.0.1:8000/api/health` for up to 60 s.
   - Pass: prunes dangling images, exits 0.
   - Fail: re-tags `:previous` as `:latest`, recreates the container, waits for the previous build's health to come back, exits non-zero.
Verify:
```shell
curl http://127.0.0.1:8000/api/health
# → {"ok":true,"version":"...","db":"ok","storage":"ok ..."}
```

You should also see three running containers (openstudy, openstudy-postgres, openstudy-postgrest) when you run `./deploy.sh --status`.
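When scripting around the stack it's handy to reproduce the same 60-second poll deploy.sh performs. A sketch (the `wait_healthy` name is ours, not deploy.sh's):

```shell
# wait_healthy URL [TIMEOUT_SEC] — poll until the health endpoint reports
# "ok":true, giving up after TIMEOUT_SEC (default 60, matching deploy.sh).
wait_healthy() {
  url=$1; deadline=$(( $(date +%s) + ${2:-60} ))
  while [ "$(date +%s)" -le "$deadline" ]; do
    curl -fsS "$url" 2>/dev/null | grep -q '"ok":true' && return 0
    sleep 2
  done
  return 1
}
# usage: wait_healthy http://127.0.0.1:8000/api/health && echo healthy
```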
If you're moving from another deployment (or restoring a backup), apply your data dump after step 5 but before bringing up the openstudy container:
```shell
docker compose --env-file .env.docker up -d postgres
docker compose --env-file .env.docker run --rm --no-deps openstudy \
  uv run --no-sync python scripts/run_migrations.py
docker exec -i openstudy-postgres psql -U openstudy -d openstudy < your-data.sql
./deploy.sh --skip-build   # then continue with the normal deploy
```

## Build and serve the frontend

The frontend builds and runs as a container too — `./deploy.sh` already brought it up alongside the backend. It listens on 127.0.0.1:8080 and internally proxies API traffic to the openstudy container, so the rest of your stack only needs to talk to one port.
To customise the build (your domain in canonical/OG tags, your site name in the manifest), set these in `.env.docker` before deploying:

```shell
PUBLIC_SITE_URL=https://your-domain.tld
PUBLIC_SITE_NAME=Your Name
PUBLIC_SHOW_LANDING=false
```

Then `./deploy.sh` rebuilds the frontend image with your values baked in.
For local frontend development (Vite hot-reload, faster iteration):

```shell
cd web
echo "VITE_API_BASE_URL=http://localhost:8000" > .env.local
pnpm install
pnpm dev   # → http://localhost:5173, hot-reloads against the dockerised backend
```

## Put it on a public domain (optional, required for hosted Claude clients)

Required if you want Claude.ai or the Claude iOS app to reach the MCP endpoint. Skip for Claude-Code-only setups (`http://localhost:8080/mcp` works fine).
The example below uses Caddy because it's two lines for automatic TLS via Let's Encrypt. Nginx + Certbot, Traefik, anything else works the same way.
`/etc/caddy/Caddyfile`:

```
your-domain.tld {
    encode gzip
    reverse_proxy 127.0.0.1:8080 {
        flush_interval -1   # required for /mcp Streamable HTTP
    }
}
```
That's it. The host's outer Caddy terminates TLS and hands everything off to the frontend container, which then routes static assets locally and backend traffic to the openstudy service on the internal docker network.
Reload Caddy (`sudo systemctl reload caddy`) and visit https://your-domain.tld — TLS gets provisioned automatically on the first hit.
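If you'd rather run Nginx + Certbot, the equivalent server block looks roughly like this (a sketch; the key detail is disabling proxy buffering, which plays the same role as Caddy's `flush_interval -1` for /mcp streaming responses):

```nginx
server {
    listen 443 ssl;
    server_name your-domain.tld;
    # ssl_certificate / ssl_certificate_key are managed by `certbot --nginx`

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_buffering off;       # flush streamed /mcp responses immediately
        proxy_read_timeout 1h;     # keep long-lived streams open
    }
}
```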
## Connect an MCP client

The MCP endpoint is at `/mcp`, OAuth 2.1-protected. The same URL works across every Claude surface.
In claude.ai: Settings → Connectors → Add custom connector → paste https://your-domain.tld/mcp → complete the OAuth consent in the popup.
In Claude Code:

```shell
claude mcp add --transport http --scope user \
  openstudy https://your-domain.tld/mcp
```

For development:

```shell
claude mcp add --transport http --scope user \
  openstudy-local http://localhost:8000/mcp
```

From a Claude session, ask "list my courses". The agent should call `mcp__openstudy__list_courses` and respond with whatever you've created.
## Day-2 operations

### Updating

```shell
git pull
./deploy.sh
```

That's it — the script handles build, migrate, health-check, and rollback. If you only changed env vars (no code), use `--skip-build`:

```shell
./deploy.sh --skip-build
```

### Backups

A daily systemd timer (or any cron equivalent) should:

- rsync `/opt/courses/` → `/opt/backup/<date>/courses/`
- `docker exec openstudy-postgres pg_dump -U openstudy openstudy | gzip > /opt/backup/<date>/postgres.sql.gz`
- Push `/opt/backup/` somewhere off-host (S3, Backblaze, your NAS) — the whole point of a backup is surviving the box dying.
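One way to wire that up with systemd (a sketch; the unit names and the `/usr/local/bin/openstudy-backup` script are illustrative — that script would contain the rsync and pg_dump commands above):

```ini
# /etc/systemd/system/openstudy-backup.service
[Unit]
Description=OpenStudy daily backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/openstudy-backup

# /etc/systemd/system/openstudy-backup.timer
[Unit]
Description=Run the OpenStudy backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now openstudy-backup.timer`; `Persistent=true` runs a missed backup at the next boot.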
### Adding a migration

Add a new SQL file under `migrations/` with a timestamp greater than `00000000000000_baseline.sql`:

```shell
fn="migrations/$(date -u +%Y%m%d%H%M%S)_my_change.sql"
cat > "$fn" <<'SQL'
-- Describe the change
alter table courses add column if not exists tagline text;
SQL
```

Then `./deploy.sh` — the migration runs in a transaction, gets recorded in `_migrations` with its sha256, and any future re-run is a no-op. Files are immutable once recorded; if you need to amend, write a new migration.
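The runner's contract — lexical order, applied-once, recorded by name and content hash — can be modelled in a few lines of shell. This is a toy: state lives in a local file here, whereas the real `scripts/run_migrations.py` records rows in the `_migrations` table and wraps each file in a transaction:

```shell
# Toy model of the migration contract. Set up a demo migrations directory:
mkdir -p demo-migrations
printf -- '-- add tagline\nalter table courses add column if not exists tagline text;\n' \
  > demo-migrations/20240101000000_tagline.sql

applied=.applied; : > "$applied"   # stand-in for the _migrations table

run_migrations() {
  for f in demo-migrations/*.sql; do
    name=$(basename "$f")
    grep -q "^${name} " "$applied" && continue   # already recorded: skip
    echo "apply $name"                           # stand-in for: psql --single-transaction -f "$f"
    printf '%s %s\n' "$name" "$(sha256sum "$f" | cut -d' ' -f1)" >> "$applied"
  done
}

run_migrations   # applies the file
run_migrations   # no-op: the name is already recorded
```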
### Rolling back

Routine rollback happens automatically when health fails. To roll back manually after a successful but unwanted deploy:

```shell
docker tag openstudy:previous openstudy:latest
docker compose --env-file .env.docker up -d --force-recreate openstudy
```

### Updating base images

```shell
docker compose --env-file .env.docker pull   # postgres + postgrest
./deploy.sh                                  # rebuilds openstudy
```

## Troubleshooting

**`./deploy.sh` says "compose validation failed".**
Run `docker compose --env-file .env.docker config` to see the error. Most common cause: `.env.docker` missing a key, or compose version below 2.30.
**`/api/health` returns `{"ok": false, "db": "error: ..."}`.**
The FastAPI container can't reach Postgres. Check `docker logs openstudy-postgres` for boot errors; verify the values in `.env.docker` match what's in `/opt/postgres-data/` (the directory persists between runs — if you changed the password after first boot, Postgres is still using the old one).
**`/api/health` returns `{"ok": false, "storage": "error: ..."}`.**
The bind-mount for `/opt/courses` failed. Check the path exists on the host and is readable by the container. `docker exec openstudy ls /opt/courses` should show your course tree.
**Login returns 401 with the right password.**
Re-hash the password and update `APP_PASSWORD_HASH` in `.env`. Common gotcha: the hash starts with `$argon2id$…` — those `$` characters are literal, not env interpolation. The compose `env_file:` `format: raw` directive prevents mangling, but if you've manually exported the variable in a shell, the value needs single quotes so the `$` chars aren't expanded.
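The interpolation trap is easy to see in isolation (the value below is a shape-only stand-in, not a real Argon2 hash):

```shell
good='$argon2id$v=19$m=65536'   # single quotes: every $ stays literal
bad="$argon2id$v=19$m=65536"    # double quotes: $argon2id, $v, $m expand to nothing
printf '%s\n' "$good"           # $argon2id$v=19$m=65536
printf '%s\n' "$bad"            # =19=65536
```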
**MCP returns 401 from a Claude client.**
The OAuth token cached by the client expired or was revoked. Reconnect: in claude.ai disconnect and re-add the connector; in Claude Code run `/mcp` and re-authenticate openstudy.