feat(infra): Add Docker Compose deployment and GitHub Actions CI/CD pipeline #216
Description
feat: Docker Compose + CI/CD pipeline for staging & production
Summary
Set up the foundational infrastructure for running the Faculytics API on the Hostinger VPS, including Docker Compose configurations for both environments and an automated GitHub Actions deployment pipeline.
Background
The API previously had no containerised deployment setup. The full data layer (Postgres + Redis) runs self-hosted in Docker on the VPS. Postgres uses the pgvector/pgvector:pg16 image to support vector similarity search. ML inference remains on RunPod Serverless endpoints.
What this issue covers
Docker Compose (staging + production)
- `docker-compose.staging.yml` — API on port `3001`, Postgres (pgvector), Redis capped at 256 MB
- `docker-compose.prod.yml` — API on port `3000`, Postgres (pgvector), Redis capped at 512 MB
- Both use `allkeys-lru` eviction + `appendonly yes` / `appendfsync everysec` for Redis durability
- Postgres and Redis are isolated to internal Docker bridge networks (not exposed to the host)
- API `depends_on` Postgres with a `service_healthy` condition (`pg_isready` healthcheck) — prevents startup race conditions
- Images pulled from GHCR
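A minimal sketch of what the staging compose file might look like under the constraints above (service, network, and volume names, the Redis image tag, and the container-side API port are illustrative assumptions, not taken from the actual file):

```yaml
# docker-compose.staging.yml (illustrative sketch only; names and the
# container-side port are assumptions, not the real file)
services:
  api:
    image: ghcr.io/ctrlaltelite-devs/api.faculytics:staging
    ports:
      - "3001:3000"          # assumed container port; only the API is exposed
    depends_on:
      postgres:
        condition: service_healthy
    networks: [internal]

  postgres:
    image: pgvector/pgvector:pg16
    env_file: .env.staging
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks: [internal]     # no ports: mapping, so unreachable from the host

  redis:
    image: redis:7           # assumed image/tag
    command: >
      redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
      --appendonly yes --appendfsync everysec
    networks: [internal]

networks:
  internal:

volumes:
  pgdata:
```

The production file would differ only in the ports, tag, and the 512 MB Redis cap.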
CI/CD — `.github/workflows/deploy.yml`
- Triggers on push to `main` (production) and `staging` (staging)
- Builds and pushes the API image to GHCR: `main` → `ghcr.io/ctrlaltelite-devs/api.faculytics:latest`, `staging` → `ghcr.io/ctrlaltelite-devs/api.faculytics:staging`
- SCPs the relevant compose file to the VPS on every deploy (no manual VPS updates needed)
- Ensures Postgres and Redis are up (`up -d postgres redis`) — no-op if already running
- Restarts only the `api` service via `--no-deps` — database and Redis state are preserved
- Runs `mikro-orm migration:up` inside the new API container after restart
- Existing lint and test workflows are unaffected
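The steps above could look roughly like this in the workflow file (action choices and versions, step names, and the staging-only script shown are illustrative assumptions, not the actual workflow):

```yaml
# Illustrative sketch of deploy.yml; the appleboy SSH/SCP actions, versions,
# and inline script are assumptions, not the real workflow.
on:
  push:
    branches: [main, staging]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR (runner side, uses GITHUB_TOKEN)
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push API image
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/ctrlaltelite-devs/api.faculytics:${{ github.ref_name == 'main' && 'latest' || 'staging' }}

      - name: SCP compose file to the VPS
        uses: appleboy/scp-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          source: docker-compose.staging.yml
          target: /opt/faculytics/staging/

      - name: Restart API and run migrations
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/faculytics/staging   # prod/ on pushes to main
            docker compose -f docker-compose.staging.yml up -d postgres redis
            docker compose -f docker-compose.staging.yml pull api
            docker compose -f docker-compose.staging.yml up -d --no-deps api
            docker compose -f docker-compose.staging.yml exec api npx mikro-orm migration:up
```

Because the restart uses `--no-deps`, the `postgres` and `redis` containers (and their volumes) are never recreated by a deploy.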
Developer docs — `docs/tableplus-ssh-tunnel.md`
- How to connect to VPS Postgres via TablePlus SSH tunnel (including pgvector setup)
- How to connect to VPS Redis via TablePlus SSH tunnel
- Read-only Postgres role setup for safer local browsing
- Tips on color-coding connections, migration discipline, and SSH key format
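For reference, the tunnel described in the doc boils down to standard SSH local port forwarding. Hostnames, the username, and the local bind ports here are placeholders; this also assumes the compose files publish Postgres/Redis on the VPS loopback interface only (e.g. `127.0.0.1:5432:5432`), which keeps them unreachable from outside while still tunnel-accessible:

```shell
# Forward local 5433 to Postgres on the VPS loopback; point TablePlus at
# localhost:5433 once the tunnel is up. All values are placeholders.
ssh -N -L 5433:127.0.0.1:5432 <vps-user>@<vps-host>

# Same idea for Redis (local 6380 to VPS-loopback 6379):
ssh -N -L 6380:127.0.0.1:6379 <vps-user>@<vps-host>
```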
Required GitHub Secrets
| Secret | Description |
|---|---|
| VPS_HOST | Hostinger VPS IP address |
| VPS_USER | SSH username on the VPS |
| VPS_SSH_KEY | SSH private key (OpenSSH format) |
| GHCR_TOKEN | PAT with read:packages scope — used by VPS to pull images from GHCR |
Note: `GITHUB_TOKEN` is used by the Actions runner to push images; `GHCR_TOKEN` is a separate PAT used by the VPS to pull them.
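On the VPS side, the pull authentication amounts to a one-time `docker login` with that PAT (the username is a placeholder, and how the token reaches the VPS shell is up to the operator):

```shell
# Authenticate the VPS Docker daemon against GHCR with the read:packages PAT.
# <github-username> is a placeholder; the token value is the GHCR_TOKEN secret.
echo "$GHCR_TOKEN" | docker login ghcr.io -u <github-username> --password-stdin
```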
VPS directory structure
The deploy workflow expects these directories to exist on the VPS:
```
/opt/faculytics/
├── staging/
│   ├── docker-compose.staging.yml   ← SCP'd by workflow
│   └── .env.staging                 ← provisioned manually
└── prod/
    ├── docker-compose.prod.yml      ← SCP'd by workflow
    └── .env.prod                    ← provisioned manually
```
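Provisioning that layout is a one-time step; a minimal version, assuming a VPS shell with permission to write under `/opt`:

```shell
# Create the deploy directory layout the workflow expects (one-time setup).
# The .env files still need to be provisioned manually afterwards.
mkdir -p /opt/faculytics/staging /opt/faculytics/prod
```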
Required `.env` variables
```
POSTGRES_DB=
POSTGRES_USER=
POSTGRES_PASSWORD=
# ...other API env vars
```
pgvector setup note
The `vector` extension must be enabled once per database after first boot:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
This should be included as the first MikroORM migration so it runs automatically on deploy.
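A sketch of what that first migration could look like. The class name is hypothetical, and the inlined `Migration` base class is a self-contained stand-in; in the real project the class would extend `Migration` from `@mikro-orm/migrations`, which provides `addSql()` the same way:

```typescript
// Stand-in for @mikro-orm/migrations' Migration base class, inlined so this
// sketch runs without the package installed. It just records queued SQL.
abstract class Migration {
  private readonly queries: string[] = [];

  protected addSql(sql: string): void {
    this.queries.push(sql);
  }

  getQueries(): string[] {
    return this.queries;
  }

  abstract up(): Promise<void>;
}

// Hypothetical first migration: enables pgvector before any migration that
// adds vector columns can run.
export class MigrationEnablePgvector extends Migration {
  async up(): Promise<void> {
    this.addSql('CREATE EXTENSION IF NOT EXISTS vector;');
  }
}
```

Because `migration:up` runs in the deploy step, a fresh database gets the extension automatically before any vector-column migrations apply.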
Out of scope (follow-up issues)
- Nginx reverse proxy + TLS termination
- Postgres database branching / snapshot workflow
- Automated Postgres backups
Acceptance criteria
- Push to `staging` → image tagged `:staging` appears in GHCR, API restarts on VPS port `3001`, Postgres and Redis untouched
- Push to `main` → image tagged `:latest` appears in GHCR, API restarts on VPS port `3000`, Postgres and Redis untouched
- `mikro-orm migration:up` runs successfully as part of the deploy step
- `CREATE EXTENSION IF NOT EXISTS vector` migration exists and has been applied
- `docs/tableplus-ssh-tunnel.md` reviewed by at least one teammate
- All four secrets set on the repo