9 changes: 9 additions & 0 deletions umami-postgres/Dockerfile
@@ -0,0 +1,9 @@
# Thin wrapper around umami's official image at the version this
# sample tracks. Pin lives here (not in CI lane scripts) so a
# future umami release that changes the bug-triggering shape is a
# one-line retag, not a hunt across keploy/integrations and
# keploy/enterprise.
#
# Upstream: https://github.com/umami-software/umami
# Image: ghcr.io/umami-software/umami:postgresql-v2.18.1
FROM ghcr.io/umami-software/umami:postgresql-v2.18.1
66 changes: 66 additions & 0 deletions umami-postgres/README.md
@@ -0,0 +1,66 @@
# umami-postgres — keploy compat lane sample

Reproducer for the umami / postgres-v3 compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose + bootstrap + traffic), the keploy CI lanes consume it as a thin wrapper.

The sample drives the full umami v2 API surface keploy needs to gate on a record/replay round-trip — auth + me + admin lists, users CRUD, websites CRUD, all eight report types, share tokens + public share access, batch + identify event ingest, sessions deep-dive, replays, boards lifecycle, pixel tracker, metric/pageview parser-branch variants, and logout.
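An illustrative slice of that traffic, in the fire-and-forget style the contract below describes. `BASE_URL`, the admin credentials, and the token placeholder are assumptions for this sketch; the full endpoint list lives in `flow.sh`:

```shell
# Representative record-traffic calls. Every call carries `|| true` so a
# single regressed endpoint cannot abort the run; keploy asserts at replay.
BASE_URL="${BASE_URL:-http://localhost:3001}"
AUTH="Authorization: Bearer ${UMAMI_TOKEN:-placeholder-token}"

curl -sf -X POST "$BASE_URL/api/auth/login" \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"umami"}' || true
curl -sf -H "$AUTH" "$BASE_URL/api/me" || true
curl -sf -H "$AUTH" "$BASE_URL/api/websites" || true
curl -sf -X POST -H "$AUTH" "$BASE_URL/api/auth/logout" || true

# The script reaches this point even if every call above failed.
reached_end=1
```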

## Layout

```
umami-postgres/
├── Dockerfile # FROM ghcr.io/umami-software/umami:postgresql-v2.18.1
├── docker-compose.yml # postgres-15 + umami v2 on a fixed subnet, env-driven
├── flow.sh # bootstrap | record-traffic | coverage
├── keploy.yml.template # globalNoise for createdAt/updatedAt/Date/uuid id fields
└── README.md # this file
```
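The `keploy.yml.template` noise config plausibly has roughly this shape; treat the exact nesting and field names as an assumption against keploy's v2 config schema and check the shipped template:

```yaml
# Hedged sketch of keploy.yml.template — verify against the real file.
# Marks volatile response fields as noise so replay does not diff on them.
test:
  globalNoise:
    global:
      body:
        createdAt: []
        updatedAt: []
        id: []        # uuid ids regenerate per run
      header:
        Date: []
```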

## Contract

The sample is keploy-independent: `docker compose up && bash flow.sh bootstrap && bash flow.sh record-traffic` runs end-to-end against bare umami. Lane scripts wrap that exact same path inside `keploy record` / `keploy test`.

* `bootstrap` — log in as admin via `/api/auth/login`, capture the JWT-style auth token, persist it to `/tmp/umami-token-${UMAMI_PHASE}` so subsequent calls share a deterministic Authorization header.
* `record-traffic` — drive the umami v2 API. Calls are fire-and-forget (`|| true` semantics) so a single endpoint regression in umami itself does not abort the run — keploy is the assertion layer at replay.
* `coverage` — no-op stub. The upstream umami image ships compiled+minified Next.js without sourcemaps, so source-line coverage is not meaningful without rebuilding from source. Returns 0 cleanly so `flow.sh coverage || true` informational hooks keep working.
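The bootstrap token plumbing can be sketched as below. The login response is stubbed here (the real `flow.sh` curls `/api/auth/login`); the `token` field name is assumed from umami v2's login response shape:

```shell
#!/usr/bin/env sh
# Sketch of the bootstrap contract: log in, capture the auth token,
# persist it to /tmp/umami-token-${UMAMI_PHASE} for later calls.
UMAMI_PHASE="${UMAMI_PHASE:-record}"
TOKEN_FILE="/tmp/umami-token-${UMAMI_PHASE}"

# In the real script this is roughly:
#   curl -sf -X POST "$BASE_URL/api/auth/login" \
#        -H 'Content-Type: application/json' \
#        -d '{"username":"admin","password":"umami"}'
login_response='{"token":"example-jwt-token","user":{"username":"admin"}}'

# Extract the token field from the JSON response.
token=$(printf '%s' "$login_response" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
printf '%s' "$token" > "$TOKEN_FILE"
echo "wrote token to $TOKEN_FILE"
```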

## Local run

### Without keploy — smoke check

```sh
docker compose up -d
bash flow.sh bootstrap 240
bash flow.sh record-traffic
docker compose down -v
```

This is what the keploy/enterprise compat lane wraps in `keploy record` / `keploy test` — the base compose runs unchanged inside that lane.

### With keploy — record + replay

```sh
docker compose up -d
bash flow.sh bootstrap 240

# In one shell:
keploy record -c "docker compose up" --container-name umami_app \
--proxy-port 13081 --dns-port 13082

# In another shell:
bash flow.sh record-traffic
# SIGINT keploy when traffic returns

keploy test -c "docker compose up" --container-name umami_app \
  --api-timeout 60 --delay 30 --proxy-port 13081 --dns-port 13082
```

### Coverage

This sample does not emit a coverage metric. The upstream `ghcr.io/umami-software/umami:postgresql-v2.18.1` image ships a compiled + minified Next.js standalone build with no source tree or sourcemaps; V8 line coverage on minified output doesn't map back to anything a reviewer can act on, so a coverage gate would be misleading. The keploy/enterprise compat lane uses the record/replay assertions as its correctness gate, which is the meaningful test here.

If real source-line coverage becomes a hard requirement, the path is to rebuild umami from its own source (npm install + `next build` without minification) inside a `Dockerfile.coverage` overlay — a separate, larger change.
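If that requirement ever lands, the overlay might look roughly like the sketch below. This is untested and not part of the sample; the tag, build commands, and layout are assumptions against umami's repo:

```dockerfile
# Hypothetical Dockerfile.coverage — NOT part of this sample.
FROM node:18-alpine AS build
RUN apk add --no-cache git
RUN git clone --depth 1 --branch v2.18.1 \
    https://github.com/umami-software/umami /src
WORKDIR /src
RUN npm install
# A real overlay would also disable minification / enable sourcemaps in
# next.config.js before building, so V8 coverage maps back to source.
RUN npm run build

FROM node:18-alpine
COPY --from=build /src /app
WORKDIR /app
EXPOSE 3000
CMD ["npm", "start"]
```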

## Consumers

* `keploy/enterprise` `.woodpecker/umami-linux.yml` — record/replay matrix delegates compose + bootstrap + traffic to this sample.
* `keploy/integrations` may add a `.woodpecker/umami-postgres.yml` falsifying lane in a future PR.
58 changes: 58 additions & 0 deletions umami-postgres/docker-compose.yml
@@ -0,0 +1,58 @@
# umami-postgres sample compose. Postgres-15 + umami v2 on a fixed
# subnet, every name env-driven so multiple matrix cells can run
# in parallel on the same docker daemon. Two-phase boot pattern
# matches the doccano-django sibling: UMAMI_SKIP_INIT=0 first time so
# umami's `npx umami-app db:up` runs migrations and seeds; volume
# is retained; UMAMI_SKIP_INIT=1 second time launches the app against
# the populated volume.
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ${UMAMI_APP_CONTAINER:-umami_app}
    init: true
    stop_grace_period: 5s
    ports:
      - "${UMAMI_APP_PORT:-3001}:3000"
    environment:
      DATABASE_URL: postgresql://umami:umami@${UMAMI_DB_IP:-172.35.0.10}:5432/umami
      DATABASE_TYPE: postgresql
      APP_SECRET: ${UMAMI_APP_SECRET:-keploy-fixed-app-secret-for-deterministic-recordings}
      DISABLE_TELEMETRY: "1"
      DISABLE_UPDATES: "1"
      UMAMI_SKIP_INIT: "${UMAMI_SKIP_INIT:-0}"
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - umami-net

  postgres:
    image: postgres:15-alpine
    container_name: ${UMAMI_DB_CONTAINER:-umami_db}
    stop_grace_period: 5s
    environment:
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: umami
      POSTGRES_DB: umami
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U umami -d umami"]
      interval: 5s
      timeout: 5s
      retries: 20
    volumes:
      - umami-db-data:/var/lib/postgresql/data
    networks:
      umami-net:
        ipv4_address: ${UMAMI_DB_IP:-172.35.0.10}

networks:
  umami-net:
    driver: bridge
    ipam:
      config:
        - subnet: ${UMAMI_NETWORK_SUBNET:-172.35.0.0/24}

volumes:
  umami-db-data:
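Since every name in the compose file is env-driven, a second matrix cell only needs non-colliding values. A minimal sketch of deriving them from a cell index — index 0 reproduces the defaults above, and the naming scheme is illustrative:

```shell
# Derive per-cell overrides so N stacks can share one docker daemon.
cell_index=1
app_port=$((3001 + cell_index))      # host port per cell; 3001 is the default
subnet_octet=$((35 + cell_index))    # 172.35.0.0/24 is the default subnet
db_ip="172.${subnet_octet}.0.10"
subnet="172.${subnet_octet}.0.0/24"

# The invocation is printed rather than executed in this sketch:
echo "UMAMI_APP_CONTAINER=umami_app_${cell_index}" \
     "UMAMI_DB_CONTAINER=umami_db_${cell_index}" \
     "UMAMI_APP_PORT=${app_port}" \
     "UMAMI_DB_IP=${db_ip}" \
     "UMAMI_NETWORK_SUBNET=${subnet}" \
     "docker compose -p umami-${cell_index} up -d"
```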