This project implements:
- CRUD for shows, seasons, episodes, characters, and actors
- Many-to-many: associate characters with episodes via `episode_characters`
- Simulated long-running job queries with polling + downloadable results
- Auto DB initialization on start (runs `schema.sql`)
- API discovery via OpenAPI at `/openapi.json` (+ optional Swagger UI at `/docs`)
- GraphQL endpoint at `/graphql` mirroring CRUD operations and jobs
- GraphQL discovery at `/graphql.json` (also `/.well-known/graphql.json`)
- Season episodes endpoints: `GET /seasons/:id/episodes` and `GET /shows/:showId/seasons/:seasonNumber/episodes`
```bash
npm install
cp .env.example .env   # set your MySQL creds
npm start

curl -s http://localhost:3000/health | jq .
```

The server reads a handful of environment variables so you can mirror the production deployment locally.
Core service settings
| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | HTTP port for the Express server. |
| `DB_HOST` | `localhost` | MySQL host name. |
| `DB_PORT` | `3306` | MySQL port. |
| `DB_USER` | `root` | Database user used for schema initialization and pool connections. |
| `DB_PASSWORD` | empty string | Database password. |
| `DB_NAME` | `tvdb` | Database/schema name created on startup. |
| `DB_RETRY_ATTEMPTS` | `3` | Number of times database operations are retried after fatal connection errors. |
| `DB_RETRY_DELAY_MS` | `200` | Delay (ms) between retry attempts. |
| `APP_VERSION` | package version | Overrides the version tag emitted in logs and `/deployment-version`. |
| `BUILD_NUMBER` | unset | Optional numeric suffix appended to the version tag. |
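
As an illustration, a local `.env` that spells these defaults out explicitly might look like the snippet below (the commented overrides are placeholder values, not real releases):

```bash
PORT=3000
DB_HOST=localhost
DB_PORT=3306
DB_USER=root
DB_PASSWORD=
DB_NAME=tvdb
DB_RETRY_ATTEMPTS=3
DB_RETRY_DELAY_MS=200
# Optional overrides (illustrative values)
# APP_VERSION=1.2.3
# BUILD_NUMBER=42
```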
Authentication & admin UI
| Variable | Default | Description |
|---|---|---|
| `ENABLE_ADMIN_UI` | `true` in non-production, `false` otherwise | Controls whether the `/admin` console is served. |
| `ADMIN_USERNAME` / `ADMIN_PASSWORD` | unset | If both are provided, `/admin` is protected with HTTP Basic Auth. |
| `API_TOKEN` | unset | When set, all JSON/GraphQL endpoints require the token via the `x-api-token` header or a Bearer `Authorization` header. |
`/deployment-version` is intentionally left unauthenticated so load balancers and uptime checks can read the running build metadata even when the rest of the API is locked down with `API_TOKEN`.
When `API_TOKEN` is enabled, every request outside the allowlist goes through the auth middleware before routing, so `curl "$API/shows"` (or any other endpoint) will return `{"error":"invalid api token"}` unless you pass the header:

```bash
curl -s "$API/shows" -H "x-api-token: $API_TOKEN" | jq .
# or
curl -s "$API/shows" -H "Authorization: Bearer $API_TOKEN" | jq .
```

Unset `API_TOKEN` (and restart `npm start`) if you want to disable authentication locally.
Need to retrieve the token programmatically? `POST /auth` with any JSON payload containing `username` and `password` (both strings) and the server will respond with `{"token":"<API_TOKEN value>"}`. The endpoint sits on the public allowlist so you can call it before adding auth headers elsewhere.
If you send `username="known.bad"` the server deliberately returns `401 {"error":"invalid credentials"}` to mimic a failure case.
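
For example, a minimal token fetch (the credentials are placeholders; any strings other than the deliberate `known.bad` failure case are accepted):

```bash
# Any username/password strings work except the known.bad failure case
TOKEN=$(curl -s -X POST "$API/auth" \
  -H 'Content-Type: application/json' \
  -d '{"username":"demo","password":"demo"}' | jq -r '.token')
echo "$TOKEN"
```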
Background job retention
| Variable | Default | Description |
|---|---|---|
| `JOB_TTL_MS` | `600000` | Time (ms) before completed/failed job results expire. Set to `0` to retain indefinitely. |
| `MAX_JOB_HISTORY` | `100` | Maximum number of job records kept in memory before the oldest entries are evicted. |
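
For example, to keep job results around indefinitely during local debugging you might set (illustrative values):

```bash
JOB_TTL_MS=0          # never expire completed/failed job results
MAX_JOB_HISTORY=500   # allow more job records before eviction
```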
See `.env.example` for a compose-ready set of defaults; the seed scripts automatically source it so tokens and host overrides apply everywhere.
- `scripts/db-dump.sh [--api URL] <file|->` calls `/admin/database-dump` and writes the JSON export locally (use `-` to stream to stdout).
- `scripts/db-import.sh [--api URL] <file|->` posts a dump back to `/admin/database-import`, upserting rows in dependency order.
- Both scripts reuse the seeding helpers, so `API`/`API_TOKEN` from `.env` or the environment apply automatically (the `--api` flag overrides the base URL per invocation).
- The dump endpoint accepts `limit`/`offset` query parameters if you need paginated slices (for example, `/admin/database-dump?limit=500&offset=1000`). Pagination trims every table's rows to the requested window but preserves the same object-of-arrays structure in the response.
- You can scope exports by passing comma-separated ID filters: `showIds`, `seasonIds`, `episodeIds`, `characterIds`, and `actorIds`. Each filter automatically pulls the related hierarchy (for example, `showIds=1,2` returns only those shows plus their seasons, episodes, characters, episode-character links, and linked actors). See the example invocations below.
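
A few illustrative invocations, assuming the server is running and `API`/`API_TOKEN` are already exported (the file name is arbitrary):

```bash
# Full export to a file (use "-" to stream to stdout instead)
scripts/db-dump.sh --api "$API" tvdb-dump.json

# Re-import the dump, upserting rows in dependency order
scripts/db-import.sh --api "$API" tvdb-dump.json

# Query the endpoint directly for a paginated slice
curl -s "$API/admin/database-dump?limit=500&offset=1000" -H "x-api-token: $API_TOKEN" | jq 'keys'

# Scope an export to two shows plus their related hierarchy
curl -s "$API/admin/database-dump?showIds=1,2" -H "x-api-token: $API_TOKEN" | jq 'keys'
```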
The repository ships with a standalone client experience at `/explorer` that consumes the same REST API as the admin console. The page is compiled from static assets under `public/explorer` and offers:

- Token-based authentication: when `API_TOKEN` is set, the app prompts for the token and stores it in the browser's `localStorage`. You can re-enter or clear the token at any time with the Change API Token button in the header.
- Cascading dropdown navigation: pick a show, then drill into its seasons, episodes, and characters with keyboard-friendly select menus and focused detail panes.
- Character overviews: every show's characters (and their actors, when known) are displayed alongside the season/episode explorer.
- Deployment awareness: the header surfaces the running application version/build using the `/deployment-version` endpoint so you can verify what you're testing.

Start the server (`npm start`) and open the `/explorer` route in a browser to try it out.
When you're done experimenting you can tear everything down:
- Local Node.js install: stop `npm start` (Ctrl+C), delete the install artifacts with `rm -rf node_modules package-lock.json`, and drop the schema the API created by connecting to MySQL and running `DROP DATABASE <your DB_NAME>;` (the default is `tvdb`).
- Docker Compose: run `docker compose down -v` to stop the stack and remove the bind-mounted database volume.
- Helm release: run `helm uninstall tvdb -n tvdb` (or your namespace) and, if you created persistent volumes, clean them up with `kubectl delete pvc -n <namespace> -l app=tvdb`.
```bash
npm run docker:build

# publish multi-arch images
npm run docker:build -- --push
```

By default the helper script builds an x86_64 image locally (loaded into your Docker daemon). Pass `--push` to publish the multi-architecture image set.

Each run increments a local `.docker-build-number` counter and tags the image with the semantic package version (for example `1.8.12`) plus a numeric suffix such as `1.8.12.7`. Set the optional `APP_VERSION`/`BUILD_NUMBER` build arguments (or the matching environment variables consumed by `docker-compose.yaml`) if you need to override either value manually.
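
If you do need an override, one hedged example (the exact plumbing depends on the helper script and compose file, which are assumed to forward these variables):

```bash
# Illustrative only: docker-compose.yaml is documented as consuming these variables
APP_VERSION=1.9.0 BUILD_NUMBER=42 docker compose up --build
```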
Version bumps are published automatically once they land on `main`. The Release on version bump GitHub Actions workflow reads the `version` field from `package.json`, tags the commit as `v<version>`, and creates a GitHub release with autogenerated notes. Pushes that do not change the `version` field simply skip the release job.
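
A typical bump, assuming the change then reaches `main` through your usual review flow:

```bash
# Update the version field in package.json; the workflow tags and releases it
# once this lands on main.
npm version patch --no-git-tag-version
git commit -am "Bump version"
```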
Render the manifests without installing:
```bash
helm template tvdb charts/tvdb
```

Install (or upgrade) the release. If the `tvdb` namespace doesn't exist, choose one of the following:

Option 1: create the namespace manually:

```bash
kubectl create namespace tvdb
helm install tvdb charts/tvdb -f charts/tvdb/values.yaml --namespace tvdb
```

Option 2: let Helm create the namespace automatically:

```bash
helm install tvdb charts/tvdb -f charts/tvdb/values.yaml --namespace tvdb --create-namespace
```

To upgrade and ensure the namespace exists:

```bash
helm upgrade --install tvdb charts/tvdb -n tvdb --create-namespace
```

Need to expose the app through a static address? Override `appService.type` (e.g., `LoadBalancer` or `NodePort`) and add one or more `appService.externalIPs` entries so Kubernetes assigns those IPs through the rendered `tvdb-app` service:
```yaml
appService:
  type: LoadBalancer
  externalIPs:
    - 203.0.113.42
```

If the install fails with a message like:

```
PersistentVolume "tvdb-storage-pv" ... missing key "app.kubernetes.io/managed-by"
```

label and annotate the existing volume so Helm can manage it and retry the install:

```bash
kubectl label pv tvdb-storage-pv app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate pv tvdb-storage-pv meta.helm.sh/release-name=tvdb meta.helm.sh/release-namespace=tvdb --overwrite
```

Use `-f my-values.yaml` or `--set key=value` to customize any of the values.
Secrets: The chart expects either an existing Kubernetes secret (see `storage.existingSecret`/`app.existingSecret`) or plain values for `storage.password` and optional admin credentials. Sample manifests for manual deployments live under `k8s/` (e.g. `k8s/tvdb-secrets.yaml.example`).
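
For a quick non-production install with plain values, a sketch using only the keys named above (production installs should prefer the existing-secret route):

```bash
# Plain-value sketch; swap in storage.existingSecret / app.existingSecret
# to reference a pre-created Kubernetes Secret instead.
helm upgrade --install tvdb charts/tvdb -n tvdb --create-namespace \
  --set storage.password='change-me'
```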
- JSON spec: `GET /openapi.json` (also `/spec` and `/.well-known/openapi.json`)
- Swagger UI (optional): `npm i swagger-ui-express`, then open `http://localhost:3000/docs`
```bash
chmod +x seed_doctor_who.sh unseed_doctor_who.sh
./seed_doctor_who.sh

# Remove everything seeded above
./unseed_doctor_who.sh
```

Seeds seasons 1–26, a subset of actors & characters, one opener episode per season, and links characters to each opener.
```bash
chmod +x seed_sapphire_and_steel.sh unseed_sapphire_and_steel.sh
./seed_sapphire_and_steel.sh

# Remove everything seeded above
./unseed_sapphire_and_steel.sh
```

Seeds ITV's Sapphire & Steel with six seasons of story arcs, creates the principal cast, and links each episode's characters to their matching actors. All seed scripts source `scripts/seed_common.sh`, which loads `$API_TOKEN` from `.env` (if present) and automatically retries requests while the API or database warms up.
```bash
chmod +x seed_twilight_zone.sh unseed_twilight_zone.sh
./seed_twilight_zone.sh

# Remove everything seeded above
./unseed_twilight_zone.sh
```

Creates the 1959 anthology series with five seasons and six signature episodes, each pre-populated with iconic characters and their actors (plus Rod Serling's narration).
```bash
chmod +x seed_massive_dummy_show.sh unseed_massive_dummy_show.sh
./seed_massive_dummy_show.sh

# Remove everything seeded above
./unseed_massive_dummy_show.sh
```

Generates a synthetic show with thousands of oversized episodes so `scripts/db-dump.sh` produces roughly a 30MB JSON export. Values such as seasons, episode counts, and payload size are configurable via environment variables; expect this script to run for several minutes and to create a large dataset intended purely for load testing.
```bash
./reset_database.sh          # prompts before dropping data
./reset_database.sh --force  # skip the confirmation prompt
```

Targets the running API at `$API_BASE_URL` (default `http://localhost:$PORT`) and invokes `POST /admin/reset-database` using the optional `$API_TOKEN` header. Make sure the server is running so reseed scripts start from an empty database.
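
For example, to reset a server that isn't on the default address (values are illustrative):

```bash
API_BASE_URL=http://localhost:4000 API_TOKEN=my-token ./reset_database.sh --force
```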
The API can emit synthetic rate-limit headers (and optional 429 responses) for client testing. The feature is disabled by default; toggle it via the admin endpoint:
curl -s -X POST "$API/admin/fake-rate-limit" \
-H 'Content-Type: application/json' \
-d '{"enabled":true,"limit":100,"windowMs":60000,"reset":true}' | jq .
# Disable and clear counters
curl -s -X POST "$API/admin/fake-rate-limit" -H 'Content-Type: application/json' -d '{"enabled":false,"reset":true}' | jq .When enabled, responses include X-RateLimit-* headers and return HTTP 429 with
Retry-After once the configured limit is exceeded.
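
To observe the synthetic headers from a client's point of view (any collection endpoint works; `/shows` is just an example):

```bash
# Dump response headers and show only the rate-limit fields
curl -s -D - -o /dev/null "$API/shows" | grep -i '^x-ratelimit'
```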
Assumes your API runs at `http://localhost:3000` and you have `jq` installed. You can set a base URL once:

```bash
export API=http://localhost:3000
```
curl -s -X POST "$API/init" | jq .
curl -s "$API/health" | jq .
curl -s "$API/deployment-version" | jq .curl -s "$API/openapi.json" | jq .info
curl -s "$API/spec" | jq .info
curl -s "$API/.well-known/openapi.json" | jq .infocurl -s "$API/graphql.json" | jq .
curl -s "$API/.well-known/graphql.json" | jq .All REST functionality is also exposed via a lightweight GraphQL endpoint at /graphql.
```bash
# Health check
curl -s -X POST "$API/graphql" \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ health { ok db } }"}' | jq .

# Create an actor via mutation
curl -s -X POST "$API/graphql" \
  -H 'Content-Type: application/json' \
  -d '{"query":"mutation { createActor(name:\"Sarah Jane\") { id name } }"}' | jq .

# Fetch a show with nested seasons and episodes
curl -s -X POST "$API/graphql" \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ show(id:1) { title seasons { season_number episodes { title } } } }"}' | jq .
```

All collection GET endpoints accept optional `limit` and `offset` query parameters in addition to the existing `start`, `end`, and `include` filters. Use `limit` to cap the number of rows returned and `offset` (requires `limit`) to skip a number of rows before results begin. Responses also emit Shopify-style `Link` headers with opaque `page_info` cursors so clients can advance or rewind without calculating offsets. The cursor is returned on every paginated endpoint (including `/admin/database-dump`) and should be treated as an opaque token.
```bash
# Fetch five shows starting at the sixth row, ordered the same way as the default response
curl -s "$API/shows?limit=5&offset=5" | jq .

# Page through characters linked to a specific episode
curl -s "$API/episodes/1/characters?limit=10&offset=10" | jq .

# Inspect the Link header and follow the next page cursor
curl -I "$API/shows?limit=5" | grep -i '^Link:'

# Copy the page_info value from rel="next" and pass it straight through:
NEXT=$(curl -sI "$API/shows?limit=5" | awk -F'[?&;]' '/rel="next"/ { for (i=1;i<=NF;i++) if ($i ~ /^page_info=/) { sub(/^page_info=/, "", $i); print $i; break } }')
curl -s "$API/shows?limit=5&page_info=$NEXT" | jq .

# The rel="previous" link returns you to the prior window using the same token dance:
PREV=$(curl -sI "$API/shows?limit=5&page_info=$NEXT" | awk -F'[?&;]' '/rel="previous"/ { for (i=1;i<=NF;i++) if ($i ~ /^page_info=/) { sub(/^page_info=/, "", $i); print $i; break } }')
curl -s "$API/shows?limit=5&page_info=$PREV" | jq .
```

```bash
ACTOR_ID=$(curl -s -X POST "$API/actors" -H 'Content-Type: application/json' -d '{"name":"Tom Baker"}' | jq -r '.id'); echo "$ACTOR_ID"
curl -s "$API/actors" | jq .
curl -s "$API/actors/$ACTOR_ID" | jq .
curl -s -X PUT "$API/actors/$ACTOR_ID" -H 'Content-Type: application/json' -d '{"name":"Thomas Baker"}' | jq .
# curl -i -X DELETE "$API/actors/$ACTOR_ID"
```

```bash
SHOW_ID=$(curl -s -X POST "$API/shows" -H 'Content-Type: application/json' -d '{"title":"Doctor Who","description":"BBC sci-fi","year":1963}' | jq -r '.id'); echo "$SHOW_ID"
curl -s "$API/shows" | jq .
curl -s "$API/shows/$SHOW_ID" | jq .
curl -s -X PUT "$API/shows/$SHOW_ID" -H 'Content-Type: application/json' -d '{"title":"Doctor Who","description":"Classic era","year":1963}' | jq .
# curl -i -X DELETE "$API/shows/$SHOW_ID"
```

```bash
# Only shows created in 2024
curl -s "$API/shows?start=2024-01-01T00:00:00Z&end=2024-12-31T23:59:59Z" | jq .
# Fetch a show with seasons, episodes, and characters embedded
curl -s "$API/shows/$SHOW_ID?include=seasons,seasons.episodes,seasons.episodes.characters" | jq .
# Only episodes created in 2024
curl -s "$API/episodes?start=2024-01-01T00:00:00Z&end=2024-12-31T23:59:59Z" | jq .
# Fetch episodes with character+actor data embedded
curl -s "$API/episodes?include=characters,characters.actor" | jq .
# Only actors created in 2024
curl -s "$API/actors?start=2024-01-01T00:00:00Z&end=2024-12-31T23:59:59Z" | jq .
# Fetch an actor with their characters and shows embedded
curl -s "$API/actors/$ACTOR_ID?include=characters,characters.show" | jq .SEASON_ID=$(curl -s -X POST "$API/shows/$SHOW_ID/seasons" -H 'Content-Type: application/json' -d '{"season_number":12, "year":1974}' | jq -r '.id'); echo "$SEASON_ID"
curl -s "$API/shows/$SHOW_ID/seasons" | jq .
curl -s "$API/seasons/$SEASON_ID" | jq .
curl -s -X PUT "$API/seasons/$SEASON_ID" -H 'Content-Type: application/json' -d '{"season_number":12, "year":1975}' | jq .
# curl -i -X DELETE "$API/seasons/$SEASON_ID"EPISODE_ID=$(curl -s -X POST "$API/shows/$SHOW_ID/episodes" -H 'Content-Type: application/json' -d '{"season_number":12, "air_date":"1974-12-28", "title":"Robot", "description":"Season 12 opener."}' | jq -r '.id'); echo "$EPISODE_ID"
curl -s "$API/shows/$SHOW_ID/episodes" | jq .
curl -s "$API/seasons/$SEASON_ID/episodes" | jq .
curl -s "$API/shows/$SHOW_ID/seasons/12/episodes" | jq .
curl -s "$API/episodes/$EPISODE_ID" | jq .
curl -s -X PUT "$API/episodes/$EPISODE_ID" -H 'Content-Type: application/json' -d '{"title":"Robot (Part 1)", "season_number":12}' | jq .
# curl -i -X DELETE "$API/episodes/$EPISODE_ID"CHAR_ID=$(curl -s -X POST "$API/shows/$SHOW_ID/characters" -H 'Content-Type: application/json' -d "{"name":"The Doctor (Fourth Doctor)", "actor_id": $ACTOR_ID}" | jq -r '.id'); echo "$CHAR_ID"
ALT_CHAR_ID=$(curl -s -X POST "$API/shows/$SHOW_ID/characters" -H 'Content-Type: application/json' -d '{"name":"Sarah Jane Smith", "actor_name":"Elisabeth Sladen"}' | jq -r '.id'); echo "$ALT_CHAR_ID"
curl -s "$API/shows/$SHOW_ID/characters" | jq .
curl -s "$API/characters/$CHAR_ID" | jq .
curl -s -X PUT "$API/characters/$CHAR_ID" -H 'Content-Type: application/json' -d '{"name":"The Doctor (4)", "actor_name":"Tom Baker"}' | jq .
# curl -i -X DELETE "$API/characters/$CHAR_ID"curl -s -X POST "$API/episodes/$EPISODE_ID/characters" -H 'Content-Type: application/json' -d "{"character_id": $CHAR_ID}" | jq .
curl -s -X POST "$API/episodes/$EPISODE_ID/characters" -H 'Content-Type: application/json' -d '{"character_name":"K9", "actor_name":"John Leeson"}' | jq .
curl -s "$API/episodes/$EPISODE_ID/characters" | jq .
curl -i -X DELETE "$API/episodes/$EPISODE_ID/characters/$CHAR_ID"Supports shows, seasons, episodes, characters, and actors.
JOB_ID=$(curl -s -X POST "$API/shows/query-jobs" -H 'Content-Type: application/json' -d '{"title":"Doctor","year_min":1900,"year_max":2100,"delay_ms":2500}' | jq -r '.job_id'); echo "$JOB_ID"
curl -s "$API/jobs/$JOB_ID" | jq .
curl -s -L "$API/jobs/$JOB_ID/download" -o shows_query_${JOB_ID}.json && jq . shows_query_${JOB_ID}.json | head
curl -i -X DELETE "$API/jobs/$JOB_ID"
# Endpoints also exist at /seasons/query-jobs, /episodes/query-jobs, /characters/query-jobs, and /actors/query-jobsSee CHANGELOG.md.