Tested sentiment analysis API • Python-based tests • Reproducible • CI/CD-style • One container per test suite • Shared aggregated log
This repository implements a Docker Compose test pipeline for the sentiment analysis API image `datascientest/fastapi:1.0.0`.
- ✅ API container exposed on host port 8000 (endpoints: `/status`, `/permissions`, `/v1/sentiment`, `/v2/sentiment`)
- ✅ 3 separate Python test containers (one per suite) that validate:
  - Authentication (`/permissions`)
  - Authorization (`/v1/sentiment` vs `/v2/sentiment`)
  - Content (positive/negative score checks for given sentences)
- ✅ Automatic sequential execution via Compose `depends_on` conditions: API → Authentication → Authorization → Content
- ✅ `LOG=1` support: all suites append into a single shared `api_test.log` (kept in `./shared/`)
- ✅ `setup.sh` runs the whole pipeline reproducibly and produces `log.txt` (submission artifact)
Docker / Docker Compose | Python 3.12 | requests | Makefile orchestration
While the exam only requires "3 test containers + test scripts + a shared log", I deliberately invested extra effort to keep the solution abstract, reusable, and maintainable, so each suite only defines its test cases in a (more or less) DSL-like manner, while the execution (incl. assertions) and logging pipeline stays consistent across all suites.
- Central config loading (`tests/_shared/config.py`)
  All suites use the same env contract (`API_ADDRESS`, `API_PORT`, `LOG`, `LOG_PATH`, `HTTP_TIMEOUT`), so behavior is consistent across containers and host runs.
- One generic request runner (`tests/_shared/runner.py`)
  A single function executes HTTP requests, validates status codes, and (only when required) validates sentiment score direction.
  → Suites don't duplicate request/validation logic.
- Unified, deterministic logging (`tests/_shared/logging.py`)
  Consistent suite headers/footers + per-test formatting for stdout and (when `LOG=1`) a shared append-only log file.
  → The aggregated `api_test.log` stays readable and stable across runs.
- Generic params handling (`tests/_shared/params.py`)
  `iter_params(...)` normalizes suite-specific param objects (dicts, dataclasses, NamedTuples, etc.) into `(key, value)` pairs for logging and request execution.
  → Each suite can model its test parameters however it wants without changing the logger/runner.
- Shared types for clarity (`tests/_shared/types.py`)
  `CommonTestCase` + `TestResult` structures keep the contract between suite definitions and the shared engine explicit.
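The `iter_params(...)` normalization described above can be sketched as follows. This is a simplified, hypothetical version; the real `tests/_shared/params.py` may differ in names and edge cases:

```python
from dataclasses import asdict, is_dataclass
from typing import Any, Iterable, Tuple


def iter_params(params: Any) -> Iterable[Tuple[str, Any]]:
    """Normalize dicts, dataclasses, and NamedTuples into (key, value) pairs.

    Sketch only: field names and supported types are assumptions, not the
    exact contract of the real shared helper.
    """
    if params is None:
        return  # empty generator: nothing to log or send
    if is_dataclass(params):
        yield from asdict(params).items()
    elif hasattr(params, "_asdict"):  # NamedTuple instances expose _asdict()
        yield from params._asdict().items()
    elif isinstance(params, dict):
        yield from params.items()
    else:
        raise TypeError(f"Unsupported params type: {type(params)!r}")
```

The logger and runner can then iterate `(key, value)` pairs uniformly, regardless of how a suite chose to model its parameters.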
Each suite module focuses only on:
- defining test cases (endpoint + params + expected outcomes)
- invoking the shared runner/logger
- returning an exit code suitable for CI/CD
Everything else (config, readiness waiting, request execution, output format, file logging) is handled once in tests/_shared/.
                  (host)
  ./shared/          +          ./log.txt
      ^                             ^
      |                             |  snapshot copy
      |  bind mount                 +-- setup.sh / make snapshot-log
      |  ./shared:/shared
      |
      |
+-------------------------------------------------------------------------------+
| docker compose project |
| |
| +--------------------------+ +------------------------------+ |
| | API service |<---------->| internal network | |
| | datascientest/fastapi | HTTP | sentiment_net (DNS: api) | |
| | host 8000 -> :8000 | +------------------------------+ |
| +-------------+------------+ |
| ^ |
| | (all test suites call http://api:8000/...) |
| | |
| +-------------+--------------------------------------------------------+ |
| | | |
| | +-------------------+ +-------------------+ +----------------+ |
| | | auth_test (suite) | --> | authz_test (suite)| --> | content_test | |
| | | /permissions | | /v1 + /v2 access | | /v1 + /v2 score | |
| | +---------+---------+ +---------+---------+ +--------+--------+ |
| | | | | |
| | | append | append | append |
| | v v v |
| | +--------------------------------------------------------------+ |
| | | shared bind mount: ./shared : /shared | |
| | | aggregated log: /shared/api_test.log | |
| | +--------------------------------------------------------------+ |
| | | |
| +----------------------------------------------------------------------+ |
| |
+-------------------------------------------------------------------------------+
Sequential order is enforced by docker-compose `depends_on` conditions:
- `auth_test` waits for `api` to start (service_started) + polls /status until ready
- `authz_test` starts only after `auth_test` finished successfully (service_completed_successfully)
- `content_test` starts only after `authz_test` finished successfully (service_completed_successfully)
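This ordering can be expressed roughly as follows in `docker-compose.yml`. This is a sketch of the long-form `depends_on` syntax only; build contexts, environment, and volumes are omitted, and service names follow the diagram above:

```yaml
services:
  api:
    image: datascientest/fastapi:1.0.0
    ports:
      - "8000:8000"

  auth_test:
    depends_on:
      api:
        condition: service_started   # readiness itself is polled in-suite via /status

  authz_test:
    depends_on:
      auth_test:
        condition: service_completed_successfully

  content_test:
    depends_on:
      authz_test:
        condition: service_completed_successfully
```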
All suites append into the same shared file: /shared/api_test.log
At the end, setup.sh snapshots it to ./log.txt (exam artifact).
.
├── docker-compose.yml
├── Makefile
├── setup.sh
├── README.md
├── log.txt                    # exam artifact (snapshotted from ./shared/api_test.log)
├── docs/
│   └── IMPLEMENTATION.md
├── shared/
│   └── api_test.log           # aggregated suite logs (written by test containers when LOG=1)
└── tests/
    ├── _shared/               # common helpers (config, logging, readiness, runner, types)
    ├── authentication/
    │   ├── Dockerfile
    │   └── test_authentication.py
    ├── authorization/
    │   ├── Dockerfile
    │   └── test_authorization.py
    └── content/
        ├── Dockerfile
        └── test_content.py
`./setup.sh`

This will:
- reset to a clean state (containers/ports/logs)
- start the API + test containers
- run suites in order: AUTHENTICATION → AUTHORIZATION → CONTENT
- write the aggregated log to `./shared/api_test.log` (exam requirement via `LOG=1`)
- copy it to `./log.txt` (submission artifact)
- stop everything (rerun-safe)
`curl -s "http://localhost:8000/status"; echo`
`curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:8000/docs"`

Make targets:
- `make start-project` → start stack (detached) and build images
- `make stop-project` → stop stack (normal down)
- `make stop-all` → stop stack + remove orphans (quiet + idempotent)
- `make reset` → guaranteed clean state (stop-all + kill-api + free-port-8000 + reset-logs)
- `make logs` → follow logs for the whole stack
- `make logs-auth` / `make logs-authz` / `make logs-content` → print suite logs (tail)
- `make snapshot-log` → copy `./shared/api_test.log` → `./log.txt`
Instead of maintaining a separate README_student.md, this project keeps a single detailed build diary:
➡️ See docs/IMPLEMENTATION.md for step-by-step implementation notes, decisions, and commands.
The test containers write into a bind-mounted folder (./shared:/shared).
To avoid root-owned files on the host, the test services run as the host user:
- `setup.sh` exports `HOST_UID` and `HOST_GID`
- `docker-compose.yml` uses `user: "${HOST_UID}:${HOST_GID}"` for each test service
This keeps ./shared/api_test.log writable and removable without sudo, and makes reruns deterministic.
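In compose terms this looks roughly like the fragment below. Illustrative only: the service name is one of the three test services, and everything except the `user` and `volumes` keys is omitted:

```yaml
services:
  auth_test:
    # HOST_UID / HOST_GID are exported by setup.sh (e.g. from `id -u` / `id -g`)
    user: "${HOST_UID}:${HOST_GID}"
    volumes:
      - ./shared:/shared
```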
Goal: Build a small CI/CD-style Docker Compose pipeline that automatically tests a provided sentiment analysis FastAPI container image.
- API image: `datascientest/fastapi:1.0.0`
- Endpoints: `/status`, `/permissions`, `/v1/sentiment`, `/v2/sentiment`
- Pipeline requirement: Docker Compose must launch 4 containers total:
  - 1× API container
  - 3× separate test containers (Authentication, Authorization, Content), one Python test suite per container
- Logging requirement: when `LOG=1`, each suite must append its report into `api_test.log` (single aggregated file)
- Expected test coverage:
  - Authentication: `/permissions` returns 200 for `alice:wonderland` and `bob:builder`, and 403 for `clementine:mandarine`
  - Authorization: `bob` can use v1 only, `alice` can use v1 and v2
  - Content: using `alice`, the sentences "life is beautiful" (positive score) and "that sucks" (negative score) must be validated for both v1 and v2
- Final deliverables: `docker-compose.yml`, Python test scripts, Dockerfiles, `setup.sh`, and a submission `log.txt` containing the aggregated results.
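The content coverage above boils down to checking the sign of the returned score. A small pure-Python sketch of that validation step; the `score` field name is an assumption about the API's response schema, not a documented fact:

```python
def check_score_direction(response_json: dict, expect_positive: bool) -> bool:
    """Validate sentiment direction.

    Assumes the API returns a JSON body with a 'score' field (field name is
    an assumption; check the actual response schema).
    """
    score = response_json["score"]
    return score > 0 if expect_positive else score < 0


# Expected-outcome table from the exam spec (run for user alice on v1 and v2):
CONTENT_CASES = [
    ("life is beautiful", True),   # must yield a positive score
    ("that sucks", False),         # must yield a negative score
]
```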
- ✅ `docker-compose.yml` contains the sequence of tests (API + 3 suites)
- ✅ Python test files for Authentication / Authorization / Content
- ✅ Dockerfiles to build each test image
- ✅ `setup.sh` to build + launch the compose pipeline
- ✅ `log.txt` containing the aggregated logs (snapshotted from `./shared/api_test.log`)
- ✅ Optional remarks file: `docs/IMPLEMENTATION.md`