A production-ready Inventory Management API built with FastAPI and PostgreSQL. It uses a clean Controller–Service–Repository architecture, strict typing, and robust transactional logic to ensure consistency under concurrent operations.
- ACID transactional stock transfers with row-level locking and atomic updates
- Immutable audit log of movements; UPDATE/DELETE blocked via triggers
- Clear error semantics and HTTP status codes (404/400/422, plus 503 for deadlocks)
- Async SQLAlchemy 2.x + `asyncpg` for performance
- Docker Compose for one-command startup with schema and seed data
- Controllers: request validation, HTTP error mapping
- Services: business rules and transactional orchestration
- Repositories: direct SQL operations (lock, upsert, atomic decrement)
- Database: normalized schema with proper indexes and constraints
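As a rough illustration of this layering (all names here are hypothetical, not the project's actual modules), a service depends on a repository interface and raises domain errors that the controller layer maps to HTTP status codes:

```python
# Minimal sketch of the Controller–Service–Repository split.
# All names are illustrative; the real project's code may differ.
from typing import Protocol


class InsufficientStock(Exception):
    """Domain error; a controller would map this to HTTP 400."""


class InventoryRepository(Protocol):
    """Data-access contract; the real implementation issues SQL."""
    def get_quantity(self, product_id: int, location_id: int) -> int: ...
    def adjust(self, product_id: int, location_id: int, delta: int) -> None: ...


class StockService:
    """Business rules; orchestrates repository calls."""

    def __init__(self, repo: InventoryRepository) -> None:
        self.repo = repo

    def transfer(self, product_id: int, src: int, dst: int, qty: int) -> None:
        if self.repo.get_quantity(product_id, src) < qty:
            raise InsufficientStock
        self.repo.adjust(product_id, src, -qty)
        self.repo.adjust(product_id, dst, +qty)


class InMemoryRepo:
    """Toy repository for demonstration only."""

    def __init__(self) -> None:
        self.rows: dict[tuple[int, int], int] = {}

    def get_quantity(self, product_id: int, location_id: int) -> int:
        return self.rows.get((product_id, location_id), 0)

    def adjust(self, product_id: int, location_id: int, delta: int) -> None:
        key = (product_id, location_id)
        self.rows[key] = self.rows.get(key, 0) + delta
```

The point of the split is testability: the service's rules can be exercised against an in-memory repository, with no database involved.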
```
app/
  core/config.py
  db/session.py
  db/tables.py
  controllers/transfer_controller.py
  repositories/
    inventory_repository.py
    location_repository.py
    product_repository.py
    stock_movement_repository.py
  schemas/transfer.py
  services/
    errors.py
    stock_service.py
  main.py
sql/
  init.sql
  seed.sql
Dockerfile
docker-compose.yml
pyproject.toml
.env.example
```
Start Postgres and the API; `init.sql` and `seed.sql` run automatically:

```bash
docker compose up --build
```

Health check:

```bash
curl http://localhost:8000/health
```

Transfer example:

```bash
curl -X POST http://localhost:8000/api/transfer \
  -H "Content-Type: application/json" \
  -d '{"product_id":1,"from_location_id":1,"to_location_id":2,"quantity":5}'
```

Note: IDs depend on your seed data; query the DB or add list endpoints for discovery.
```bash
py -m venv .venv
.venv\Scripts\activate
pip install -e .
psql "host=localhost user=postgres dbname=inventory" -f sql/init.sql
psql "host=localhost user=postgres dbname=inventory" -f sql/seed.sql
set APP_DATABASE_URL=postgresql+asyncpg://postgres:postgres@localhost:5432/inventory
uvicorn app.main:app --reload
```

- Environment variable: `APP_DATABASE_URL` (see `.env.example`).
- Defaults to `postgresql+asyncpg://postgres:postgres@localhost:5432/inventory` if unset.
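A minimal sketch of how `core/config.py` might implement this default/override behavior (the real module may use a settings library; this stdlib version only illustrates the precedence):

```python
# Hypothetical sketch of core/config.py; the project's actual module may differ.
import os
from dataclasses import dataclass, field

DEFAULT_DATABASE_URL = (
    "postgresql+asyncpg://postgres:postgres@localhost:5432/inventory"
)


@dataclass(frozen=True)
class Settings:
    # APP_DATABASE_URL wins when set; otherwise fall back to the default.
    database_url: str = field(
        default_factory=lambda: os.environ.get(
            "APP_DATABASE_URL", DEFAULT_DATABASE_URL
        )
    )
```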
- `products(id, sku unique, name, description, price, created_at)`
- `locations(id, name unique, type, address)`
- `inventory(product_id, location_id, quantity)` — composite PK; `quantity >= 0`
- `stock_movements(id, product_id, from_location_id, to_location_id, quantity, timestamp, type)` — immutable audit log
- Indexes for scale: `inventory.product_id`, `inventory.location_id`, `stock_movements(product_id, timestamp)`, `stock_movements(timestamp)`, `stock_movements(from_location_id)`, `stock_movements(to_location_id)`
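For orientation, two of these tables might look as follows in SQLAlchemy Core. Column types and constraint names are assumptions; the authoritative definitions live in `sql/init.sql` and `app/db/tables.py`:

```python
# Illustrative SQLAlchemy Core definitions; sql/init.sql is authoritative.
from sqlalchemy import (
    CheckConstraint, Column, DateTime, ForeignKey, Integer,
    MetaData, Numeric, String, Table, Text, func,
)

metadata = MetaData()

products = Table(
    "products", metadata,
    Column("id", Integer, primary_key=True),
    Column("sku", String, unique=True, nullable=False),
    Column("name", String, nullable=False),
    Column("description", Text),
    Column("price", Numeric(12, 2)),
    Column("created_at", DateTime(timezone=True), server_default=func.now()),
)

inventory = Table(
    "inventory", metadata,
    # Composite primary key: one row per (product, location) pair.
    Column("product_id", Integer, ForeignKey("products.id"), primary_key=True),
    Column("location_id", Integer, primary_key=True),  # FK to locations omitted in this sketch
    Column("quantity", Integer, nullable=False),
    CheckConstraint("quantity >= 0", name="inventory_quantity_nonneg"),
)
```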
`POST /api/transfer` transfers stock between locations within a single transaction.
Request body:

```json
{ "product_id": 1, "from_location_id": 1, "to_location_id": 2, "quantity": 5 }
```

Response (201 Created):

```json
{ "movement_id": 123, "product_id": 1, "from_location_id": 1, "to_location_id": 2, "quantity": 5 }
```

Errors:
- 404 Not Found — product or location does not exist
- 400 Bad Request — insufficient stock at source location
- 422 Unprocessable Entity — invalid payload (negative quantity, same locations)
- 503 Service Unavailable — deadlock detected; client should retry
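Since a 503 signals a retryable deadlock, a client can wrap the call in exponential backoff. A minimal sketch; the retry policy and helper names are illustrative, not part of the API:

```python
# Client-side retry with exponential backoff for 503 responses.
# The HTTP transport is abstracted as a callable so the sketch stays self-contained.
import time
from typing import Callable


def post_with_retry(
    send: Callable[[], int],          # performs the POST, returns an HTTP status code
    retries: int = 3,
    base_delay: float = 0.1,
    sleep: Callable[[float], None] = time.sleep,
) -> int:
    """Retry on 503 with delays of base_delay * 2**attempt; return the last status."""
    status = send()
    for attempt in range(retries):
        if status != 503:
            return status
        sleep(base_delay * (2 ** attempt))
        status = send()
    return status
```

Injecting `sleep` keeps the backoff schedule testable without real waiting.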
- Source/destination rows are locked in deterministic order to reduce deadlocks.
- Source decrement is atomic (`UPDATE ... SET quantity = quantity - :qty WHERE quantity >= :qty`).
- Destination increment uses upsert to avoid races.
- All operations run inside a single transaction; any error rolls back changes.
- Query paths are indexed for common filters and growth to 100k+ rows.
- Avoids N+1 by using direct point lookups.
- Async engine with connection pooling; health-checked via Docker.
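The transfer's concurrency steps described above can be sketched as follows. The SQL statements are illustrative (table and parameter names are assumed from the schema section, not copied from the repositories), and `lock_order` shows the deterministic-ordering idea:

```python
# Illustrative SQL for the transfer steps; the real repository code may differ.

# Row lock taken per (product, location) row before mutating it.
LOCK_ROW = """
SELECT quantity FROM inventory
WHERE product_id = :pid AND location_id = :loc
FOR UPDATE
"""

# Atomic decrement: the WHERE clause guarantees stock never goes negative,
# and a rowcount of 0 means insufficient stock at the source.
DECREMENT = """
UPDATE inventory SET quantity = quantity - :qty
WHERE product_id = :pid AND location_id = :loc AND quantity >= :qty
"""

# Destination upsert: insert the row if missing, otherwise increment it.
UPSERT = """
INSERT INTO inventory (product_id, location_id, quantity)
VALUES (:pid, :loc, :qty)
ON CONFLICT (product_id, location_id)
DO UPDATE SET quantity = inventory.quantity + EXCLUDED.quantity
"""


def lock_order(from_location_id: int, to_location_id: int) -> tuple[int, int]:
    """Always lock the lower location id first, so two opposite-direction
    transfers over the same pair acquire locks in the same order and
    cannot deadlock on each other."""
    lo, hi = sorted((from_location_id, to_location_id))
    return (lo, hi)
```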
- Deadlock (503): Retry the request; consider client-side exponential backoff.
- Connection issues: verify `APP_DATABASE_URL` and that the DB service is healthy.
- Schema/seed errors: re-create containers with a clean volume:

```bash
docker compose down -v && docker compose up --build
```

See LICENSE for details.