A production-style event-driven backend system built with:

- FastAPI (API layer)
- AWS SQS (message queue)
- Worker consumer
- Neo4j (graph database)
- Prometheus (metrics)
- Grafana (observability)
- Docker Compose (multi-service orchestration)
Architecture

Client
  ↓
FastAPI (Backend)
  ↓
AWS SQS
  ↓
Worker Service
  ↓
Neo4j (Graph DB)

Worker → Prometheus → Grafana
This system demonstrates:

- Event-driven architecture
- Async processing
- Graph modeling
- DLQ-safe message handling
- Structured JSON logging
- Prometheus metrics
- Full Dockerized environment
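The structured JSON logging can be done with the standard library alone. A minimal sketch — the field names (`ts`, `level`, `msg`) are illustrative, not the project's actual log schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("worker")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
```

One JSON object per line keeps the logs machine-parseable for log aggregators while staying readable in `docker compose logs`.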
Backend API

- Task creation API
- Publishes events to SQS
- Clean architecture structure
- Health endpoint
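A minimal sketch of the publish step. The event shape (`TaskCreated`) is an assumption, and the SQS sender is injected so it can be stubbed; in the real service `send` would wrap `boto3.client("sqs").send_message(QueueUrl=..., MessageBody=body)`:

```python
import json
import uuid
from dataclasses import asdict, dataclass, field
from typing import Callable, List

@dataclass
class TaskCreated:
    """Hypothetical event payload; the real schema may differ."""
    title: str
    depends_on: List[str] = field(default_factory=list)
    task_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def publish_task(event: TaskCreated, send: Callable[[str], None]) -> str:
    """Serialize the event and hand it to the queue sender."""
    send(json.dumps(asdict(event)))
    return event.task_id

# Stubbed usage: capture the message instead of calling SQS.
sent = []
publish_task(TaskCreated(title="Task A", depends_on=["2", "3"]), sent.append)
```

Injecting the sender keeps the API handler unit-testable without AWS credentials.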
Worker Service

- Polls SQS
- Processes task dependencies
- Idempotent graph writes using MERGE
- DLQ-safe error handling
- Structured JSON logging
- Prometheus metrics exposure
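DLQ-safe handling means the worker deletes a message only after it is processed successfully; a failed message becomes visible again, and after `maxReceiveCount` receives the queue's redrive policy moves it to the dead-letter queue. A stdlib-only sketch of that control flow, with the queue operations stubbed (real code would use `receive_message` with `WaitTimeSeconds` for long polling and `delete_message`):

```python
import json
from typing import Callable, Iterable, Tuple

def poll_once(
    receive: Callable[[], Iterable[Tuple[str, str]]],
    delete: Callable[[str], None],
    handle: Callable[[dict], None],
) -> int:
    """Process one batch; delete only the messages that succeeded.

    `receive` yields (receipt_handle, body) pairs. A message that raises
    is NOT deleted, so it returns to the queue and eventually lands in
    the DLQ via the redrive policy.
    """
    processed = 0
    for receipt, body in receive():
        try:
            handle(json.loads(body))
        except Exception:
            continue  # leave the message on the queue
        delete(receipt)
        processed += 1
    return processed
```

Deleting after the handler (not before) is what makes the pipeline at-least-once rather than at-most-once; the idempotent MERGE writes absorb the resulting duplicates.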
Graph Database

- Neo4j for dependency relationships
- Efficient graph traversal
- Indexed node lookup
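The idempotency comes from Cypher's MERGE, which matches an existing node or relationship and only creates it if absent, so reprocessing a duplicate message leaves the graph unchanged. A sketch of the queries the worker might run (the `Task` label and property names are assumptions); the strings would be passed to the Neo4j driver's `session.run(query, params)`:

```python
# Index backing the "indexed node lookup" (Neo4j 4.4+ syntax).
CREATE_INDEX = "CREATE INDEX task_id_idx IF NOT EXISTS FOR (t:Task) ON (t.id)"

# MERGE makes this safe to run twice with the same parameters.
MERGE_TASK = """
MERGE (t:Task {id: $task_id})
SET t.title = $title
WITH t
UNWIND $depends_on AS dep_id
MERGE (d:Task {id: dep_id})
MERGE (t)-[:DEPENDS_ON]->(d)
"""

def merge_task_params(task_id: str, title: str, depends_on: list) -> dict:
    """Parameter map for MERGE_TASK."""
    return {"task_id": task_id, "title": title, "depends_on": depends_on}
```

Parameterized queries (`$task_id` etc.) let Neo4j cache the query plan and avoid injection issues.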
Observability

- Prometheus metrics scraping
- Grafana dashboard visualization
- Latency histogram
- Success / failure counters
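A Prometheus histogram like `task_processing_seconds` is, under the hood, a set of cumulative bucket counters keyed by upper bound (`le`); `histogram_quantile` interpolates a quantile from those counts. A stdlib sketch of the bucket mechanics — the bucket boundaries are illustrative, and the real worker would use `prometheus_client.Histogram` instead:

```python
import bisect
import math

BUCKETS = [0.1, 0.5, 1.0, 5.0, math.inf]  # upper bounds, like Prometheus `le`

def observe(counts: list, seconds: float) -> None:
    """Increment every bucket whose upper bound covers the observation.

    Buckets are cumulative: an observation of 0.3s counts toward the
    0.5, 1.0, 5.0, and +Inf buckets.
    """
    i = bisect.bisect_left(BUCKETS, seconds)
    for j in range(i, len(BUCKETS)):
        counts[j] += 1
```

Because buckets are cumulative, `histogram_quantile(0.95, ...)` only needs the per-bucket rates to estimate the 95th-percentile latency.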
Tech Stack

- Python 3.11
- FastAPI
- boto3
- Neo4j
- Prometheus
- Grafana
- Docker
- Docker Compose
Setup

Create a .env file in the project root:

AWS_REGION=your_region
SQS_QUEUE_URL=your_sqs_queue_url
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key

Alternatively, mount ~/.aws into the Docker container.
Run

docker compose up --build
Service Endpoints

Backend API     http://localhost:8000/docs
Neo4j Browser   http://localhost:7474
Worker Metrics  http://localhost:8001/metrics
Prometheus      http://localhost:9090
Grafana         http://localhost:3000
Grafana Login

Default credentials: admin / admin

Add a Prometheus data source with URL http://prometheus:9090
Available Metrics

- tasks_processed_total
- tasks_failed_total
- task_processing_seconds

Example PromQL:

rate(tasks_processed_total[1m])
histogram_quantile(0.95, rate(task_processing_seconds_bucket[5m]))
Example API Call

curl -X POST http://localhost:8000/tasks/ \
  -H "Content-Type: application/json" \
  -d '{"title":"Task A","depends_on":["2","3"]}'

The worker processes the task asynchronously.
License
MIT