A lightweight, concurrent image processing API built with Go. This service accepts image processing jobs (such as resizing and generating thumbnails) and processes them asynchronously using a worker pool pattern.
You upload an image, tell it what you want done (resize to 800x600, make a thumbnail, etc.), and get back a job ID. Check the status endpoint to see when it's done, then download your processed image. Simple.
The worker pool handles multiple concurrent jobs efficiently, so if you send 50 resize requests, they'll be processed in parallel instead of one-by-one.
```bash
# Install dependencies
go mod tidy

# Run the server
go run main.go

# Server starts on port 8080 by default
```

Set these environment variables if you want to customize things:
```bash
PORT=8080                 # HTTP port
DB_PATH=./data/jobs.db    # SQLite database location
UPLOAD_DIR=./uploads      # Where uploaded images go
OUTPUT_DIR=./outputs      # Where processed images go
WORKER_POOL_SIZE=10       # Number of concurrent workers
MAX_UPLOAD_SIZE=10485760  # Max file size in bytes (10MB default)
```

Submit a job:

```bash
curl -X POST http://localhost:8080/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "image_data": "<base64-encoded-image-data>",
    "operation": "resize",
    "width": 800,
    "height": 600,
    "output_format": "jpeg"
  }'
```

Response:
```json
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "pending",
  "message": "Image processing job submitted"
}
```

Check job status:

```bash
curl http://localhost:8080/jobs/550e8400-e29b-41d4-a716-446655440000
```

Response:
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "completed",
  "input_path": "./uploads/...",
  "output_path": "./outputs/...",
  "operation": "resize",
  "width": 800,
  "height": 600,
  "created_at": "2024-...",
  "updated_at": "2024-..."
}
```

Download the processed image:

```bash
curl -O http://localhost:8080/jobs/550e8400-e29b-41d4-a716-446655440000/download
```

Health check:

```bash
curl http://localhost:8080/health
```

Supported operations:

- `resize` - Resize to exact dimensions (may distort aspect ratio)
- `thumbnail` - Resize to fit within dimensions while maintaining aspect ratio

Output formats: `jpeg`, `png` (optional; defaults to the input format)
Image processing is CPU-intensive. Without a worker pool:
- Each request would spawn a goroutine
- 1000 concurrent uploads = 1000 goroutines fighting for CPU
- System thrashes, performance tanks
With the worker pool:
- Fixed number of workers (configurable, default 10)
- Jobs queue up if workers are busy
- Controlled concurrency, predictable resource usage
- Better throughput under load
Tested on Intel Core i7-1260P (12th Gen, 12 cores / 16 threads, up to 4.7 GHz)
Worker Pool Size Comparison (50 concurrent resize jobs to 200x200):
| Workers | Duration | Throughput | Notes |
|---|---|---|---|
| 1 | ~15s | 3.3 jobs/s | Sequential processing |
| 10 | ~3s | 16 jobs/s | Good balance |
| 16 | ~2s | 25 jobs/s | Matches CPU core count |
| 50 | ~2s | 25 jobs/s | Diminishing returns |
Key Finding: 16 workers ≈ 50 workers performance-wise. Beyond the machine's hardware thread count (16 here), adding workers doesn't help: there are no spare cores to run them in parallel, so extra workers just wait their turn.
Production Tip: Size your worker pool to your CPU core count.
Run the tests:

```bash
go test ./workerpool -v
```

Includes tests for:
- Multiple concurrent jobs
- Worker parallelism
- Graceful shutdown
- Pool resizing
What's already handled:
- Graceful shutdown (waits for in-progress jobs)
- Request validation
- Structured logging
- Configurable via environment variables
- SQLite for persistence
What you'd want to add for real production:
- S3/MinIO instead of local filesystem for storage
- Redis for job queue (instead of in-memory)
- Authentication/authorization
- Rate limiting
- Metrics (Prometheus)
- Circuit breakers for external services
- Go - Language of choice for concurrency
- Gin - HTTP framework (fast, minimal)
- GORM - ORM for database operations
- SQLite - Zero-config database
- Worker Pool - Custom implementation (see `workerpool/`)
I wanted to demonstrate:
- Concurrency patterns - Worker pools, channels, goroutines
- Clean architecture - Separation of concerns, dependency injection
- Production awareness - Graceful shutdown, logging, config management
- Async job processing - Real-world pattern for CPU-intensive tasks
The experiment is a solid foundation for any image processing service, and can be extended to support more operations, storage backends, and scaling strategies.
