Parkit is an intelligent parking detection system that uses a YOLOv8 deep learning model to detect motorcycles and calculate available parking spaces in real time. The system provides both an admin dashboard for management and a public client interface for viewing parking availability.
The system provides annotated images showing:
- Red bounding boxes around detected motorcycles
- Green rectangles for empty parking spaces
- Blue lines indicating parking row positions
- Labels showing available spaces
Key features:
- Real-time Detection: Motorcycle detection using YOLOv8
- Empty Space Calculation: Automatic calculation of available parking spaces
- Parking Row Calibration: Configure parking rows for accurate space detection
- Occupancy Tracking: Real-time parking occupancy rate monitoring
- Camera Integration: Support for DroidCam and IP cameras
- Session Management: Multi-session support with history tracking
- RESTful API: Clean API for integration with other systems
- Responsive UI: Mobile-friendly admin dashboard and client interface
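To make the "Empty Space Calculation" feature concrete, here is a minimal sketch of one plausible approach, assuming calibration supplies a row length, an expected motorcycle width, and a gap between motorcycles, all in pixels. The function and field names are illustrative, not Parkit's actual API.

```python
def row_capacity(row_length_px: float, moto_width_px: float, spacing_px: float) -> int:
    """Number of motorcycles that fit in a calibrated row."""
    slot = moto_width_px + spacing_px
    return int((row_length_px + spacing_px) // slot)

def empty_spaces(capacity: int, detected: int) -> int:
    """Free slots, clamped so over-detection never goes negative."""
    return max(capacity - detected, 0)

def occupancy_rate(capacity: int, detected: int) -> float:
    """Fraction of slots occupied (0.0 for a row with no slots)."""
    return min(detected / capacity, 1.0) if capacity else 0.0

# Example: a 500 px row, 80 px motorcycles, 20 px gaps, 3 detections
cap = row_capacity(500, 80, 20)   # 5 slots
free = empty_spaces(cap, 3)       # 2 free
rate = occupancy_rate(cap, 3)     # 0.6 occupancy
```

The clamping matters in practice: YOLO can occasionally detect more motorcycles than the calibrated capacity (e.g., bikes parked outside marked rows), and the free-space count should bottom out at zero rather than go negative.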
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Camera    │────▶│  Frontend   │────▶│   Backend   │
│ (DroidCam)  │     │ (Admin/UI)  │     │  (FastAPI)  │
└─────────────┘     └─────────────┘     └─────────────┘
                                               │
                                               ▼
                                        ┌─────────────┐
                                        │    YOLO     │
                                        │   Engine    │
                                        └─────────────┘
                                               │
                                               ▼
                                        ┌─────────────┐
                                        │   Storage   │
                                        │  (Results)  │
                                        └─────────────┘
```
```mermaid
flowchart TB
    subgraph User["User"]
        Browser[Web Browser]
        Camera[DroidCam/IP Camera]
    end

    subgraph Frontend["Frontend SPA"]
        Router[Router]
        UploadUI[Upload Page]
        ResultsUI[Results Page]
        CalibUI[Calibration Page]
        APIClient[API Client]
    end

    subgraph Backend["Backend API"]
        FastAPI[FastAPI Server]
        UploadEndpoint[Upload Endpoint]
        CompleteEndpoint[Complete Endpoint]
        ResultsEndpoint[Results Endpoint]
        CalibEndpoint[Calibration Endpoint]
    end

    subgraph Processing["Processing Engine"]
        YOLO[YOLO Detection Model]
        CalibEngine[Calibration Engine]
        ImageProc[Image Processing]
    end

    subgraph Storage["Storage"]
        FrameStore[(Frame Storage)]
        ResultStore[(Result Storage)]
        CalibStore[(Calibration Data)]
        SessionDB[(Session Database)]
    end

    Browser -->|Navigate| Router
    Camera -->|Stream| UploadUI
    Router -->|Route| UploadUI
    Router -->|Route| ResultsUI
    Router -->|Route| CalibUI
    UploadUI -->|API Call| APIClient
    ResultsUI -->|API Call| APIClient
    CalibUI -->|API Call| APIClient
    APIClient -->|POST /upload| UploadEndpoint
    APIClient -->|POST /complete| CompleteEndpoint
    APIClient -->|GET /results/*| ResultsEndpoint
    APIClient -->|POST /calibration| CalibEndpoint
    UploadEndpoint -->|Save| FrameStore
    UploadEndpoint -->|Update| SessionDB
    CompleteEndpoint -->|Load Frames| FrameStore
    CompleteEndpoint -->|Detect| YOLO
    CompleteEndpoint -->|Load Calibration| CalibStore
    YOLO -->|Detections| CalibEngine
    CalibEngine -->|Calculate Spaces| ImageProc
    ImageProc -->|Annotate| ResultStore
    CompleteEndpoint -->|Update| SessionDB
    ResultsEndpoint -->|Query| SessionDB
    ResultsEndpoint -->|Load| ResultStore
    CalibEndpoint -->|Save/Load| CalibStore
    ResultsEndpoint -->|JSON Response| APIClient
    CompleteEndpoint -->|JSON Response| APIClient
    UploadEndpoint -->|JSON Response| APIClient
    CalibEndpoint -->|JSON Response| APIClient
    APIClient -->|Update UI| ResultsUI
    APIClient -->|Update UI| UploadUI
    APIClient -->|Update UI| CalibUI
```
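The upload → complete → results flow shown in the diagram can be mirrored in a few lines of plain Python. This is an illustrative model of the session state transitions, not the actual backend code; the function and field names are assumptions, and a stub detector stands in for YOLO.

```python
from datetime import datetime, timezone

sessions: dict = {}  # stand-in for the Session Database

def upload_frame(session_id: str, camera_id: str, frame: bytes) -> dict:
    """Mirror of POST /upload: store a frame and mark the session active."""
    s = sessions.setdefault(session_id, {
        "camera_id": camera_id, "frames": [], "status": "uploading", "result": None,
    })
    s["frames"].append(frame)
    return {"session_id": session_id, "frames": len(s["frames"])}

def complete_session(session_id: str, detect) -> dict:
    """Mirror of POST /complete/{session_id}: run detection over stored frames."""
    s = sessions[session_id]
    detections = sum(detect(f) for f in s["frames"])
    s["status"] = "done"
    s["result"] = {"motorcycles": detections,
                   "finished_at": datetime.now(timezone.utc).isoformat()}
    return s["result"]

def get_results(session_id: str) -> dict:
    """Mirror of GET /api/results/{session_id}."""
    return sessions[session_id]["result"]

# Usage with a stub detector that "finds" one motorcycle per frame
upload_frame("s1", "cam1", b"frame-a")
upload_frame("s1", "cam1", b"frame-b")
complete_session("s1", detect=lambda frame: 1)
```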
For detailed flow diagrams, see the raw Mermaid files in the `docs/` directory (listed in the project structure below).
Prerequisites:
- Python 3.9+
- Docker & Docker Compose (optional)
- DroidCam or an IP camera (for live detection)
Using Docker:

```bash
# Clone repository
git clone <repository-url>
cd parkit

# Start services
docker-compose up -d

# Access the application
# Backend API: http://localhost:8000
# Frontend:    http://localhost:8080
```

Running locally:

```bash
# Backend setup
cd backend
pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

# Frontend setup (separate terminal)
cd frontend
python -m http.server 8080
```

Create a `.env` file in the backend directory:

```
# Model Configuration
MODEL_PATH=models/best.pt
CONFIDENCE_THRESHOLD=0.5

# Storage
UPLOAD_DIR=uploads
RESULTS_DIR=results
CALIBRATION_DIR=calibration

# Server
HOST=0.0.0.0
PORT=8000
```

To calibrate parking rows:
- Navigate to the Calibration page
- Upload a reference parking image
- Click to mark parking row positions
- Enter row details (spacing, motorcycle width)
- Save the calibration
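The `.env` values above can be read with plain `os.getenv`, falling back to the documented defaults. This is a sketch; the real app may load settings differently (e.g., via a settings library), and the dictionary keys are illustrative.

```python
import os

def load_settings() -> dict:
    """Read Parkit settings from the environment, with documented defaults."""
    return {
        "model_path": os.getenv("MODEL_PATH", "models/best.pt"),
        "confidence_threshold": float(os.getenv("CONFIDENCE_THRESHOLD", "0.5")),
        "upload_dir": os.getenv("UPLOAD_DIR", "uploads"),
        "results_dir": os.getenv("RESULTS_DIR", "results"),
        "calibration_dir": os.getenv("CALIBRATION_DIR", "calibration"),
        "host": os.getenv("HOST", "0.0.0.0"),
        "port": int(os.getenv("PORT", "8000")),
    }

settings = load_settings()
```

Note that numeric values arrive as strings from the environment, so the threshold and port are cast explicitly; a typo in `.env` then fails fast at startup rather than at detection time.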
Option A: Camera Stream
- Enter the camera URL (e.g., `http://192.168.1.100:4747/video`)
- Connect to the camera
- Capture frames from the live preview
- Upload the captured frames

Option B: Manual Upload
- Select image files from your device
- Preview the selected images
- Upload the frames

Then, for either option:
- Enter a Session ID and Camera ID
- Click "Upload Frames"
- Click "Complete Session" to process
- View the detection results
- Live View: Real-time detection results
- History: Browse past detection sessions
- Statistics: Motorcycle count, empty spaces, occupancy rate
- Annotated Image: Visual representation with bounding boxes
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/results/live` | Get the active detection session |
| GET | `/api/results/{session_id}` | Get results for a specific session |
| GET | `/api/results/{session_id}/image` | Get the annotated result image |
| GET | `/api/results/latest` | Get the latest sessions list |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/upload` | Upload a frame image |
| POST | `/complete/{session_id}` | Process detection |
| POST | `/calibration` | Save calibration data |
| GET | `/calibration/{camera_id}` | Get calibration data |
| DELETE | `/calibration/{camera_id}` | Delete calibration data |
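The calibration endpoints above read and write JSON files keyed by camera ID (see "Storage" below). Here is a minimal sketch of that persistence layer; the file layout and field names are assumptions, not the project's actual schema.

```python
import json
import tempfile
from pathlib import Path
from typing import Optional

def save_calibration(base: Path, camera_id: str, rows: list) -> Path:
    """Persist marked parking rows for one camera as <base>/<camera_id>.json."""
    base.mkdir(parents=True, exist_ok=True)
    path = base / f"{camera_id}.json"
    path.write_text(json.dumps({"camera_id": camera_id, "rows": rows}, indent=2))
    return path

def load_calibration(base: Path, camera_id: str) -> Optional[dict]:
    """Return calibration for a camera, or None if it was never calibrated."""
    path = base / f"{camera_id}.json"
    return json.loads(path.read_text()) if path.exists() else None

def delete_calibration(base: Path, camera_id: str) -> bool:
    """Remove a camera's calibration; True if a file was actually deleted."""
    path = base / f"{camera_id}.json"
    existed = path.exists()
    if existed:
        path.unlink()
    return existed

# Example round-trip in a temporary directory
base = Path(tempfile.mkdtemp())
save_calibration(base, "cam1", [{"y": 420, "length_px": 500,
                                 "moto_width_px": 80, "spacing_px": 20}])
cal = load_calibration(base, "cam1")
```

One file per camera keeps the GET and DELETE endpoints trivial: both reduce to a single path lookup, with a missing file naturally mapping to a 404.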
See API Documentation for detailed information.
- Framework: FastAPI (Python)
- ML Model: YOLOv8 (Ultralytics)
- Image Processing: OpenCV, Pillow
- Server: Uvicorn (ASGI)
- Architecture: Single Page Application (SPA)
- Language: Vanilla JavaScript
- Styling: CSS3
- API Client: Fetch API
- Images: File system
- Calibration: JSON files
- Sessions: In-memory (can be extended to database)
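Since sessions are in-memory, they vanish on restart. One common way to keep the door open to a database, as the note suggests, is to hide the dict behind a small store interface; the class and method names below are illustrative, not the project's actual code.

```python
class SessionStore:
    """In-memory session store; subclass and override to back with a database."""

    def __init__(self):
        self._data = {}

    def get(self, session_id):
        """Return a session record, or None if unknown."""
        return self._data.get(session_id)

    def put(self, session_id, record):
        """Insert or overwrite a session record."""
        self._data[session_id] = record

    def latest(self, n=10):
        """Most recently inserted sessions first (dicts preserve order)."""
        return list(self._data.values())[-n:][::-1]

# Usage
store = SessionStore()
store.put("s1", {"camera_id": "cam1", "motorcycles": 3})
store.put("s2", {"camera_id": "cam1", "motorcycles": 5})
```

A database-backed subclass would only need to reimplement these three methods, leaving the endpoint handlers untouched.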
```
parkit/
├── backend/
│   ├── app/
│   │   ├── main.py              # FastAPI application
│   │   ├── models/              # Data models
│   │   ├── routers/             # API routes
│   │   └── services/            # Business logic
│   ├── models/                  # YOLO model files
│   ├── uploads/                 # Uploaded frames
│   ├── results/                 # Detection results
│   ├── calibration/             # Calibration data
│   └── requirements.txt
├── frontend/
│   ├── index.html
│   ├── css/
│   │   └── style.css
│   └── js/
│       ├── app.js               # Main application
│       ├── router.js            # SPA router
│       ├── api-client.js        # API communication
│       └── pages/               # Page components
├── docs/
│   ├── backend-flow.mmd         # Backend flowchart
│   ├── frontend-flow.mmd        # Frontend flowchart
│   ├── client-frontend-flow.mmd # Client flowchart
│   ├── system-architecture.mmd  # Architecture diagram
│   └── user-journey.mmd         # User journey sequence
├── docker-compose.yml
└── README.md
```
```bash
cd backend
pytest tests/

# Format code
black app/

# Lint
flake8 app/

# Type checking
mypy app/
```

Development workflow:
- Create a feature branch
- Implement changes
- Add tests
- Update documentation
- Submit a pull request
- Detection Speed: ~100-200ms per frame (GPU)
- API Response: <50ms (cached results)
- Image Processing: ~500ms per frame
- Concurrent Sessions: Supports multiple simultaneous sessions
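The figures above are ballpark numbers; measuring your own per-frame latency is a one-liner around the detection call. In this sketch a stub stands in for the YOLO model, so the timing harness itself is what's being shown.

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - start) * 1000.0

def fake_detect(frame):
    """Stand-in for the real YOLO inference call."""
    return {"motorcycles": 3}

result, ms = timed(fake_detect, b"frame")
print(f"detection took {ms:.2f} ms")  # compare against the ~100-200 ms GPU budget
```

Swapping `fake_detect` for the real inference call gives a quick sanity check of whether a given machine meets the stated budget before deploying.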
**Camera Connection Failed**
- Check that the camera URL is correct
- Ensure the camera is on the same network
- Verify CORS settings on the camera

**Detection Not Working**
- Verify the YOLO model is loaded
- Check the confidence threshold settings
- Ensure images are clear and well lit

**Empty Spaces Not Calculated**
- Verify calibration is configured
- Check that the camera ID matches the calibration
- Ensure parking rows are properly marked
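For the camera checklist, a quick pre-flight check that the URL is even well-formed can save a round of debugging. This only validates the format; it does not probe the network, and the accepted schemes are an assumption based on common camera setups.

```python
from urllib.parse import urlparse

def check_camera_url(url: str) -> list:
    """Return a list of problems with a DroidCam/IP-camera URL (empty = OK)."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https", "rtsp"):
        problems.append(f"unsupported scheme {parsed.scheme!r}")
    if not parsed.hostname:
        problems.append("missing host")
    return problems

# A bare IP without a scheme is a frequent mistake:
check_camera_url("http://192.168.1.100:4747/video")  # []
check_camera_url("192.168.1.100/video")              # scheme missing → rejected
```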
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create feature branch (
git checkout -b feature/AmazingFeature) - Commit changes (
git commit -m 'Add AmazingFeature') - Push to branch (
git push origin feature/AmazingFeature) - Open Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- YOLOv8 by Ultralytics
- FastAPI framework
- OpenCV community
- DroidCam for camera streaming
For questions or support, please open an issue on GitHub.
Built with Python, FastAPI, and YOLOv8
