
Parkit - Smart Parking Detection System

Parkit is an intelligent parking detection system that uses the YOLOv8 deep learning model to detect motorcycles and calculate available parking spaces in real time. The system provides both an admin dashboard for management and a public client interface for viewing parking availability.

Screenshots

Detection Result Example

Detection Result

The system provides annotated images showing:

  • Red bounding boxes around detected motorcycles
  • Green rectangles for empty parking spaces
  • Blue lines indicating parking row positions
  • Labels showing available spaces

Features

  • Real-time Detection: Motorcycle detection using YOLOv8
  • Empty Space Calculation: Automatic calculation of available parking spaces
  • Parking Row Calibration: Configure parking rows for accurate space detection
  • Occupancy Tracking: Real-time parking occupancy rate monitoring
  • Camera Integration: Support for DroidCam and IP cameras
  • Session Management: Multi-session support with history tracking
  • RESTful API: Clean API for integration with other systems
  • Responsive UI: Mobile-friendly admin dashboard and client interface

Architecture

System Components

┌─────────────┐     ┌──────────────┐     ┌─────────────┐
│   Camera    │────▶│   Frontend   │────▶│   Backend   │
│  (DroidCam) │     │  (Admin/UI)  │     │  (FastAPI)  │
└─────────────┘     └──────────────┘     └─────────────┘
                                                 │
                                                 ▼
                                          ┌─────────────┐
                                          │    YOLO     │
                                          │   Engine    │
                                          └─────────────┘
                                                 │
                                                 ▼
                                          ┌─────────────┐
                                          │   Storage   │
                                          │  (Results)  │
                                          └─────────────┘

System Architecture Diagram

flowchart TB
    subgraph User["User"]
        Browser[Web Browser]
        Camera[DroidCam/IP Camera]
    end
    
    subgraph Frontend["Frontend SPA"]
        Router[Router]
        UploadUI[Upload Page]
        ResultsUI[Results Page]
        CalibUI[Calibration Page]
        APIClient[API Client]
    end
    
    subgraph Backend["Backend API"]
        FastAPI[FastAPI Server]
        UploadEndpoint[Upload Endpoint]
        CompleteEndpoint[Complete Endpoint]
        ResultsEndpoint[Results Endpoint]
        CalibEndpoint[Calibration Endpoint]
    end
    
    subgraph Processing["Processing Engine"]
        YOLO[YOLO Detection Model]
        CalibEngine[Calibration Engine]
        ImageProc[Image Processing]
    end
    
    subgraph Storage["Storage"]
        FrameStore[(Frame Storage)]
        ResultStore[(Result Storage)]
        CalibStore[(Calibration Data)]
        SessionDB[(Session Database)]
    end
    
    Browser -->|Navigate| Router
    Camera -->|Stream| UploadUI
    
    Router -->|Route| UploadUI
    Router -->|Route| ResultsUI
    Router -->|Route| CalibUI
    
    UploadUI -->|API Call| APIClient
    ResultsUI -->|API Call| APIClient
    CalibUI -->|API Call| APIClient
    
    APIClient -->|POST /upload| UploadEndpoint
    APIClient -->|POST /complete| CompleteEndpoint
    APIClient -->|GET /results/*| ResultsEndpoint
    APIClient -->|POST /calibration| CalibEndpoint
    
    UploadEndpoint -->|Save| FrameStore
    UploadEndpoint -->|Update| SessionDB
    
    CompleteEndpoint -->|Load Frames| FrameStore
    CompleteEndpoint -->|Detect| YOLO
    CompleteEndpoint -->|Load Calibration| CalibStore
    
    YOLO -->|Detections| CalibEngine
    CalibEngine -->|Calculate Spaces| ImageProc
    ImageProc -->|Annotate| ResultStore
    
    CompleteEndpoint -->|Update| SessionDB
    
    ResultsEndpoint -->|Query| SessionDB
    ResultsEndpoint -->|Load| ResultStore
    
    CalibEndpoint -->|Save/Load| CalibStore
    
    ResultsEndpoint -->|JSON Response| APIClient
    CompleteEndpoint -->|JSON Response| APIClient
    UploadEndpoint -->|JSON Response| APIClient
    CalibEndpoint -->|JSON Response| APIClient
    
    APIClient -->|Update UI| ResultsUI
    APIClient -->|Update UI| UploadUI
    APIClient -->|Update UI| CalibUI

Additional Flow Diagrams

For detailed flow diagrams, see the raw Mermaid files in the docs/ directory:

  • docs/backend-flow.mmd
  • docs/frontend-flow.mmd
  • docs/client-frontend-flow.mmd
  • docs/system-architecture.mmd
  • docs/user-journey.mmd

Quick Start

Prerequisites

  • Python 3.9+
  • Docker & Docker Compose (optional)
  • DroidCam or IP Camera (for live detection)

Installation

Option 1: Docker (Recommended)

# Clone repository
git clone <repository-url>
cd parkit

# Start services
docker-compose up -d

# Access the application
# Backend API: http://localhost:8000
# Frontend: http://localhost:8080

Option 2: Manual Setup

# Backend setup
cd backend
pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

# Frontend setup (separate terminal)
cd frontend
python -m http.server 8080

Configuration

Create .env file in backend directory:

# Model Configuration
MODEL_PATH=models/best.pt
CONFIDENCE_THRESHOLD=0.5

# Storage
UPLOAD_DIR=uploads
RESULTS_DIR=results
CALIBRATION_DIR=calibration

# Server
HOST=0.0.0.0
PORT=8000
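
How the backend consumes these values depends on its settings module; as a rough sketch (an assumption, not the project's actual code), they could be read with python-dotenv and os.getenv:

# Minimal sketch of loading the .env values above; the real settings module may differ.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads backend/.env into the process environment

MODEL_PATH = os.getenv("MODEL_PATH", "models/best.pt")
CONFIDENCE_THRESHOLD = float(os.getenv("CONFIDENCE_THRESHOLD", "0.5"))
UPLOAD_DIR = os.getenv("UPLOAD_DIR", "uploads")
RESULTS_DIR = os.getenv("RESULTS_DIR", "results")
CALIBRATION_DIR = os.getenv("CALIBRATION_DIR", "calibration")
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8000"))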

Usage

1. Camera Calibration (One-time Setup)

  1. Navigate to Calibration page
  2. Upload reference parking image
  3. Click to mark parking row positions
  4. Enter row details (spacing, motorcycle width)
  5. Save calibration (a rough sketch of the payload this might send is shown below)
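
The calibration schema is defined by the backend's models; purely as an illustration, the data saved in step 5 might look something like the sketch below. Field names such as rows, spacing_px, and motorcycle_width_px are assumptions, not the documented schema.

# Hypothetical calibration payload; field names are illustrative assumptions only.
import requests

calibration = {
    "camera_id": "cam-1",  # example camera identifier
    "rows": [
        # one entry per marked parking row (step 3), using the details from step 4
        {"y": 220, "spacing_px": 12, "motorcycle_width_px": 60},
        {"y": 480, "spacing_px": 12, "motorcycle_width_px": 60},
    ],
}
requests.post("http://localhost:8000/calibration", json=calibration, timeout=10)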

2. Upload Frames

Option A: Camera Stream

  1. Enter camera URL (e.g., http://192.168.1.100:4747/video)
  2. Connect to camera
  3. Capture frames from the live preview (or with a script, as sketched after this list)
  4. Upload captured frames
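
As an alternative to capturing in the browser, OpenCV can read the same DroidCam/IP camera stream directly. This is a minimal sketch; the URL and frame count are examples, not fixed values.

# Grab a few frames from a DroidCam/IP camera stream with OpenCV.
import time

import cv2

cap = cv2.VideoCapture("http://192.168.1.100:4747/video")  # example URL; use your camera's address
for i in range(5):
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frame_{i}.jpg", frame)  # save locally, then upload via the admin UI or API
    time.sleep(1)                         # roughly one frame per second
cap.release()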

Option B: Manual Upload

  1. Select image files from device
  2. Preview selected images
  3. Upload frames

3. Process Detection

  1. Enter Session ID and Camera ID
  2. Click "Upload Frames"
  3. Click "Complete Session" to process
  4. View detection results (the same flow can also be scripted, as sketched below)
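
The same flow can be driven directly over HTTP. The sketch below assumes /upload accepts a multipart file together with session_id and camera_id form fields, which may not match the actual request schema; check the interactive docs that FastAPI serves at /docs by default.

# Rough sketch of steps 1-4 via the API; field names are assumptions, verify against /docs.
import requests

BASE = "http://localhost:8000"
session_id, camera_id = "session-1", "cam-1"

with open("frame_0.jpg", "rb") as f:
    requests.post(
        f"{BASE}/upload",
        files={"file": f},
        data={"session_id": session_id, "camera_id": camera_id},
        timeout=30,
    )

requests.post(f"{BASE}/complete/{session_id}", timeout=120)  # run detection on the uploaded frames
result = requests.get(f"{BASE}/api/results/{session_id}", timeout=10).json()
print(result)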

4. View Results

  • Live View: Real-time detection results
  • History: Browse past detection sessions
  • Statistics: Motorcycle count, empty spaces, occupancy rate
  • Annotated Image: Visual representation with bounding boxes

API Endpoints

Public Endpoints (No Authentication)

Method  Endpoint                           Description
GET     /api/results/live                  Get the active detection session
GET     /api/results/{session_id}          Get results for a specific session
GET     /api/results/{session_id}/image    Get the annotated result image
GET     /api/results/latest                Get the list of latest sessions
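
Because these endpoints require no authentication, a public client can simply poll them. A minimal sketch, assuming the backend runs on localhost:8000:

# Poll the public live endpoint every few seconds and print the raw JSON it returns.
import time

import requests

while True:
    response = requests.get("http://localhost:8000/api/results/live", timeout=10)
    if response.ok:
        print(response.json())  # inspect the JSON once to see the exact field names
    time.sleep(5)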

Admin Endpoints

Method  Endpoint                           Description
POST    /upload                            Upload a frame image
POST    /complete/{session_id}             Run detection for a session
POST    /calibration                       Save calibration data
GET     /calibration/{camera_id}           Get calibration data
DELETE  /calibration/{camera_id}           Delete calibration data
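
Calibration records are keyed by camera ID, so they can also be inspected or reset from a script. A short sketch, with the camera ID as an example value:

# Fetch, and optionally delete, the calibration stored for one camera.
import requests

BASE = "http://localhost:8000"
camera_id = "cam-1"  # example ID

response = requests.get(f"{BASE}/calibration/{camera_id}", timeout=10)
print(response.status_code, response.text)

# requests.delete(f"{BASE}/calibration/{camera_id}", timeout=10)  # uncomment to remove it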

See API Documentation for detailed information.

Technology Stack

Backend

  • Framework: FastAPI (Python)
  • ML Model: YOLOv8 (Ultralytics)
  • Image Processing: OpenCV, Pillow
  • Server: Uvicorn (ASGI)

Frontend

  • Architecture: Single Page Application (SPA)
  • Language: Vanilla JavaScript
  • Styling: CSS3
  • API Client: Fetch API

Storage

  • Images: File system
  • Calibration: JSON files
  • Sessions: In-memory (can be extended to a database)

Project Structure

parkit/
├── backend/
│   ├── app/
│   │   ├── main.py              # FastAPI application
│   │   ├── models/              # Data models
│   │   ├── routers/             # API routes
│   │   └── services/            # Business logic
│   ├── models/                  # YOLO model files
│   ├── uploads/                 # Uploaded frames
│   ├── results/                 # Detection results
│   ├── calibration/             # Calibration data
│   └── requirements.txt
├── frontend/
│   ├── index.html
│   ├── css/
│   │   └── style.css
│   └── js/
│       ├── app.js               # Main application
│       ├── router.js            # SPA router
│       ├── api-client.js        # API communication
│       └── pages/               # Page components
├── docs/
│   ├── backend-flow.mmd         # Backend flowchart
│   ├── frontend-flow.mmd        # Frontend flowchart
│   ├── client-frontend-flow.mmd # Client flowchart
│   ├── system-architecture.mmd  # Architecture diagram
│   └── user-journey.mmd         # User journey sequence
├── docker-compose.yml
└── README.md

Development

Running Tests

cd backend
pytest tests/

Code Quality

# Format code
black app/

# Lint
flake8 app/

# Type checking
mypy app/

Adding New Features

  1. Create feature branch
  2. Implement changes
  3. Add tests
  4. Update documentation
  5. Submit pull request

Performance

  • Detection Speed: ~100-200ms per frame (GPU)
  • API Response: <50ms (cached results)
  • Image Processing: ~500ms per frame
  • Concurrent Sessions: Supports multiple simultaneous sessions

Troubleshooting

Common Issues

Camera Connection Failed

  • Check that the camera URL is correct
  • Ensure the camera is on the same network
  • Verify the CORS settings on the camera

Detection Not Working

  • Verify YOLO model is loaded
  • Check confidence threshold settings
  • Ensure images are clear and well-lit

Empty Spaces Not Calculated

  • Verify calibration is configured
  • Check camera ID matches calibration
  • Ensure parking rows are properly marked

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/AmazingFeature)
  3. Commit changes (git commit -m 'Add AmazingFeature')
  4. Push to branch (git push origin feature/AmazingFeature)
  5. Open Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • YOLOv8 by Ultralytics
  • FastAPI framework
  • OpenCV community
  • DroidCam for camera streaming

Contact

For questions or support, please open an issue on GitHub.


Built with Python, FastAPI, and YOLOv8
