Smart Count: Efficient and Affordable Object Counting

Smart Count is a low-cost, real-time object counting system built on an embedded platform using sensor fusion. It combines camera-based object detection with depth sensing from a Time-of-Flight (ToF) sensor to improve counting accuracy, especially in occlusion-heavy or stacked scenarios. Designed with affordability and portability in mind, the system runs on a Raspberry Pi Zero 2 W and an ESP32 microcontroller.

🔍 Project Highlights

  • Sensor Fusion of camera and VL53L5CX ToF sensor
  • Real-time object detection with MobileNetV2-SSD (TensorFlow Lite)
  • Accurate depth sensing using an 8×8 low-resolution grid
  • Adaptive object height calibration
  • Calibration via perspective transform
  • Simple and responsive web interface (Flask + Socket.IO)
  • MQTT-based lightweight communication between components

📦 Hardware Requirements

Component              Role
---------------------  -----------------------------------------------------
Raspberry Pi Zero 2 W  Main processing unit: runs detection, fusion, web UI
ESP32                  Collects and transmits ToF depth data via MQTT
VL53L5CX ToF Sensor    8×8 multizone depth sensing
USB Webcam             Captures side-view image frames
Wi-Fi Access Point     For MQTT communication between devices

🧠 Software Stack

  • Language: Python (Raspberry Pi) & C++ (ESP32)
  • Libraries (Pi): OpenCV, Flask, Flask-SocketIO, NumPy, SciPy, Paho-MQTT, TensorFlow Lite
  • Toolchain & libraries (ESP32): Arduino IDE with the SparkFun VL53L5CX Library, PubSubClient, and WiFi

🖥️ System Architecture

+------------------------+                  +----------------------------+
|   Raspberry Pi Zero    |                  |          ESP32             |
|                        |                  |                            |
|  - Object Detection    | <----- MQTT ---- |  - VL53L5CX ToF Sensor     |
|  - Sensor Fusion       |                  |  - JSON Data Publisher     |
|  - Web Interface       |                  +----------------------------+
|  - Centroid Tracking   |
+------------------------+
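
The MQTT link in the diagram carries the 8×8 depth frames from the ESP32 to the Pi. A minimal Pi-side subscriber is sketched below; the topic name smartcount/depth and the JSON payload layout (a flat list of 64 distances in millimetres) are illustrative assumptions, not necessarily what counting.py uses.

# depth_subscriber.py -- minimal sketch of the Pi-side MQTT consumer.
# Assumed topic name and payload layout; adjust to match the ESP32 firmware.
# Callbacks use the paho-mqtt 1.x signature; adapt if you installed 2.x.
import json
import numpy as np
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.1.10"          # hypothetical broker address
DEPTH_TOPIC = "smartcount/depth"    # hypothetical topic name

def on_connect(client, userdata, flags, rc):
    print("Connected to broker, rc =", rc)
    client.subscribe(DEPTH_TOPIC)

def on_message(client, userdata, msg):
    # Assumed payload: {"distances": [64 values in mm]}
    payload = json.loads(msg.payload)
    grid = np.array(payload["distances"], dtype=float).reshape(8, 8)
    print("Latest 8x8 depth frame (mm):")
    print(grid.astype(int))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_IP, 1883, keepalive=60)
client.loop_forever()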

⚙️ Installation & Setup

🧰 1. Raspberry Pi Setup

sudo apt update && sudo apt upgrade
sudo apt install python3-opencv python3-pip
pip3 install flask flask-socketio paho-mqtt numpy scipy
  • Install the TensorFlow Lite runtime:
pip3 install tflite-runtime
  • Place the TFLite model and COCO label file in the model/ folder:
model/
  ├── mobilenet_ssd_v2_coco_quant_postprocess.tflite
  └── coco_labels.txt
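
For reference, a single detection pass with this model might look like the sketch below. The 300×300 input size, the 0.5 score threshold, and the output-tensor ordering follow the common layout of the quantized MobileNet-SSD COCO model, so verify them against counting.py before relying on them.

# detect_once.py -- single-frame detection sketch with the bundled TFLite model.
# Thresholds and tensor ordering are typical for this model, not taken from counting.py.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

MODEL = "model/mobilenet_ssd_v2_coco_quant_postprocess.tflite"
LABELS = "model/coco_labels.txt"

# Note: depending on the labels file format, entries may carry an id prefix and need parsing.
labels = [line.strip() for line in open(LABELS)]
interpreter = Interpreter(model_path=MODEL)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

cap = cv2.VideoCapture(0)              # USB webcam
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the webcam")

# The quantized SSD model expects a 300x300 uint8 RGB image.
h, w = inp["shape"][1], inp["shape"][2]
resized = cv2.cvtColor(cv2.resize(frame, (w, h)), cv2.COLOR_BGR2RGB)
interpreter.set_tensor(inp["index"], np.expand_dims(resized, axis=0))
interpreter.invoke()

# Postprocessed SSD outputs (typical order): boxes, class ids, scores.
boxes = interpreter.get_tensor(out[0]["index"])[0]
classes = interpreter.get_tensor(out[1]["index"])[0]
scores = interpreter.get_tensor(out[2]["index"])[0]

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print(f"class {int(cls)} ({labels[int(cls)]}): {score:.2f}, box={box}")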

📡 2. ESP32 Firmware

  • Use the Arduino IDE
  • Required libraries:
    • Wire.h, WiFi.h, PubSubClient, SparkFun_VL53L5CX_Library
  • Configure your Wi-Fi credentials and MQTT broker IP in the code
  • Upload the firmware to the ESP32
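
If you want to exercise the Pi-side pipeline before the ESP32 is flashed, a small Python script can stand in for the sensor by publishing synthetic 8×8 frames. The topic name and JSON layout below are assumptions and must match whatever the firmware actually publishes.

# fake_esp32.py -- publish synthetic 8x8 depth frames to emulate the ESP32.
# Topic name and payload layout are assumptions; match them to the firmware.
import json
import time
import random
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.1.10"          # hypothetical broker address
DEPTH_TOPIC = "smartcount/depth"    # hypothetical topic name

client = mqtt.Client()               # paho-mqtt 1.x constructor; adapt for 2.x
client.connect(BROKER_IP, 1883, keepalive=60)
client.loop_start()

try:
    while True:
        # 64 distances in mm: mostly "empty surface" (~500 mm) plus one fake object.
        distances = [500 + random.randint(-5, 5) for _ in range(64)]
        for zone in (27, 28, 35, 36):        # a 2x2 blob near the grid centre
            distances[zone] = 430
        client.publish(DEPTH_TOPIC, json.dumps({"distances": distances}))
        time.sleep(0.1)                      # roughly the VL53L5CX 8x8 ranging rate
finally:
    client.loop_stop()
    client.disconnect()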

🎯 3. Calibration

Run calibrate.py on the Raspberry Pi:

python3 calibrate.py

This will:

  • Guide you to click the four sensor corner markers on the live camera feed
  • Save calibration_matrix.npy for the perspective transform
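
Once calibration_matrix.npy exists, applying the perspective transform amounts to mapping the centre of each ToF zone into camera pixel coordinates. The sketch below illustrates the idea with cv2.perspectiveTransform; the direction of the mapping (sensor grid to image plane) is assumed, so check calibrate.py for the convention actually used.

# apply_calibration.py -- map 8x8 ToF zone centres into camera coordinates.
# Assumes the matrix maps sensor-grid coordinates to image pixels; verify in calibrate.py.
import cv2
import numpy as np

M = np.load("calibration_matrix.npy").astype(np.float32)   # 3x3 homography

# Zone centres in "grid space": (col + 0.5, row + 0.5) for an 8x8 array.
cols, rows = np.meshgrid(np.arange(8) + 0.5, np.arange(8) + 0.5)
zone_centres = np.stack([cols.ravel(), rows.ravel()], axis=1).astype(np.float32)

# cv2.perspectiveTransform expects input of shape (N, 1, 2).
pixels = cv2.perspectiveTransform(zone_centres.reshape(-1, 1, 2), M).reshape(-1, 2)

for idx, (x, y) in enumerate(pixels[:4]):
    print(f"zone {idx}: image pixel ({x:.1f}, {y:.1f})")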

▶️ 4. Start Counting!

Run the main script:

python3 counting.py

Access the live web interface at:

http://<your-pi-ip>:5000

🌐 Web Interface

  • Live camera view with bounding boxes
  • Real-time count from camera, sensor, and fused result
  • 8×8 heatmap of depth sensor
  • FPS and BASE_DEPTH indicators
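
The counts reach the browser over Socket.IO. A stripped-down version of that server side is sketched below; the count_update event name and the payload fields are illustrative, not the exact names used in counting.py.

# web_sketch.py -- minimal Flask + Flask-SocketIO server pushing count updates.
# Event name and payload fields are illustrative, not taken from counting.py.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@app.route("/")
def index():
    return "<h1>Smart Count (sketch)</h1><p>Connect a Socket.IO client for live counts.</p>"

def push_counts():
    # In the real system these values come from detection + sensor fusion.
    n = 0
    while True:
        socketio.emit("count_update", {"camera": n, "sensor": n, "fused": n, "fps": 15.0})
        n += 1
        socketio.sleep(1.0)

if __name__ == "__main__":
    socketio.start_background_task(push_counts)
    # Newer Flask-SocketIO versions may require allow_unsafe_werkzeug=True
    # when running on the Werkzeug development server.
    socketio.run(app, host="0.0.0.0", port=5000)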

🧪 Experimental Results

Scenario                 Camera-Only Accuracy   Fused System Accuracy
Well-Separated Objects   92.8%                  100%
Partial Occlusion        ~70–80%                100%
Stacked Objects          ~80%                   90–100% (with height estimation)

📁 Project Structure

Smart-Count/
│
├── calibrate.py               # Manual sensor-camera calibration
├── counting.py                # Main fusion + UI + detection logic
├── model/
│   ├── mobilenet_ssd_v2...    # TFLite model
│   └── coco_labels.txt
├── calibration_matrix.npy     # Generated by calibrate.py
├── README.md
└── ...

🧠 Key Algorithms

  • Centroid Tracking for temporal stability
  • Perspective Transform for ToF-to-Camera spatial alignment
  • Dynamic Object Height Threshold from depth feedback
  • Sensor-only Object Counting using average cluster height
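
As a reference point for the first item, a bare-bones centroid tracker can be written with SciPy's pairwise distance matrix. This is a generic formulation of the technique, not the exact tracker implemented in counting.py (in particular, disappearance handling is omitted).

# centroid_tracker.py -- minimal nearest-centroid tracker (generic technique sketch).
import numpy as np
from scipy.spatial.distance import cdist

class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}            # object_id -> (x, y) centroid
        self.max_distance = max_distance

    def update(self, detections):
        """detections: list of (x, y) centroids from the current frame."""
        detections = np.array(detections, dtype=float).reshape(-1, 2)
        if not self.objects:                      # nothing tracked yet: register all
            for c in detections:
                self.objects[self.next_id] = tuple(c)
                self.next_id += 1
            return self.objects

        ids = list(self.objects.keys())
        existing = np.array([self.objects[i] for i in ids])
        d = cdist(existing, detections)           # pairwise distances, shape (T, D)

        # Greedily match the closest (tracked, detected) pairs first.
        used_rows, used_cols = set(), set()
        rows, cols = np.unravel_index(np.argsort(d, axis=None), d.shape)
        for row, col in zip(rows, cols):
            if row in used_rows or col in used_cols or d[row, col] > self.max_distance:
                continue
            self.objects[ids[row]] = tuple(detections[col])
            used_rows.add(row)
            used_cols.add(col)

        # Any unmatched detection becomes a new object.
        for col in range(len(detections)):
            if col not in used_cols:
                self.objects[self.next_id] = tuple(detections[col])
                self.next_id += 1
        return self.objects

# Example: track two objects across two frames.
tracker = CentroidTracker(max_distance=80)
print(tracker.update([(100, 120), (300, 140)]))   # IDs 0 and 1 are created
print(tracker.update([(104, 118), (305, 150)]))   # same IDs, updated positions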

👨‍💻 Authors

  • Mert Mermer @Winnerxl
  • Kerem Çelik
  • Yusuf Doğan Çiçek

Graduation project @ Eskişehir Technical University, 2025
Supervisor: Assist. Prof. Dr. Altan Onat

📬 Contact

For questions or collaborations, reach out via GitHub or LinkedIn.
