The Multi-Purpose Cleaning Robot is built on YOLOv7 and embedded into an autonomous robot chassis; it can navigate automatically, identify different types of rubbish and reusable resources, and collect them.
The deep learning model pcryolo.pt was trained on YOLOv7 and can identify five classes: "coin", "paper", "plastic", "can" and "water" (see the confusion matrix in Fig. 1).
Fig. 1: The confusion matrix
To access the YOLOv7 pretrained model and set up the environment, go to:
https://github.com/WongKinYiu/yolov7
The labelled training and validation datasets are available at:
Train: https://drive.google.com/drive/folders/1bB5pMYbI_IrGh3C9LoxWuooe1P4-QxEg?usp=sharing
Validation: https://drive.google.com/drive/folders/1Cp0gC1_IgugWHFTNZ-MxYV5n6-BozK3c?usp=sharing
To adapt the repository to PCRyolo, replace detect.py with our provided file, which integrates the webcam stream and the communication module, then place pcryolo.pt in the repository root.
Step 1:
Upload robot.c to the Arduino board.
Step 2:
On the Raspberry Pi, run:
python3 robot.py
to start the streaming.
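robot.py's streaming side is not reproduced here. As a minimal sketch of what serving stream.mjpg involves, an MJPEG-over-HTTP stream wraps each JPEG frame in a multipart/x-mixed-replace chunk; the boundary name, port, and frame source below are assumptions:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

BOUNDARY = b"FRAME"  # assumed boundary name

def mjpeg_chunk(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame in multipart/x-mixed-replace framing."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode()
            + b"\r\n\r\n" + jpeg_bytes + b"\r\n")

class StreamHandler(BaseHTTPRequestHandler):
    frames = iter(())  # replace at runtime with a camera frame source

    def do_GET(self):
        if self.path != "/stream.mjpg":
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=FRAME")
        self.end_headers()
        for frame in self.frames:  # each item: raw JPEG bytes
            self.wfile.write(mjpeg_chunk(frame))

# HTTPServer(("", 8000), StreamHandler).serve_forever()  # port as in Step 3
```

On a Raspberry Pi the frame source would typically come from the camera library (e.g. picamera's MJPEG recording recipe), which is outside this sketch.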
Step 3:
On the server (which can be your laptop), run the following from the root folder of YOLOv7:
python3 detect.py --source http://192.168.43.37:8000/stream.mjpg --weights pcryolo.pt
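On the consumer side, detect.py uses its own stream reader; as an illustrative sketch of how JPEG frames can be recovered from raw MJPEG bytes, one can scan for the JPEG start/end markers (this naive marker scan is an assumption for illustration, not the project's actual reader, and can misfire if 0xFFD9 appears inside compressed data):

```python
def extract_jpeg_frames(buffer: bytes):
    """Return complete JPEG frames (SOI 0xFFD8 .. EOI 0xFFD9) found in raw bytes."""
    frames = []
    start = buffer.find(b"\xff\xd8")          # JPEG start-of-image marker
    while start != -1:
        end = buffer.find(b"\xff\xd9", start + 2)  # end-of-image marker
        if end == -1:
            break                              # incomplete frame: wait for more data
        frames.append(buffer[start:end + 2])
        start = buffer.find(b"\xff\xd8", end + 2)
    return frames
```

This also doubles as a quick way to confirm the Pi's stream URL is delivering frames before pointing detect.py at it.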
