Automated red-object detection and 3D printer-mounted gripper control
Developed for the International Science and Engineering Fair (ISEF) 2022
This project implements a vision-based pick-and-place system that:
- Captures images from a camera
- Detects red objects in the scene
- Computes their positions
- Commands a 3D printer with a gripper attachment to move to the object, pick it up, and remove it from the build plate
The system was prototyped in Python for real-time detection and motion control.
The goal was to demonstrate how computer vision can be integrated with mechanical motion systems to automate object removal from a surface.
The system continuously reads frames from a connected USB/web camera.
Each frame is processed to detect red regions using color thresholding (typically HSV filtering with OpenCV).
Pixel coordinates are converted into real-world printer coordinates using scaling logic inside Pixeltogcodetranslator.py.
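The actual conversion lives in Pixeltogcodetranslator.py; a plausible linear mapping of the kind it performs is sketched below. The frame resolution, bed dimensions, and axis orientation here are assumptions for illustration, not the project's calibrated values:

```python
def pixel_to_printer(px, py, frame_w=640, frame_h=480,
                     bed_w_mm=220.0, bed_h_mm=220.0):
    """Map a pixel coordinate to printer-bed coordinates in millimeters.

    Assumes the camera frame covers the whole build plate and that the
    image y axis points opposite to the printer's Y axis (image origin
    is top-left, printer origin is front-left).
    """
    x_mm = px * (bed_w_mm / frame_w)
    y_mm = (frame_h - py) * (bed_h_mm / frame_h)
    return round(x_mm, 2), round(y_mm, 2)
```

With these defaults, the frame center (320, 240) maps to the middle of a 220 mm bed.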
The system sends motion commands (e.g., G-code or serial instructions) to:
- Move the printer head above the detected object
- Lower the gripper
- Close the gripper to grab the object
- Lift the object
- Move it away from the plate
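The steps above could be driven by a short G-code sender over the printer's serial port. This is a hedged sketch, not the project's actual command set: the Marlin-style M280 servo command, the coordinates, and the feed rate are assumptions that depend on the firmware and gripper wiring:

```python
import time

def send_gcode(port, lines, baud=115200):
    """Send G-code lines and wait for the firmware's 'ok' after each one."""
    import serial  # pyserial; imported lazily so the module loads without it
    with serial.Serial(port, baud, timeout=5) as ser:
        time.sleep(2)  # many printer boards reset when the port opens
        for line in lines:
            ser.write((line + "\n").encode("ascii"))
            while True:
                reply = ser.readline().decode("ascii", errors="ignore").strip()
                if reply.startswith("ok"):
                    break

# Illustrative pick-up sequence (coordinates and servo angle are made up).
pickup_sequence = [
    "G90",                      # absolute positioning
    "G0 X110 Y110 Z40 F3000",   # move above the detected object
    "G0 Z5",                    # lower the gripper
    "M280 P0 S30",              # close the gripper servo (Marlin M280)
    "G0 Z40",                   # lift the object
    "G0 X0 Y200",               # carry it off the plate
]
```

Waiting for `ok` after every line keeps the host from overrunning the firmware's command buffer.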
| File | Purpose |
|---|---|
| everything.py | Entry point for running the detection and control loop |
| read my camera.py | Handles camera feed acquisition |
| Pixeltogcodetranslator.py | Converts pixel positions to printer motion coordinates |
| servotest.py | Tests servo/gripper actuation |
- Python 3.x
- OpenCV (pip install opencv-python)
- A USB or integrated camera
- A 3D printer or motion platform with serial control
- A servo-controlled gripper
- Connect and verify your camera: python "read my camera.py"
- Adjust conversion parameters in Pixeltogcodetranslator.py
- Test servo actuation: python servotest.py
- Run the full system: python everything.py
The system will:
- Display the live camera feed
- Highlight detected red objects
- Send motion commands to the printer
Before full operation:
- Tune red color thresholds
- Ensure the build plate fills the camera frame
- Measure and calibrate pixel-to-millimeter scaling
- Validate motion commands safely before enabling full-speed movement
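Pixel-to-millimeter calibration can be done by placing an object of known size on the plate and measuring how many pixels it spans in the image. A minimal sketch, with made-up numbers:

```python
def mm_per_pixel(known_length_mm, measured_length_px):
    """Derive the pixel-to-millimeter scale factor from a reference object.

    Place an object of known size (e.g. a 50 mm calibration square) flat
    on the build plate, measure its length in pixels in the camera image,
    and divide. Repeat along both axes if the camera is not square-on.
    """
    return known_length_mm / measured_length_px

scale = mm_per_pixel(50.0, 145)  # e.g. a 50 mm square spanning 145 px
```

The result multiplies any detected pixel offset to give a distance in millimeters for the motion commands.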
This code was developed as a research demonstration project.
- It does not include advanced collision avoidance
- It does not implement industrial-grade safety checks
- Always test motion commands carefully before operating real hardware
This project was created for ISEF 2022 as a proof-of-concept system demonstrating integration between computer vision and robotic motion control for automated object removal.
Future improvements could include:
- Multi-color object detection
- Object classification
- Closed-loop position feedback
- ROS integration
- Improved motion planning and safety constraints
- Real-time object tracking