
ISEFCode

Automated red-object detection and 3D printer-mounted gripper control
Developed for the International Science and Engineering Fair (ISEF) 2022


Overview

This project implements a vision-based pick-and-place system that:

  1. Captures images from a camera
  2. Detects red objects in the scene
  3. Computes their positions
  4. Commands a 3D printer with a gripper attachment to move to the object, pick it up, and remove it from the build plate

The system was prototyped in Python for real-time detection and motion control.

The goal was to demonstrate how computer vision can be integrated with mechanical motion systems to automate object removal from a surface.


System Workflow

1. Image Capture

The system continuously reads frames from a connected USB/web camera.

2. Red Object Detection

Each frame is processed to detect red regions using color thresholding (typically HSV filtering with OpenCV).

3. Coordinate Translation

Pixel coordinates are converted into real-world printer coordinates using scaling logic inside Pixeltogcodetranslator.py.
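The scaling logic might look like the following sketch, which assumes the camera is axis-aligned and sees the full bed; the real conversion in Pixeltogcodetranslator.py may use different offsets and orientation:

```python
def pixel_to_printer(px, py, frame_w, frame_h, bed_w_mm, bed_h_mm):
    """Map a pixel position to printer bed coordinates in mm.

    Assumes the image origin corresponds to the bed's (0, 0) corner.
    """
    x_mm = px * bed_w_mm / frame_w
    # Image y grows downward, while printer y typically grows toward the back.
    y_mm = (frame_h - py) * bed_h_mm / frame_h
    return x_mm, y_mm
```

For example, the center of a 640x480 frame over a 220x220 mm bed maps to the center of the bed.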

4. Motion Execution

The system sends motion commands (e.g., G-code or serial instructions) to:

  • Move the printer head above the detected object
  • Lower the gripper
  • Close the gripper to grab the object
  • Lift the object
  • Move it away from the plate
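The steps above can be expressed as a G-code sequence. This is a hypothetical sketch, not the project's actual command set: the heights, feed rates, and drop-off point are made up, and the `M280` servo angles shown are Marlin-style and depend on the firmware and gripper geometry:

```python
def pick_sequence(x_mm, y_mm, hover_z=30.0, grab_z=5.0, drop_x=0.0, drop_y=0.0):
    """Build a G-code pick-and-remove sequence for one detected object."""
    return [
        f"G0 Z{hover_z} F3000",                   # lift to a safe height first
        f"G0 X{x_mm:.1f} Y{y_mm:.1f} F6000",      # travel above the object
        "M280 P0 S90",                            # open gripper (servo angle 90)
        f"G0 Z{grab_z} F1000",                    # lower onto the object
        "M280 P0 S10",                            # close gripper
        f"G0 Z{hover_z} F1000",                   # lift the object
        f"G0 X{drop_x:.1f} Y{drop_y:.1f} F6000",  # carry it off the plate
        "M280 P0 S90",                            # release
    ]
```

Each line would then be written to the printer's serial port (for instance with pyserial, newline-terminated, waiting for the firmware's `ok` acknowledgement between commands).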

Repository Structure

File                        Purpose
everything.py               Entry point for running the detection and control loop
read my camera.py           Handles camera feed acquisition
Pixeltogcodetranslator.py   Converts pixel positions to printer motion coordinates
servotest.py                Tests servo/gripper actuation

Requirements

  • Python 3.x
  • OpenCV (pip install opencv-python)
  • A USB or integrated camera
  • A 3D printer or motion platform with serial control
  • A servo-controlled gripper

Usage

  1. Connect and verify your camera:
    python "read my camera.py"
  2. Adjust conversion parameters in:
    Pixeltogcodetranslator.py
  3. Test servo actuation:
    python servotest.py
  4. Run the full system:
    python everything.py

The system will:

  • Display the live camera feed
  • Highlight detected red objects
  • Send motion commands to the printer

Calibration Notes

Before full operation:

  • Tune red color thresholds
  • Ensure the build plate fills the camera frame
  • Measure and calibrate pixel-to-millimeter scaling
  • Validate motion commands safely before enabling full-speed movement
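One common way to measure the pixel-to-millimeter scaling is to click two pixel positions a known physical distance apart (e.g. two bed corners) and divide. `mm_per_pixel` is a hypothetical helper for that calibration step, not code from this repository:

```python
def mm_per_pixel(p1_px, p2_px, known_mm):
    """Estimate scale from two pixel points a known distance apart."""
    dx = p2_px[0] - p1_px[0]
    dy = p2_px[1] - p1_px[1]
    return known_mm / (dx * dx + dy * dy) ** 0.5
```

The resulting scale feeds directly into the conversion parameters in Pixeltogcodetranslator.py.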

Safety Notice

This code was developed as a research demonstration project.

  • It does not include advanced collision avoidance
  • It does not implement industrial-grade safety checks
  • Always test motion commands carefully before operating real hardware

Project Background

This project was created for ISEF 2022 as a proof-of-concept system demonstrating integration between computer vision and robotic motion control for automated object removal.


Possible Extensions

Future improvements could include:

  • Multi-color object detection
  • Object classification
  • Closed-loop position feedback
  • ROS integration
  • Improved motion planning and safety constraints
  • Real-time object tracking
