This repository contains all the resources for the rover project started as part of IROC-2024.
- We have attempted to build a lunar exploration rover that combines LiDAR and stereo vision with a sequential operating strategy for accurate mapping of the terrain.
- Radio communication will be used for manual control, while autonomous commanding will be handled by the Jetson on-board computer using pre-trained ML models.
- We have also included high-torque DC motors for improved stability, and a dedicated camera on the arm to ensure precision while picking samples.
The diagram below shows the control flow of the rover.
- The primary processing unit (Jetson) is responsible for spatial data collection and, using stereo-vision computations, will map out the terrain.
- The generated map will be used to send instructions to the wheel motors through the roving motor actuation module, allowing the rover to navigate the terrain.
- The Jetson will also be connected over WiFi to the Control Centre, from which the team can access and control the rover and manipulator actions.
- Individual components (e.g. the arm-mounted camera and motors, and the ultrasonic and force sensors) will be controlled via the secondary processing unit (presumably an Arduino Mega), mainly to reduce the load on the Jetson. This secondary processing unit is entirely responsible for the arm and gripper operations; the ultrasonic and force sensors ensure that the gripper collects the target object with precision.
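To make the Jetson/secondary-unit split concrete, here is a minimal sketch of how commands could be framed for the serial link between the two processors. The frame layout, device IDs, and function names are illustrative assumptions, not the project's actual protocol.

```python
# Hypothetical command frame: [0xAA, device, action, value_hi, value_lo, checksum].
# All constants here are assumptions for illustration only.

def encode_command(device_id: int, action: int, value: int) -> bytes:
    """Pack a command for the secondary processing unit into a 6-byte frame."""
    value_hi, value_lo = (value >> 8) & 0xFF, value & 0xFF
    payload = [device_id, action, value_hi, value_lo]
    checksum = sum(payload) & 0xFF  # simple additive checksum
    return bytes([0xAA] + payload + [checksum])

def decode_command(frame: bytes):
    """Validate a frame and return (device_id, action, value), or None if invalid."""
    if len(frame) != 6 or frame[0] != 0xAA:
        return None
    if sum(frame[1:5]) & 0xFF != frame[5]:
        return None  # checksum mismatch
    return frame[1], frame[2], (frame[3] << 8) | frame[4]
```

On the Arduino side the same frame would be parsed byte by byte from the serial buffer; keeping the frame fixed-length makes that loop trivial.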
❌ The primary processing unit (Jetson), based on the stereo-vision computations, will map out the local environment and implement the Simultaneous Localization and Mapping (SLAM) algorithm. Heuristic path-finding and re-planning on encountering obstacles will be performed repeatedly as the rover autonomously navigates the terrain. The system will also be trained to detect hurdles in the path and will autonomously decide whether or not to maneuver past them.
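The heuristic path-finding step above can be sketched as A* over an occupancy grid. The 4-connected grid and Manhattan heuristic are simplifying assumptions; the real planner would run on the SLAM-generated map.

```python
# Minimal A* planner over a 2D occupancy grid (0 = free, 1 = obstacle).
# Illustrative sketch only; grid model and connectivity are assumptions.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    # Manhattan distance: admissible heuristic on a 4-connected grid
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (ng + h, (nr, nc)))
    return None  # no path found: the rover would re-plan or halt
```

Re-planning on encountering an obstacle amounts to marking the newly observed cells as occupied and calling `astar` again from the rover's current cell.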
✅ The pre-trained ML model will identify the required pickup object (in this case, a cylinder). Using the camera attached to the arm, the rover estimates the position of the target and performs pickup operations as necessary (mostly done, except for the position estimation).
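One way to fill in the missing position-estimation step: since the cylinder's physical size is known, a pinhole-camera model gives its distance from the width of the detected bounding box, and its lateral offset from the box's pixel position. The focal length and cylinder width below are made-up example values, not calibrated parameters.

```python
# Pinhole-model target localization from a single arm-camera detection.
# focal_px, real_width_m, and pixel values are illustrative assumptions.

def estimate_distance(focal_px: float, real_width_m: float, bbox_width_px: float) -> float:
    """Distance to target along the optical axis: Z = f * W / w."""
    return focal_px * real_width_m / bbox_width_px

def estimate_offset(focal_px: float, distance_m: float,
                    bbox_center_x_px: float, image_center_x_px: float) -> float:
    """Lateral offset of the target from the optical axis, in metres."""
    return (bbox_center_x_px - image_center_x_px) * distance_m / focal_px
```

The resulting (distance, offset) pair is what the arm controller would feed into its inverse kinematics to position the gripper.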
✅ Individual software-based control modules have been built for the different mechanical parts of the rover. However, they are yet to be integrated with rover navigation.
✅ Inverse-kinematics-based arm control has also been implemented and tested. However, gripper control and pickup are yet to be implemented. The arm control flowchart is given below.
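For reference, the core of a two-link planar inverse-kinematics solver looks like the sketch below. The link lengths are placeholders, and this covers only one of the two elbow configurations; the arm's actual geometry and joint limits live in the repo's `inverse kinematics` code.

```python
# Two-link planar IK sketch (law of cosines). Link lengths l1, l2 and the
# chosen elbow configuration are illustrative assumptions.
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) joint angles in radians for target (x, y), or None."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target out of reach
    theta2 = math.acos(c2)  # one of the two elbow solutions
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A quick forward-kinematics check (summing the link vectors) is the usual way to validate the solver before driving real servos.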
❌ The proposed web integration module will act as a dashboard from which the team can manually view and control the rover and arm movements.
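Since the dashboard is not yet implemented, here is a stdlib-only sketch of how the rover side could expose manual commands over HTTP. The paths and the command set are illustrative assumptions, not a finalized interface.

```python
# Minimal HTTP command endpoint sketch using only the standard library.
# COMMANDS and the URL scheme are assumptions for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

COMMANDS = {"/forward", "/stop", "/arm/home"}

def handle_command(path: str):
    """Map a request path to an (HTTP status, response body) pair."""
    if path in COMMANDS:
        # In the real rover this would forward the command to the
        # motor / arm control modules.
        return 200, "ok: " + path.lstrip("/")
    return 404, "unknown command"

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = handle_command(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Example: HTTPServer(("0.0.0.0", 8000), DashboardHandler).serve_forever()
    pass
```

Keeping `handle_command` separate from the request handler makes the command routing testable without starting a server.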
rover
│
└───Design Models
│ (Mechanical Design Documentation)
│
└───ObjectDetect
│ │ cylinder.py (Working code)
│ │
│ └───Dataset (Images used for training)
│ └───runs
│ │
│ └───train2 (Working ML model)
│
└───Sensor Integration
│ │ (Arduino-based code to control the mechanics of the rover)
│ │
│ └───inverse kinematics
│ │ ...