OguzhanKirik/robocam

🗺️ Mapping & Object Detection with RoboCam (ROS 2)

This guide walks you through how to map your environment using the RoboCam robot and ROS 2, and how to detect and visualize objects in 3D space using a depth camera and a clustering algorithm.


⚙️ Prerequisites: SLAM Toolbox, Nav2, and Clustering Dependencies

Before you begin, make sure all necessary packages are installed.

🛠️ Install Required Packages

sudo apt update
sudo apt install ros-humble-slam-toolbox
sudo apt install ros-humble-navigation2 ros-humble-nav2-bringup
sudo apt install ros-humble-pcl-conversions ros-humble-pcl-msgs libpcl-dev

Make sure your custom package (e.g., sensor_handler) declares the following dependencies:

  • rclcpp
  • sensor_msgs
  • pcl_conversions
  • visualization_msgs
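
In a ROS 2 package these dependencies are declared in package.xml (and linked in CMakeLists.txt). A hypothetical excerpt for sensor_handler might look like this:

```xml
<!-- Hypothetical excerpt from sensor_handler/package.xml -->
<depend>rclcpp</depend>
<depend>sensor_msgs</depend>
<depend>pcl_conversions</depend>
<depend>visualization_msgs</depend>
```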

📦 1. Launch the Robot and Navigation Stack

Start the simulated or physical RoboCam robot along with the Nav2 navigation system:

ros2 launch robocam one_robot_ign_launch.py

Optional: Customize navigation behavior in:

robocam_navigation/config/nav2.yaml
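
Nav2 parameters are grouped per server under a ros__parameters key. The fragment below is illustrative only; the parameter names follow the standard Nav2 schema, but the values are placeholders, not the repository's actual settings:

```yaml
# Illustrative fragment only -- see robocam_navigation/config/nav2.yaml
# for the values RoboCam actually uses.
controller_server:
  ros__parameters:
    controller_frequency: 20.0   # how often velocity commands are computed (Hz)
planner_server:
  ros__parameters:
    expected_planner_frequency: 20.0
```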

🧐 2. Launch SLAM Toolbox for Mapping

Use SLAM Toolbox to build the map in real-time:

ros2 launch robocam_navigation slamtoolbox_launch.py

Edit this launch file if you need to change SLAM configuration:

robocam_navigation/launch/slamtoolbox_launch.py

📷 3. Start Depth Camera Clustering

Launch your custom clustering node to detect objects using the depth camera's point cloud data:

ros2 run sensor_handler depth_camera_node --ros-args -p publish_markers:=true

Features:

  • Subscribes to /r1/depth_camera/points
  • Filters and downsamples point cloud
  • Performs Euclidean clustering using PCL
  • Publishes detected objects as markers (/cluster_marker_array)
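
The clustering step itself can be sketched in a few lines of plain Python. This is an illustrative stand-in for PCL's EuclideanClusterExtraction, not the node's actual C++ implementation; the function name and defaults here are hypothetical:

```python
# Pure-Python sketch of Euclidean clustering -- an illustrative stand-in
# for pcl::EuclideanClusterExtraction, NOT the node's actual C++ code.
import math

def euclidean_cluster(points, tolerance=0.3, min_size=2, max_size=1000):
    """Group points whose neighbours lie within `tolerance` of each other."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        queue = [unvisited.pop()]          # grow a cluster from any seed point
        members = list(queue)
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= tolerance]
            for j in near:
                unvisited.discard(j)
            queue.extend(near)
            members.extend(near)
        if min_size <= len(members) <= max_size:   # drop noise / huge blobs
            clusters.append([points[i] for i in members])
    return clusters

# Two well-separated groups of points -> two clusters
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0),
       (5.0, 5.0, 0.0), (5.1, 5.0, 0.0)]
print(len(euclidean_cluster(pts)))  # 2
```

Raising min_size in this sketch discards the smaller group, which is exactly how MinClusterSize suppresses sensor noise in the real node.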

You can disable visualization using:

--ros-args -p publish_markers:=false

🎯 4. Drive the Robot

Use keyboard teleop to drive the robot and let SLAM + clustering work together:

ros2 run robocam omni_teleop_keyboard.py

As you drive around, a 2D map is built and 3D clusters are detected and shown in RViz.


💾 5. Save the Generated Map

Once you're done exploring:

ros2 run nav2_map_server map_saver_cli -f ~/my_map

This saves my_map.yaml (map metadata) and my_map.pgm or .png (the occupancy image) in your home directory.
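
The generated my_map.yaml follows the standard map_server format. A typical file looks like the one below; the resolution and origin are example values, not your actual output:

```yaml
image: my_map.pgm            # the occupancy image saved alongside this file
resolution: 0.05             # metres per pixel
origin: [-10.0, -10.0, 0.0]  # pose of the lower-left pixel in the map frame
negate: 0
occupied_thresh: 0.65        # cells above this probability count as occupied
free_thresh: 0.25            # cells below this probability count as free
```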


⚙️ Optional: Adjust Clustering Parameters

You can tune the behavior of the clustering node in depth_camera_node.cpp:

  • ClusterTolerance: maximum distance between neighboring points in the same cluster (e.g. 0.3)
  • MinClusterSize: discard clusters smaller than this to ignore noise (e.g. 10)
  • MaxClusterSize: discard clusters larger than this (e.g. 1000)
  • VoxelGrid leaf size: voxel edge length used when downsampling; larger values are faster but coarser (e.g. 0.1f)
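
To build intuition for the leaf size, here is a pure-Python sketch of what a voxel grid filter does (an illustrative stand-in for pcl::VoxelGrid; the helper name is hypothetical): each occupied cube of side leaf_size is replaced by the centroid of the points inside it.

```python
# Pure-Python sketch of VoxelGrid downsampling -- an illustrative
# stand-in for pcl::VoxelGrid, NOT the node's actual C++ code.
from collections import defaultdict

def voxel_downsample(points, leaf_size=0.1):
    voxels = defaultdict(list)
    for p in points:
        key = tuple(int(c // leaf_size) for c in p)  # which cube p falls in
        voxels[key].append(p)
    # one representative point (the centroid) per occupied voxel
    return [tuple(sum(c) / len(group) for c in zip(*group))
            for group in voxels.values()]

# Four points spread over two distinct 0.1 m cubes collapse to two centroids
pts = [(0.01, 0.01, 0.0), (0.02, 0.03, 0.0),
       (0.95, 0.00, 0.0), (0.96, 0.01, 0.0)]
print(len(voxel_downsample(pts, leaf_size=0.1)))  # 2
```

Increasing leaf_size merges more points into each cube, so the clustering step sees fewer points and runs faster, at the cost of spatial detail.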

✅ Summary

Step Action
1️⃣ Launch RoboCam and Nav2
2️⃣ Start SLAM Toolbox
3️⃣ Launch depth clustering node
4️⃣ Drive the robot with keyboard
5️⃣ Save the map

📌 Notes

  • Clustering performance depends on sensor quality and environment structure.
  • Markers are published using visualization_msgs/MarkerArray on /cluster_marker_array.
  • RViz setup: Add a MarkerArray display and point it to /cluster_marker_array.

Enjoy building maps and detecting objects in real-time with RoboCam and ROS 2! 🚀
