This repository contains a Unity-based robot simulation integrated with a ROS backend. It demonstrates distributed control, sensor communication, and real-time interaction between Unity and ROS.
Follow the steps below to set up the project on your local machine (this guide is written specifically for Windows and Ubuntu 20.04).
```shell
git clone https://github.com/imorange/Mobile-Robot-Project.git
cd Mobile-Robot-Project
```
- Read the documentation and install the drivers (OpenHaptics for Windows, Developer Edition v3.5)
- Run Touch Smart Setup and initialise the haptic device
- Open Unity Hub
- Click Add → Add Existing Project
- Select the folder `Mobile-Robot-Project/MyUnityProject`
- Open it with Unity 2021.3+ (or your required version)
An Ubuntu install of ROS Noetic is essential. Follow the steps in the link if it has not been configured on your machine.
```shell
# Clone the GitHub repository into your catkin workspace
cd ~/catkin_ws/src
git clone https://github.com/RcO2Rob/Dis-Project.git
```

```shell
# Restructure the directory
cd ~/catkin_ws/src
mv Dis-Project/myproject .
```
```shell
# Modify the move_base launch file
roscd turtlebot3_navigation/launch/
sudo vim move_base.launch
# Edit line 4 to: <arg name="cmd_vel_topic" default="/nav_vel" />
```
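The same edit can be made with a `sed` one-liner instead of opening vim. The sketch below demonstrates it on a throwaway copy so it can be tried safely; it assumes the `cmd_vel_topic` argument currently defaults to `/cmd_vel`, so inspect the real file first:

```shell
# Demonstrated on a temporary file; point sed at the real move_base.launch
# in turtlebot3_navigation/launch/ once the pattern is confirmed.
printf '<arg name="cmd_vel_topic" default="/cmd_vel" />\n' > /tmp/demo_move_base.launch
sed -i 's|default="/cmd_vel"|default="/nav_vel"|' /tmp/demo_move_base.launch
cat /tmp/demo_move_base.launch
# → <arg name="cmd_vel_topic" default="/nav_vel" />
```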
```shell
# Build the catkin workspace
cd ~/catkin_ws
catkin_make
source devel/setup.bash
```

```shell
# Install rosbridge_server in WSL
sudo apt update
sudo apt install ros-noetic-rosbridge-server
```
```shell
# Find the IP address of WSL
hostname -I
```

TurtleBot3 packages are essential. Follow the steps in the link if they have not been configured on your machine.
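Note that `hostname -I` can print several addresses; the first is usually the one Unity on the Windows side should target. A small sketch (port 9090 is rosbridge's default websocket port; adjust if your launch file overrides it):

```shell
# Take the first address reported by hostname -I and build the websocket URL
# that Unity's rosbridge connection settings should point at.
WSL_IP=$(hostname -I | awk '{print $1}')
echo "ws://${WSL_IP}:9090"
```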
```shell
# Move the map files to the Ubuntu home directory
cd ~/catkin_ws/src/maps
mv room2.yaml ~/
mv room2.pgm ~/
```

```shell
# Move all shell scripts to catkin_ws, then launch each shell script in a separate terminal
./run_experiment.sh
```

Scenes: Main Task A, B, and C differ in the location of the target objects. The four conditions (modes) differ only in the cues presented.
- Condition A has no cues.
- Condition B has a haptic guidance force.
- Condition C has a direction indicator line, an autonomy level indicator bar, and the haptic guidance force.
- Condition D combines the cues from B and C, plus a mini-map, collision detection lines, and an anti-collision braking cue.

