This is the code repository for the IEEE-RAL'26 paper "Super LiDAR Intensity for Robotic Perception"

- 2026-03-13: The code, model weights, and data have been released.
This repository has been tested on Ubuntu 20.04 + ROS Noetic.
- OpenCV 4.2
- Eigen3
- PCL 1.13.1
- FAST-LIO2
- livox_ros_driver2
- LibTorch 2.4.0 (cxx11-abi, shared)
- CUDA 12.4
- OpenMP
- DBoW3
- M-detector (used for dynamic-point filtering in the applications)
```shell
mkdir -p ~/super_ws/src
cd ~/super_ws/src
git clone https://github.com/IMRL/Super-LiDAR-Intensity.git
catkin build
source ~/super_ws/devel/setup.bash
```
Create the Conda environment:
```shell
conda env create -f SuperLidarIntensity.yaml -y
conda activate super
```
Generate intensity images from a ROS bag:
The Super package provides tools to convert LiDAR point clouds into panoramic/virtual-camera intensity images for training and evaluation. We also provide test_bag for intensity image generation.
```shell
roslaunch Super intensity_image.launch
```
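A panoramic intensity image is essentially a spherical projection of the point cloud, with each pixel storing the intensity of the point that lands there. The following is a rough, hypothetical sketch of such a projection; the actual projection model, image resolution, and vertical FOV used by the Super package may differ:

```python
import math

def project_panoramic(x, y, z, width=1024, height=64,
                      fov_up=15.0, fov_down=-15.0):
    """Map a 3D LiDAR point to (row, col) pixel coordinates on a
    panoramic (spherical) image. All parameters are illustrative only."""
    yaw = math.atan2(y, x)                    # horizontal angle in [-pi, pi]
    r = math.sqrt(x * x + y * y + z * z)
    pitch = math.asin(z / r)                  # vertical angle
    fov = math.radians(fov_up - fov_down)
    col = int((0.5 * (1.0 - yaw / math.pi)) * width) % width
    row = int((1.0 - (pitch - math.radians(fov_down)) / fov) * height)
    row = min(max(row, 0), height - 1)        # clamp points outside the FOV
    return row, col

# A point straight ahead on the sensor's horizontal plane maps to the
# middle row and middle column of the image.
print(project_panoramic(10.0, 0.0, 0.0))  # → (32, 512)
```

Writing each point's intensity into its (row, col) cell (keeping the nearest point on collisions) yields the panoramic intensity image.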
The dataset is available here.
Training and inference:
```shell
conda activate super
cd ~/super_ws/src/Super/scripts/panoramic_virtualCamera
# for training:
python main.py --config config_panoramic.example.yaml
# for inference:
python infer.py --config config_panoramic.example.yaml --view_type panoramic
```
The model weights for loop closure detection are available at super_panoramic.ts and orbvoc.dbow3. Please place them in the corresponding paths.
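Loop-closure detection with a vocabulary such as orbvoc.dbow3 boils down to comparing bag-of-words descriptors of intensity images and picking the best-scoring database frame. A toy, self-contained illustration of that scoring (DBoW3 itself uses a hierarchical vocabulary; the frame names and weights below are made up):

```python
def bow_score(v1, v2):
    """L1-based similarity between two BoW vectors (dicts mapping
    word id -> weight), in the style of DBoW scoring:
    s = 1 - 0.5 * || v1/|v1| - v2/|v2| ||_1, which lies in [0, 1]."""
    n1 = sum(abs(w) for w in v1.values()) or 1.0
    n2 = sum(abs(w) for w in v2.values()) or 1.0
    words = set(v1) | set(v2)
    l1 = sum(abs(v1.get(w, 0.0) / n1 - v2.get(w, 0.0) / n2) for w in words)
    return 1.0 - 0.5 * l1

query = {3: 0.5, 17: 0.3, 42: 0.2}            # BoW vector of current frame
database = {
    "frame_010": {3: 0.5, 17: 0.3, 42: 0.2},  # revisited place
    "frame_055": {3: 0.1, 99: 0.9},           # different place
}
best = max(database, key=lambda k: bow_score(query, database[k]))
print(best)  # → frame_010
```

A loop closure is accepted only when the best score clears a threshold (and typically a geometric verification step).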
LoopClosure detection for a single sequence:
The ROS bag for "LoopClosure detection for single sequence" is available here.
```shell
cd ~/super_ws
source devel/setup.zsh
roslaunch fast_lio mapping_avia.launch
roslaunch imaging_lidar_place_recognition run.launch
rosbag play --pause --clock example.bag
```
LoopClosure detection across different sequences (e.g., day and night):
The ROS bag for "LoopClosure detection for different sequences" is available here.
```shell
cd ~/super_ws
source devel/setup.zsh
roslaunch fast_lio mapping_avia.launch
roslaunch diff_imaging_lidar_place_recognition run.launch
rosbag play --pause --clock example.bag
```
Traffic Lane Detection:
The model weights for "Traffic Lane Detection" are available at super_virtual_camera.ts and lane.ts. Please place them in the corresponding paths.
The ROS bag for "Traffic Lane Detection" is available here.
```shell
cd ~/super_ws
source devel/setup.zsh
roslaunch fast_lio mapping_avia.launch
roslaunch lane_detect lane_detect.launch
# to filter moving objects, use the M-detector and compile lane_detect_static.cpp
rosbag play --pause --clock example.bag
```
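Conceptually, the static variant just discards points that a dynamic-object detector (such as M-detector) labels as moving before lane detection runs. A toy sketch of that filtering step, not the actual lane_detect_static.cpp logic:

```python
def keep_static(points, dynamic_mask):
    """Drop points flagged as dynamic by a per-point detector.
    points: iterable of (x, y, z, intensity) tuples;
    dynamic_mask: iterable of bools, True where the point belongs
    to a moving object."""
    return [p for p, is_dyn in zip(points, dynamic_mask) if not is_dyn]

scan = [(12.0, 0.5, -1.2, 35), (8.1, -3.0, -1.1, 90), (4.2, 2.2, -1.0, 12)]
mask = [False, True, False]           # second point lies on a moving car
static_scan = keep_static(scan, mask) # keeps the first and third points
```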
Collect a ROS bag:
We provide code for conveniently collecting static Livox data. Please refer to record_rosbag.cpp.
Part of the code references the implementations of imaging_lidar_place_recognition and LaneATT. We thank the authors for their awesome works!
```bibtex
@article{gao2026super,
  title={Super LiDAR Intensity for Robotic Perception},
  author={Gao, Wei and Zhang, Jie and Zhao, Mingle and Zhang, Zhiyuan and Kong, Shu and Ghaffari, Maani and Song, Dezhen and Xu, Chengzhong and Kong, Hui},
  journal={IEEE Robotics and Automation Letters},
  year={2026},
  publisher={IEEE}
}
```

