This is an official repo of CartoRadar for the following paper:
RF-Based 3D SLAM Rivaling Vision Approaches
Haowen Lai, Zhiwei Zheng, Mingmin Zhao
ACM International Conference on Mobile Computing and Networking (MobiCom), 2025
[Paper] [Website] [Demo Video] [Dataset] [BibTeX]
🏆 Best Artifact Award!
- GPU: Nvidia RTX 3090 to achieve a similar online SLAM performance (suggested), or GPUs whose memory >= 16 GB, driver version >= 520.61.05, CUDA version >= 11.8
- RAM: Memory >= 32 GB
- Disk: Available space >= 550 GB
- OS: Ubuntu >= 22.04 with Python >= 3.9
- Clone the codebase to your local machine
- Change the working directory to the root folder of the repo.
We provide two ways to set up the environment for running the code: Conda and Docker. For those who already have Docker on their machine, we suggest using Docker. If you run into problems with one method, you can switch to the other.
Using Docker Image
Please follow the instructions below to build the image. If you haven't added your user to the docker group (see here), you need sudo access to run the following docker-related commands.
# Detect GPU compute capability
export CUDA_ARCH=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | awk '{print $1 * 10}')
# build the image (might need sudo)
docker build --build-arg USERNAME=$(whoami) --build-arg USER_UID=$(id -u) --build-arg USER_GID=$(id -g) --build-arg CUDA_ARCH=${CUDA_ARCH} -t cartoradar -f docker/Dockerfile . # don't forget the ending dot
# Start a new container (might need sudo)
docker run -it --rm --gpus all -v .:/home/$(whoami)/CartoRadar -w ~/CartoRadar --shm-size=4096M cartoradar /bin/bash
After you start a new container, you can run our code in it. The environment setup is done.
Note: Make sure you have Nvidia driver version >= 520.61.05 and CUDA version >= 11.8 in your local machine, otherwise you won't be able to start the container.
Note (For advanced users): The machine that uses the docker image needs to have the same GPU compute capability as the machine that builds the image. If you want to build the image locally and use it on another machine (e.g., a remote server), you need to specify the compute capability of the target machine by running export CUDA_ARCH=<GPU compute capability of the target machine>, then re-build the image.
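For example, if the target machine has an Nvidia RTX 3090 (compute capability 8.6), the variable would be set as shown below; the value is only an illustration and should match your target GPU.
# Example: target machine has an RTX 3090 (compute capability 8.6 -> 86)
export CUDA_ARCH=86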
Using Conda Environment
Please follow the instructions below to create the conda environment. If you have any problems, feel free to open an issue or contact us via email.
# Create a new conda environment
conda env create -f environment.yml
conda activate cartoradar
pip install setuptools==58.2.0 cmake==3.25.0
sudo apt-get install libsuitesparse-dev
# Install g2opy
git clone https://github.com/uoip/g2opy.git \
&& mkdir g2opy/build \
&& cd g2opy/build \
&& cmake .. \
&& make -j8 \
&& cd .. \
&& python setup.py install \
&& cd ..
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
# Install tinycudann
export LDFLAGS="$LDFLAGS -L$CONDA_PREFIX/lib/stubs"
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
# Install other packages
pip install -r requirements.txt
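As an optional sanity check of the conda environment (our suggestion, not part of the official setup), you can verify that the GPU-dependent packages import correctly. This assumes the default import names of the packages installed above: torch, tinycudann, and g2o (from g2opy).
# Optional sanity check; assumes import names torch, tinycudann, and g2o
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
python -c "import tinycudann, g2o; print('tinycudann and g2opy imported')"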
Our dataset includes two parts: the uncertainty quantification dataset and the SLAM evaluation dataset. Our models also use pretrained weights from the PanoRadar dataset.
- Uncertainty quantification dataset: This dataset is organized by building. Each building folder contains three subfolders, i.e., glass_npy, lidar_npy, and rf_npy, which hold synchronized glass masks, lidar range images, and RF heatmaps. The dataset can be downloaded here under the uncertainty folder. Please download the data, uncompress it, and put it in the folder CartoRadar/Uncertainty/data/uncertainty/.
- Pretrain weights from the PanoRadar dataset: The PanoRadar dataset is a large-scale RF imaging dataset collected from diverse environments. We use the depth-only pretrain weights for our uncertainty quantification models. They can be downloaded here under the pretrain folder. Please download the data, uncompress it, and put it in the folder CartoRadar/Uncertainty/data/pretrain/.
- SLAM evaluation dataset: This dataset is organized by robot moving trajectories. Within each trajectory folder, there are ground truth range images from LiDAR (images-gt), our predicted range images from RF (images-pred), the predicted uncertainty from the OursH-16 method (uncertainty-mixed), the robot odometry, and the ground truth point cloud map of the environment (maps). The dataset can be downloaded here under the occnet folder. Please download the data, uncompress it, and put it in the folder CartoRadar/OccNet/data/.
Please make sure you put all the data in the right path. It should have a file structure like this:
CartoRadar
├── Uncertainty
│   ├── data
│   │   ├── uncertainty
│   │   │   ├── building1
│   │   │   ├── building2
│   │   │   ├── ...
│   │   │   └── building5
│   │   └── pretrain
│   │       ├── depth_building1_lobo_x2
│   │       ├── ...
│   │       └── laplace_building5_lobo_x2
│   └── ...
│
├── OccNet
│   ├── data
│   │   ├── building1-exp000
│   │   ├── building1-exp001
│   │   ├── ...
│   │   ├── building5-exp000
│   │   └── building5-exp001
│   └── ...
└── ...
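If you want to double-check the layout before running anything, here is a minimal sketch (our own illustration, not part of the released code) that lists the expected data folders and reports any that are missing. It only assumes the paths shown in the tree above and that you run it from the repo root.
# check_data_layout.py -- illustrative sketch, run from the CartoRadar repo root
from pathlib import Path

expected = [
    Path("Uncertainty/data/uncertainty"),
    Path("Uncertainty/data/pretrain"),
    Path("OccNet/data"),
]
for folder in expected:
    status = "OK" if folder.is_dir() else "MISSING"
    print(f"{status}: {folder}")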
Our perturbation- and sampling-based uncertainty quantification methods are applied to trained ML models to estimate the uncertainty of their predictions. We propose two types of approaches, denoted as Ours-# and OursH-# in the paper.
To start with, change the working directory to Uncertainty and follow the instructions below.
To train a model on a specific building, run the following commands. Make sure to change the working directory to Uncertainty.
python train_net.py --config-file configs/<config file>
Since we use the leave-one-building-out training strategy to ensure generalization, the above command only trains a model for one building. Please substitute <config file> with each of the following files (see the loop sketch after the table for running all of them). There are 10 models to be trained in total.
| Ours-# config files | OursH-# config files |
|---|---|
| pure_depth_building1.yaml | laplace_net_building1.yaml |
| pure_depth_building2.yaml | laplace_net_building2.yaml |
| pure_depth_building3.yaml | laplace_net_building3.yaml |
| pure_depth_building4.yaml | laplace_net_building4.yaml |
| pure_depth_building5.yaml | laplace_net_building5.yaml |
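If you prefer to launch all 10 trainings one after another, a simple loop like the sketch below works. It only assumes the config filenames listed in the table above and that the runs are executed sequentially from the Uncertainty folder.
# Illustrative sketch: train all 10 models sequentially (run from Uncertainty/)
for b in 1 2 3 4 5; do
    python train_net.py --config-file configs/pure_depth_building${b}.yaml
    python train_net.py --config-file configs/laplace_net_building${b}.yaml
done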
After training the above 10 models, we can evaluate the performance of our uncertainty quantification approaches.
- To evaluate Ours-#, run the following command:
  python scripts/eval_sampling_uncertainty.py
  After the script finishes, you can see the evaluation results printed in the terminal.
- To evaluate OursH-#, run the following command:
  python scripts/eval_laplacian_uncertainty.py
  After the script finishes, you can see the evaluation results printed in the terminal.
Our SLAM system has two versions: offline and online. The following section provides detailed instructions on how to run our offline and online uncertainty-aware RF-based SLAM.
To evaluate the performance of our SLAM system, first ensure you are in the folder OccNet, then you can evaluate our offline performance by simply running the following command in the terminal:
Note: Before running the two scripts below, please make sure the output folder is empty if you are running them for a second time. The result_summary.py called by each script asserts that there are exactly 14 subfolders for the offline results and 14 subfolders for the online results before calculating the metrics.
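If this is not your first run, you can move the previous results out of the way first. The sketch below is just one way to do it and assumes the results live under OccNet/output/ (as noted below) and that you want to keep a backup.
# Optional: move previous results out of the way so the output folder is empty
mkdir -p output_backup && mv output/* output_backup/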
bash offline_run_all.sh
The above shell script runs our algorithm on all trajectories and outputs the average performance. You can find it at the end of the terminal output. Checkpoints can be found in the folder OccNet/output/.
Similarly, for the online performance, simply run:
bash online_run_all.sh
Note: While running online SLAM, please avoid performing other jobs at the same time. Otherwise, they may compete for resources and affect the online SLAM performance.
If you miss the result summary output in the terminal, you can regenerate them simply by running the following code:
# For offline result summary
python result_summary.py
# For online result summary
python result_summary.py --online
If you want to perform offline/online SLAM for one trajectory, first train the implicit field as follows:
# For offline slam
python ./main_offline.py --config $config --expname $expname
# For online slam
python ./main_online.py --config $config --expname $expname
$config is the path to the config file, e.g., ./config/building2-exp001/RadarOccNerf_xxx.yaml, and $expname is the name of the experiment.
To evaluate the performance of the SLAM result:
# Render and save the scene
python ./mobicom_generate_pkl.py --target "$expname" --save_pkl
# Do the comprehensive evaluation
python ./mobicom_analyze_pkl.py --target "$expname"
The metrics are written to a JSON file located in the output folder. You can also find other visualizations there, including the rendered point cloud, the path comparison, etc.
We also provide shell scripts to simplify the above process:
# For offline slam
bash offline_run.sh $config $expname
# For online slam
bash online_run.sh $config $expname
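For example, a single offline run could look like the following. The exact RadarOccNerf_xxx.yaml filename depends on the trajectory (check the config folder), and the experiment name is arbitrary.
# Example invocation; replace the xxx part with the actual config filename
bash offline_run.sh ./config/building2-exp001/RadarOccNerf_xxx.yaml building2_exp001_offline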
CartoRadar is licensed under a CC BY 4.0 License.
If you use CartoRadar in your research, find the code useful, or would like to acknowledge our work, please consider citing our paper:
@inproceedings{CartoRadar,
author = {Lai, Haowen and Zheng, Zhiwei and Zhao, Mingmin},
title = {RF-Based 3D SLAM Rivaling Vision Approaches},
booktitle = {ACM International Conference on Mobile Computing and Networking (MobiCom)},
year = {2025}
}
If our rotating radar design inspires your work, please consider citing PanoRadar:
@inproceedings{PanoRadar,
author = {Lai, Haowen and Luo, Gaoxiang and Liu, Yifei and Zhao, Mingmin},
title = {Enabling Visual Recognition at Radio Frequency},
booktitle = {ACM International Conference on Mobile Computing and Networking (MobiCom)},
year = {2024},
doi = {https://doi.org/10.1145/3636534.3649369},
}