CartoRadar [MobiCom'25]

This is the official repo of CartoRadar for the following paper:

RF-Based 3D SLAM Rivaling Vision Approaches
Haowen Lai, Zhiwei Zheng, Mingmin Zhao
ACM International Conference on Mobile Computing and Networking (MobiCom), 2025

[Paper] [Website] [Demo Video] [Dataset] [BibTeX]

🌟 Best Artifact Award!




Installation

Requirements

  • GPU: an Nvidia RTX 3090 is suggested to reproduce similar online SLAM performance; otherwise, any GPU with memory >= 16 GB. Requires driver version >= 520.61.05 and CUDA version >= 11.8
  • RAM: Memory >= 32 GB
  • Disk: Available space >= 550 GB
  • OS: Ubuntu >= 22.04 with Python >= 3.9

Preparation

  • Clone the codebase to your local machine
  • Change the working directory to the root folder of the repo.

Environment

We provide two ways to set up the environment for running the code: Conda and Docker. If you already have Docker installed on your machine, we suggest using Docker. If you run into a problem with one method, you can switch to the other.

Using Docker Image

Please follow the instructions below to build the image. If you haven't added your user to the docker group (see here), you need sudo access to run the following docker-related commands.

# Detect GPU compute capability
export CUDA_ARCH=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | awk '{print $1 * 10}')

# build the image (might need sudo)
docker build --build-arg USERNAME=$(whoami) --build-arg USER_UID=$(id -u) --build-arg USER_GID=$(id -g) --build-arg CUDA_ARCH=${CUDA_ARCH} -t cartoradar -f docker/Dockerfile .  # don't forget the ending dot

# Start a new container (might need sudo)
docker run -it --rm --gpus all -v .:/home/$(whoami)/CartoRadar -w ~/CartoRadar --shm-size=4096M cartoradar /bin/bash

After you start a new container, you can run our code in it. The environment setup is done.

Note: Make sure you have Nvidia driver version >= 520.61.05 and CUDA version >= 11.8 on your local machine; otherwise you won't be able to start the container.

Note (for advanced users): The machine that runs the docker image must have the same GPU compute capability as the machine that builds it. If you want to build the image locally and use it on another machine (e.g., a remote server), specify the compute capability of the target machine by running export CUDA_ARCH=<GPU compute capability of the target machine>, then re-build the image.
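For example, if the target server has RTX 3090 GPUs (compute capability 8.6), the value to export before rebuilding would be 86, matching the capability-times-ten convention used by the detection command above:

```shell
# RTX 3090 -> compute capability 8.6 -> CUDA_ARCH=86 (capability * 10).
export CUDA_ARCH=86
echo "Will build for compute capability ${CUDA_ARCH}"
# Then re-run the `docker build` command above with this CUDA_ARCH value.
```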

Using Conda Environment

Please follow the instructions below to create the conda environment. If you run into any problem, feel free to open an issue or contact us via email.

# Create a new conda environment
conda env create -f environment.yml
conda activate cartoradar
pip install setuptools==58.2.0 cmake==3.25.0
sudo apt-get install libsuitesparse-dev

# Install g2opy
git clone https://github.com/uoip/g2opy.git \
    && mkdir g2opy/build \
    && cd g2opy/build \
    && cmake .. \
    && make -j8 \
    && cd .. \
    && python setup.py install \
    && cd ..

pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118

# Install tinycudann
export LDFLAGS="$LDFLAGS -L$CONDA_PREFIX/lib/stubs"
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

# Install other packages
pip install -r requirements.txt

Dataset

Our dataset includes two parts: the uncertainty quantification dataset and the SLAM evaluation dataset. Our models also use the PanoRadar dataset for pretrained weights.

  • Uncertainty quantification dataset: This dataset is organized corresponding to different buildings. Within each building, it has three folders, i.e., glass_npy, lidar_npy, and rf_npy. There are synchronized glass masks, lidar range images, and RF heatmaps in those folders. The dataset can be downloaded here under the uncertainty folder. Please download the data, uncompress it, and put it in the folder CartoRadar/Uncertainty/data/uncertainty/.

  • Pretrained weights from the PanoRadar dataset: The PanoRadar dataset is a large-scale RF imaging dataset collected from diverse environments. We use the depth-only pretrained weights for our uncertainty quantification models. They can be downloaded here under the pretrain folder. Please download the data, uncompress it, and put it in the folder CartoRadar/Uncertainty/data/pretrain/.

  • SLAM evaluation dataset: This dataset is organized as different robot moving trajectories. Within each trajectory folder, there are ground truth range images from LiDAR (images-gt), our predicted range images from RF (images-pred), the predicted uncertainty from OursH-16 method (uncertainty-mixed), the robot odometry, and the ground truth point cloud map of the environment (maps). The dataset can be downloaded here under the occnet folder. Please download the data, uncompress it, and put it in the folder CartoRadar/OccNet/data/.
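Placing the three downloads could look like the sketch below, run from the repo root. The extracted folder names (uncertainty, pretrain, occnet) are hypothetical; substitute whatever your archives actually uncompress to.

```shell
# Create the expected data locations (run from the repo root).
mkdir -p Uncertainty/data OccNet

# Hypothetical extracted folder names -- substitute the actual ones.
[ -d uncertainty ] && mv uncertainty Uncertainty/data/
[ -d pretrain ]    && mv pretrain    Uncertainty/data/
[ -d occnet ]      && mv occnet      OccNet/data
echo "Data placement done."
```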

Please make sure you put all the data in the right path. It should have a file structure like this:

CartoRadar
├── Uncertainty
│   ├── data
│   │   ├── uncertainty
│   │   │   ├── building1
│   │   │   ├── building2
│   │   │   ├── ...
│   │   │   └── building5
│   │   └── pretrain
│   │       ├── depth_building1_lobo_x2
│   │       ├── ...
│   │       └── laplace_building5_lobo_x2
│   ├── ...
├── OccNet
│   ├── data
│   │   ├── building1-exp000
│   │   ├── building1-exp001
│   │   ├── ...
│   │   ├── building5-exp000
│   │   └── building5-exp001
│   ├── ...
├── ...

Uncertainty Quantification

Our perturbation- and sampling-based uncertainty quantification methods are applied to trained ML models to estimate the uncertainty of their predictions. We propose two types of approaches, denoted Ours-# and OursH-# in the paper.

To start with, change the working directory to Uncertainty and follow the instructions below.

Model Training

To train a model on a specific building, run the following commands. Make sure to change the working directory to Uncertainty.

python train_net.py --config-file configs/<config file>

Since we use a leave-one-building-out training strategy to ensure generalization, the above command only trains a model for one building. Please substitute <config file> with one of the following files. There are 10 models to train in total.

Ours-# config files          OursH-# config files
pure_depth_building1.yaml    laplace_net_building1.yaml
pure_depth_building2.yaml    laplace_net_building2.yaml
pure_depth_building3.yaml    laplace_net_building3.yaml
pure_depth_building4.yaml    laplace_net_building4.yaml
pure_depth_building5.yaml    laplace_net_building5.yaml
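The ten training runs can be scripted. This loop only prints the commands, using the config filenames from the table above; pipe its output to `bash` to actually run them sequentially:

```shell
# Print the 10 training commands (both approaches, all five buildings).
for building in 1 2 3 4 5; do
  for prefix in pure_depth laplace_net; do
    echo "python train_net.py --config-file configs/${prefix}_building${building}.yaml"
  done
done
```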

Evaluate Uncertainty Performance

After finishing training the above 10 models, we can evaluate the performance of our uncertainty quantification approaches.

  • To evaluate Ours-#, run the following command:

    python scripts/eval_sampling_uncertainty.py

    After the script finishes, you can see the evaluation results printed in the terminal.

  • To evaluate OursH-#, run the following command:

    python scripts/eval_laplacian_uncertainty.py

    After the script finishes, you can see the evaluation results printed in the terminal.

SLAM

Our SLAM system has two versions: offline and online. The following section provides detailed instructions on how to run our offline and online uncertainty-aware RF-based SLAM.

Evaluate All Trajectories

To evaluate the performance of our SLAM system, first make sure you are in the folder OccNet. You can then evaluate the offline performance by running the following command in the terminal:

Note: Before running the two scripts below, please ensure the output folder is empty if you are running them for a second time. The result_summary.py step included in each script asserts that there are exactly 14 subfolders for the offline runs and 14 subfolders for the online runs before calculating the metrics.

bash offline_run_all.sh

The above shell script runs our algorithm on all trajectories and outputs the average performance, which you can find at the end of the terminal output. Checkpoints are saved in the folder OccNet/output/.

Similarly, for the online performance, simply run:

bash online_run_all.sh

Note: While running online SLAM, please avoid performing other jobs at the same time. Otherwise, they may compete for resources and affect the online SLAM performance.

If you miss the result summary output in the terminal, you can regenerate it by running the following commands:

# For offline result summary
python result_summary.py

# For online result summary
python result_summary.py --online
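If the subfolder assertion in result_summary.py fires, a quick way to see what is currently in the output folder (run from OccNet/; the output/ location follows the checkpoint note above) is:

```shell
# Count the experiment subfolders currently under the output folder.
mkdir -p output   # so the command also works before any run
echo "output: $(find output -mindepth 1 -maxdepth 1 -type d | wc -l) subfolders"
```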

(Optional) Evaluate a Single Trajectory

If you want to perform offline/online SLAM for a single trajectory, first train the implicit field as follows:

# For offline slam
python ./main_offline.py --config $config --expname $expname

# For online slam
python ./main_online.py --config $config --expname $expname

$config is the path to a config file such as ./config/building2-exp001/RadarOccNerf_xxx.yaml, and $expname is the name of the experiment.

To evaluate the performance of the SLAM result:

# Render and save the scene
python ./mobicom_generate_pkl.py --target "$expname" --save_pkl

# Do the comprehensive evaluation
python ./mobicom_analyze_pkl.py --target "$expname"

The metrics are written into a JSON file located in the output folder. You can also find other visualizations there, including the rendered point cloud, the path comparison, etc.

We also provide shell scripts to simplify the above process:

# For offline slam
bash offline_run.sh $config $expname

# For online slam
bash online_run.sh $config $expname

License

CartoRadar is licensed under a CC BY 4.0 License.

BibTeX

If you use CartoRadar in your research, find the code useful, or would like to acknowledge our work, please consider citing our paper:

@inproceedings{CartoRadar,
  author    = {Lai, Haowen and Zheng, Zhiwei and Zhao, Mingmin},
  title     = {RF-Based 3D SLAM Rivaling Vision Approaches},
  booktitle = {ACM International Conference on Mobile Computing and Networking (MobiCom)},
  year      = {2025}
}

If our rotating radar design inspires your work, please consider citing PanoRadar:

@inproceedings{PanoRadar,
  author    = {Lai, Haowen and Luo, Gaoxiang and Liu, Yifei and Zhao, Mingmin},
  title     = {Enabling Visual Recognition at Radio Frequency},
  booktitle = {ACM International Conference on Mobile Computing and Networking (MobiCom)},
  year      = {2024},
  doi       = {https://doi.org/10.1145/3636534.3649369},
}
