# [CVPR 2026] Proxy-GS: Unified Occlusion Priors for Training and Inference in Structured 3D Gaussian Splatting
This repo contains the official implementation of Proxy-GS. ⭐ us if you like it!
🔥🔥 **News**
- 2026/2/26: Proxy-GS has been accepted to CVPR 2026.

**TODO**
- [x] Release the training & inference code of Proxy-GS.
- [ ] Release all model checkpoints.
We recommend using a dedicated conda environment:

```bash
conda create -n proxy-gs python=3.10 -y
conda activate proxy-gs
```

Install a CUDA-enabled PyTorch build that matches your local CUDA version first, then install the remaining dependencies:

```bash
pip install -r requirements.txt
```
```bash
# Install torch-scatter with the wheel matching your PyTorch/CUDA version.
# See: https://data.pyg.org/whl/
pip install torch-scatter

# Install local CUDA extensions
pip install ./submodules/diff-gaussian-rasterization
pip install ./submodules/simple-knn
```

The example below shows the expected structure of the MatrixCity small-city dataset.
First, download all data under `small_city/` from MatrixCity via Hugging Face:

```bash
pip install huggingface_hub
# Download the 'MatrixCity' dataset (will include all data)
# To only download the 'small_city' portion, use '--include' to filter
huggingface-cli download BoDai/MatrixCity --repo-type dataset --local-dir MatrixCity --include "small_city/**"
```

Since the street data in `small_city` is not originally split into blocks, training on all images at once produces an excessively large dataset.
To address this, we partitioned the `small_city_road_horizon` subset into multiple blocks.
The corresponding `train` and `test` JSON files for each split are provided under `pose_block/`.
Each block (e.g., `block_1`, `block_2`, ...) contains its own `transforms_train.json` and `transforms_test.json`, making it easier to train and evaluate on manageable subsets of the data.
To use the dataset, combine the data from Hugging Face's MatrixCity repository with the block-specific JSON splits we provide:
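As a quick sanity check of a block's split files, the snippet below loads one of the per-block JSON files and counts its camera frames. It assumes the MatrixCity splits follow the common NeRF-style layout with a top-level `"frames"` list (an assumption; check your downloaded files), and demos on a throwaway stand-in file rather than the real dataset:

```python
import json
import tempfile
from pathlib import Path

def count_frames(transforms_path):
    """Count camera frames in a NeRF-style transforms JSON file."""
    with open(transforms_path) as f:
        meta = json.load(f)
    return len(meta.get("frames", []))

# Demo on a minimal stand-in file; the real files live under
# MatrixCity/small_city/street/pose_block/block_*/ .
root = Path(tempfile.mkdtemp())
block = root / "block_1"
block.mkdir()
(block / "transforms_train.json").write_text(json.dumps({
    "frames": [
        {"file_path": "0001.png"},  # hypothetical entries, for illustration only
        {"file_path": "0002.png"},
    ]
}))

print(count_frames(block / "transforms_train.json"))  # → 2
```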
```bash
mv pose_block MatrixCity/small_city/street/
```

The overall data directory structure should look like:
```
MatrixCity/
└── small_city/
    ├── aerial/
    └── street/
        ├── pose_block/
        │   ├── block_1/
        │   │   ├── transforms_test.json
        │   │   └── transforms_train.json
        │   ├── block_2/
        │   │   ├── transforms_test.json
        │   │   └── transforms_train.json
        │   ├── block_3/
        │   │   ├── transforms_test.json
        │   │   └── transforms_train.json
        │   ├── block_4/
        │   │   ├── transforms_test.json
        │   │   └── transforms_train.json
        │   └── block_5/
        │       ├── transforms_test.json
        │       └── transforms_train.json
        ├── train/
        │   ├── small_city_road_down/
        │   ├── small_city_road_horizon/
        │   ├── small_city_road_outside/
        │   └── small_city_road_vertical/
        ├── test/
        └── train_dense/
```
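After moving `pose_block/` into place, a small script can verify that every block directory contains both split files. The helper below is a sketch, not part of the repo; it demos on a throwaway replica of the layout so it runs anywhere:

```python
import tempfile
from pathlib import Path

EXPECTED_FILES = ("transforms_train.json", "transforms_test.json")

def check_pose_blocks(street_dir):
    """Return a dict mapping block name -> missing split JSON files."""
    missing = {}
    for block in sorted((Path(street_dir) / "pose_block").iterdir()):
        absent = [f for f in EXPECTED_FILES if not (block / f).exists()]
        if absent:
            missing[block.name] = absent
    return missing

# Demo on a throwaway replica of the layout; point street at
# MatrixCity/small_city/street/ to check a real download.
street = Path(tempfile.mkdtemp()) / "small_city" / "street"
for i in range(1, 6):
    block = street / "pose_block" / f"block_{i}"
    block.mkdir(parents=True)
    for name in EXPECTED_FILES:
        (block / name).touch()

print(check_pose_blocks(street))  # → {} (all five blocks complete)
```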
The following example uses the MatrixCity `block_5` scene. For reproducibility, we recommend using a dedicated output directory such as `output/block_5`.

```bash
SCENE=proxy-gs/MatrixCity/small_city/street/pose_block/block_5
IMAGES=proxy-gs/MatrixCity/small_city/street/train/small_city_road_horizon
MESH=cvpr/block_E_from_mesh.ply
POINTS=MatrixCity/small_city/aerial/small_city_pointcloud/point_cloud_ds20/aerial/Block_E.ply
DEPTH_DIR=mesh_depth_block_5
OUTPUT=output/block_5
```

Render the proxy mesh into per-view depth maps and save them as `.npy` files:
```bash
python mesh_render.py \
    -s ${SCENE} \
    -m ${OUTPUT} \
    -i ${IMAGES} \
    --ply_mesh ${MESH} \
    --depth_npy_dir ${DEPTH_DIR}
```

The rendered depth files will be written to `${DEPTH_DIR}`.
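To eyeball the depth cache before training, you can load one `.npy` file and check its shape and depth range. The snippet below is a sketch; the assumption that non-positive values mark invalid pixels is ours, not a guarantee of `mesh_render.py`, and it demos on a synthetic array so it runs without the dataset:

```python
import tempfile
from pathlib import Path
import numpy as np

def depth_stats(npy_path):
    """Load one cached depth map; report shape and valid-depth range."""
    depth = np.load(npy_path)
    valid = depth[depth > 0]  # assumption: non-positive = invalid pixel
    return depth.shape, float(valid.min()), float(valid.max())

# Demo on a synthetic depth map standing in for one rendered output.
depth_dir = Path(tempfile.mkdtemp())
fake = np.zeros((4, 6), dtype=np.float32)
fake[1:, 2:] = np.linspace(1.0, 9.0, 12).reshape(3, 4)
np.save(depth_dir / "000000.npy", fake)

shape, dmin, dmax = depth_stats(depth_dir / "000000.npy")
print(shape, dmin, dmax)  # → (4, 6) 1.0 9.0
```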
After the depth cache is ready, start training with the rendered mesh depth prior:

```bash
python train.py \
    -s ${SCENE} \
    -m ${OUTPUT} \
    -i ${IMAGES} \
    --ply_mesh ${MESH} \
    --depth_npy_dir ${DEPTH_DIR} \
    --ply_path ${POINTS}
```

Checkpoints will be saved under `${OUTPUT}/point_cloud`.
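To pick up the most recent checkpoint programmatically, a helper like the one below can be used. It assumes the 3DGS-style `iteration_<N>` subdirectory convention under `point_cloud/` (an assumption; check your own output directory), and demos on a throwaway directory:

```python
import tempfile
from pathlib import Path

def latest_checkpoint(output_dir):
    """Return the highest-iteration checkpoint dir under <output>/point_cloud,
    assuming the 3DGS-style 'iteration_<N>' naming convention."""
    ckpt_root = Path(output_dir) / "point_cloud"
    iters = sorted(ckpt_root.glob("iteration_*"),
                   key=lambda p: int(p.name.split("_")[1]))
    return iters[-1] if iters else None

# Demo on a throwaway directory mimicking that layout.
out = Path(tempfile.mkdtemp())
for n in (7000, 30000):
    (out / "point_cloud" / f"iteration_{n}").mkdir(parents=True)

print(latest_checkpoint(out).name)  # → iteration_30000
```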
The Vulkan-CUDA interop backend used by `render_real.py` is optional and currently requires Ubuntu Linux and an NVIDIA RTX-series GPU.
To use it, first build the Python extension in `ProxyGS-Vulkan-Cuda-Interop`:
```bash
cd ProxyGS-Vulkan-Cuda-Interop
# Use the current conda environment
export PYTHON_EXECUTABLE=$(which python)
# Set this to your local Vulkan SDK path
export VULKAN_SDK_PREFIX=$HOME/VulkanSDK/1.4.321.1/x86_64
# Build the Python extension only
./build_pyext_only.sh build-py
```

After building, verify that the extension can be imported successfully:
```bash
cd ProxyGS-Vulkan-Cuda-Interop
python -c "
import sys
sys.path.insert(0, 'build-py/_bin/Release')
import vk2torch_ext
print('vk2torch_ext OK:', vk2torch_ext.__version__)
"
```

Once the extension is built, go back to the project root and run `render_real.py`:
```bash
cd ..
python render_real.py \
    -m ${OUTPUT} \
    --scene_file /absolute/path/to/your_scene.glb
```

To reproduce the exact commands used in our internal runs, simply replace `${OUTPUT}` with `output`.
This project builds upon several excellent open-source repositories. We sincerely thank the authors of:
- Octree-GS: Octree-GS: Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians
- Scaffold-GS: Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
- 3D Gaussian Splatting: 3D Gaussian Splatting for Real-Time Radiance Field Rendering
- vk_lod_clusters: Sample for cluster-based continuous level of detail rasterization or ray tracing
