Faithful Contouring: Near-Lossless 3D Voxel Representation Free from Iso-surface

Enough with SDF + Marching Cubes?   Time to bring geometry back — faithfully.


Yihao Luo1*, Xianglong He2*, Chuanyu Pan3, Yiwen Chen3,4, Jiaqi Wu5, Yangguang Li6, Wanli Ouyang6, Yuanming Hu3, Guang Yang1, ChoonHwai Yap1

1Imperial College London   2Tsinghua University   3Meshy   4Nanyang Technological University   5University of Melbourne   6The Chinese University of Hong Kong


News

  • [2026-03] 🎉 Accepted as Oral at CVPR 2026!
  • [2026-01] 🤝 Concurrent work TRELLIS 2 released with O-Voxel representation — great to see the community moving beyond iso-surfaces.
  • [2025-12] 🚀 Code fully open-sourced! v1.5 released — pure Python + Atom3d, no C++ compilation required.
  • [2025-11] 📄 arXiv preprint and wheel package released.

Overview

Conventional voxel-based mesh representations rely on distance fields (SDF/UDF) and iso-surface extraction through Marching Cubes. These pipelines require watertight preprocessing and global sign computation, often introducing artifacts like surface thickening, jagged iso-surfaces, and loss of internal structures.

Faithful Contouring avoids these issues by directly operating on the raw mesh. It identifies all surface-intersecting voxels and solves for a compact set of local anchor features — Faithful Contour Tokens (FCTs) — that enable near-lossless reconstruction.

  • High fidelity — sharp edges and internal structures preserved, even for open or non-manifold meshes
  • Scalable — efficient GPU kernels support grid resolutions of 2048 and beyond
  • Compact — 18 dimensions per voxel token
  • Flexible — token format supports filtering, texturing, manipulation, and assembly
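
The first step, selecting only voxels that touch the surface, can be sketched with a toy per-triangle AABB-overlap test. This is an illustrative helper (the real encoder uses BVH traversal and exact SAT clipping; `active_voxels_aabb` is not part of the library):

```python
import numpy as np

def active_voxels_aabb(tri, res):
    """Toy sketch: voxels of a res^3 grid over [0,1]^3 whose cell
    overlaps the triangle's axis-aligned bounding box. The actual
    encoder refines this with BVH queries and exact SAT clipping."""
    tri = np.asarray(tri, float)
    lo = np.floor(tri.min(axis=0) * res).astype(int).clip(0, res - 1)
    hi = np.floor(tri.max(axis=0) * res).astype(int).clip(0, res - 1)
    return [(i, j, k)
            for i in range(lo[0], hi[0] + 1)
            for j in range(lo[1], hi[1] + 1)
            for k in range(lo[2], hi[2] + 1)]
```

An AABB test alone overestimates the active set (a cell can overlap the box without touching the triangle), which is why the pipeline follows it with exact polygon clipping.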


Installation

Requirements

  • NVIDIA GPU with CUDA support
  • Python 3.10+
  • PyTorch 2.5+

Quick Start with Pixi (Recommended)

Pixi handles all dependencies automatically — one command to install, one to run:

git clone https://github.com/Luo-Yihao/FaithC.git
cd FaithC
pixi run demo

That's it. The first run installs everything (Python, PyTorch, torch-scatter, Atom3d, etc.) and runs the demo. Subsequent runs take ~5 seconds.

Install Pixi
curl -fsSL https://pixi.sh/install.sh | sh

Manual Setup

# Install PyTorch (match your CUDA version, example for CUDA 12.4)
pip install torch --index-url https://download.pytorch.org/whl/cu124

# Install torch_scatter
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv \
    -f https://data.pyg.org/whl/torch-2.5.1+cu124.html

# Install Atom3d (geometry backend)
pip install git+https://github.com/Luo-Yihao/Atom3d.git --no-build-isolation

# Install FaithContour
git clone https://github.com/Luo-Yihao/FaithC.git
cd FaithC
pip install -e . --no-build-isolation

# Other dependencies
pip install trimesh scipy einops

Quick Start

# Default icosphere at resolution 128
python demo.py

# Custom mesh at resolution 512
python demo.py -p assets/examples/pirateship.glb -r 512 -o output/pirateship.glb

All arguments:

| Argument | Default | Description |
|---|---|---|
| `-p, --mesh_path` | `""` (icosphere) | Path to input mesh |
| `-r, --res` | `128` | Grid resolution (power of 2) |
| `-o, --output` | `output/reconstructed_mesh.glb` | Output path |
| `--margin` | `0.05` | Grid boundary margin |
| `--tri_mode` | `auto` | Triangulation mode: `auto`, `length`, `angle`, `normal_abs`, `simple_02`, `simple_13` |
| `--clamp_anchors` | `True` | Clamp anchors to voxel bounds |
| `--compute_flux` | `True` | Compute edge flux signs |

API Usage

import torch
import trimesh
from faithcontour import FCTEncoder, FCTDecoder
from atom3d import MeshBVH
from atom3d.grid import OctreeIndexer

# Load mesh
mesh = trimesh.load("model.obj", force='mesh')
V = torch.tensor(mesh.vertices, dtype=torch.float32, device='cuda')
F = torch.tensor(mesh.faces, dtype=torch.long, device='cuda')

# Build spatial structures
bvh = MeshBVH(V, F)
bounds = torch.tensor([[-1., -1., -1.], [1., 1., 1.]], device='cuda')
octree = OctreeIndexer(max_level=9, bounds=bounds, device='cuda')  # 512^3

# Encode
encoder = FCTEncoder(bvh, octree, device='cuda')
fct = encoder.encode(min_level=4, compute_flux=True, clamp_anchors=True)
# fct.anchor:             [K, 3]  — surface anchor points
# fct.normal:             [K, 3]  — surface normals
# fct.edge_flux_sign:     [K, 12] — edge crossing signs {-1, 0, +1}
# fct.active_voxel_indices: [K]   — linear voxel indices

# Decode
decoder = FCTDecoder(resolution=512, bounds=bounds, device='cuda')
result = decoder.decode_from_result(fct)

# Export
trimesh.Trimesh(
    result.vertices.cpu().numpy(),
    result.faces.cpu().numpy()
).export("output.glb")

FCT Token Format

Each active voxel is encoded as an 18-dimensional token:

| Field | Dims | Type | Description |
|---|---|---|---|
| anchor | 3 | float32 | Surface representative point |
| normal | 3 | float32 | Surface normal direction |
| edge_flux_sign | 12 | int8 | Edge crossing signs {-1, 0, +1} |
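
As a sketch of how one such token could be laid out in a flat 18-dim array (a hypothetical layout for illustration; the library's internal storage may differ):

```python
import numpy as np

# Hypothetical flat layout: [anchor(3) | normal(3) | edge_flux_sign(12)].
# Field names and sizes follow the table above.

def pack_token(anchor, normal, edge_flux_sign):
    token = np.empty(18, dtype=np.float32)
    token[0:3] = anchor            # surface representative point
    token[3:6] = normal            # surface normal direction
    token[6:18] = edge_flux_sign   # signs in {-1, 0, +1}, cast to float here
    return token

def unpack_token(token):
    return token[0:3], token[3:6], token[6:18].astype(np.int8)
```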

How It Works

Encoder — mesh to tokens:

  1. Hierarchical octree traversal with BVH-accelerated AABB intersection
  2. SAT polygon clipping at the finest level for precise centroids and areas
  3. QEF (Quadric Error Function) solve for optimal anchor points and normals
  4. Segment-triangle intersection for edge flux sign computation
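
Step 3 can be illustrated with a minimal least-squares QEF solve: given sample points and normals from the clipped surface fragments inside a voxel, the anchor minimizes the summed squared point-to-plane distances. The function name `solve_qef` and the centroid regularization below are assumptions for this sketch, not the library's kernel:

```python
import numpy as np

def solve_qef(points, normals, reg=1e-3):
    """Anchor minimizing sum_i (n_i . (x - p_i))^2, lightly regularized
    toward the centroid so underdetermined (e.g. planar) cases stay
    inside the voxel. Minimal sketch, not the library's QEF solver."""
    points = np.asarray(points, float)
    normals = np.asarray(normals, float)
    centroid = points.mean(axis=0)
    # Stack plane constraints n_i . x = n_i . p_i plus reg * (x = centroid).
    A = np.vstack([normals, reg * np.eye(3)])
    b = np.concatenate([(normals * points).sum(axis=1), reg * centroid])
    anchor, *_ = np.linalg.lstsq(A, b, rcond=None)
    return anchor
```

With three orthogonal planes meeting at a corner, the solve recovers the corner almost exactly; with a single plane, the regularizer pulls the anchor toward the fragment centroid on that plane.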

Decoder — tokens to mesh:

  1. Identify edges with non-zero flux (surface crossings)
  2. Form quads from 4 voxels incident to each active edge
  3. Adaptive triangulation based on normal consistency
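
Step 3 of the decoder can be sketched as a diagonal choice driven by normal agreement. This is a hypothetical stand-in (the actual `normal_abs` criterion in the library may differ):

```python
import numpy as np

def choose_quad_diagonal(normals):
    """Split a quad (vertices 0..3 in ring order) along the diagonal
    whose endpoint normals agree most, returning two index triples.
    Hypothetical sketch of normal-consistency triangulation."""
    n = np.asarray(normals, float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    if abs(n[0] @ n[2]) >= abs(n[1] @ n[3]):
        return [(0, 1, 2), (0, 2, 3)]   # split along diagonal 0-2
    return [(0, 1, 3), (1, 2, 3)]       # split along diagonal 1-3
```

Favoring the diagonal whose endpoints share a normal direction tends to keep sharp creases running along triangle edges instead of cutting across them.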

Performance

Benchmarked on NVIDIA H100:

| Resolution | Active Voxels | Encode (s) | Decode (s) | Total (s) |
|---|---|---|---|---|
| 128 | 71K | 0.27 | 0.02 | 0.29 |
| 256 | 287K | 0.45 | 0.06 | 0.51 |
| 512 | 1.1M | 0.52 | 0.17 | 0.70 |
| 1024 | 4.6M | 0.82 | 0.61 | 1.42 |
| 2048 | 18.4M | 2.16 | 2.51 | 4.68 |

Roadmap

  • Wheel package for Linux (v0.1)
  • Pure Python + Atom3d implementation (v1.5)
  • FCT-based VAE release
  • Diffusion model release

Citation

If you find this work useful, please cite:

@inproceedings{luo2026faithfulcontouring,
    title     = {Faithful Contouring: Near-Lossless 3D Voxel Representation Free from Iso-surface},
    author    = {Luo, Yihao and He, Xianglong and Pan, Chuanyu and Chen, Yiwen and Wu, Jiaqi and Li, Yangguang and Ouyang, Wanli and Hu, Yuanming and Yang, Guang and Yap, ChoonHwai},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2026}
}

License

This project is licensed under the Apache License 2.0.

Contact

Yihao Luo — y.luo23@imperial.ac.uk

Project: https://github.com/Luo-Yihao/FaithC

About

[CVPR 2026 (Oral)] Official Torch/CUDA Implementation of Faithful Contouring
