Unofficial version of Fast Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping
Unofficial PyTorch implementation of Noise2Noise for Point Clouds by Nayoung Kwon and Pierrick Bournez
This repository attempts to reproduce the paper "Fast Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping", focusing on the shape reconstruction part. The primary goal was to obtain preliminary, reproducible results. The implementation is not intended to be fast, but to make it easy to iterate on (and to stay reproducible).
We used it for a class project; the report is included here.
We don't think our results are as good as those the paper claims, but we hope this repository will help people get better results with this method.
Also, if you ❤️ or simply use this project, don't forget to give the repository a ⭐; it means a lot to us!
We tested the code with CUDA 12.0. Install the requirements:
# python3 -m venv venv
# source venv/bin/activate # for linux
pip install -r requirements.txt
# If you want to use MultiHashResolution uncomment the next line
#pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
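If the optional tiny-cuda-nn bindings are installed, a multi-resolution hash encoding network can be instantiated roughly as sketched below. This is only an illustrative sketch: the encoding/network config values and input dimensions are generic instant-ngp-style defaults, not necessarily the ones used in our `src/models` code.

```python
# Sketch only: a hash-encoded MLP via tiny-cuda-nn (optional dependency).
# Config values are illustrative defaults, not the repo's exact settings.
import torch
import tinycudann as tcnn

model = tcnn.NetworkWithInputEncoding(
    n_input_dims=3,               # 2 for the 2D experiments
    n_output_dims=1,              # scalar signed distance
    encoding_config={
        "otype": "HashGrid",      # multi-resolution hash encoding
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
)

# tiny-cuda-nn runs on the GPU, so query points must live on CUDA.
coords = torch.rand(1024, 3, device="cuda")
sdf_values = model(coords)
```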
If the installation of the InstantNGP (tiny-cuda-nn) bindings fails, see that repository for more details. We also provide a [SirenNET] baseline so you can get preliminary results before setting up tiny-cuda-nn.
We provide two main models: one using Multi-Resolution Hash encoding, and [SirenNET] as a comparison. We mostly tested our implementation on 2D data so that we could assess the underlying SDF. See the notebook "Main.ipynb" for more information. If you want to scale it to 3D, we provide a proof of concept in the notebook "3D_version.ipynb".
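For reference, a minimal SIREN-style SDF network (sine activations with the w0 = 30 initialization from the original SIREN paper) can be sketched as below; the actual [SirenNET] model in `src/models` may differ in depth, width, and details.

```python
# Minimal SIREN-style sketch: maps coordinates to a scalar signed distance.
# This is an illustrative baseline, not the exact architecture of this repo.
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, w0=30.0, is_first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN-specific initialization keeps activations well distributed.
        with torch.no_grad():
            bound = 1.0 / in_features if is_first else (6.0 / in_features) ** 0.5 / w0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))


class SirenSDF(nn.Module):
    """Maps 2D (or 3D) coordinates to a scalar signed distance value."""

    def __init__(self, in_dim=2, hidden_dim=128, n_layers=3):
        super().__init__()
        layers = [SineLayer(in_dim, hidden_dim, is_first=True)]
        layers += [SineLayer(hidden_dim, hidden_dim) for _ in range(n_layers - 1)]
        self.net = nn.Sequential(*layers, nn.Linear(hidden_dim, 1))

    def forward(self, coords):
        return self.net(coords)


# Example: query the SDF at 1024 random 2D points in [-1, 1]^2.
sdf = SirenSDF(in_dim=2)
values = sdf(torch.rand(1024, 2) * 2 - 1)
```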
We welcome pull requests to make it work on custom .ply files; a possible starting point is sketched below.
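A hypothetical loader for a custom .ply point cloud could use open3d (not currently a dependency in requirements.txt); the function name and normalization below are our own illustrative choices.

```python
# Hypothetical helper (not part of this repo): load a .ply point cloud with
# open3d and normalize it so the SDF models see a bounded domain.
import numpy as np
import open3d as o3d


def load_ply_points(path: str) -> np.ndarray:
    pcd = o3d.io.read_point_cloud(path)
    points = np.asarray(pcd.points, dtype=np.float32)
    # Center and scale into roughly [-0.5, 0.5]^3.
    points -= points.mean(axis=0)
    points /= 2.0 * np.abs(points).max()
    return points
```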
The repository is structured as follows. Each point is detailed below.
├── README.md          <- The top-level README for developers using this project
├── src                <- Source code for use in this project
│   ├── data           <- How we create our synthetic data
│   ├── loss           <- The different losses we implemented
│   ├── metrics        <- The metrics we report
│   ├── models         <- Model architectures
│   ├── result         <- Visual display of results
│   └── optimize.py    <- Our training loop
├── requirements.txt   <- The requirements file for reproducing the analysis environment
├── main.ipynb         <- Main notebook to run the code
└── personal_files     <- Personal files, ignored by git (e.g. notes, debugging test scripts, ...)
We were inspired by this documentation. The main structure of the repository comes from this (great) course.
Official repository (in TensorFlow). We are not affiliated with that repository, and these are our own opinions.