3D View Synthesis using Neural Radiance Fields (NeRF)

This repository implements 3D view synthesis with neural radiance fields, combining a learned scene representation with classical volumetric rendering and ray sampling. It is based on the paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Mildenhall et al. (ECCV 2020).

Features

  • TensorFlow Implementation: End-to-end NeRF pipeline using TensorFlow and Keras.
  • Volumetric Rendering: Generates images by integrating along rays through a neural scene representation.
  • Ray Sampling: Efficient hierarchical sampling for coarse and fine models.
  • Modular Utilities: Includes data loading, positional encoding, model definition, training, and rendering utilities.
  • Configurable Training: Easily adjust parameters via NERF_utils/config.py.

Project Structure

NERF_/
├── train.py                # Main training script
├── NERF_utils/
│   ├── config.py           # Configuration parameters
│   ├── data.py             # Data loading and ray generation
│   ├── encoder.py          # Positional encoding functions
│   ├── nerf.py             # NeRF model definition
│   ├── nerf_trainer.py     # Training loop and logic
│   ├── train_monitor.py    # Training monitoring utilities
│   └── utils.py            # Rendering and sampling utilities
├── README.md
├── LICENSE
└── NeRF CV Poster.pptx

How It Works

NeRF represents a scene as a continuous 5D function (a 3D position plus a 2D viewing direction) and uses a neural network to predict color and volume density at each point. Images are synthesized by tracing a ray through each pixel and integrating the predicted colors and densities along the ray with volumetric rendering.
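
As an illustration of this compositing step, here is a minimal sketch of alpha compositing along a batch of rays in TensorFlow. The function and argument names (composite_rays, rgb, sigma, t_vals) are illustrative assumptions, not the utilities actually defined in NERF_utils/utils.py.

import tensorflow as tf

def composite_rays(rgb, sigma, t_vals):
    """Alpha-composite per-sample colors and densities into pixel colors.

    rgb:    (num_rays, num_samples, 3) colors predicted by the MLP
    sigma:  (num_rays, num_samples)    volume densities predicted by the MLP
    t_vals: (num_rays, num_samples)    sample distances along each ray
    """
    # Distance between adjacent samples; pad the last interval with a large value.
    deltas = t_vals[..., 1:] - t_vals[..., :-1]
    deltas = tf.concat([deltas, 1e10 * tf.ones_like(deltas[..., :1])], axis=-1)

    # Opacity of each interval: alpha_i = 1 - exp(-sigma_i * delta_i).
    alpha = 1.0 - tf.exp(-sigma * deltas)

    # Transmittance T_i = prod_{j < i} (1 - alpha_j): how much light survives to
    # reach sample i without being absorbed earlier along the ray.
    transmittance = tf.math.cumprod(1.0 - alpha + 1e-10, axis=-1, exclusive=True)

    # Per-sample weights and the final composited pixel color.
    weights = alpha * transmittance
    pixel_rgb = tf.reduce_sum(weights[..., None] * rgb, axis=-2)
    return pixel_rgb, weights

The same weights drive hierarchical sampling (placing the fine model's samples where the coarse model found mass), and training minimizes a photometric loss such as tf.reduce_mean(tf.square(pixel_rgb - ground_truth_rgb)) between rendered and ground-truth pixels.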

  • Data Preparation: Loads images and camera poses from JSON files.
  • Ray Generation: Computes rays for each pixel using camera intrinsics and extrinsics (see the sketch after this list).
  • Positional Encoding: Encodes input positions and directions for the neural network (also sketched below).
  • Model: Deep MLP with skip connections, outputs RGB and density.
  • Training: Optimizes photometric loss between rendered and ground truth images.
  • Rendering: Synthesizes novel views by querying the trained model.
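
The following sketch shows a common way to implement the ray-generation and positional-encoding steps in TensorFlow. It assumes a pinhole camera with a single focal length and a 4x4 camera-to-world matrix; the function names and signatures here are assumptions for illustration and may differ from those in NERF_utils/data.py and NERF_utils/encoder.py.

import tensorflow as tf

def get_rays(height, width, focal, cam_to_world):
    """Compute per-pixel ray origins and directions for a pinhole camera.

    cam_to_world: (4, 4) camera-to-world extrinsic matrix.
    Returns rays_o and rays_d, each of shape (height, width, 3).
    """
    # Pixel grid mapped to camera-space directions (camera looks down -z).
    i, j = tf.meshgrid(tf.range(width, dtype=tf.float32),
                       tf.range(height, dtype=tf.float32),
                       indexing="xy")
    directions = tf.stack([(i - width * 0.5) / focal,
                           -(j - height * 0.5) / focal,
                           -tf.ones_like(i)], axis=-1)

    # Rotate directions into world space; the ray origin is the camera position.
    rays_d = tf.reduce_sum(directions[..., None, :] * cam_to_world[:3, :3], axis=-1)
    rays_o = tf.broadcast_to(cam_to_world[:3, -1], tf.shape(rays_d))
    return rays_o, rays_d

def positional_encoding(x, num_freqs=10):
    """Map each coordinate to [x, sin(2^k x), cos(2^k x)] for k = 0..num_freqs-1,
    which lets the MLP represent high-frequency detail."""
    features = [x]
    for k in range(num_freqs):
        for fn in (tf.sin, tf.cos):
            features.append(fn((2.0 ** k) * x))
    return tf.concat(features, axis=-1)

With num_freqs=10, a 3D position expands to 63 features (3 + 3 * 2 * 10) when the raw coordinates are kept, close to the L = 10 position frequencies used in the original paper.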

Usage

  1. Configure Dataset Paths: Edit NERF_utils/config.py to set your dataset and output paths (a hypothetical example is sketched after this list).
  2. Prepare Data: Place your images and JSON files in the specified dataset directory.
  3. Train the Model:
    python train.py
  4. View Results: Rendered images and videos will be saved in the output directory.
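
For orientation, a configuration module for a pipeline like this usually collects paths and hyperparameters as plain constants. The example below is entirely hypothetical; the real parameter names and defaults live in NERF_utils/config.py.

# Hypothetical example only -- consult NERF_utils/config.py for the real names.
DATASET_PATH = "data/lego"       # directory with images and camera-pose JSON files
OUTPUT_PATH = "output"           # where rendered images and videos are written

IMAGE_HEIGHT = 100               # training image resolution
IMAGE_WIDTH = 100
NUM_COARSE_SAMPLES = 64          # samples per ray for the coarse model
NUM_FINE_SAMPLES = 128           # extra samples per ray for the fine model
POSITION_ENCODING_FREQS = 10     # frequencies for the positional encoding
BATCH_SIZE = 1                   # images per training step
LEARNING_RATE = 5e-4
EPOCHS = 20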

Requirements

  • Python 3.x
  • TensorFlow 2.x
  • (Optional) PyTorch for experimentation

Install dependencies:

pip install tensorflow
