
Autoencoder from Scratch (NumPy)

This project is a fully modular implementation of an Autoencoder built from scratch using NumPy, designed as both a learning exercise and a mini deep-learning framework.

It intentionally avoids high-level libraries like PyTorch or TensorFlow so that you can see, understand, and modify every part of the training process — forward pass, backward pass, gradients, and optimization.


What You Will Learn

By studying and running this project, you will understand:

  • How neural networks work under the hood
  • How forward and backward propagation are implemented
  • How autoencoders compress data into a latent space
  • How training loops, losses, and optimizers interact
  • How deep-learning frameworks are structured internally

This project is intentionally written in a clear, readable, and extensible style, not an optimized one.
For detailed background on autoencoders, see Audoencoders.md in this repository.


Project Structure

autoencoder/
│
├── activations.py              # Activation functions (ReLU, Sigmoid)
├── layers.py                   # Neural network layers (Dense)
├── loss.py                     # Loss functions (MSE)
├── optimizer.py                # Optimizers (SGD, Adam)
├── model.py                    # Sequential model abstraction
├── autoencoder.py              # Encoder → Latent → Decoder orchestration
├── notebooks/                  # Training and visualization notebooks
│   ├── minist.ipynb            # Reconstruction of MNIST data
│   ├── denoising_minist.ipynb  # Reconstruction of noisy MNIST data
├── Audoencoders.md             # Detailed explanation of autoencoders
├── .gitignore
└── README.md

Each file has one responsibility. No file tries to do everything.


Core Design Philosophy

1. Modularity First

  • Layers do not know about losses
  • Losses do not know about optimizers
  • Optimizers do not know about models
  • Encoder and decoder are separate but composable

This mirrors how real frameworks like PyTorch are structured.


2. Encoder → Latent → Decoder

The autoencoder is explicitly structured as:

Input → Encoder → Latent Space → Decoder → Reconstruction

This makes it easy to:

  • Inspect latent representations
  • Add noise (denoising autoencoder)
  • Extend to VAEs later
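
For example, the denoising variant only changes what the encoder sees at its input; the target stays the clean image. A minimal sketch of the corruption step (the noise level and clipping range are assumptions for illustration, not values taken from the notebooks):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, noise_std=0.3):
    """Corrupt inputs for the denoising variant; targets stay the clean images."""
    x_noisy = x + noise_std * rng.standard_normal(x.shape)
    return np.clip(x_noisy, 0.0, 1.0)   # keep pixel values in [0, 1]

# train on (noisy input, clean target) pairs instead of (x, x)
x = rng.random((4, 784))                 # a fake batch of flattened 28x28 images
x_noisy = add_gaussian_noise(x)
```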

File-by-File Explanation

activations.py

Contains activation functions and their derivatives.

  • ReLU
  • Sigmoid

Each activation implements:

  • forward(x)
  • backward(grad)

No weights. No learning rate. Only math.
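
A minimal version of that interface might look like the sketch below (the exact caching strategy is an assumption; the real file may differ in details):

```python
import numpy as np

class ReLU:
    """max(0, x); caches a mask of positive inputs for the backward pass."""
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask

    def backward(self, grad):
        return grad * self.mask     # gradient passes only where the input was positive

class Sigmoid:
    """1 / (1 + exp(-x)); caches its output, since the derivative reuses it."""
    def forward(self, x):
        self.out = 1.0 / (1.0 + np.exp(-x))
        return self.out

    def backward(self, grad):
        return grad * self.out * (1.0 - self.out)
```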


layers.py

Defines trainable neural-network layers.

Currently implemented:

  • Dense

Responsibilities:

  • Store parameters (W, b)
  • Compute forward pass
  • Compute gradients during backward pass
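
A sketch of a Dense layer with those responsibilities (the initialization scheme and the gradient attribute names dW / db are assumptions, not necessarily what layers.py uses):

```python
import numpy as np

class Dense:
    """Fully connected layer: y = x @ W + b."""
    def __init__(self, in_dim, out_dim):
        # small random weights; the real initialization may differ
        self.W = np.random.randn(in_dim, out_dim) * 0.01
        self.b = np.zeros(out_dim)

    def forward(self, x):
        self.x = x                        # cache the input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad):
        # store parameter gradients for the optimizer, return gradient w.r.t. the input
        self.dW = self.x.T @ grad
        self.db = grad.sum(axis=0)
        return grad @ self.W.T
```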

loss.py

Defines loss functions.

Implemented:

  • MSELoss

Responsibilities:

  • Compute scalar loss
  • Return gradient w.r.t. predictions
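
A mean-squared-error loss with exactly those two responsibilities could be sketched as follows (method names are assumptions):

```python
import numpy as np

class MSELoss:
    """Mean squared error between reconstruction and target."""
    def forward(self, pred, target):
        self.pred, self.target = pred, target
        return np.mean((pred - target) ** 2)

    def backward(self):
        # gradient of the mean squared error w.r.t. the predictions
        return 2.0 * (self.pred - self.target) / self.pred.size
```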

optimizer.py

Handles parameter updates.

Implemented:

  • SGD
  • Adam

The optimizer:

  • Does not compute gradients
  • Simply updates parameters using stored gradients
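
In other words, an SGD step only reads gradients that the layers already stored. A minimal sketch, assuming layers expose W, b and the gradients dW, db as in the Dense sketch above:

```python
class SGD:
    """Vanilla gradient descent: parameter <- parameter - lr * gradient."""
    def __init__(self, layers, lr=0.01):
        self.layers = layers
        self.lr = lr

    def step(self):
        for layer in self.layers:
            if hasattr(layer, "W"):        # only trainable layers carry parameters
                layer.W -= self.lr * layer.dW
                layer.b -= self.lr * layer.db
```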

model.py

Defines a minimal Sequential container.

It:

  • Applies layers in order (forward)
  • Applies gradients in reverse order (backward)

This keeps model logic extremely lightweight.
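
The whole container can be just two loops, one forward and one reversed; a sketch under the same assumptions as above:

```python
class Sequential:
    """Applies layers in order, then routes gradients back in reverse order."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def backward(self, grad):
        for layer in reversed(self.layers):
            grad = layer.backward(grad)
        return grad
```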


autoencoder.py

This file connects everything together.

Responsibilities:

  • Run encoder → latent → decoder
  • Route gradients correctly
  • Expose all layers to the optimizer

This file contains no math, only orchestration.
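
Conceptually, that orchestration can be as small as the sketch below (class and method names are assumptions; the real file may organize this differently):

```python
class Autoencoder:
    """Glue only: encoder and decoder are two Sequential models."""
    def __init__(self, encoder, decoder):
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, x):
        self.latent = self.encoder.forward(x)   # keep the latent code around for inspection
        return self.decoder.forward(self.latent)

    def backward(self, grad):
        grad = self.decoder.backward(grad)      # gradients flow decoder -> encoder
        return self.encoder.backward(grad)

    def layers(self):
        # expose every trainable layer so a single optimizer can update them all
        return self.encoder.layers + self.decoder.layers
```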


notebooks/

End-to-end training and visualization of the reconstruction and denoising process.

The notebooks:

  • Loads MNIST
  • Normalizes and flattens images
  • Builds encoder and decoder
  • Trains the autoencoder using mini-batches

This is the best place to start exploring the project.
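
Putting the sketch classes from the sections above together, a mini-batch training loop could look roughly like this (the fetch_openml loader, layer sizes, and hyperparameters are assumptions for illustration, not values taken from the notebooks):

```python
import numpy as np
from sklearn.datasets import fetch_openml

# load MNIST and normalize to [0, 1]; rows arrive already flattened to 784 values
X = fetch_openml("mnist_784", version=1, as_frame=False).data / 255.0

# hypothetical layer sizes and hyperparameters
encoder = Sequential([Dense(784, 128), ReLU(), Dense(128, 32)])
decoder = Sequential([Dense(32, 128), ReLU(), Dense(128, 784), Sigmoid()])
model = Autoencoder(encoder, decoder)
loss_fn = MSELoss()
opt = SGD(model.layers(), lr=0.01)

batch_size, epochs = 128, 5
for epoch in range(epochs):
    perm = np.random.permutation(len(X))
    for i in range(0, len(X), batch_size):
        batch = X[perm[i:i + batch_size]]
        recon = model.forward(batch)            # forward: encoder -> latent -> decoder
        loss = loss_fn.forward(recon, batch)    # reconstruct the (clean) input
        model.backward(loss_fn.backward())      # backward pass fills each layer's dW / db
        opt.step()                              # parameter update
    print(f"epoch {epoch}: last batch loss {loss:.4f}")
```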


How to Run

Requirements

python >= 3.8
numpy
scikit-learn

Install dependencies:

pip install numpy scikit-learn

Then open and run the notebooks in notebooks/.

You should see the reconstruction and denoising process.


Why This Project Matters

Modern deep-learning libraries hide complexity.

This project reveals it.

If you understand this codebase, you will:

  • Debug models faster
  • Design architectures more confidently
  • Truly understand backpropagation

Frameworks change. Fundamentals don’t.

