This project is a minimal, educational implementation of an automatic differentiation (autograd) engine in Python. It defines a Value object that tracks operations to build a computation graph and efficiently computes gradients using backpropagation.
This implementation is heavily inspired by and follows the structure of Andrej Karpathy's micrograd. It's intended as a learning tool to understand how neural networks and modern deep learning libraries like PyTorch work "under the hood."
The project also includes a small neural network library (nn.py) built on top of this autograd engine, capable of building and training Multi-Layer Perceptrons (MLPs).
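The core idea behind the engine can be sketched in a few lines. The snippet below is a minimal illustration in the spirit of micrograd, not this repo's exact code (the real `Value` class supports more operations, including the `tanh` and `ReLU` activations):

```python
class Value:
    """A scalar that remembers how it was computed, for backprop."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._children = _children
        self._backward = lambda: None  # set by the op that produced this node

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._children:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(-3.0)
c = a * b + a   # c = -4.0
c.backward()
print(a.grad)   # dc/da = b + 1 = -2.0
print(b.grad)   # dc/db = a = 2.0
```

Each operation records a small `_backward` closure; calling `backward()` on the output replays them in reverse topological order so every node accumulates its gradient.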
- **Value Object**: A scalar wrapper that tracks its "children" (the values it was computed from) and the operation used.
- **Automatic Differentiation**: Computes gradients for any mathematical expression by traversing the computation graph.
- **Neural Network Library**: Basic modules for building neural networks: `Neuron`, `Layer`, `MLP` (Multi-Layer Perceptron).
- **Activations**: Includes `tanh` and `ReLU` activation functions.
- **Visualization**: Utility functions to draw the computation graph using `graphviz`.
```
micrograd_project/
├── micro_grad.py       # Core autograd engine (Value class)
├── nn.py               # Neural network library (Neuron, Layer, MLP)
├── viz.py              # Visualization utilities (draw_dot)
├── train_moons.py      # Script to train an MLP on the moons dataset
├── plot_neural_net.py  # Visualizes a small neural network computation graph
├── examples.ipynb      # Jupyter notebook with interactive examples
├── requirements.txt    # Python dependencies
└── README.md           # You are here :)
```
Clone the repository and navigate into the project directory:

```bash
git clone https://github.com/its-nott-me/micro_grad
cd micro_grad
```

Create a virtual environment and activate it:

```bash
python -m venv venv
venv\Scripts\activate      # On Windows
# source venv/bin/activate # On macOS/Linux
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Run the main example to train an MLP on the “moons” dataset:

```bash
python train_moons.py
```

This will train a small network and display the decision boundary.
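The training script presumably follows the standard loop: forward pass, loss, backward pass, parameter update. The sketch below shows that loop shape on a tiny hand-differentiated linear model so it runs standalone without the library; the data, model, and learning rate here are made up for illustration:

```python
# Standard training loop: forward -> loss -> gradients -> update.
# Model: y = w*x + b, fit to points generated from y = 2x + 1.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = 0.0, 0.0
lr = 0.05

for step in range(500):
    dw = db = loss = 0.0
    for x, y in data:
        # forward pass: mean squared error over the dataset
        err = (w * x + b) - y
        loss += err * err / len(data)
        # backward pass: hand-derived gradients of the MSE
        dw += 2 * err * x / len(data)
        db += 2 * err / len(data)
    # gradient descent update
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # approaches w ≈ 2, b ≈ 1
```

With `Value`-based parameters the loop looks the same, except gradients come from calling `backward()` on the loss instead of being derived by hand.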
To visualize how the computation graph is built and gradients flow through the model, run:

```bash
python plot_neural_net.py
```

This will:

- Build a small 2→2→1 neural network.
- Run a forward pass on a sample input.
- Generate a computation graph (`simple_net.svg`) showing how values and operations connect.
Example output (simplified view):
💡 The graph shows how each `Value` node connects through operations like `mul`, `add`, and the activation functions, demonstrating how backpropagation works under the hood.
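If you extend the engine with your own operations, a useful sanity check (not part of this repo) is comparing an analytic derivative against a finite-difference estimate; the same idea applies to `Value.grad` after calling `backward()`:

```python
def f(x):
    return x * x * x - 2 * x  # f(x) = x^3 - 2x

def analytic_grad(x):
    return 3 * x * x - 2      # f'(x) = 3x^2 - 2

def numeric_grad(fn, x, h=1e-5):
    # central difference: (f(x+h) - f(x-h)) / (2h)
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 1.5
print(analytic_grad(x))    # 4.75
print(numeric_grad(f, x))  # ≈ 4.75
```

If the two numbers disagree beyond roughly `h**2` error, the `_backward` closure for some operation is wrong.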
For a step-by-step breakdown of how the `Value` object and backpropagation work, open:

```bash
jupyter notebook examples.ipynb
```

Dependencies:

- `numpy`
- `matplotlib`
- `graphviz` (you may also need to install the Graphviz binary; see the Graphviz download page)
- `scikit-learn`
- `jupyter` (for running notebooks)
This codebase is a Python reimplementation of Andrej Karpathy's micrograd. All credit for the original concept and educational inspiration goes to him.