
Sparse Resonance Networks

Sparse Resonance Networks is a research toolkit for building and evaluating sparse modern Hopfield-style associative memories. The project focuses on scalable modules, reproducible experiments, and approachable baselines that demonstrate how sparsity can improve memory retrieval efficiency.

Highlights

  • Modular Hopfield pooling layers with interchangeable sparsity mechanisms (see the retrieval sketch below this list)
  • Ready-to-run experiments for MNIST and CIFAR multiple-instance learning
  • Scripts for theoretical validation and visualization of convergence behavior
  • Lightweight configuration files for quick benchmarking and ablation studies
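
The core mechanism can be sketched in a few lines of plain PyTorch. The snippet below is illustrative only, not this repository's API: it pairs the standard one-step modern Hopfield retrieval update (Ramsauer et al., 2020) with sparsemax (Martins & Astudillo, 2016), one example of a sparsity mechanism, to show how sparse normalization zeroes out unrelated memories during retrieval.

import torch

def sparsemax(z):
    # Sparsemax (Martins & Astudillo, 2016): Euclidean projection of z
    # onto the probability simplex; low-scoring entries get exactly zero.
    z_sorted, _ = torch.sort(z, dim=-1, descending=True)
    cumsum = z_sorted.cumsum(dim=-1)
    k = torch.arange(1, z.size(-1) + 1, device=z.device, dtype=z.dtype)
    support = 1 + k * z_sorted > cumsum      # entries kept in the support
    k_z = support.sum(dim=-1, keepdim=True)  # support size
    tau = (cumsum.gather(-1, k_z - 1) - 1) / k_z.to(z.dtype)
    return torch.clamp(z - tau, min=0.0)

# One-step modern Hopfield retrieval: attention over stored patterns,
# then a weighted sum of those patterns (Ramsauer et al., 2020).
torch.manual_seed(0)
X = torch.randn(16, 64)                  # 16 stored patterns, 64-dim
query = X[3] + 0.3 * torch.randn(64)     # noisy probe of pattern 3
beta = 4.0
scores = beta * (X @ query)              # similarity to each stored pattern

p_dense = torch.softmax(scores, dim=-1)  # dense: every pattern gets weight
p_sparse = sparsemax(scores)             # sparse: most weights exactly zero
retrieved = p_sparse @ X

print("zero weights:", int((p_sparse == 0).sum()), "of 16")
print("cosine to stored pattern:",
      float(torch.nn.functional.cosine_similarity(retrieved, X[3], dim=0)))

Fewer nonzero weights means fewer stored patterns mixed into the retrieved state, which is the efficiency argument behind the sparse variants.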

Quick Start

Create a fresh environment and install the core dependencies:

conda create -n sparse_resonance python=3.8
conda activate sparse_resonance
pip install -r requirements.txt
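
To sanity-check the install (assuming PyTorch is among the pinned dependencies), run:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"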

Run the MNIST MIL demo with default hyperparameters:

python mnist_mil_main.py --bag_size 5
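
Here --bag_size controls how many MNIST images are grouped into each multiple-instance bag. The sketch below shows one common way such bags are built, under the usual MIL convention that a bag is positive when it contains at least one target digit; the target digit and helper function are illustrative assumptions, not the script's exact implementation.

import torch
from torchvision import datasets

def make_bags(images, labels, bag_size=5, n_bags=1000, target_digit=9):
    # Group random instances into bags; a bag is positive iff it contains
    # at least one instance of the target digit.
    bags, bag_labels = [], []
    for _ in range(n_bags):
        idx = torch.randint(0, len(images), (bag_size,))
        bags.append(images[idx])                        # (bag_size, 1, 28, 28)
        bag_labels.append(int((labels[idx] == target_digit).any()))
    return torch.stack(bags), torch.tensor(bag_labels)

mnist = datasets.MNIST("data", train=True, download=True)
images = mnist.data.unsqueeze(1).float() / 255.0        # (60000, 1, 28, 28)
labels = mnist.targets
bags, bag_labels = make_bags(images, labels, bag_size=5)
print(bags.shape, bag_labels.float().mean())            # positive rate ~ 1 - 0.9**5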

For CIFAR MIL experiments:

python cifar_mil_main.py --dataset cifar10 --bag_size 20

Project Structure

  • layers.py — entry point for Hopfield pooling layers with sparse, dense, entmax, and generalized sparse variants
  • hflayers/ and sparse_hflayers/ — reference implementations of dense and sparse transformer blocks
  • datasets/ — utilities for creating synthetic bags and loading real datasets
  • theoretical_results_validation/ — scripts to replicate convergence and energy landscape figures (see the energy check after this list)
  • imgs/ — sample plots from baseline experiments
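
The convergence figures rest on a known property of modern Hopfield networks: the retrieval update xi <- softmax(beta * X xi)^T X monotonically decreases the energy E(xi) = -(1/beta) log sum_i exp(beta * <x_i, xi>) + 0.5 * ||xi||^2 (up to constants). The following is a minimal, self-contained check of that property, independent of the repository's scripts:

import torch

def energy(xi, X, beta):
    # Modern Hopfield energy (Ramsauer et al., 2020), constant terms dropped:
    # E(xi) = -(1/beta) * log sum_i exp(beta * <x_i, xi>) + 0.5 * ||xi||^2
    return -torch.logsumexp(beta * (X @ xi), dim=0) / beta + 0.5 * (xi @ xi)

torch.manual_seed(0)
X = torch.randn(32, 50)   # 32 stored patterns, 50-dim
xi = torch.randn(50)      # random initial state
beta = 2.0
for step in range(4):
    print(f"step {step}: E = {energy(xi, X, beta).item():.4f}")
    xi = torch.softmax(beta * (X @ xi), dim=0) @ X   # retrieval update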

Experiment Recipes

Visualize Theoretical Benchmarks

python theoretical_results_validation/plotting.py

Real-World MIL Runs

python real_world_mil.py --dataset fox --mode sparse

Key arguments:

  • dataset: fox, tiger, ucsb, elephant
  • mode: sparse or standard
  • cpus_per_trial: number of CPU cores to reserve per run
  • gpus_per_trial: GPU allocation per run (0 or 1)
  • gpus_id: comma-separated device IDs when GPUs are available
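
For example, a sparse single-GPU run on the tiger dataset looks like this (flag values are illustrative):

python real_world_mil.py --dataset tiger --mode sparse --cpus_per_trial 4 --gpus_per_trial 1 --gpus_id 0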

Contributing

  1. Fork the repository and create a feature branch.
  2. Format code with black and run unit tests relevant to your change.
  3. Open a pull request describing the motivation and experimental impact.

Bug reports and feature suggestions are welcome through issues. Please include reproduction steps and expected outcomes when possible.

License

This project is distributed under the terms of the MIT License.
