
🧠 MnemoDyn: Learning Resting State Dynamics from 40K fMRI Sequences

[Paper] [Poster] [Slide]

Sourav Pal, Viet Luong, Hoseok Lee, Tingting Dan, Guorong Wu, Richard Davidson, Won Hwa Kim, Vikas Singh

MnemoDyn architecture

MnemoDyn is an operator-learning foundation model for resting-state fMRI, combining multi-resolution wavelet dynamics with CDE-style temporal modeling.
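The multi-resolution wavelet idea can be illustrated with a toy Haar decomposition of a 1-D time series. This is a conceptual sketch only — MnemoDyn's actual wavelet machinery lives in `code/light/model/` and is considerably more involved:

```python
# Toy multi-resolution (Haar) analysis of a 1-D signal: each level splits
# the series into pairwise averages (coarse trend) and pairwise differences
# (detail). Illustrative only; this is not the MnemoDyn implementation.

def haar_step(x):
    """One Haar level: pairwise averages and pairwise half-differences."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def multires(x, levels):
    """Decompose x into a coarse approximation plus per-level detail bands."""
    details = []
    for _ in range(levels):
        x, d = haar_step(x)
        details.append(d)
    return x, details

coarse, details = multires([1, 3, 2, 2, 5, 7, 9, 1], levels=2)
# coarse carries the slow trend; details[0] the fastest fluctuations.
```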

🚀 Update

MnemoDyn is now published on Hugging Face: https://huggingface.co/vhluong/MnemoDyn

You can also publish your own trained checkpoint directly from this repo.
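A minimal sketch of publishing a checkpoint, assuming you have run `pip install huggingface_hub` and `huggingface-cli login`; the repo id and checkpoint path below are placeholders, not names from this repo:

```python
# Sketch: push a trained checkpoint folder to the Hugging Face Hub.
# Assumes huggingface_hub is installed and you are logged in.
import importlib.util

def publish_checkpoint(repo_id, checkpoint_dir):
    if importlib.util.find_spec("huggingface_hub") is None:
        raise RuntimeError("pip install huggingface_hub first")
    from huggingface_hub import HfApi
    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)  # no-op if the repo exists
    api.upload_folder(repo_id=repo_id, folder_path=checkpoint_dir)

# Example (placeholder names):
# publish_checkpoint("your-username/MnemoDyn-finetuned", "Result/MyRun/checkpoints")
```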

📚 Tutorial

A usage walkthrough is available as a Google Colab notebook:

Open In Colab

🌟 At A Glance

  • Pretraining backbones: code/light/model/main.py, code/light/model/main_masked_autoencode.py, code/light/model/main_masked_autoencode_jepa.py, code/light/model/main_denoise.py, code/light/model/orion.py
  • Core model modules: code/light/model/conv1d_optimize.py, code/light/model/dense_layer.py, code/light/model/ema.py, code/light/model/normalizer.py
  • Downstream tasks: HBN, ADHD200, ADNI, ABIDE, NKIR, UK Biobank, HCP Aging under code/light/*.py
  • Launch scripts: code/light/script/*.sh

📁 Repository Layout

.
├── highdim_req.txt
├── pyproject.toml
├── code/
│   ├── parcellation/
│   └── light/
│       ├── model/
│       ├── script/
│       ├── *_dataset.py
│       └── *classification*.py, *regress*.py
└── README.md

⚙️ Environment Setup

Python 3.10+ is recommended.

🚀 Option A (recommended): uv

uv venv
source .venv/bin/activate
uv sync

📦 Option B: pip

python -m venv .venv
source .venv/bin/activate
pip install -r highdim_req.txt

Ensure your installed PyTorch build matches your CUDA toolkit version.
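A quick sanity check that your PyTorch build sees CUDA (a sketch; it prints a diagnostic string either way, so it is safe on CPU-only machines):

```python
# Report the installed torch version and whether CUDA is visible.
import importlib.util

def torch_cuda_report():
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"

print(torch_cuda_report())
```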

🔄 Preprocessing Pipeline (NIfTI to Parcellated CIFTI)

We provide a unified, Python-based CLI pipeline that maps volumetric NIfTI images to fs_LR surfaces and parcellates the resulting dense time series. The pipeline reads the repetition time (TR) directly from each NIfTI header so that downstream models learn accurate temporal dynamics.

📋 Requirements

  • Connectome Workbench (wb_command) installed and on your system PATH.
  • nibabel and tqdm Python packages.
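The two requirements above can be checked up front with a short script (a sketch, not part of the pipeline):

```python
# Verify the preprocessing prerequisites before launching the pipeline.
import importlib.util
import shutil

def check_prereqs():
    missing = []
    if shutil.which("wb_command") is None:   # Connectome Workbench on PATH?
        missing.append("wb_command")
    for pkg in ("nibabel", "tqdm"):          # required Python packages
        if importlib.util.find_spec(pkg) is None:
            missing.append(pkg)
    return missing

print("missing:", check_prereqs() or "none")
```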

💻 Usage

Run the pipeline from the repository root:

# Sequential Processing (Safe for Colab/Low RAM)
python -m code.preprocess.pipeline \
  --input-dir /path/to/niftis \
  --output-dir /path/to/output_dir \
  --atlas /path/to/atlas.dlabel.nii \
  --pattern "*_task-rest_space-MNI305_preproc.nii.gz"

# Parallel Processing (Recommended for powerful local machines)
python -m code.preprocess.pipeline_parallel \
  --input-dir /path/to/niftis \
  --output-dir /path/to/output_dir \
  --atlas /path/to/atlas.dlabel.nii \
  --pattern "*_task-rest_space-MNI305_preproc.nii.gz" \
  --jobs 8  # Number of concurrent subjects. Warning: Highly memory intensive.

The script orchestrates wb_command for left/right surface mapping and resampling, writes an intermediate .dtseries.nii, and finally parcellates it with the provided atlas, injecting the correct native TR throughout.
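For reference, the native TR can be read from a NIfTI header with nibabel, along these lines (the pipeline does this automatically; the filename below is a placeholder):

```python
# Sketch: read the TR (4th voxel dimension, typically seconds) from a
# 4-D NIfTI header. Assumes nibabel is installed.
import importlib.util

def read_tr(nifti_path):
    if importlib.util.find_spec("nibabel") is None:
        raise RuntimeError("pip install nibabel first")
    import nibabel as nib
    img = nib.load(nifti_path)
    return float(img.header.get_zooms()[-1])  # last zoom of a 4-D image = TR

# Example (placeholder filename):
# tr = read_tr("sub-01_task-rest_space-MNI305_preproc.nii.gz")
```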

⚡ Quick Start

🔍 1) Inspect pretraining CLIs

cd code/light/model
python main.py --help
python main_masked_autoencode.py --help
python main_masked_autoencode_jepa.py --help
python main_denoise.py --help

🏋️ 2) Pretraining

bash orion.sh

🏃 3) Run downstream examples

cd code/light
bash script/hbn_classification.sh
bash script/adhd_200_diagnose.sh

🛠️ Typical Workflow

  1. Pretrain a foundation checkpoint (code/light/model/main*.py).
  2. Save Lightning checkpoints under a versioned results directory.
  3. Fine-tune a downstream head using a task script in code/light/.
  4. Track outputs and metrics under Result/<ExperimentName>/....
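Steps 2 and 4 follow a `Result/<ExperimentName>/...` convention; a minimal helper for building such paths might look like this (experiment and version names are placeholders, not fixed by this repo):

```python
# Sketch: build a versioned results path under Result/<ExperimentName>/.
from pathlib import Path

def result_dir(experiment, version, root="Result"):
    """Build (but do not create) a versioned results directory path."""
    return Path(root) / experiment / version

ckpt_dir = result_dir("HBN_classification", "version_0") / "checkpoints"
```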

📖 Citation

If this work helps your research, please cite:

@inproceedings{pal2026mnemodyn,
  title={MnemoDyn: Learning Resting State Dynamics from $40$K {FMRI} sequences},
  author={Sourav Pal and Viet Luong and Hoseok Lee and Tingting Dan and Guorong Wu and Richard Davidson and Won Hwa Kim and Vikas Singh},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=zexMILcQOV}
}

CC BY-NC 4.0

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


About

Official implementation for "MnemoDyn: Learning Resting State Dynamics from 40K FMRI sequences" (ICLR 2026)
