brainsets is a Python package for processing neural data into a standardized format.
brainsets requires Python 3.9 or later.
To install the package, run the following command:
```
pip install brainsets
```

| brainset_id | Brainset Card | Raw Data Size | Processed Data Size |
|---|---|---|---|
| churchland_shenoy_neural_2012 | Link | 46 GB | 25 GB |
| flint_slutzky_accurate_2012 | Link | 3.2 GB | 151 MB |
| odoherty_sabes_nonhuman_2017 | Link | 22 GB | 26 GB |
| pei_pandarinath_nlb_2021 | Link | 688 KB | 22 MB |
| perich_miller_population_2018 | Link | 13 GB | 2.9 GB |
| kemp_sleep_edf_2013 | TBA | 8.2 GB | 60 GB |
| allen_visual_coding_ophys_2016 | TBA | 356 GB | 58 GB |
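To plan disk usage before downloading, the sizes in the table above can be tallied; a minimal sketch (sizes copied from the table, treating units as decimal, i.e. 1 GB = 1000 MB):

```python
# (raw, processed) sizes from the table above, normalized to GB.
sizes_gb = {
    "churchland_shenoy_neural_2012": (46, 25),
    "flint_slutzky_accurate_2012": (3.2, 0.151),
    "odoherty_sabes_nonhuman_2017": (22, 26),
    "pei_pandarinath_nlb_2021": (0.000688, 0.022),
    "perich_miller_population_2018": (13, 2.9),
    "kemp_sleep_edf_2013": (8.2, 60),
    "allen_visual_coding_ophys_2016": (356, 58),
}

total_raw = sum(raw for raw, _ in sizes_gb.values())
total_processed = sum(proc for _, proc in sizes_gb.values())
print(f"raw: {total_raw:.1f} GB, processed: {total_processed:.1f} GB")
# raw: 448.4 GB, processed: 172.1 GB
```

Note that raw data can be deleted after processing if space is tight, and some datasets (e.g. kemp_sleep_edf_2013) grow during processing while others shrink.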
This work is only made possible thanks to the public release of these valuable datasets by the original researchers. If you use any of the datasets processed by brainsets in your research, please make sure to cite the appropriate original papers and follow any usage guidelines specified by the dataset creators. Proper attribution not only gives credit to the researchers who collected and shared the data but also helps promote open science practices in the neuroscience community. You can find the original papers and usage guidelines for each dataset in the brainsets documentation.
First, configure the directories where brainsets will store raw and processed data:
```
brainsets config
```

You will be prompted to enter the paths to the raw and processed data directories.
```
$> brainsets config
Enter raw data directory: ./data/raw
Enter processed data directory: ./data/processed
```

You can update the configuration at any time by running the config command again.
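If the directories you point brainsets at do not exist yet, you can create them up front with the standard library (whether the config command creates missing directories itself is not covered here; the paths below are the ones from the example and are only placeholders):

```python
from pathlib import Path

# Placeholder paths matching the example session above; adjust to your setup.
raw_dir = Path("./data/raw")
processed_dir = Path("./data/processed")

for d in (raw_dir, processed_dir):
    # parents=True creates intermediate directories; exist_ok=True makes
    # this a no-op if the directory is already there.
    d.mkdir(parents=True, exist_ok=True)
```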
You can list the available datasets by running the list command:
```
brainsets list
```

You can prepare a dataset by running the prepare command:
```
brainsets prepare <brainset>
```

Data preparation involves downloading the raw data from the source and then processing it, following a set of rules defined in `pipelines/<brainset>/`.
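When scripting preparation of several brainsets, it can help to build the CLI invocation programmatically. A small sketch of a hypothetical helper (only the `prepare` subcommand and the `--cores` flag come from this README; the helper itself is illustrative):

```python
import shlex
from typing import Optional

def prepare_cmd(brainset_id: str, cores: Optional[int] = None) -> str:
    """Build a `brainsets prepare` command line (hypothetical helper)."""
    argv = ["brainsets", "prepare", brainset_id]
    if cores is not None:
        argv += ["--cores", str(cores)]
    # shlex.join quotes each argument safely for the shell.
    return shlex.join(argv)

print(prepare_cmd("perich_miller_population_2018", cores=8))
# brainsets prepare perich_miller_population_2018 --cores 8
```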
For example, to prepare the Perich & Miller (2018) dataset, you can run:
```
brainsets prepare perich_miller_population_2018 --cores 8
```

If you are planning to contribute to the package, you can install it in development mode by running the following command:
```
pip install -e ".[dev]"
```

Install the pre-commit hooks:
```
pre-commit install
```

Unit tests are located under `test/`. Run the entire test suite with

```
pytest
```

or test individual files via, e.g., `pytest test/test_enum_unique.py`.
Please cite our paper if you use this code in your own work:
```bibtex
@inproceedings{
  azabou2023unified,
  title={A Unified, Scalable Framework for Neural Population Decoding},
  author={Mehdi Azabou and Vinam Arora and Venkataramana Ganesh and Ximeng Mao and Santosh Nachimuthu and Michael Mendelson and Blake Richards and Matthew Perich and Guillaume Lajoie and Eva L. Dyer},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
}
```