---
title: NeuroMusicLab
emoji: 🧠🎵
colorFrom: indigo
colorTo: red
sdk: gradio
pinned: false
license: mit
short_description: A demo for EEG-based music composition and manipulation.
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
A user-friendly, accessible neuro-music studio for motor rehabilitation and creative exploration. Compose and remix music using EEG motor imagery signals—no musical experience required!
## Features

- Automatic Composition: Layer musical stems (bass, drums, instruments, vocals) by imagining left/right hand or leg movements. Each correct, high-confidence prediction adds a new sound (see the sketch after this list).
- DJ Mode: After all four layers are added, apply real-time audio effects (Echo, Low Pass, Compressor, Fade In/Out) to remix your composition using new brain commands.
- Seamless Playback: All completed layers play continuously, with smooth transitions and effect toggling.
- Manual Classifier: Test the classifier on individual movements and visualize EEG data, class probabilities, and confusion matrix.
- Accessible UI: Built with Gradio for easy use in a browser or on Hugging Face Spaces.
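
Conceptually, each composition step is a small confidence gate: the prediction must match the prompted movement and clear a probability threshold before a stem is layered in. The sketch below is a hedged illustration; the threshold value, the movement-to-stem mapping, and the function names are assumptions rather than the app's actual code.

```python
# Hedged sketch of the confidence gate used when composing; the threshold,
# mapping, and names below are assumptions, not the app's actual API.
import numpy as np

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off for a "high-confidence" prediction
MOVEMENT_TO_STEM = {        # assumed movement-to-stem assignment
    "left_hand": "bass",
    "right_hand": "drums",
    "left_leg": "instruments",
    "right_leg": "vocals",
}

def maybe_add_layer(probs: np.ndarray, classes: list[str],
                    prompted: str, active_stems: set[str]) -> bool:
    """Add the stem for `prompted` only if the classifier agrees confidently."""
    predicted = classes[int(np.argmax(probs))]
    if predicted == prompted and float(np.max(probs)) >= CONFIDENCE_THRESHOLD:
        active_stems.add(MOVEMENT_TO_STEM[prompted])
        return True   # a new instrument joins the mix
    return False      # wrong class or low confidence: composition waits
```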
## How to Use

- Compose:
  - Click "Start Composing" and follow the on-screen prompts.
  - Imagine the prompted movement (left hand, right hand, left leg, right leg) to add musical layers.
  - Each correct, confident prediction adds a new instrument to the mix.
- DJ Mode:
  - After all four layers are added, enter DJ mode.
  - Imagine movements in a specific order to toggle effects on each stem.
  - Effects are sticky: a repeated command only toggles an effect on every 4th repetition, which keeps playback smooth (see the sketch after this list).
- Manual Classifier:
  - Switch to the Manual Classifier tab to test the model on random epochs for each movement.
  - Visualize predictions, probabilities, and the confusion matrix.
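
One way to read the sticky-toggle rule is as a per-effect repetition counter. This is a hedged sketch; the class name, counter layout, and effect names are assumptions, not the actual logic in `sound_control.py`.

```python
# Minimal sketch of DJ mode's "toggle every 4th repetition" rule.
from collections import defaultdict

TOGGLE_EVERY = 4  # from the rule above: effects flip on every 4th repetition

class StickyEffectToggle:
    """Count repeated brain commands; flip an effect only on every 4th one."""

    def __init__(self) -> None:
        self.counts = defaultdict(int)   # (stem, effect) -> repetitions seen
        self.active = defaultdict(bool)  # (stem, effect) -> currently on?

    def register(self, stem: str, effect: str) -> bool:
        key = (stem, effect)
        self.counts[key] += 1
        if self.counts[key] % TOGGLE_EVERY == 0:
            self.active[key] = not self.active[key]  # sticky flip
        return self.active[key]

toggler = StickyEffectToggle()
states = [toggler.register("drums", "echo") for _ in range(8)]
print(states)  # off for repetitions 1-3, on at 4-7, off again at 8
```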
## Project Structure

```text
app.py              # Main Gradio app and UI logic
sound_control.py    # Audio processing and effect logic
classifier.py       # EEG classifier
config.py           # Configuration and constants
data_processor.py   # EEG data loading and preprocessing
requirements.txt    # Python dependencies
.gitignore          # Files/folders to ignore in git
SoundHelix-Song-6/  # Demo audio stems (bass, drums, instruments, vocals)
```
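
As a flavor of the effect logic that `sound_control.py` is responsible for, here is a hedged sketch of an Echo effect; the function name and the default delay/decay values are assumptions, not the repository's implementation.

```python
# Hedged sketch of an Echo effect: mix a delayed, attenuated copy of the
# signal back into itself. Parameter defaults are illustrative assumptions.
import numpy as np

def apply_echo(samples: np.ndarray, sr: int,
               delay_s: float = 0.25, decay: float = 0.5) -> np.ndarray:
    """Return `samples` (float PCM) with a single echo tap appended."""
    delay = int(sr * delay_s)                  # delay converted to samples
    out = np.zeros(len(samples) + delay, dtype=np.float32)
    out[: len(samples)] += samples             # dry signal
    out[delay:] += decay * samples             # delayed, attenuated copy
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out   # normalize to avoid clipping
```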
## Setup

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Add required data:
  - Ensure the `SoundHelix-Song-6/` folder with all audio stems (`bass.wav`, `drums.wav`, `instruments.wav` or `other.wav`, `vocals.wav`) is present and tracked in your repository.
  - Include at least one demo EEG `.mat` file (as referenced in `DEMO_DATA_PATHS` in `config.py`) so the app runs out of the box. Place it in the correct location and ensure it is tracked by git (see the loading sketch after this list).
- Run the app:

  ```bash
  python app.py
  ```

- Open in browser:
  - Go to `http://localhost:7860` (or the port shown in the terminal).
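
A quick way to verify the demo data is wired up is to load each `.mat` file before launching. This is a hedged sketch: the placeholder path and the assumption that `DEMO_DATA_PATHS` is a simple list of file paths come from this README, not from `config.py` itself.

```python
# Sanity-check the demo EEG .mat files referenced by the app.
from pathlib import Path
import scipy.io

DEMO_DATA_PATHS = ["data/demo_session.mat"]  # placeholder; match config.py

for path in DEMO_DATA_PATHS:
    if not Path(path).exists():
        raise FileNotFoundError(f"Missing demo EEG file: {path}")
    mat = scipy.io.loadmat(path)
    # List the variables stored in the file (skip MATLAB's __header__ etc.)
    print(path, "->", [k for k in mat if not k.startswith("__")])
```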
## Deployment

- Ready for Hugging Face Spaces or any Gradio-compatible cloud platform.
- Minimal `.gitignore` and a clean repo for easy deployment.
- Make sure to include all required audio stems and at least two demo `.mat` EEG files in your deployment for full functionality.
## Credits

- Concept & Lead Developer: Sofia Fregni
- Model Training: Katarzyna Kuhlmann
- Deployment/Infrastructure: Hamed Koochaki Kelardeh
- Audio Stems: SoundHelix
## References

The EEG dataset used for training and demonstration is sourced from the following published Data Descriptor:

Kaya, M., Binli, M. K., Ozbay, E., Yanar, H., & Mishchenko, Y. (2018). A large electroencephalographic motor imagery dataset for electroencephalographic brain-computer interfaces. Scientific Data, 5, 180211. https://doi.org/10.1038/sdata.2018.211

The signal processing and base model architecture (e.g., ShallowFBCSPNet) were implemented using the open-source Python library Braindecode:

Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., ... & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11), 5391–5420. https://doi.org/10.1002/hbm.23730

Project Link: https://braindecode.org/
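
For reference, a minimal sketch of instantiating the base model via Braindecode is shown below. Keyword names have changed across Braindecode releases (older versions use `in_chans` / `n_classes` / `input_window_samples`), and the channel count, class count, and window length here are assumptions, not this project's training configuration.

```python
# Hedged sketch: build a ShallowFBCSPNet with Braindecode (>= 0.8 keywords).
from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(
    n_chans=19,                # assumed EEG channel count
    n_outputs=4,               # four motor-imagery classes (L/R hand, L/R leg)
    n_times=200,               # assumed samples per epoch
    final_conv_length="auto",  # size the final conv layer to the input length
)
print(model)
```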
## License

MIT License - see the LICENSE file for details.