A proof-of-concept demo that translates Motor Imagery (MI) EEG brain signals into a layered, evolving musical composition. Designed for neuro-feedback and rehabilitation, it lets users "compose" and manipulate music by imagining movements, and it uses prerecorded EEG data for robust model benchmarking.


---
title: NeuroMusicLab
emoji: 🧠🎵
colorFrom: indigo
colorTo: red
sdk: gradio
pinned: false
license: mit
short_description: A demo for EEG-based music composition and manipulation.
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

EEG Motor Imagery Music Composer

A user-friendly, accessible neuro-music studio for motor rehabilitation and creative exploration. Compose and remix music using EEG motor imagery signals—no musical experience required!

Features

  • Automatic Composition: Layer musical stems (bass, drums, instruments, vocals) by imagining left/right hand or leg movements. Each correct, high-confidence prediction adds a new sound.
  • DJ Mode: After all four layers are added, apply real-time audio effects (Echo, Low Pass, Compressor, Fade In/Out) to remix your composition using new brain commands (a minimal echo sketch follows this list).
  • Seamless Playback: All completed layers play continuously, with smooth transitions and effect toggling.
  • Manual Classifier: Test the classifier on individual movements and visualize the EEG data, class probabilities, and confusion matrix.
  • Accessible UI: Built with Gradio for easy use in a browser or on Hugging Face Spaces.
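
For intuition, here is a minimal delay-and-add echo of the kind DJ Mode applies to a stem. This is an illustrative sketch only, not the implementation in sound_control.py; the function name and parameters are hypothetical.

```python
import numpy as np

def apply_echo(stem: np.ndarray, sr: int, delay_s: float = 0.25, decay: float = 0.5) -> np.ndarray:
    """Delay-and-add echo: mix a decayed, time-shifted copy back into the signal."""
    delay = int(sr * delay_s)                   # delay in samples
    out = np.zeros(len(stem) + delay, dtype=np.float32)
    out[:len(stem)] += stem                     # dry signal
    out[delay:] += decay * stem                 # delayed, attenuated copy
    peak = np.abs(out).max()
    return out / peak if peak > 1.0 else out    # normalize only if clipping
```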

How It Works

  1. Compose:
    • Click "Start Composing" and follow the on-screen prompts.
    • Imagine the prompted movement (left hand, right hand, left leg, right leg) to add musical layers.
    • Each correct, confident prediction adds a new instrument to the mix.
  2. DJ Mode:
    • After all four layers are added, enter DJ mode.
    • Imagine movements in a specific order to toggle effects on each stem.
    • Effects are sticky: to keep playback smooth, a command only toggles its effect on every 4th repetition (see the control-flow sketch after this list).
  3. Manual Classifier:
    • Switch to the Manual Classifier tab to test the model on random epochs for each movement.
    • Visualize predictions, probabilities, and the confusion matrix.
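
The sketch below illustrates the control flow described above: a prediction only adds a layer when it matches the prompt with high confidence, and in DJ Mode an effect only flips on every 4th repeated command. All names and thresholds here are hypothetical and do not mirror app.py.

```python
CONF_THRESHOLD = 0.7    # assumed confidence gate for accepting a prediction
TOGGLE_EVERY = 4        # an effect toggles only on every 4th repetition

def compose_step(epoch, target_class, layers, classify_epoch):
    """Add the next stem only on a correct, high-confidence prediction."""
    pred, conf = classify_epoch(epoch)          # e.g. ("left_hand", 0.83)
    if pred == target_class and conf >= CONF_THRESHOLD:
        layers.append(target_class)             # unlock the next instrument
    return layers

def dj_step(command, counts, effects):
    """Sticky toggling: count repetitions, flip the effect every 4th one."""
    counts[command] = counts.get(command, 0) + 1
    if counts[command] % TOGGLE_EVERY == 0:
        effects[command] = not effects.get(command, False)
    return effects
```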

Project Structure

app.py                # Main Gradio app and UI logic
sound_control.py      # Audio processing and effect logic
classifier.py         # EEG classifier
config.py             # Configuration and constants
data_processor.py     # EEG data loading and preprocessing
requirements.txt      # Python dependencies
.gitignore            # Files/folders to ignore in git
SoundHelix-Song-6/    # Demo audio stems (bass, drums, instruments, vocals)

Quick Start

  1. Install dependencies:
    pip install -r requirements.txt
  2. Add required data:
    • Ensure the SoundHelix-Song-6/ folder with all audio stems (bass.wav, drums.wav, instruments.wav or other.wav, vocals.wav) is present and tracked in your repository.
    • Include at least one demo EEG .mat file (at a path listed in DEMO_DATA_PATHS in config.py) so the app runs out of the box. Place it in the matching location and ensure it is tracked by git; a sketch of such a config entry follows this list.
  3. Run the app:
    python app.py
  4. Open in browser:
    • Go to http://localhost:7860 (or the port shown in the terminal)
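
As an example of what step 2 refers to, a DEMO_DATA_PATHS entry in config.py might look like the sketch below. The paths and the extra constant are placeholders, so check the actual file in the repo.

```python
# config.py (illustrative sketch; the repo's actual keys and paths may differ)
DEMO_DATA_PATHS = [
    "data/demo_subject_01.mat",   # placeholder path to a demo MI-EEG recording
]
CLASSES = ["left_hand", "right_hand", "left_leg", "right_leg"]
```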

Deployment

  • Ready for Hugging Face Spaces or any Gradio-compatible cloud platform.
  • Minimal .gitignore and clean repo for easy deployment.
  • Make sure to include all required audio stems and at least two demo .mat EEG files in your deployment for full functionality.

✨ Credits and Attribution

🧑‍💻 Project Contribution

  • Concept & Lead Developer: Sofia Fregni
  • Model Training: Katarzyna Kuhlmann
  • Deployment/Infrastructure: Hamed Koochaki Kelardeh
  • Audio Stems: SoundHelix

🧠 Data Source: Motor Imagery EEG Dataset

The EEG dataset used for training and demonstration comes from the following published Data Descriptor:

Kaya, M., Binli, M. K., Ozbay, E., Yanar, H. & Mishchenko, Y. A large electroencephalographic motor imagery dataset for electroencephalographic brain–computer interfaces. Sci. Data 5, 180211 (2018). https://doi.org/10.1038/sdata.2018.211
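
A quick way to inspect one of these recordings is scipy.io.loadmat. The field names below ('o', 'data', 'marker', 'sampFreq') follow the layout described in the Data Descriptor, but verify them against your own .mat file before relying on this sketch.

```python
import numpy as np
import scipy.io as sio

# Load one recording; squeeze_me/struct_as_record give attribute-style access.
mat = sio.loadmat("demo_recording.mat", squeeze_me=True, struct_as_record=False)
o = mat["o"]
print(o.sampFreq)            # sampling rate in Hz
print(o.data.shape)          # (n_samples, n_channels) continuous EEG
print(np.unique(o.marker))   # per-sample event markers (class cues)
```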


💻 Model Framework: Braindecode

The signal processing and base model architecture (e.g., ShallowFBCSPNet) were implemented using the open-source Python library Braindecode.

Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F., Burgard, W., & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11), 5391–5420. https://doi.org/10.1002/hbm.23730

Project Link: https://braindecode.org/
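
As a pointer for reproducing the model setup, here is a minimal sketch of instantiating ShallowFBCSPNet. Keyword names follow recent braindecode releases (older versions use in_chans / n_classes / input_window_samples), and the channel count, epoch length, and sampling rate are assumptions, not the project's training configuration.

```python
import torch
from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(
    n_chans=21,               # assumed EEG channel count
    n_outputs=4,              # four MI classes: left/right hand, left/right leg
    n_times=200,              # assumed 1 s epoch at 200 Hz
    final_conv_length="auto",
)
x = torch.randn(8, 21, 200)   # dummy batch: (batch, channels, time)
print(model(x).shape)         # class scores, one row per epoch
```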

License

MIT License - see LICENSE file for details.
