▶︎ Click or tap the video window above to play the demo.
PhD Research Project — MishMash WP1: AI for Artistic Performances
⚠️ Important: This system does not decode thoughts or reconstruct music from brain activity. EEG, fNIRS, and EMG are used as expressive control modalities — they measure attention, slow cortical state, and muscular engagement to drive musical parameters.
BrainJam is a research-grade multimodal AI-mediated performance instrument designed for artistic performance laboratories. It combines:
| Modality | Role | Mechanism |
|---|---|---|
| EEG / P300 | Discrete attention-based selection | Performer attends to one of four instrument panels; the P300 ERP component identifies the target |
| fNIRS | Slow cortical modulation | HbO/HbR concentration changes drive texture density, AI complexity, and harmonic tension over 6–30 s timescales |
| EMG | Embodied expressive control | Muscle activation (RMS envelope + burst detection) modulates volume, filter sweeps, and rhythm energy |
| AI Co-Performer | Musical proposal and continuation | Generates melodic proposals; the performer selects via P300; never generates autonomously |
- How can AI act as a responsive co-performer rather than an autonomous generator?
- Can biosignals serve as expressive control inputs while maintaining performer agency?
- What interaction patterns emerge when neuro-attentional, slow cortical, and embodied signals are fused in a single instrument?
```text
┌─────────────────────────────────────────────────────────────────────┐
│                     BrainJam Architecture (v0.4)                    │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│   biosignals/                    performance_system/                │
│   ┌──────────────────────┐       ┌───────────────────────────────┐  │
│   │ eeg/p300/            │       │ instruments/                  │  │
│   │  P300Pipeline (LDA)  │─────► │  PianoPanel   GuitarPanel     │  │
│   │  P300Calibration     │       │  DrumPanel    FlutePanel      │  │
│   ├──────────────────────┤       │  (P300 flash + glow anim.)    │  │
│   │ fnirs/               │       ├───────────────────────────────┤  │
│   │  FNIRSProcessor      │─────► │ control/                      │  │
│   │  (HbO/HbR, slow      │       │  P300Selector (tick loop)     │  │
│   │   state estimator)   │       ├───────────────────────────────┤  │
│   ├──────────────────────┤       │ music_engine/                 │  │
│   │ emg/                 │─────► │  AIGenerator (abstract)       │  │
│   │  EMGProcessor        │       │  MidiMarkovGenerator          │  │
│   │  (RMS, burst detect) │       │  TorchSequenceGenerator       │  │
│   ├──────────────────────┤       │  MusicGenWrapper              │  │
│   │ fusion/              │       ├───────────────────────────────┤  │
│   │  MultimodalManager   │       │ ui/                           │  │
│   │  (EEG→EMG→fNIRS      │─────► │  UIRenderer (Dev / Perf mode) │  │
│   │   priority fusion)   │       │  UIState, latency, confidence │  │
│   └──────────────────────┘       └───────────────────────────────┘  │
│                                                                     │
│   interaction_measures/           (legacy performance_system/)      │
│     SelectionAccuracyMeter          agents, biofeedback, embodied,  │
│     ModalityCoverageMeter           realtime, evaluation, …         │
│     ExpressionRangeMeter                                            │
│     CoPerformerAdaptationMeter                                      │
└─────────────────────────────────────────────────────────────────────┘
```
```bash
# 1. Create a virtual environment (recommended)
python3 -m venv brainjam_env
source brainjam_env/bin/activate   # macOS / Linux
# brainjam_env\Scripts\activate    # Windows

# 2. Install BrainJam
pip install git+https://github.com/curiousbrutus/brainjam.git

# 3. Launch the GUI
brainjam-app
```

The application will open automatically in your browser at http://localhost:8501.
```bash
git clone https://github.com/curiousbrutus/brainjam.git
cd brainjam
pip install -e ".[dev]"   # installs core + dev extras via setup.py

# Run interactive Streamlit GUI
streamlit run streamlit_app/app.py

# Run all tests
pytest tests/
```

| Requirement | Version |
|---|---|
| Python | 3.9 or later |
| pip | 21.0 or later |
| OS | macOS, Linux, or Windows |
| RAM | 4 GB minimum (8 GB recommended for ML features) |
```python
# --- Biosignal pipeline ---
from biosignals.eeg.p300 import P300Pipeline, P300Calibration
from biosignals.fnirs import FNIRSProcessor
from biosignals.emg import EMGProcessor
from biosignals.fusion import MultimodalManager

pipeline = P300Pipeline(backend="mock")
cal = P300Calibration(pipeline, stimuli=["piano", "guitar", "drum", "flute"],
                      n_blocks=5, mock=True)
cal.run(target_stimulus="piano")  # trains LDA on synthetic data

fnirs = FNIRSProcessor(mock=True)
emg = EMGProcessor(mock=True)
fusion = MultimodalManager()

# --- Visual instrument panels ---
from performance_system.instruments import PianoPanel, GuitarPanel, DrumPanel, FlutePanel

panels = [PianoPanel(), GuitarPanel(), DrumPanel(), FlutePanel()]

# --- AI co-performer ---
from performance_system.music_engine import MidiMarkovGenerator

ai = MidiMarkovGenerator(seed=42)
proposals = ai.propose_options(n_options=3)  # 3 melodic phrases for P300 selection

# --- P300 real-time selector ---
from performance_system.control import P300Selector

selector = P300Selector(
    pipeline,
    stimuli=["piano", "guitar", "drum", "flute"],
    n_repeats=6,
    on_selection=lambda label, conf: print(f"Selected: {label} conf={conf:.2f}"),
)
selector.start()

# --- Performance loop ---
while True:
    # Tick P300 state machine → flash panels, collect ERP, decide
    result = selector.tick()
    if result:
        label, conf = result
        panels_map = {"piano": panels[0], "guitar": panels[1],
                      "drum": panels[2], "flute": panels[3]}
        panels_map[label].confirm_selection()

    # Update slow modality state
    fnirs_state = fnirs.update(n_new_samples=1)
    emg_state = emg.update(n_samples=64)
    fusion.update_fnirs(fnirs_state)
    fusion.update_emg(emg_state)

    perf = fusion.get_performance_state()
    ai.adapt_to_modulation({"complexity": perf["ai_complexity"],
                            "tension": perf["harmonic_tension"]})
```

```text
brainjam/
│
├── biosignals/                    # NEW: modular biosignal processing
│   ├── eeg/p300/
│   │   ├── pipeline.py            # Bandpass→Epoch→Baseline→LDA
│   │   └── calibration.py         # Supervised calibration session
│   ├── fnirs/
│   │   └── processor.py           # HbO/HbR extraction, slow-state estimator
│   ├── emg/
│   │   └── processor.py           # RMS envelope, burst detection
│   └── fusion/
│       └── multimodal_manager.py  # Priority-based EEG→EMG→fNIRS fusion
│
├── performance_system/
│   ├── instruments/               # NEW: 4 visual instrument panels
│   │   ├── base.py                # PanelState animation state machine
│   │   ├── piano.py               # Keyboard SVG + glow
│   │   ├── guitar.py              # Body/strings SVG + glow
│   │   ├── drum.py                # Snare SVG + glow
│   │   └── flute.py               # Tube/keys SVG + glow
│   ├── ui/                        # NEW: Dev / Performance modes
│   │   └── modes.py               # UIRenderer, UIState, UIMode
│   ├── music_engine/              # NEW: AI co-performer abstraction
│   │   └── ai_generator.py        # AIGenerator, MidiMarkov, Torch, MusicGen
│   ├── control/                   # NEW: real-time P300 selector
│   │   └── p300_selector.py       # Tick-based stimulus state machine
│   ├── agents/                    # Hybrid Adaptive Agent (GRU memory)
│   ├── biofeedback/               # Legacy fusion (HRV, GSR, EMG)
│   ├── embodied/                  # Gesture, spatial audio, interaction modes
│   ├── realtime/                  # EventBus, StreamingPipeline
│   └── evaluation/                # Interaction metrics, session logger
│
├── interaction_measures/          # NEW: standalone interaction measures
│   └── __init__.py                # Selection accuracy, coverage, expression range
│
├── streamlit_app/                 # Interactive GUI (8 pages)
├── tests/                         # 125 unit tests
│   └── test_new_components.py     # NEW: 55 tests for v0.3 components
├── setup.py                       # NEW: installable package
├── requirements.txt               # Updated with backend extras
├── examples/                      # Usage demos
├── docs/                          # Architecture + research docs
└── literature/                    # Academic references
```
Each of the four instrument panels exposes:
| Feature | Implementation |
|---|---|
| Stylised SVG icon | Inline SVG with instrument-specific geometry |
| Idle glow breathing | Sine-wave alpha oscillation (`idle_glow_period_s`) |
| P300 flash highlight | Smooth sine-arc luminance pulse (100–200 ms, no strobe) |
| Selection confirmation | Fast brightening → gradual fade (`confirm_duration_s`) |
| Active state glow | Elevated steady alpha with slow breathing |
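The idle "glow breathing" row above can be illustrated with a few lines. This is a sketch of one plausible mapping from time to glow alpha under the `idle_glow_period_s` parameter; the `lo`/`hi` bounds are illustrative assumptions, not the panels' actual constants.

```python
import math

def idle_glow_alpha(t_s, idle_glow_period_s=3.0, lo=0.2, hi=0.5):
    """Alpha oscillates smoothly between lo and hi once per period (assumed bounds)."""
    phase = 2.0 * math.pi * t_s / idle_glow_period_s
    return lo + (hi - lo) * 0.5 * (1.0 + math.sin(phase))

mid = idle_glow_alpha(0.0)    # mid-range glow: 0.35
peak = idle_glow_alpha(0.75)  # quarter period later, glow peaks at 0.5
```

A pure sine keeps the luminance change smooth, which matters here: the P300 flash pulse is likewise described as a sine-arc specifically to avoid strobing.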
```python
from performance_system.instruments import PianoPanel, PanelState

panel = PianoPanel()
panel.flash()                  # P300 stimulus – glow rises then fades
panel.confirm_selection()      # Confirmation animation
panel.activate()               # Mark as the active instrument
svg_html = panel.render_svg()  # Embed in Streamlit / HTML
```

```text
Bandpass (0.1–30 Hz) → Epoch (−200 to 800 ms) → Baseline correction
→ Downsample → Feature vector → LDA → P(target) per trial
→ Accumulate n_repeats → argmax → Selection event
```
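The decision rule at the end of this chain (per-trial LDA probabilities accumulated over `n_repeats`, then argmax) can be sketched on synthetic data. This is a toy illustration of the technique, not the actual `P300Pipeline` API; the feature dimension and noise model are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
stimuli = ["piano", "guitar", "drum", "flute"]
n_features = 16  # downsampled epoch features after bandpass + baseline

# Synthetic calibration epochs: target epochs carry a small added deflection.
X = np.vstack([rng.normal(0.0, 1.0, size=(200, n_features)),   # non-target
               rng.normal(0.8, 1.0, size=(200, n_features))])  # target (P300)
y = np.array([0] * 200 + [1] * 200)
lda = LinearDiscriminantAnalysis().fit(X, y)

# Simulate n_repeats flash rounds where "piano" is the attended panel.
n_repeats = 6
scores = np.zeros(len(stimuli))
for _ in range(n_repeats):
    for i, _label in enumerate(stimuli):
        mean = 0.8 if i == 0 else 0.0  # only the attended panel evokes a P300
        epoch = rng.normal(mean, 1.0, size=(1, n_features))
        scores[i] += lda.predict_proba(epoch)[0, 1]  # accumulate P(target)

selected = stimuli[int(np.argmax(scores))]
```

Accumulating probabilities over repeats is what lets single-trial classification, which is noisy, yield a reliable selection event.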
- Calibration: `P300Calibration` runs a supervised block design (2×N trials) and fits the LDA on real or synthetic epochs.
- Real-time: `P300Selector` drives the flash sequence via a non-blocking `tick()` loop. Each `tick()` call advances the state machine; no threads required.
- Backends: Mock (default), LSL (`pylsl`), BrainFlow (`brainflow`).
- Model persistence: `pipeline.save_model(path)` / `pipeline.load_model(path)`.
- Modified Beer–Lambert Law: ΔOD at 760 nm + 850 nm → ΔHbO + ΔHbR
- 30-second sliding-window averaging
- Slow-state indices (`texture`, `complexity`, `tension`) normalised to [0, 1]
- Used only for long-timescale modulation; never for discrete selection
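The modified Beer–Lambert step above amounts to inverting a 2×2 extinction-coefficient matrix mapping (ΔHbO, ΔHbR) to ΔOD at the two wavelengths. The sketch below shows that inversion; the coefficient values, source-detector distance, and differential pathlength factor are illustrative assumptions, not the ones `FNIRSProcessor` actually uses.

```python
import numpy as np

# Rows: wavelengths (760 nm, 850 nm); columns: (HbO, HbR) extinction
# coefficients. Values are placeholders with the right qualitative ordering:
E = np.array([[0.6, 1.5],    # 760 nm: HbR absorbs more
              [1.1, 0.8]])   # 850 nm: HbO absorbs more
L = 3.0    # source-detector distance (cm), assumed
dpf = 6.0  # differential pathlength factor, assumed

delta_od = np.array([0.010, 0.015])  # measured ΔOD at the two wavelengths

# ΔOD = E @ Δc * (L * dpf)  =>  Δc = E⁻¹ @ ΔOD / (L * dpf)
delta_c = np.linalg.solve(E, delta_od) / (L * dpf)
d_hbo, d_hbr = delta_c
```

Because the two wavelengths straddle the isosbestic point, the matrix is well conditioned and the two chromophores can be separated from just two ΔOD measurements.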
- Causal bandpass 20–450 Hz with persistent filter state across chunks
- Full-wave rectification → RMS envelope (configurable window)
- Threshold/hysteresis burst detector with `BurstEvent` records
- Outputs: `volume`, `filter_sweep`, `rhythm_energy`, `burst` flag
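The chain above (rectify → windowed RMS envelope → threshold/hysteresis burst detection) can be sketched in a few lines. Window size and thresholds below are illustrative, not `EMGProcessor`'s actual defaults.

```python
import numpy as np

def rms_envelope(x, window=64):
    """Full-wave rectification (squaring) followed by a sliding RMS window."""
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(np.square(x), kernel, mode="same"))

def detect_bursts(envelope, on_thresh=0.5, off_thresh=0.3):
    """Hysteresis: a burst starts above on_thresh and ends below off_thresh."""
    bursts, active, start = [], False, 0
    for i, v in enumerate(envelope):
        if not active and v > on_thresh:
            active, start = True, i
        elif active and v < off_thresh:
            bursts.append((start, i))
            active = False
    if active:
        bursts.append((start, len(envelope)))
    return bursts

# Synthetic signal: quiet baseline with one strong contraction in the middle.
rng = np.random.default_rng(1)
sig = rng.normal(0.0, 0.05, 2000)
sig[800:1200] += rng.normal(0.0, 1.0, 400)  # burst of muscle activity

env = rms_envelope(sig)
bursts = detect_bursts(env)  # one burst roughly spanning samples 800-1200
```

The two-threshold hysteresis is what keeps the `burst` flag from chattering when the envelope hovers near a single threshold.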
```python
from abc import ABC
from typing import List

class AIGenerator(ABC):
    def propose_options(self, n_options, context) -> List[dict]: ...
    def generate_continuation(self, selected, modulation) -> dict: ...
    def adapt_to_modulation(self, modulation) -> None: ...
```

Three concrete backends:

| Backend | Dependencies | Notes |
|---|---|---|
| `MidiMarkovGenerator` | none | 2nd-order Markov on pentatonic scale; <1 ms |
| `TorchSequenceGenerator` | `torch` | Tiny GRU; optional pretrained weights |
| `MusicGenWrapper` | `transformers` | HuggingFace MusicGen; GPU recommended |
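A 2nd-order Markov generator of the kind the table's first row describes can be sketched as follows. This is in the spirit of `MidiMarkovGenerator`, but the training corpus, scale choice, and function names here are illustrative assumptions about its internals.

```python
import random

PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI note numbers

def train(notes):
    """Map each (prev2, prev1) note pair to the notes that followed it."""
    table = {}
    for a, b, c in zip(notes, notes[1:], notes[2:]):
        table.setdefault((a, b), []).append(c)
    return table

def generate(table, seed_pair, length=8, rng=None):
    """Walk the 2nd-order transition table; fall back to the scale if a pair is unseen."""
    rng = rng or random.Random(42)
    out = list(seed_pair)
    for _ in range(length - 2):
        candidates = table.get((out[-2], out[-1]))
        out.append(rng.choice(candidates) if candidates else rng.choice(PENTATONIC))
    return out

corpus = [60, 62, 64, 62, 60, 67, 69, 67, 64, 62, 60]  # toy training phrase
table = train(corpus)
phrase = generate(table, (60, 62))  # 8-note proposal seeded on (C, D)
```

The table lookup is a dictionary access plus one random choice per note, which is why this backend runs in well under a millisecond with no ML dependencies.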
Development Mode — signal traces, classifier probabilities, calibration status, P300 trial accumulator, event log, latency indicator.
Performance Mode — fullscreen instrument label, confidence meter, AI status pill, calibration dot, latency indicator.
```python
from performance_system.ui import UIRenderer, UIMode, UIState

renderer = UIRenderer(mode=UIMode.PERFORMANCE)
state = UIState(
    active_instrument="piano",
    selection_confidence=0.87,
    calibration_status="ready",
    ai_status="generating",
    latency_ms=14.2,
)
html = renderer.render(state)  # embed in Streamlit markdown
renderer.toggle_mode()         # switch to Development mode
```

Priority strategy (hard-coded):
- EEG → structural selection (discrete, high-confidence)
- EMG → expressive layer (fast, continuous)
- fNIRS → slow modulation (slow, long-window)
Each modality has an independent update rate. Stale signals are exponentially decayed so a disconnected device does not freeze the system.
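The stale-signal behaviour can be sketched as an exponential decay on each channel's last value. This is an illustration of the idea, not the actual `MultimodalManager` implementation; the class and parameter names below are hypothetical.

```python
import time

class DecayingChannel:
    """Per-modality value whose influence halves every half_life_s without updates."""
    def __init__(self, half_life_s=2.0):
        self.half_life_s = half_life_s
        self.value = 0.0
        self.last_update = time.monotonic()

    def update(self, value):
        self.value = value
        self.last_update = time.monotonic()

    def read(self, now=None):
        now = time.monotonic() if now is None else now
        age_s = now - self.last_update
        return self.value * 0.5 ** (age_s / self.half_life_s)

emg = DecayingChannel(half_life_s=2.0)
emg.update(0.8)
t0 = emg.last_update
fresh = emg.read(now=t0)        # just updated: full value, 0.8
stale = emg.read(now=t0 + 2.0)  # one half-life later: decayed to 0.4
```

Reading the decayed value instead of the raw last sample means an unplugged sensor fades musically toward silence rather than pinning a parameter at its final reading.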
- Performer-Led Systems (Tanaka, 2006): AI responds, never overrides
- Interactive ML (Fiebrink, 2011): Real-time adaptation with user control
- BCMIs (Miranda & Castet, 2014): Brain signals as expressive input
- P300 Speller (Farwell & Donchin, 1988): Attention-based discrete selection
- fNIRS / Beer–Lambert (Delpy et al., 1988; Cope & Delpy, 1988)
- sEMG (Hermens et al., 2000 SENIAM guidelines)
BrainJam currently implements P300-based selection. Future expansions include SSVEP and oscillatory-state (alpha-band) modulation.
```python
from interaction_measures import (
    SelectionAccuracyMeter, SelectionEvent,
    ModalityCoverageMeter,
    ExpressionRangeMeter,
    CoPerformerAdaptationMeter,
)
```

| Measure | What it tracks |
|---|---|
| `SelectionAccuracyMeter` | Rolling P300 selection accuracy, confidence, latency |
| `ModalityCoverageMeter` | Fraction of modalities providing fresh data |
| `ExpressionRangeMeter` | Dynamic range exploited by EMG/fNIRS signals |
| `CoPerformerAdaptationMeter` | Cross-correlation lag between modulation and AI output |
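A rolling-accuracy measure like the one in the first row can be sketched with a fixed-size window of recent outcomes. This is a hypothetical illustration in the spirit of `SelectionAccuracyMeter`; the real class's API and internals may differ.

```python
from collections import deque

class RollingAccuracy:
    """Accuracy over the last `window` selection events."""
    def __init__(self, window=20):
        self.events = deque(maxlen=window)  # deque drops the oldest automatically

    def record(self, predicted, intended):
        self.events.append(predicted == intended)

    def accuracy(self):
        return sum(self.events) / len(self.events) if self.events else 0.0

meter = RollingAccuracy(window=4)
for pred, target in [("piano", "piano"), ("drum", "piano"),
                     ("guitar", "guitar"), ("flute", "flute")]:
    meter.record(pred, target)
acc = meter.accuracy()  # 3 of the last 4 selections correct -> 0.75
```

A bounded window makes the measure track the performer's current calibration quality rather than averaging over a whole session.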
- Hybrid Adaptive Agent, DDSP synths, EEG Mapper, Streamlit GUI
- Real-time EventBus, multimodal biofeedback fusion (EEG/EMG/HRV/GSR)
- Embodied gesture controller, interaction modes, evaluation metrics
- `biosignals/` package: P300 ERP pipeline, fNIRS processor, EMG processor
- Priority-based multimodal fusion manager
- 4 visual instrument panels with P300 flash + glow animation
- `P300Selector` real-time tick-loop state machine
- `AIGenerator` abstraction + MIDI Markov + Torch + MusicGen backends
- Dev / Performance dual-mode UI renderer
- `interaction_measures` standalone module
- `setup.py` installable package
- 125 unit tests (55 new)
- `brainjam-app` console-script entry point (`pip install` → `brainjam-app`)
- Streamlit & Plotly promoted to core dependencies
- Navigation menu consistency (all 8 pages listed, home page renamed)
- Enhanced slider tooltips with musical context
- AI Thought Stream — semantic "inner voice" on Live Performance page
- Visual clutter reduction via collapsible expanders
- Improved graph labels (musical parameter names instead of generic "Control N")
- User study design
- Real EEG / fNIRS hardware integration (LSL / BrainFlow)
- Streamlit v0.3 UI page (instrument panel display + P300 flash loop)
- SSVEP and alpha-band modulation
- Python 3.9+, NumPy, SciPy, scikit-learn (LDA)
- PyTorch (optional, AI backend), Streamlit GUI
- Mock / LSL / BrainFlow EEG backends
- Performance target: <30 ms end-to-end latency
BCI / P300: Farwell & Donchin (1988); Krusienski et al. (2006); Miranda & Castet (2014)
fNIRS: Delpy et al. (1988); Cope & Delpy (1988)
EMG: Hermens et al. (2000); Konrad (2005)
Interactive ML: Fiebrink (2011); Lawhern et al. (2018)
Audio Synthesis: Engel et al. (2020); Karplus & Strong (1983)
Synchrony: Zamm et al. (2018); Jordà (2005)
Project: BrainJam — Multimodal AI Performance Instrument
Purpose: PhD Research Application (MishMash WP1)
eyyub.gvn@gmail.com
Academic research project. Contact for usage permissions.
Built with 🧠 + 🎵 + 🤖 for research-grade human-AI musical performance
