feat: Music streaming neural engagement example (SoundChain x TRIBE v2) #16
Conversation
Adds an example demonstrating TRIBE v2 predicting brain responses to live music streams from SoundChain's OGUN Radio (500+ NFT tracks, 24/7). Novel use case: instead of movie clips or lab stimuli, we feed TRIBE real music from a production decentralized streaming platform and compute neural engagement scores, measuring predicted activation across auditory, emotional, and language-processing cortical regions.

Features:
- Fetches tracks from SoundChain's public radio API (no auth needed)
- Downloads audio, runs TRIBE v2 prediction
- Computes engagement metrics: mean/peak activation, temporal dynamics
- Ranks tracks by neural engagement score
- Supports genre filtering, local audio files, JSON export

Use cases for researchers:
- Music cognition studies using a large, diverse, live music corpus
- Cross-genre neural response comparison (hip-hop vs lo-fi vs ambient)
- Temporal dynamics of engagement (which musical moments activate listeners)
- Bridge between computational neuroscience and music information retrieval
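As a sketch of the ranking and JSON-export features listed above (the result-dict shape and field names here are assumptions for illustration, not the script's actual schema):

```python
import json

def rank_by_engagement(results):
    """Sort per-track results by descending engagement score and serialize.

    `results` is a hypothetical list of dicts like
    {"title": str, "engagement": float}; the real script's output
    format may differ.
    """
    ranked = sorted(results, key=lambda r: r["engagement"], reverse=True)
    return json.dumps(ranked, indent=2)
```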
Hi @soundchainio! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with `CLA signed`. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Summary
Adds a new example demonstrating TRIBE v2 predicting brain responses to live music streams from SoundChain — a decentralized music streaming platform with 5,400+ tracks and 728 users.
Why this is interesting for TRIBE v2
All existing demos use movie clips or lab stimuli. This example connects TRIBE to a production music streaming API with 500+ NFT tracks broadcasting 24/7, enabling:

- Music cognition studies using a large, diverse, live music corpus
- Cross-genre neural response comparison (hip-hop vs lo-fi vs ambient)
- Temporal dynamics of engagement (which musical moments activate listeners)
- A bridge between computational neuroscience and music information retrieval
What it does
1. Fetches tracks from SoundChain's public radio API (no auth needed)
2. Downloads the audio and runs TRIBE v2 prediction via its Wav2Vec-BERT audio encoder
3. Outputs engagement metrics (mean/peak activation, temporal dynamics) and ranks tracks by neural engagement score
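The engagement metrics can be sketched as follows, assuming for illustration that TRIBE's predictions arrive as a `(time, regions)` NumPy array; the model's real output format and scale may differ:

```python
import numpy as np

def engagement_metrics(activations):
    """Summarize a (time, regions) matrix of predicted cortical activation.

    The array shape is an illustrative assumption, not TRIBE v2's
    documented output.
    """
    per_t = activations.mean(axis=1)            # mean activation per timestep
    return {
        "mean_activation": float(per_t.mean()),
        "peak_activation": float(per_t.max()),
        "peak_time_idx": int(per_t.argmax()),   # moment of strongest response
        "temporal_std": float(per_t.std()),     # crude temporal-dynamics proxy
    }
```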
About SoundChain
SoundChain is a bootstrapped, self-funded decentralized music streaming platform with no VC money and no corporate backing. Two founders building from home "War Rooms" in two states, powered by AI-assisted development. Artists mint tracks as NFTs and earn $OGUN tokens per stream. 5 years of development, 10,000+ commits, 8 deployed smart contracts on Polygon.
The radio API is public and serves real music from independent artists — making it an ideal large-scale, diverse music corpus for neuroscience research.
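A minimal way to hit the public API from Python, assuming the endpoints return JSON (the payload shape here is a guess, not a documented schema):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://soundchain.io/api/agent"

def build_url(query=None):
    """Radio feed by default; genre/keyword search via /tracks?q=..."""
    if query:
        return f"{BASE}/tracks?{urlencode({'q': query})}"
    return f"{BASE}/radio"

def fetch_tracks(query=None):
    """Fetch and decode the JSON payload (assumed to be a list of tracks)."""
    with urlopen(build_url(query), timeout=10) as resp:
        return json.load(resp)
```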
- `GET https://soundchain.io/api/agent/radio`
- `GET https://soundchain.io/api/agent/tracks?q=jazz`

Broader vision: Wearable hardware integration
This integration is the first step toward something bigger: real-time neural-aware music streaming through wearable devices.
Ray-Ban Meta Smart Glasses are the ideal hardware companion for this research:
The research pipeline we're building:
What we need to make this real:
This isn't a theoretical proposal. SoundChain is live at soundchain.io with deployed smart contracts, active users, 17 AI agents, and a 24/7 radio station. We published `@soundchain/openclaw-plugin` on npm, the first music platform with a registered developer package in the AI agent ecosystem. The example script in this PR connects directly to that production infrastructure.

Broader research value
This integration opens the door to neurofeedback for music creators — imagine an artist uploading a track and getting back a cortical heatmap showing which moments activated listeners' brains the most. Combined with TRIBE v2's multimodal capabilities, this could extend to music videos (V-JEPA2 + Wav2Vec-BERT) and lyrics (LLaMA 3.2).
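One way to approximate the "which moments activated listeners" idea, again assuming a hypothetical `(time, regions)` prediction array rather than any real TRIBE output:

```python
import numpy as np

def top_moments(activations, window, k=3):
    """Return start indices of the k windows with the highest mean
    predicted activation.

    `window` and `k` are illustrative parameters, not part of any
    TRIBE API.
    """
    per_t = activations.mean(axis=1)
    n = len(per_t) - window + 1
    scores = np.array([per_t[i:i + window].mean() for i in range(n)])
    return [int(i) for i in np.argsort(scores)[::-1][:k]]
```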
Anonymized, aggregated neural response data across genres, BPM ranges, key signatures, and cultural backgrounds could become the largest music-brain dataset ever created — on a decentralized platform where artists and listeners own the data, not a corporation.
Test plan
- Ran the example with `--audio-path` pointing to a local `.mp3`

🤖 Generated with Claude Code