A web-based AI music companion that listens, learns, and responds to your musical ideas in real-time. Whether you're creating music, exploring music therapy, or just having fun, Smart Jam provides an interactive musical experience that adapts to your playing.
Smart Jam aims to be more than just a practice tool - it's a musical companion that can:
- Respond to Your Ideas: Listen to what you play and generate complementary musical responses
- Adapt to Your Style: Learn from your playing patterns and adapt its responses accordingly
- Inspire Creativity: Help you explore new musical ideas and directions
- Support Music Therapy: Provide a responsive, non-judgmental musical environment
- Make Music Fun: Create an engaging, interactive musical experience
Real-time Note Detection:
- Uses Web Audio API and Pitchy for accurate pitch detection
- Visual waveform display
- Input level monitoring
- Multiple microphone support
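Pitchy's detector reports a frequency in Hz, which then has to be mapped onto a note. As an illustrative sketch (the helper names `freqToMidi` and `freqToNote` are hypothetical, not Smart Jam's actual API), the conversion is just the standard equal-temperament formula:

```javascript
// Map a detected frequency (Hz) to a MIDI note number and note name.
// Hypothetical helper names; MIDI 69 = A4 = 440 Hz, 12 semitones per octave.
const NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function freqToMidi(freq) {
  return Math.round(69 + 12 * Math.log2(freq / 440));
}

function freqToNote(freq) {
  const midi = freqToMidi(freq);
  const octave = Math.floor(midi / 12) - 1; // MIDI 60 = C4
  return NOTE_NAMES[midi % 12] + octave;
}

console.log(freqToNote(440));    // "A4"
console.log(freqToNote(261.63)); // "C4"
```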
Interactive Grid System:
- Adjustable number of bars (2, 4, or 8)
- Multiple grid divisions (32nd, 16th, or 8th notes)
- Configurable maximum note duration
- Add/Replace note modes
- Real-time playhead visualization
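The grid math behind these settings can be sketched as follows. This is a minimal, hypothetical helper (`timeToCell` and its parameters are illustrative, assuming 4/4 time), showing how an onset time maps to a grid cell and how the playhead wraps over the configured number of bars:

```javascript
// Map a note onset time (seconds) to a grid cell index.
// Hypothetical sketch assuming 4 beats per bar.
function timeToCell(timeSec, bpm, division, bars, beatsPerBar = 4) {
  const secondsPerBeat = 60 / bpm;
  const cellsPerBeat = division / 4;              // e.g. 16th notes -> 4 cells per beat
  const cellDuration = secondsPerBeat / cellsPerBeat;
  const totalCells = bars * beatsPerBar * cellsPerBeat;
  // Wrap around so the playhead loops over the configured bars.
  return Math.floor(timeSec / cellDuration) % totalCells;
}

console.log(timeToCell(0.25, 120, 16, 4)); // 2  (third 16th-note cell at 120 BPM)
console.log(timeToCell(8.0, 120, 16, 4));  // 0  (wraps after 4 bars)
```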
AI Musical Response:
- Powered by Magenta.js and TensorFlow.js
- Real-time musical pattern analysis
- Generates complementary responses to your playing
- Maintains musical context and style
- Separate visualization for AI responses
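Magenta's sequence models consume quantized `NoteSequence` objects. The sketch below (a hypothetical helper, `toNoteSequence`; field names follow Magenta's documented NoteSequence shape) shows how detected notes might be packaged before being handed to a model such as `MusicRNN.continueSequence()`:

```javascript
// Package detected notes into the quantized NoteSequence shape used by
// Magenta.js. Hypothetical helper; not Smart Jam's actual API.
function toNoteSequence(notes, qpm = 120, stepsPerQuarter = 4) {
  return {
    notes: notes.map(n => ({
      pitch: n.pitch,
      quantizedStartStep: n.startStep,
      quantizedEndStep: n.endStep,
    })),
    quantizationInfo: { stepsPerQuarter },
    tempos: [{ time: 0, qpm }],
    totalQuantizedSteps: Math.max(0, ...notes.map(n => n.endStep)),
  };
}

const seq = toNoteSequence([
  { pitch: 60, startStep: 0, endStep: 2 },
  { pitch: 64, startStep: 2, endStep: 4 },
]);
// seq could then be continued by a model, e.g. rnn.continueSequence(seq, 16, 1.1)
```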
Metronome:
- Adjustable tempo (40-200 BPM)
- Visual and optional audible click
- Synchronized with note detection
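The metronome timing reduces to simple arithmetic. A minimal sketch, with hypothetical helper names (`clampBpm`, `beatIntervalSec`) reflecting the 40-200 BPM range above:

```javascript
// Clamp the tempo to the UI range and compute the click interval.
// Hypothetical helper names.
function clampBpm(bpm) {
  return Math.min(200, Math.max(40, bpm)); // UI range: 40-200 BPM
}

function beatIntervalSec(bpm) {
  return 60 / clampBpm(bpm);
}

console.log(beatIntervalSec(120)); // 0.5 seconds between clicks
```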
Export Options:
- MIDI export with separate tracks for user and AI notes
- Uses @tonejs/midi for high-quality MIDI generation
- Preserves timing and velocity information
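`@tonejs/midi` expects note times and durations in seconds via `track.addNote({ midi, time, duration, velocity })`, so grid cells have to be converted first. A hypothetical sketch of that conversion (`cellsToMidiNotes` and its inputs are illustrative names):

```javascript
// Convert grid-cell note events into the { midi, time, duration, velocity }
// objects that @tonejs/midi's track.addNote() accepts. Hypothetical helper.
function cellsToMidiNotes(cells, bpm, division) {
  const cellSec = (60 / bpm) / (division / 4); // duration of one grid cell in seconds
  return cells.map(c => ({
    midi: c.pitch,
    time: c.startCell * cellSec,
    duration: (c.endCell - c.startCell) * cellSec,
    velocity: c.velocity ?? 0.8, // default velocity if none was captured
  }));
}

const notes = cellsToMidiNotes(
  [{ pitch: 60, startCell: 0, endCell: 4, velocity: 0.9 }],
  120, 16
);
// notes[0] -> { midi: 60, time: 0, duration: 0.5, velocity: 0.9 }
```

Writing user and AI notes to two separate tracks (one `midi.addTrack()` call each) is what keeps the parts distinct in the exported file.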
- Node.js (v14 or higher)
- npm or yarn
- A modern web browser (Chrome recommended for best audio performance)
- A microphone or audio input device
- Clone the repository:

  ```shell
  git clone https://github.com/velde/smart-jam.git
  cd smart-jam
  ```

- Install dependencies:

  ```shell
  npm install
  ```

- Start the development server:

  ```shell
  npm start
  ```

- Open http://localhost:3000 in your browser.
Start a Jam Session:
- Select your input device
- Adjust the tempo using the slider
- Click "Start" to begin
Play Your Instrument:
- The grid will show your notes in real-time
- Use the mode button to switch between adding and replacing notes
- Watch the waveform and pitch detection displays
AI Collaboration:
- The AI will generate responses based on your playing
- Responses appear in the lower grid
- Export both parts as MIDI for further use
- Chrome (recommended)
- Firefox (requires additional configuration)
- Edge
- Safari (limited support)
- Built with React and JavaScript
- Uses Tone.js for audio synthesis and processing
- Implements pitch detection using the Pitchy library
- AI powered by Magenta.js and TensorFlow.js
- MIDI generation with @tonejs/midi
- Responsive design using CSS Grid and Flexbox
- Implement real-time musical pattern analysis using TensorFlow.js
- Use Magenta.js for basic music generation
- Focus on simple call-and-response patterns
- Implement basic chord progression analysis
- Add MIDI export for both user and AI-generated notes
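The call-and-response idea can be illustrated with a deliberately simple baseline: echo the user's phrase, shifted in pitch and delayed in time. Everything here (`respond`, the default interval and offset) is a hypothetical sketch, not the planned implementation:

```javascript
// Naive call-and-response: echo the user's phrase transposed by an interval
// and offset later in time. Hypothetical sketch.
function respond(phrase, semitones = 7, offsetSteps = 16) {
  return phrase.map(n => ({
    pitch: n.pitch + semitones,
    startStep: n.startStep + offsetSteps,
    endStep: n.endStep + offsetSteps,
  }));
}

const answer = respond([{ pitch: 60, startStep: 0, endStep: 2 }]);
// answer[0] -> { pitch: 67, startStep: 16, endStep: 18 }
```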
- Integrate Music Transformer for more sophisticated musical understanding
- Add support for:
- Melodic contour analysis
- Rhythmic pattern recognition
- Harmonic progression prediction
- Style transfer capabilities
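Of these, melodic contour analysis has the simplest possible starting point: reduce a pitch sequence to up/down/repeat symbols. A minimal sketch (`contour` is a hypothetical helper name):

```javascript
// Reduce a melody to its contour: +1 for an upward step, -1 for downward,
// 0 for a repeated pitch. Hypothetical sketch.
function contour(pitches) {
  const out = [];
  for (let i = 1; i < pitches.length; i++) {
    out.push(Math.sign(pitches[i] - pitches[i - 1]));
  }
  return out;
}

console.log(contour([60, 64, 64, 62])); // [1, 0, -1]
```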
- Implement real-time AI response generation using:
- Tone.js for sound synthesis
- Web Audio API for audio processing
- TensorFlow.js for on-device inference
- Add features like:
- Dynamic accompaniment generation
- Melodic improvisation
- Harmonic accompaniment
- Rhythmic synchronization
- Implement user preference learning
- Add style adaptation capabilities
- Develop personalized response patterns
- Create a feedback loop for continuous improvement
- On-device Processing: Prioritize client-side processing for low latency
- Model Optimization: Use quantized models for better performance
- Audio Quality: Maintain high-quality audio processing
- Browser Compatibility: Ensure cross-browser support
- Performance: Optimize for real-time interaction
- TensorFlow.js - Machine learning
- Magenta.js - Music generation
- Tone.js - Audio synthesis
- ONNX Runtime - Model inference
- Web Audio API - Audio processing
- Basic pattern recognition and response
- Real-time performance optimization
- User experience and interface improvements
- Advanced AI features
- Community feedback and iteration
Contributions are welcome! Whether you're interested in:
- Enhancing the AI response system
- Improving the audio processing
- Adding new features
- Fixing bugs
- Improving documentation
Please feel free to submit a Pull Request.
This code is published for demonstration purposes only. All rights reserved © Velde Vainio. No commercial use or redistribution is permitted without written permission.
This project uses the following open-source libraries:
- React - MIT License
- Tone.js - MIT License
- @tonejs/midi - MIT License
- Pitchy - MIT License
- Magenta.js - Apache License 2.0
While this project is proprietary, we acknowledge and appreciate the open-source community's contributions through these excellent libraries. Each library's license is included in the node_modules directory of this project.