AI-powered meeting intelligence that runs entirely on your device; your private data never leaves it. Record, transcribe, summarize, and query your meetings using local AI models. Ideal for healthcare, legal, and finance professionals who handle confidential data.
Trusted by users at AWS, Deliveroo & Tesco.
Disclaimer: This is an independent open-source project for meeting-notes productivity and is not affiliated with, endorsed by, or associated with any similarly named company.
- 2026-02-22 Multi-language support - Transcribe and summarize in 10 languages
- 2026-02-22 Remote Ollama server - Run AI models on another machine on your network
- 2026-02-22 Cloud APIs - OpenAI-compatible APIs & Anthropic (not recommended - data leaves your device)
- 2026-02-22 Apple M5 support - Bundled Ollama v0.16.3 with M5 Metal support
- 2026-02-19 System audio capture - Record both sides of virtual meetings, even with headphones on
- 2026-02-19 Outlook Calendar integration - Connect Outlook as an alternative to Google Calendar
- 2026-02-19 macOS system tray - Menu bar icon with quick actions; window hides to tray on close
- 2026-02-15 Google Calendar integration - Auto-name recordings from your upcoming meetings
- Local transcription using whisper.cpp
- AI summarization with Ollama models
- Privacy-first - 100% local processing, your data never leaves your device
- Multiple AI models - Choose from 4 models optimized for different use cases
- Ask Steno - Query your meetings with natural language questions
- Multi-language support - Transcribe and summarize in 10 languages
- Remote Ollama server - Run AI models on another machine on your network
- System audio capture - Record mic + system audio simultaneously for virtual meetings with headphones
- macOS desktop app with intuitive interface
Have questions or suggestions? Join our Discord to chat with the community.
Transcription Models (Whisper):
- `small`: Good accuracy and speed on Apple Silicon (default)
- `base`: Faster but lower accuracy for basic meetings
- `medium`: High accuracy for important meetings (slower)
Summarization Models (Ollama):
- `llama3.2:3b` (2GB): Fastest option for quick meetings (default)
- `gemma3:4b` (2.5GB): Lightweight and efficient
- `qwen3:8b` (4.7GB): Excellent at structured output and action items
- `deepseek-r1:8b` (4.7GB): Strong reasoning and analysis capabilities
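As a rough illustration of how you might pick from these model lists programmatically, here is a small sketch. The `pick_models` helper and its RAM thresholds are assumptions for illustration only; StenoAI's setup wizard handles model selection itself.

```python
# Hypothetical helper: choose a Whisper and an Ollama model based on
# available RAM (GB) and whether the meeting is high-stakes.
# Thresholds are illustrative assumptions, not StenoAI defaults.

def pick_models(ram_gb: float, high_stakes: bool = False) -> dict:
    whisper = "medium" if high_stakes else "small"
    if ram_gb < 8:
        # Small machines: lightest options from the lists above
        return {"whisper": "base", "ollama": "llama3.2:3b"}
    if high_stakes and ram_gb >= 16:
        # Plenty of RAM: favour structured output and action items
        return {"whisper": whisper, "ollama": "qwen3:8b"}
    return {"whisper": whisper, "ollama": "llama3.2:3b"}

print(pick_models(16, high_stakes=True))
```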
YouTube Video Summary Challenge (11m 36s): High-Speed Rail Systems Around the World
Scored on: overall quality, factual accuracy, completeness, and hallucination (each out of 10).
| # | Provider | Model | Overall | Accuracy | Complete | No Halluc. | Notes |
|---|---|---|---|---|---|---|---|
| 1 | Anthropic | Claude Sonnet 4.6 | 9.8 | 9.8 | 9.5 | 10.0 | Most precise; perfect framing |
| 2 | Anthropic | Claude Haiku | 9.5 | 9.5 | 9.0 | 10.0 | Very strong; slightly less detailed |
| 3 | StenoAI | DeepSeek R1:8B | 8.8 | 9.0 | 8.0 | 8.5 | Broad coverage; fewer numerical details |
| 4 | StenoAI | Qwen 3:8B | 8.5 | 9.0 | 7.5 | 8.5 | Accurate but more compressed |
| 5 | OpenAI | GPT-4.1 | 8.3 | 9.0 | 8.0 | 6.5 | Accurate but invented meeting framing |
| 6 | OpenAI | GPT-4o Mini | 8.0 | 8.5 | 7.5 | 6.0 | Invented framing and participants |
| 7 | StenoAI | Gemma 3:4B | 7.0 | 8.5 | 7.0 | 3.5 | Fabricated participants and action items |
- Custom summarization templates
- Speaker Diarisation
- HIPAA compliance for healthcare workflows
- EHR integration for medical notes
Download the latest release for your Mac:
- Apple Silicon (M1-M5)
- Intel Macs (performance is limited due to the lack of dedicated AI inference hardware on these older chips)
1. Download and open the DMG file
2. Drag the app to Applications
3. When you first launch the app, macOS may show a security warning
4. To fix this warning:
   - Go to System Settings > Privacy & Security and click "Open Anyway"

   Alternatively:
   - Right-click StenoAI in Applications and select "Open"
   - Or run in Terminal: `xattr -cr /Applications/StenoAI.app`
5. The app will work normally on subsequent launches
You can run it locally as well (see below) if you don't want to install a DMG.
- Python 3.9+
- Node.js 18+
git clone https://github.com/ruzin/stenoai.git
cd stenoai
# Backend setup
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Download bundled binaries (Ollama, ffmpeg)
./scripts/download-ollama.sh
# Build the Python backend
pip install pyinstaller
pyinstaller stenoai.spec --noconfirm
# Frontend
cd app
npm install
npm start

Note: Ollama and ffmpeg are bundled - no system installation needed. The setup wizard in the app will download the required AI models automatically.
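If the bundled Ollama fails to start during development, a quick reachability check can help. This sketch hits Ollama's standard `/api/tags` model-listing endpoint on its default port 11434; the `ollama_reachable` helper is an illustrative assumption, not part of StenoAI's codebase.

```python
import json
import urllib.request
import urllib.error

def ollama_reachable(base_url: str = "http://127.0.0.1:11434", timeout: float = 2.0):
    """Return installed model names, or None if the server is unreachable.

    Uses Ollama's /api/tags endpoint (its standard model-listing route).
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = ollama_reachable()
print("Ollama models:", models if models is not None else "server not running")
```

A `None` result means the server is not listening; a list (possibly empty) means it is up.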
cd app
npm run build

stenoai/
├── app/         # Electron desktop app
├── src/         # Python backend
├── website/     # Marketing site
├── recordings/  # Audio files
├── transcripts/ # Text output
└── output/      # Summaries
StenoAI includes a built-in debug panel for troubleshooting issues:
In-App Debug Panel:
- Launch StenoAI
- Click the hammer icon (next to settings)
- The debug panel shows real-time logs of all operations
Terminal Logging (Advanced): For detailed system-level logs, run the app from Terminal:
# Launch StenoAI with full logging
/Applications/StenoAI.app/Contents/MacOS/StenoAI

This displays comprehensive logs including:
- Python subprocess output
- Whisper transcription details
- Ollama API communication
- HTTP requests and responses
- Error stack traces
- Performance timing
System Console Logs: For system-level debugging:
# View recent StenoAI-related logs
log show --last 10m --predicate 'process CONTAINS "StenoAI" OR eventMessage CONTAINS "ollama"' --info
# Monitor live logs
log stream --predicate 'eventMessage CONTAINS "ollama" OR process CONTAINS "StenoAI"' --level info

Common Issues:
- Recording stops early: Check microphone permissions and available disk space
- "Processing failed": Usually Ollama service or model issues - check terminal logs
- Empty transcripts: Whisper couldn't detect speech - verify audio input levels
- Slow processing: Normal for longer recordings - Ollama processing is CPU-intensive, especially on older Intel Macs
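When skimming terminal output for these issues, a small filter can help. The signature strings below are illustrative guesses, not StenoAI's actual log messages:

```python
# Sketch: group log lines by likely cause. The signature strings are
# illustrative assumptions, not StenoAI's actual log output.
SIGNATURES = {
    "ollama": ("ollama", "connection refused", "model not found"),
    "whisper": ("whisper", "no speech detected"),
    "audio": ("microphone", "permission", "disk"),
}

def triage(log_lines):
    hits = {key: [] for key in SIGNATURES}
    for line in log_lines:
        lowered = line.lower()
        for key, needles in SIGNATURES.items():
            if any(n in lowered for n in needles):
                hits[key].append(line)
    return hits

sample = [
    "ERROR ollama: connection refused",
    "Whisper: no speech detected in segment 3",
]
print(triage(sample))
```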
- User Data: `~/Library/Application Support/stenoai/`
- Recordings: `~/Library/Application Support/stenoai/recordings/`
- Transcripts: `~/Library/Application Support/stenoai/transcripts/`
- Summaries: `~/Library/Application Support/stenoai/output/`
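To see how much space each of these folders is using, here is a short sketch; the `folder_sizes` helper is illustrative, and you can pass a different base path if your data lives elsewhere:

```python
from pathlib import Path

def folder_sizes(base: Path) -> dict:
    """Return total bytes used by each immediate subfolder of `base`."""
    sizes = {}
    for sub in sorted(p for p in base.iterdir() if p.is_dir()):
        sizes[sub.name] = sum(
            f.stat().st_size for f in sub.rglob("*") if f.is_file()
        )
    return sizes

# Default StenoAI data location on macOS
base = Path.home() / "Library" / "Application Support" / "stenoai"
if base.exists():
    for name, size in folder_sizes(base).items():
        print(f"{name}: {size / 1_048_576:.1f} MB")
```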
This project is licensed under the MIT License.
