
# diarize


Speaker diarization for Python — answers "who spoke when?" in any audio file.

Runs on CPU. No GPU, no API keys, no account signup. Apache 2.0 licensed.

```bash
pip install diarize
```

```python
from diarize import diarize

result = diarize("meeting.wav")
for seg in result.segments:
    print(f"  [{seg.start:.1f}s - {seg.end:.1f}s] {seg.speaker}")
```

~5.0% weighted DER on VoxConverse dev. Processes audio ~8x faster than real-time on CPU. Automatically detects the number of speakers.

Benchmarked on a single dataset (VoxConverse). Cross-dataset validation is in progress.

## How diarize compares

|                       | diarize               | pyannote (free)             | pyannote (commercial)       |
|-----------------------|-----------------------|-----------------------------|-----------------------------|
| License               | Apache 2.0            | CC-BY-4.0                   | Commercial                  |
| GPU required          | No                    | No (7x slower on CPU)       | No                          |
| HuggingFace account   | No                    | Yes                         | Yes                         |
| Auto speaker count    | Yes                   | Yes                         | Yes                         |
| DER (VoxConverse dev) | ~5.0%                 | ~11.2%                      | ~8.5%                       |
| CPU speed (RTF)       | 0.12                  | 0.86                        |                             |
| Install               | `pip install diarize` | `pip install pyannote.audio` | `pip install pyannote.audio` |

DER = Diarization Error Rate (lower is better). RTF = Real-Time Factor (lower is faster). pyannote numbers are self-reported from their benchmark page. The diarize number is from the VoxConverse dev evaluation described in benchmarks.
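To make the RTF numbers concrete, here is the arithmetic for a one-hour recording (illustration only, using the table values, not a new measurement):

```python
# RTF = processing_time / audio_duration, so expected processing time
# is audio duration multiplied by RTF.
audio_minutes = 60.0
rtf_diarize, rtf_pyannote_cpu = 0.12, 0.86  # RTF values quoted above

minutes_diarize = audio_minutes * rtf_diarize        # about 7.2 minutes
minutes_pyannote = audio_minutes * rtf_pyannote_cpu  # about 51.6 minutes
print(f"{minutes_diarize:.1f} min vs {minutes_pyannote:.1f} min")
```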

## Quick Start

```python
from diarize import diarize

result = diarize("meeting.wav")

print(f"Found {result.num_speakers} speakers")
for seg in result.segments:
    print(f"  [{seg.start:.1f}s - {seg.end:.1f}s] {seg.speaker}")

# Export to RTTM format
result.to_rttm("meeting.rttm")
```

Requires Python 3.9+. Supports WAV, MP3, FLAC, OGG, and other formats via soundfile/libsndfile. diarize pins a compatible torch/torchaudio range during install, so no extra manual pinning is required.
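For reference, a standard NIST RTTM `SPEAKER` line (the format `to_rttm` targets) carries the file ID, channel, onset, duration, and speaker label, with unused fields set to `<NA>`. A two-segment export would look roughly like:

```text
SPEAKER meeting 1 0.50 3.70 <NA> <NA> SPEAKER_00 <NA> <NA>
SPEAKER meeting 1 4.40 1.60 <NA> <NA> SPEAKER_01 <NA> <NA>
```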

📖 Full documentation — installation, API reference, architecture, benchmarks.

## API

```python
result = diarize("meeting.wav")                # auto-detect speakers
result = diarize("call.mp3", num_speakers=2)   # known speaker count
result = diarize("panel.flac", min_speakers=3, max_speakers=8)

result.segments        # [Segment(start=0.5, end=4.2, speaker='SPEAKER_00'), ...]
result.num_speakers    # 3
result.speakers        # ['SPEAKER_00', 'SPEAKER_01', 'SPEAKER_02']
result.audio_duration  # 324.5

result.to_rttm("output.rttm")  # export to standard RTTM format
result.to_list()               # export as list of dicts (JSON-serializable)
```

Each Segment has .start, .end, .speaker, and .duration (all in seconds).
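These fields make downstream aggregation straightforward. A minimal sketch, using a stand-in `Segment` dataclass that mirrors the documented fields (not diarize's own class), to total talk time per speaker:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Segment:
    """Stand-in for diarize's Segment: start/end in seconds plus a label."""
    start: float
    end: float
    speaker: str

    @property
    def duration(self) -> float:
        return self.end - self.start

segments = [
    Segment(0.5, 4.2, "SPEAKER_00"),
    Segment(4.4, 6.0, "SPEAKER_01"),
    Segment(6.3, 9.1, "SPEAKER_00"),
]

# Sum segment durations per speaker label.
talk_time = defaultdict(float)
for seg in segments:
    talk_time[seg.speaker] += seg.duration

for speaker, seconds in sorted(talk_time.items()):
    print(f"{speaker}: {seconds:.1f}s")
```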

Full API reference: documentation

## How It Works

Four-stage pipeline, all CPU, all open-source:

  1. Silero VAD (MIT) — detects speech segments
  2. WeSpeaker ResNet34-LM (Apache 2.0) — extracts 256-dim speaker embeddings via ONNX
  3. GMM BIC + silhouette refinement — estimates the number of speakers
  4. Spectral Clustering (scikit-learn, BSD) + temporal smoothing — assigns speaker labels
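Stages 3 and 4 can be sketched on toy data with scikit-learn. This is only an illustration of the idea: real segments are 256-dim WeSpeaker embeddings, and diarize's own count estimation also applies silhouette refinement, which is omitted here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Three synthetic "speakers", 20 segment embeddings each (16-dim for the toy).
embeddings = np.concatenate(
    [rng.normal(loc=c, scale=0.05, size=(20, 16)) for c in (0.0, 1.0, 2.0)]
)

# Stage 3 (simplified): fit a GMM per candidate count, keep the lowest BIC.
bic = {
    k: GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    .fit(embeddings)
    .bic(embeddings)
    for k in range(1, 8)
}
num_speakers = min(bic, key=bic.get)

# Stage 4: spectral clustering with the estimated speaker count.
labels = SpectralClustering(num_speakers, random_state=0).fit_predict(embeddings)
print(num_speakers, sorted(set(labels)))
```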

Details: How It Works

## Benchmarks

Evaluated on VoxConverse dev set (216 files, 1–20 speakers):

### Diarization Error Rate (DER)

| System                 | Weighted DER | Notes                            |
|------------------------|--------------|----------------------------------|
| diarize                | ~5.0%        | Apache 2.0, CPU-only, no API key |
| pyannote precision-2   | ~8.5%        | Commercial license               |
| pyannote community-1   | ~11.2%       | CC-BY-4.0, needs HF token        |
| pyannote 3.1 (legacy)  | ~11.2%       | MIT, needs HF token              |

### Speaker Count Estimation

| Metric      | Result        |
|-------------|---------------|
| Files       | 216           |
| Exact match | 117/216 (54%) |
| Within ±1   | 175/216 (81%) |

Many-speaker files remain the weak spot: automatic count estimation degrades above 7 speakers. Pass num_speakers when the count is known.

Full benchmark results, speed comparison, and methodology: benchmarks.

## When to use something else

- **You need commercial support or cross-dataset validation.** pyannote's commercial model has published production-oriented benchmarks beyond this single VoxConverse evaluation. If accuracy is the top priority and you have budget, compare on your own data.
- **You need very stable speaker labels in transcripts.** Temporal smoothing reduces short label jumps, but diarize can still show speaker fragmentation / label switching: one real speaker may be split across multiple SPEAKER_XX labels, especially on noisy real-world audio.
- **Your audio has 8+ speakers.** Automatic speaker count estimation degrades above 7 speakers. You can pass `num_speakers` explicitly, but test carefully.
- **You need overlapping speech detection.** diarize assigns each segment to one speaker. Overlapping speech is not modeled.
- **You need GPU-accelerated throughput.** diarize is CPU-only by design. For processing thousands of hours with GPU infrastructure, NeMo or pyannote on GPU will be faster.

## Roadmap

Current benchmarks are based on the VoxConverse dev set only. We are actively working on:

- **Cross-dataset validation** — AMI, DIHARD III, CALLHOME, and other standard benchmarks in isolated environments
- **Speaker count estimation benchmarks** — comparison of speaker counting accuracy against other systems
- **Broader system comparison** — NeMo, WhisperX, and other diarization solutions
- **Streaming / real-time diarization** — live audio streams with real-time speaker detection
- **Speaker identification** — recognise known speakers across sessions using stored embeddings

## Logging

diarize uses Python's standard logging module:

```python
import logging
logging.basicConfig(level=logging.INFO)
```
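To get progress messages from diarize without turning on verbose output from every dependency, you can raise the level for diarize's logger only. This assumes diarize logs under the `"diarize"` logger name (the usual convention for libraries, but not confirmed by this README):

```python
import logging

logging.basicConfig(level=logging.WARNING)           # keep other libraries quiet
logging.getLogger("diarize").setLevel(logging.INFO)  # verbose for diarize only
```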

## License

Apache 2.0 License. See LICENSE for details.

All dependencies are permissively licensed:

- Silero VAD: MIT
- WeSpeaker: Apache 2.0
- scikit-learn: BSD
- PyTorch: BSD

## Contributing

Contributions are welcome! Please open an issue or pull request on GitHub.