An API for model loading and inference, and a Python-based system for training and evaluating bioacoustics representation learning models.
The Animal Vocalization Encoder (AVEX) library provides a unified interface for working with pre-trained bioacoustics representation learning models, with support for:
- Model Loading: Load pre-trained models with checkpoints and class mappings
- Embedding Extraction: Extract features from audio for downstream tasks
- Probe System: Flexible probe heads (linear, MLP, LSTM, attention, transformer) for transfer learning
- Training & Evaluation: Scripts for supervised learning experiments
- Plugin Architecture: Register and use custom models seamlessly
- Python 3.10, 3.11, or 3.12
```bash
pip install avex
```

or with uv:

```bash
uv add avex
```

For development installation with training/evaluation tools, see the Contributing guide.
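To check that the package imports and the model registry is populated, you can call `list_models()` (used in the quick start below) from the command line; the exact set of registered names may vary by release:

```bash
python -c "from avex import list_models; print(sorted(list_models().keys()))"
```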
```python
import torch
import librosa

from avex import load_model, list_models

# List available models
print(list_models().keys())

# Load a pre-trained model
model = load_model("esp_aves2_sl_beats_all", device="cpu")

# Load and preprocess audio (BEATs expects 16kHz)
audio, sr = librosa.load("your_audio.wav", sr=16000)
audio_tensor = torch.tensor(audio).unsqueeze(0)  # Shape: (1, num_samples)

# Run inference
with torch.no_grad():
    logits = model(audio_tensor)
predicted_class = logits.argmax(dim=-1).item()

# Get human-readable label
if model.label_mapping:
    label = model.label_mapping.get(str(predicted_class), predicted_class)
    print(f"Predicted: {label}")
```

```python
# Load for embedding extraction (no classifier head)
model = load_model("esp_aves2_sl_beats_all", return_features_only=True, device="cpu")
with torch.no_grad():
    embeddings = model(audio_tensor)
# Shape: (batch, time_steps, 768) for BEATs
# Pool to get fixed-size embedding
embedding = embeddings.mean(dim=1)  # Shape: (batch, 768)
```

```python
from avex.models.probes import build_probe_from_config
from avex.configs import ProbeConfig
# Load backbone for feature extraction
base = load_model("esp_aves2_sl_beats_all", return_features_only=True, device="cpu")
# Define a probe head for your task
probe_config = ProbeConfig(
    probe_type="linear",
    target_layers=["last_layer"],
    aggregation="mean",
    freeze_backbone=True,
    online_training=True,
)
probe = build_probe_from_config(
    probe_config=probe_config,
    base_model=base,
    num_classes=10,  # Your number of classes
    device="cpu",
)
```

Full documentation: docs/index.md
- API Reference - Complete API documentation for model loading, registry, and management functions
- Architecture - Framework architecture, core components, and plugin system
- Supported Models - List of supported models and their configurations
- Configuration - ModelSpec parameters, audio requirements, and configuration options
- Training and Evaluation - Guide to training and evaluating models
- Embedding Extraction - Working with feature representations and embeddings
- Examples - Comprehensive examples and use cases
- Probe System - Understanding and using probes for transfer learning
- API Probes - API reference for probe-related functionality
- Custom Model Registration - Guide on registering custom model classes and loading pre-trained models
Examples: See the examples/ directory:
- 00_quick_start.py - Basic model loading
- 01_basic_model_loading.py - Loading models with different configurations
- 02_checkpoint_loading.py - Working with checkpoints
- 03_custom_model_registration.py - Custom model registration
- 04_training_and_evaluation.py - Training and evaluation examples
- 05_embedding_extraction.py - Feature extraction
- 06_classifier_head_loading.py - Classifier head behavior
The framework supports the following audio representation learning models:
- EfficientNet - EfficientNet-based models for audio classification
- BEATs - BEATs transformer models for audio representation learning
- EAT - Efficient Audio Transformer models
- AVES - AVES model for bioacoustics
- BirdMAE - BirdMAE masked autoencoder for bioacoustic representation learning
- ATST - Audio Teacher-Student Transformer models
- ResNet - ResNet models (ResNet18, ResNet50, ResNet152)
- CLIP - Contrastive Language-Audio Pretraining models
- BirdNET - BirdNET models for bioacoustic classification (external TensorFlow model; some features may be unavailable)
- Perch - Perch models for bioacoustics (external TensorFlow model; some features may be unavailable)
- SurfPerch - SurfPerch models (external TensorFlow model; some features may be unavailable)
See Supported Models for detailed information and configuration examples.
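As a rough sketch of how to find the registered checkpoints for a given backbone family, you can filter the names returned by `list_models()`. The substring matching below assumes the registry keys encode the backbone name (as in the quick-start model `esp_aves2_sl_beats_all`); check the Supported Models page for the actual naming scheme:

```python
from avex import list_models, load_model

# Inspect the registry and pick out, e.g., BEATs-based checkpoints.
# Assumes registry keys contain the backbone name; adjust to the real naming scheme.
all_models = list_models()
beats_models = [name for name in all_models if "beats" in name.lower()]
print(beats_models)

# Load the first match as a feature extractor (no classifier head).
if beats_models:
    backbone = load_model(beats_models[0], return_features_only=True, device="cpu")
```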
The framework provides flexible probe heads for transfer learning (see the sketch after this list):
- Linear - Simple linear classifier (fastest, most memory-efficient)
- MLP - Multi-layer perceptron with configurable hidden layers
- LSTM - Long Short-Term Memory network for sequence modeling
- Attention - Self-attention mechanism for sequence modeling
- Transformer - Full transformer encoder architecture
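For example, a different probe head can be requested through the same `ProbeConfig` used in the quick start by changing `probe_type`. The `"attention"` string below is an assumption based on the probe names above; check the Probe System docs for the exact accepted values:

```python
from avex import load_model
from avex.configs import ProbeConfig
from avex.models.probes import build_probe_from_config

base = load_model("esp_aves2_sl_beats_all", return_features_only=True, device="cpu")

# "attention" is assumed to be the identifier for the attention probe head;
# the remaining fields mirror the linear-probe example above.
probe_config = ProbeConfig(
    probe_type="attention",
    target_layers=["last_layer"],
    aggregation="mean",
    freeze_backbone=True,
    online_training=True,
)
probe = build_probe_from_config(
    probe_config=probe_config,
    base_model=base,
    num_classes=10,
    device="cpu",
)
```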
Probes can be trained:
- Online: End-to-end with the backbone (raw audio input)
- Offline: On pre-computed embeddings
See Probe System and API Probes for detailed documentation.
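For offline probing, one option is to pre-compute pooled embeddings with `return_features_only=True` (as in the embedding-extraction example above) and fit a classifier on top with plain PyTorch. This is a minimal sketch using a `torch.nn.Linear` head rather than the library's probe classes, and it assumes 768-dimensional pooled BEATs embeddings with placeholder data standing in for real features and labels:

```python
import torch
from torch import nn

# Assume `embeddings` holds pooled features, shape (num_clips, 768), pre-computed
# with a feature-only model, and `labels` holds integer class ids, shape (num_clips,).
embeddings = torch.randn(32, 768)      # placeholder features
labels = torch.randint(0, 10, (32,))   # placeholder labels, 10 classes

head = nn.Linear(768, 10)              # offline linear probe on frozen features
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(head(embeddings), labels)
    loss.backward()
    optimizer.step()
```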
If you use this framework in your research, please cite:
```bibtex
@inproceedings{miron2025matters,
  title={What Matters for Bioacoustic Encoding},
  author={Miron, Marius and Robinson, David and Alizadeh, Milad and Gilsenan-McMahon, Ellen and Narula, Gagan and Chemla, Emmanuel and Cusimano, Maddie and Effenberger, Felix and Hagiwara, Masato and Hoffman, Benjamin and Keen, Sara and Kim, Diane and Lawton, Jane K. and Liu, Jen-Yu and Raskin, Aza and Pietquin, Olivier and Geist, Matthieu},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026}
}
```

Related ESP papers:
```bibtex
@inproceedings{miron2026probing,
  title={Multi-layer attentive probing improves transfer of audio representations for bioacoustics},
  author={Miron, Marius and Robinson, David and Hagiwara, Masato and Parcollet, Titouan and Cauzinille, Jules and Narula, Gagan and Alizadeh, Milad and Gilsenan-McMahon, Ellen and Keen, Sara and Chemla, Emmanuel and Hoffman, Benjamin and Cusimano, Maddie and Kim, Diane and Effenberger, Felix and Lawton, Jane K. and Raskin, Aza and Pietquin, Olivier and Geist, Matthieu},
  booktitle={ICASSP 2026-2026 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2026},
  organization={IEEE}
}
```
```bibtex
@inproceedings{hagiwara2023aves,
  title={Aves: Animal vocalization encoder based on self-supervision},
  author={Hagiwara, Masato},
  booktitle={ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}
```

We welcome contributions! Please see CONTRIBUTING.md for details.
This project is licensed under the MIT License - see the LICENSE file for details.
- Built on top of PyTorch
- ICLR 2026 and ICASSP 2026 reviewers for their feedback
- Titouan Parcollet for templating and engineering feedback
- Bioacoustics community (IBAC, BioDCASE, ABS)