
AVEX - Animal Vocalization Encoder Library


An API for model loading and inference, and a Python-based system for training and evaluating bioacoustics representation learning models.

Description

The Animal Vocalization Encoder library (AVEX) provides a unified interface for working with pre-trained bioacoustics representation learning models, with support for:

  • Model Loading: Load pre-trained models with checkpoints and class mappings
  • Embedding Extraction: Extract features from audio for downstream tasks
  • Probe System: Flexible probe heads (linear, MLP, LSTM, attention, transformer) for transfer learning
  • Training & Evaluation: Scripts for supervised learning experiments
  • Plugin Architecture: Register and use custom models seamlessly

Installation

Prerequisites

  • Python 3.10, 3.11, or 3.12

Install with pip

pip install avex

Install with uv

uv add avex

For development installation with training/evaluation tools, see the Contributing guide.

Quick Start

import torch
import librosa
from avex import load_model, list_models

# List available models
print(list_models().keys())

# Load a pre-trained model
model = load_model("esp_aves2_sl_beats_all", device="cpu")

# Load and preprocess audio (BEATs expects 16kHz)
audio, sr = librosa.load("your_audio.wav", sr=16000)
audio_tensor = torch.tensor(audio).unsqueeze(0)  # Shape: (1, num_samples)

# Run inference
with torch.no_grad():
    logits = model(audio_tensor)
    predicted_class = logits.argmax(dim=-1).item()

# Get human-readable label
if model.label_mapping:
    label = model.label_mapping.get(str(predicted_class), predicted_class)
    print(f"Predicted: {label}")

Embedding Extraction

# Load for embedding extraction (no classifier head)
model = load_model("esp_aves2_sl_beats_all", return_features_only=True, device="cpu")

with torch.no_grad():
    embeddings = model(audio_tensor)
    # Shape: (batch, time_steps, 768) for BEATs

# Pool to get fixed-size embedding
embedding = embeddings.mean(dim=1)  # Shape: (batch, 768)
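
For downstream analysis it is often convenient to collect one pooled vector per clip. The snippet below is an illustrative sketch only: it reuses the feature-only model loaded above, and the file paths are placeholders.

import numpy as np

# Illustrative only: pooled embeddings for several clips; paths are placeholders.
files = ["clip_01.wav", "clip_02.wav"]
pooled = []
with torch.no_grad():
    for path in files:
        audio, _ = librosa.load(path, sr=16000)           # BEATs expects 16 kHz
        audio_tensor = torch.tensor(audio).unsqueeze(0)   # (1, num_samples)
        emb = model(audio_tensor)                         # (1, time_steps, 768)
        pooled.append(emb.mean(dim=1).squeeze(0).cpu().numpy())

pooled = np.stack(pooled)  # (num_clips, 768), ready for clustering or a classifier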

Transfer Learning with Probes

from avex.models.probes import build_probe_from_config
from avex.configs import ProbeConfig

# Load backbone for feature extraction
base = load_model("esp_aves2_sl_beats_all", return_features_only=True, device="cpu")

# Define a probe head for your task
probe_config = ProbeConfig(
    probe_type="linear",
    target_layers=["last_layer"],
    aggregation="mean",
    freeze_backbone=True,
    online_training=True,
)

probe = build_probe_from_config(
    probe_config=probe_config,
    base_model=base,
    num_classes=10,  # Your number of classes
    device="cpu",
)
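
As a rough illustration of how such a probe could be trained, the loop below assumes the returned probe behaves like a standard torch.nn.Module that maps raw-audio batches to class logits when online_training=True; train_loader is a hypothetical DataLoader of (audio, label) pairs and is not part of the library.

import torch
import torch.nn as nn

optimizer = torch.optim.AdamW(
    [p for p in probe.parameters() if p.requires_grad],  # backbone is frozen
    lr=1e-3,
)
criterion = nn.CrossEntropyLoss()

probe.train()
for epoch in range(5):
    for audio_batch, labels in train_loader:  # hypothetical DataLoader
        optimizer.zero_grad()
        logits = probe(audio_batch)           # assumed shape: (batch, num_classes)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()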

Documentation

Full documentation: docs/index.md

Core Documentation

  • API Reference - Complete API documentation for model loading, registry, and management functions
  • Architecture - Framework architecture, core components, and plugin system
  • Supported Models - List of supported models and their configurations
  • Configuration - ModelSpec parameters, audio requirements, and configuration options

Usage Guides

Advanced Topics

Examples: See the examples/ directory:

  • 00_quick_start.py - Basic model loading
  • 01_basic_model_loading.py - Loading models with different configurations
  • 02_checkpoint_loading.py - Working with checkpoints
  • 03_custom_model_registration.py - Custom model registration
  • 04_training_and_evaluation.py - Training and evaluation examples
  • 05_embedding_extraction.py - Feature extraction
  • 06_classifier_head_loading.py - Classifier head behavior

Supported Models

The framework supports the following audio representation learning models:

  • EfficientNet - EfficientNet-based models for audio classification
  • BEATs - BEATs transformer models for audio representation learning
  • EAT - Efficient Audio Transformer models
  • AVES - AVES model for bioacoustics
  • BirdMAE - BirdMAE masked autoencoder for bioacoustic representation learning
  • ATST - Audio Spectrogram Transformer
  • ResNet - ResNet models (ResNet18, ResNet50, ResNet152)
  • CLIP - Contrastive Language-Audio Pretraining models
  • BirdNET - BirdNET models for bioacoustic classification (external TensorFlow model; some features may be unavailable)
  • Perch - Perch models for bioacoustics (external TensorFlow model; some features may be unavailable)
  • SurfPerch - SurfPerch models (external TensorFlow model; some features may be unavailable)

See Supported Models for detailed information and configuration examples.
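
To map these families to registered model names, the registry can be inspected programmatically. The filter below is purely illustrative and assumes only the dict-like return value of list_models() shown in the Quick Start.

from avex import list_models

# Print registered names belonging to one family, e.g. BEATs variants.
for name in list_models().keys():
    if "beats" in name.lower():
        print(name)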

Supported Probes

The framework provides flexible probe heads for transfer learning:

  • Linear - Simple linear classifier (fastest, most memory-efficient)
  • MLP - Multi-layer perceptron with configurable hidden layers
  • LSTM - Long Short-Term Memory network for sequence modeling
  • Attention - Self-attention mechanism for sequence modeling
  • Transformer - Full transformer encoder architecture

Probes can be trained:

  • Online: End-to-end with the backbone (raw audio input)
  • Offline: On pre-computed embeddings (a sketch of this path follows below)

See Probe System and API Probes for detailed documentation.
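
A minimal sketch of the offline path, kept independent of the probe classes to avoid assuming their exact API: embeddings are pre-computed with a feature-only AVEX model and a plain linear classifier is trained on top. audio_batch and labels are placeholder tensors.

import torch
import torch.nn as nn
from avex import load_model

backbone = load_model("esp_aves2_sl_beats_all", return_features_only=True, device="cpu")

with torch.no_grad():
    feats = backbone(audio_batch).mean(dim=1)  # audio_batch: (N, num_samples) placeholder
                                               # feats: (N, 768) pooled embeddings

clf = nn.Linear(feats.shape[-1], 10)           # 10 = hypothetical number of classes
optimizer = torch.optim.Adam(clf.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(clf(feats), labels)       # labels: (N,) placeholder class indices
    loss.backward()
    optimizer.step()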

Citing

If you use this framework in your research, please cite:

@inproceedings{miron2025matters,
  title={What Matters for Bioacoustic Encoding},
  author={Miron, Marius and Robinson, David and Alizadeh, Milad and Gilsenan-McMahon, Ellen and Narula, Gagan and Chemla, Emmanuel and Cusimano, Maddie and Effenberger, Felix and Hagiwara, Masato and Hoffman, Benjamin and Keen, Sara and Kim, Diane and Lawton, Jane K. and Liu, Jen-Yu and Raskin, Aza and Pietquin, Olivier and Geist, Matthieu},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026}
}

Related ESP papers:

@inproceedings{miron2026probing,
  title={Multi-layer attentive probing improves transfer of audio representations for bioacoustics},
  author={Miron, Marius and Robinson, David and Hagiwara, Masato and Parcollet, Titouan and Cauzinille, Jules and Narula, Gagan and Alizadeh, Milad and Gilsenan-McMahon, Ellen and Keen, Sara and Chemla, Emmanuel and Hoffman, Benjamin and Cusimano, Maddie and Kim, Diane and Effenberger, Felix and Lawton, Jane K. and Raskin, Aza and Pietquin, Olivier and Geist, Matthieu},
  booktitle={ICASSP 2026-2026 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2026},
  organization={IEEE}
}
@inproceedings{hagiwara2023aves,
  title={Aves: Animal vocalization encoder based on self-supervision},
  author={Hagiwara, Masato},
  booktitle={ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}

Contributing

We welcome contributions! Please see CONTRIBUTING.md for development setup and contribution guidelines.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built on top of PyTorch
  • ICLR 2026 and ICASSP 2026 reviewers for their feedback
  • Titouan Parcollet for templating and engineering feedback
  • Bioacoustics community (IBAC, BioDCASE, ABS)
