Official repository for the paper "Distillation-based Layer Dropping (DLD): Effective End-to-end Framework for Dynamic Speech Networks", accepted at ICASSP 2026.


Distillation-based Layer Dropping (DLD): Effective End-to-end Framework for Dynamic Speech Networks

Paper Link: [arXiv][]   [ICASSP][]

Code

Coming soon!

Overview

Large Speech Models (LSMs) are effective at transcribing audio data, but they incur high computational complexity and lack scalability across different computational budgets. To leverage the capabilities of an LSM under varying computational budgets, methods such as early exit, adaptive pruning, and layer dropping are commonly used.

DLD is an end-to-end framework for Automatic Speech Recognition (ASR) that distills knowledge from the teacher network into the dynamic / scalable child network, thereby minimizing the difference between the two models' embeddings and optimizing performance across all computational budgets.
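
The official code has not been released yet, so the snippet below is only a minimal sketch of the idea in PyTorch: a child encoder whose layers are randomly skipped during training, and a distillation term that pulls its embeddings toward those of a full-depth teacher. The class and function names, the MSE embedding loss, and the dropping probability are illustrative assumptions rather than the paper's exact recipe, which would also include the ASR training objective (e.g. CTC) on the child's outputs.

```python
# Minimal sketch (not the official DLD implementation) of distillation-based
# layer dropping. Encoder design, loss weighting, and drop policy are assumed.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicEncoder(nn.Module):
    """Transformer encoder whose layers can be randomly skipped at train time."""

    def __init__(self, dim=256, num_layers=12, num_heads=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, x, drop_prob=0.0):
        # Randomly drop encoder layers so the same network can later be run
        # at several depths (i.e. computational budgets).
        for layer in self.layers:
            if self.training and random.random() < drop_prob:
                continue  # skip this layer for the current step
            x = layer(x)
        return x


def dld_step(teacher, child, feats, drop_prob=0.5, alpha=1.0):
    """One training step: match the child's embeddings to the teacher's.

    `alpha` weights the distillation term; the ASR loss (e.g. CTC on the
    child's outputs) would be added on top in a full training loop.
    """
    with torch.no_grad():
        t_emb = teacher(feats, drop_prob=0.0)   # full-depth teacher embeddings
    c_emb = child(feats, drop_prob=drop_prob)   # randomly thinned child
    distill_loss = F.mse_loss(c_emb, t_emb)     # embedding-matching term
    return alpha * distill_loss


if __name__ == "__main__":
    teacher, child = DynamicEncoder(), DynamicEncoder()
    feats = torch.randn(2, 100, 256)            # (batch, frames, features)
    loss = dld_step(teacher, child, feats)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

At inference, the trained child network can then be run with only a subset of its layers to meet a given compute budget.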

Architecture

(Figure: overall architecture of the proposed DLD framework.)
