@@ -564,7 +564,7 @@ <h1 class="title">JAM-1: A Tiny Flow-based Song Generator with Fine-grained Cont
 <div class="section">
 <h2 class="section-title">Abstract</h2>
 <p class="abstract">
-Diffusion and flow-matching models have revolutionized automatic text-to-audio generation in recent times. These models are increasingly capable of generating high quality and faithful audio outputs capturing speech and acoustic events. However, there is still much room for improvement in creative audio generation that primarily involves music and songs. Recent open lyrics-to-song models, such as DiffRhythm, Ace-Step, and Levo, have set an acceptable standard in automatic song generation for recreational use. However, these models lack fine-grained word-level controllability often desired by musicians in their workflows. To the best of our knowledge, our flow-matching-based JAM-1 is the first effort toward endowing word-level timing and duration control in song generation, allowing fine-grained vocal control. Furthermore, we aim to standardize the evaluation of such lyrics-to-song models through our public evaluation dataset JAME. We show that JAM-1 outperforms the existing models.
+Diffusion and flow-matching models have revolutionized automatic text-to-audio generation in recent times. These models are increasingly capable of generating high-quality, faithful audio outputs capturing speech and acoustic events. However, there is still much room for improvement in creative audio generation, which primarily involves music and songs. Recent open lyrics-to-song models, such as DiffRhythm, ACE-Step, and LeVo, have set an acceptable standard in automatic song generation for recreational use. However, these models lack the fine-grained word-level controllability often desired by musicians in their workflows. To the best of our knowledge, our flow-matching-based JAM is the first effort toward endowing word-level timing and duration control in song generation, allowing fine-grained vocal control. To enhance the quality of generated songs and better align with human preferences, we implement aesthetic alignment through Direct Preference Optimization, which iteratively refines the model using a synthetic dataset, eliminating the need for manual data annotations. Furthermore, we aim to standardize the evaluation of such lyrics-to-song models through our public evaluation dataset JAME. We show that JAM outperforms the existing models in terms of music-specific attributes.
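The abstract's alignment step uses Direct Preference Optimization over synthetic preference pairs. The paper's exact objective and hyperparameters are not given here; as a minimal sketch under standard DPO assumptions (a frozen reference model and a preferred/dispreferred generation pair), the per-pair loss looks like this — the names `logp_w`/`logp_l` and `beta=0.1` are illustrative, not taken from the paper:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair (hypothetical sketch).

    logp_w, logp_l       : policy log-likelihoods of the preferred (w)
                           and dispreferred (l) generations.
    ref_logp_w, ref_logp_l: the same quantities under the frozen
                           reference model.
    beta                 : temperature on the implicit reward margin.
    """
    # Implicit reward margin between preferred and dispreferred samples.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Negative log-sigmoid of the margin: pushed down as the policy
    # favors the preferred generation more than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference, the margin is zero and the loss is log 2; increasing the policy's preference for the chosen sample drives the loss toward zero.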
-At 530M parameters, JAM-1 is less than half the size of the next smallest system (DiffRhythm-1.1B), enabling faster inference and reduced resource demands.
+At 530M parameters, JAM is less than half the size of the next smallest system (DiffRhythm-1.1B), enabling faster inference and reduced resource demands.
-By accepting word- and phoneme-level timing inputs, JAM-1 lets users control the exact placement of each vocal sound, improving rhythmic flexibility and expressive timing.
+By accepting word- and phoneme-level timing inputs, JAM lets users control the exact placement of each vocal sound, improving rhythmic flexibility and expressive timing.
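The page does not specify how JAM encodes these word-level timing inputs internally; a common way to condition a frame-based generator on such timings is a per-frame alignment track. The sketch below is hypothetical (the `WordTiming` structure, `frame_alignment` helper, and the 50 fps default are illustrative assumptions, not the paper's format):

```python
from dataclasses import dataclass

@dataclass
class WordTiming:
    word: str
    start: float     # onset in seconds
    duration: float  # seconds

def frame_alignment(timings, frame_rate=50, total_frames=None):
    """Expand word timings into a per-frame label track.

    Each frame holds the index of the word sounding at that time,
    or -1 for frames with no vocal content.
    """
    if total_frames is None:
        end = max(t.start + t.duration for t in timings)
        total_frames = int(round(end * frame_rate))
    track = [-1] * total_frames
    for i, t in enumerate(timings):
        lo = int(round(t.start * frame_rate))
        hi = min(total_frames, int(round((t.start + t.duration) * frame_rate)))
        for f in range(lo, hi):
            track[f] = i
    return track
```

A track like this can be fed to the model alongside the lyrics, so shifting a word's `start` or `duration` directly moves where its frames land in the generated audio.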