Commit c808077

Add files via upload
1 parent 2637434 commit c808077

File tree

1 file changed (+8, -8 lines)


jamify.html

Lines changed: 8 additions & 8 deletions
@@ -3,7 +3,7 @@
 <head>
     <meta charset="UTF-8">
     <meta name="viewport" content="width=device-width, initial-scale=1.0">
-    <title>JAM-1: A Tiny Flow-based Song Generator with Fine-grained Controllability and
+    <title>JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and
     Aesthetic Alignment</title>
     <style>
         * {
@@ -537,16 +537,16 @@
             <img src="jamify-logo-new.png" alt="Project Jamify Logo" class="project-logo">
         </div>
         <div class="title-content">
-            <h1 class="title">JAM-1: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment</h1>
+            <h1 class="title">JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment</h1>
             <p class="subtitle">The First Model of Project Jamify</p>
         </div>
     </div>

     <div class="buttons-container">
-        <a href="https://huggingface.co/declare-lab/JAM-1" class="btn btn-huggingface">
+        <a href="https://huggingface.co/declare-lab/JAM" class="btn btn-huggingface">
             🤗 HuggingFace Model
         </a>
-        <a href="https://huggingface.co/spaces/declare-lab/JAM-1" class="btn btn-demo">
+        <a href="https://huggingface.co/spaces/declare-lab/JAM" class="btn btn-demo">
             🎵 Interactive Demo
         </a>
         <a href="" class="btn btn-arxiv">
@@ -564,7 +564,7 @@ <h1 class="title">JAM-1: A Tiny Flow-based Song Generator with Fine-grained Cont
     <div class="section">
         <h2 class="section-title">Abstract</h2>
         <p class="abstract">
-            Diffusion and flow-matching models have revolutionized automatic text-to-audio generation in recent times. These models are increasingly capable of generating high quality and faithful audio outputs capturing speech and acoustic events. However, there is still much room for improvement in creative audio generation that primarily involves music and songs. Recent open lyrics-to-song models, such as DiffRhythm, Ace-Step, and Levo, have set an acceptable standard in automatic song generation for recreational use. However, these models lack fine-grained word-level controllability often desired by musicians in their workflows. To the best of our knowledge, our flow-matching-based JAM-1 is the first effort toward endowing word-level timing and duration control in song generation, allowing fine-grained vocal control. Furthermore, we aim to standardize the evaluation of such lyrics-to-song models through our public evaluation dataset JAME. We show that JAM-1 outperforms the existing models.
+            Diffusion and flow-matching models have revolutionized automatic text-to-audio generation in recent times. These models are increasingly capable of generating high-quality, faithful audio outputs capturing speech and acoustic events. However, there is still much room for improvement in creative audio generation that primarily involves music and songs. Recent open lyrics-to-song models, such as DiffRhythm, ACE-Step, and LeVo, have set an acceptable standard in automatic song generation for recreational use. However, these models lack the fine-grained word-level controllability often desired by musicians in their workflows. To the best of our knowledge, our flow-matching-based JAM is the first effort toward endowing word-level timing and duration control in song generation, allowing fine-grained vocal control. To better align generated songs with human preferences, we implement aesthetic alignment through Direct Preference Optimization, which iteratively refines the model using a synthetic dataset, eliminating the need for manual data annotations. Furthermore, we aim to standardize the evaluation of such lyrics-to-song models through our public evaluation dataset JAME. We show that JAM outperforms existing models on music-specific attributes.
         </p>
     </div>
 
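The added abstract text credits Direct Preference Optimization (DPO) for the aesthetic alignment. As a rough illustration of the objective that sentence refers to (not JAM's actual implementation; the function and argument names here are hypothetical), a single-pair DPO loss can be sketched in plain Python:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """One-pair Direct Preference Optimization loss (illustrative sketch).

    Each argument is the log-probability that the policy model (or a
    frozen pre-tuning reference copy) assigns to the preferred
    ("chosen") or dispreferred ("rejected") sample.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): small when the policy already prefers the
    # chosen sample more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Iterating such a loss over synthetically ranked sample pairs, as the abstract describes, steers the model toward preferred outputs without manual annotation.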
@@ -574,14 +574,14 @@ <h2 class="section-title">Key Contributions</h2>
             <div class="contribution-card">
                 <div class="contribution-title">🏗️ Compact Architecture</div>
                 <div class="contribution-desc">
-                    At 530M parameters, JAM-1 is less than half the size of the next smallest system (DiffRhythm-1.1B), enabling faster inference and reduced resource demands.
+                    At 530M parameters, JAM is less than half the size of the next smallest system (DiffRhythm-1.1B), enabling faster inference and reduced resource demands.
                 </div>
             </div>

             <div class="contribution-card">
                 <div class="contribution-title">🎯 Fine-Grained Alignment</div>
                 <div class="contribution-desc">
-                    By accepting word- and phoneme-level timing inputs, JAM-1 lets users control the exact placement of each vocal sound, improving rhythmic flexibility and expressive timing.
+                    By accepting word- and phoneme-level timing inputs, JAM lets users control the exact placement of each vocal sound, improving rhythmic flexibility and expressive timing.
                 </div>
             </div>

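The "Fine-Grained Alignment" card describes word- and phoneme-level timing inputs. One way to picture such an input (a hypothetical sketch only; JAM's real input schema is not shown in this diff, and the field names below are illustrative) is a list of per-word timing records:

```python
# Hypothetical word-level timing input: each word carries an onset and a
# duration in seconds. Field names are illustrative, not JAM's schema.
lyrics_timing = [
    {"word": "hello", "start": 0.00, "duration": 0.42},
    {"word": "world", "start": 0.50, "duration": 0.61},
]

def total_vocal_time(timing):
    """Total sung time implied by the per-word durations."""
    return sum(w["duration"] for w in timing)

def overlapping_words(timing):
    """List words whose onset collides with the previous word's tail."""
    return [b["word"] for a, b in zip(timing, timing[1:])
            if b["start"] < a["start"] + a["duration"]]
```

Records like these give a user direct control over where each vocal sound lands, which is the controllability the card is claiming.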
@@ -679,7 +679,7 @@ <h3>🎵 Lyrics</h3>

             <div class="comparison-container">
                 <div class="audio-player jam1">
-                    <div class="model-title">JAM-1 (Ours)</div>
+                    <div class="model-title">JAM (Ours)</div>
                     <button class="play-button" onclick="togglePlay('jam1')">
                         <svg class="play-icon" viewBox="0 0 24 24" id="jam1-icon">
                             <path d="M8 5v14l11-7z"/>
