An instruct text-to-speech solution based on LLaSA and CosyVoice2 developed by the ASLP lab and collaborators.

VoiceSculptor

Official inference code for
VoiceSculptor: Your Voice, Designed By You

version HF-model technical report (coming soon) HF-demo Apache-2.0

💡 System Overview

VoiceSculptor is composed of two core components: voice design and voice clone. The voice design module enables the generation of timbre from natural language descriptions and supports command refinement through Retrieval-Augmented Generation (RAG). It also provides fine-grained control over voice attributes, including gender, age, speaking rate, fundamental frequency, volume, and emotional expression. The synthesized audio produced by the voice design module can be used as a prompt waveform for the CosyVoice2 voice cloning model, enabling timbre cloning and downstream speech synthesis tasks.
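The two-stage flow described above can be sketched as follows. This is an illustrative stand-in, not the actual VoiceSculptor API: the attribute names, `build_instruction`, and `design_then_clone` are hypothetical, and the "waveform" is a placeholder dict rather than real audio.

```python
from dataclasses import dataclass

# Hypothetical attribute set mirroring the controls described above
# (gender, age, speaking rate, fundamental frequency, volume, emotion).
@dataclass
class VoiceSpec:
    gender: str = "female"
    age: str = "young adult"
    speaking_rate: str = "medium"
    pitch: str = "medium"        # fundamental frequency
    volume: str = "medium"
    emotion: str = "neutral"

def build_instruction(spec: VoiceSpec, description: str) -> str:
    """Fold the free-form description and attribute controls into one prompt."""
    controls = (f"gender={spec.gender}, age={spec.age}, rate={spec.speaking_rate}, "
                f"pitch={spec.pitch}, volume={spec.volume}, emotion={spec.emotion}")
    return f"{description} [{controls}]"

def design_then_clone(description: str, spec: VoiceSpec, text: str) -> dict:
    """Stage 1: design a prompt waveform from the natural-language instruction.
    Stage 2: hand that waveform to a cloning model (e.g. CosyVoice2) as the
    timbre prompt for downstream synthesis of `text`."""
    instruction = build_instruction(spec, description)
    prompt_wav = {"source": "voice_design", "instruction": instruction}  # stand-in for audio
    return {"prompt_wav": prompt_wav, "synthesized_text": text}

result = design_then_clone("A warm, calm storyteller voice", VoiceSpec(), "Hello, world.")
```

The point of the sketch is the data flow: the design stage consumes only text (description plus attribute controls), and its output is reused as the prompt input of the cloning stage.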

📊 Instruct TTS Eval

Instruct TTS Eval (ZH)

| Model | APS (%) | DSD (%) | RP (%) | AVG (%) |
|---|---|---|---|---|
| Gemini 2.5-Flash* | 88.2 | 90.9 | 77.3 | 85.4 |
| Gemini 2.5-Pro* | 89.0 | 90.1 | 75.5 | 84.8 |
| GPT-4o-Mini-TTS* | 54.9 | 52.3 | 46.0 | 51.1 |
| ElevenLabs* | 42.8 | 50.9 | 59.1 | 50.9 |
| VoxInstruct | 47.5 | 52.3 | 42.6 | 47.5 |
| MiMo-Audio-7B-Instruct | 70.1 | 66.1 | 57.1 | 64.5 |
| VoiceSculptor | 75.7 | 64.7 | 61.5 | 67.6 |

Note

  • Models marked with * are commercial models.
  • InstructTTSEval: Benchmarking Complex Natural-Language Instruction Following in Text-to-Speech Systems. arXiv preprint arXiv:2506.16381.

✨ Demo Video

final.mov

🔥 News

  • [2026-01-02] We opened the repository and uploaded the voice design models: VoiceSculptor.

🚀 Getting Started

1. Environment Setup

Follow the steps below to clone the repository and install the required environment.

```shell
# Clone the repository and enter the directory
git clone https://github.com/ASLP-lab/VoiceSculptor.git
cd VoiceSculptor

# Create and activate a Conda environment
conda create -n VoiceSculptor python=3.10 -y
conda activate VoiceSculptor

# Install dependencies
pip install -r requirements.txt
```

2. Download Pre-trained Models

```shell
git lfs install
git clone https://huggingface.co/ASLP-lab/VoiceSculptor-VD
git clone https://huggingface.co/HKUSTAudio/xcodec2
```

3. Infer

For detailed instructions on how to design high-quality voice prompts,
please refer to Voice Design Guide or Voice Design Guide EN.

You need to specify the local paths to the voice-design model and the xcodec2 model in the infer.py file.

```shell
python infer.py
```
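The exact variable names inside `infer.py` may differ; as a minimal sketch, the path configuration could look like the following, where both paths are placeholders pointing at the directories created by the `git clone` commands in step 2, and the variable names and `check_model_dirs` helper are illustrative, not taken from the repository.

```python
from pathlib import Path

# Placeholder locations -- point these at wherever you cloned the checkpoints.
# The actual variable names used by infer.py may differ; check the file itself.
VOICE_DESIGN_MODEL_DIR = Path("./VoiceSculptor-VD")
XCODEC2_MODEL_DIR = Path("./xcodec2")

def check_model_dirs() -> list[str]:
    """Return the model directories that are missing, so that a
    misconfigured path fails early with a clear message instead of
    surfacing as a load error deep inside inference."""
    return [str(p) for p in (VOICE_DESIGN_MODEL_DIR, XCODEC2_MODEL_DIR)
            if not p.is_dir()]

missing = check_model_dirs()
if missing:
    print(f"Missing model directories: {missing}")
```

Checking the paths up front is a small convenience; the essential step is simply that both local checkpoint paths are set before running `python infer.py`.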

📋 TODO

  • 🌐 Demo website
  • 🔓 Release inference code
  • 🤗 Release HuggingFace model
  • 🤗 HuggingFace Space
  • 📝 Release Technical Report
  • 🔓 Release gradio code
  • 🔓 Release RAG code
  • 🔓 Support vLLM
  • 🔓 Release training code

Citation

@misc{VoiceSculptor,
      title={VoiceSculptor: Your Voice, Designed By You},
      author={Jingbin Hu and Huakang Chen and Linhan Ma and Dake Guo and Qirui Zhan and Wenhao Li and Haoyu Zhang and Kangxiang Xia and Ziyu Zhang and Wenjie Tian and Chengyou Wang and Jinrui Liang and Shuhan Guo and Zihang Yang and Bengu Wu and Binbin Zhang and Pengcheng Zhu and Pengyuan Xie and Chuan Xie and Qiang Zhang and Jie Liu and Lei Xie},
      year={2026},
      url={https://github.com/ASLP-lab/VoiceSculptor},
}
@misc{ye2025llasascalingtraintimeinferencetime,
      title={Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis},
      author={Zhen Ye and Xinfa Zhu and Chi-Min Chan and Xinsheng Wang and Xu Tan and Jiahe Lei and Yi Peng and Haohe Liu and Yizhu Jin and Zheqi Dai and Hongzhan Lin and Jianyi Chen and Xingjian Du and Liumeng Xue and Yunlin Chen and Zhifei Li and Lei Xie and Qiuqiang Kong and Yike Guo and Wei Xue},
      year={2025},
      eprint={2502.04128},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2502.04128},
}

License

We use the Apache 2.0 license. Researchers and developers are free to use the code and model weights of VoiceSculptor. See LICENSE for details.

Acknowledgement

Usage Disclaimer

Additional Notice on Generated Voices

This project provides a speech synthesis model for voice design, intended for academic research, educational purposes, and legitimate applications, such as personalized speech synthesis, assistive technologies, and linguistic research.

Please note:

  • Do not use this model for unauthorized voice cloning, impersonation, fraud, scams, deepfakes, or any illegal or malicious activities.
  • Ensure compliance with local laws and regulations when using this model, and uphold ethical standards.
  • The developers assume no liability for any misuse of this model.

Important clarification regarding generated voices:

  • As a generative model, the voices produced by this system are synthetic outputs inferred by the model, not recordings of real human voices.
  • The generated voice characteristics do not represent or reproduce any specific real individual, and are not derived from or intended to imitate identifiable persons.

We advocate for the responsible development and use of AI and encourage the community to uphold safety and ethical principles in AI research and applications.

Contact Us

If you are interested in our work or would like to get in touch, feel free to email jingbin.hu@mail.nwpu.edu.cn or lxie@nwpu.edu.cn.

You are welcome to join our WeChat group for technical discussions and updates.


WeChat Group QR Code

Star History

Star History Chart
