This project implements the HiFiGAN model for speech synthesis. The introduction, pipeline details, experiments, and results are presented in the wandb report.

To get started, install the requirements:
```bash
pip install -r ./requirements.txt
```
Then download the training data (the LJSpeech dataset):
```bash
sudo apt install axel
bash loader.sh
```
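As an alternative, if `axel` is unavailable, the raw LJSpeech data can also be fetched with `torchaudio`. This is only a sketch of an alternative download path, not part of the project's `loader.sh`; the `data/` root is an arbitrary choice here, and the resulting layout may differ from what the training configs expect.

```python
import os

import torchaudio

# Target directory for the archive and extracted files (arbitrary choice).
os.makedirs("data", exist_ok=True)

# Downloads and extracts LJSpeech-1.1 under data/.
dataset = torchaudio.datasets.LJSPEECH(root="data", download=True)

# Each item is (waveform, sample_rate, transcript, normalized_transcript).
waveform, sample_rate, _, normalized_text = dataset[0]
print(waveform.shape, sample_rate, normalized_text)
```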
To train the model from scratch, run
```bash
python3 train.py -c nv/configs/train.json
```
To fine-tune a pretrained model from a checkpoint, use the `--resume` parameter. For example, continuing training with the `train.json` config looks as follows:
```bash
python3 train.py -c nv/configs/train.json -r saved/models/final/<run_id>/<any_checkpoint>.pth
```
The checkpoint should be located in the `default_test_model` directory. The pretrained model can be downloaded by running the following Python code:
```python
import gdown

gdown.download("https://drive.google.com/uc?id=1I5qPDu6Bsc_xm6u6U35e867RRNeqi0v3", "default_test_model/checkpoint.pth")
```
Model evaluation is executed by the command
```bash
python3 test.py \
    -i default_test_model/test \
    -r default_test_model/checkpoint.pth \
    -o output \
    -l False
```
- `-i` (`--input-dir`) provides the path to a directory with input `.wav` files. Additionally, one `text.txt` file can be located there; in that case it is read by rows (one row for each audio). See the layout sketch after this list.
- `-r` (`--resume`) provides the path to the model checkpoint. Note that the config file is expected to be in the same directory under the name `config.json`.
- `-o` (`--output`) specifies the output directory path where the generated `.wav` files will be saved.
- `-l` (`--log-wandb`) determines whether results are logged to the wandb project. If `True`, authorization in the command line is needed. The name of the project can be changed in the config file. Lines from the `text.txt` file are also logged if it is provided.
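For reference, the layout these options assume might look like the sketch below. It is pieced together from the descriptions above: the file names under `test/` are placeholders, and pairing text rows to audio files by sorted order is an assumption, not a guarantee of how `test.py` does it.

```python
# Assumed layout (illustrative):
#
#   default_test_model/
#     checkpoint.pth   # model checkpoint (-r / --resume)
#     config.json      # config expected next to the checkpoint
#     test/            # input directory (-i / --input-dir)
#       first.wav
#       second.wav
#       text.txt       # optional; one row per audio file
#
from pathlib import Path

input_dir = Path("default_test_model/test")
wavs = sorted(input_dir.glob("*.wav"))

# If text.txt is present, it is read row by row, one row per audio file.
text_file = input_dir / "text.txt"
if text_file.exists():
    for wav, line in zip(wavs, text_file.read_text().splitlines()):
        print(f"{wav.name}: {line}")
```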
Running with the default parameters:
```bash
python3 test.py
```
The code of the model is based on an asr-template project.