Deep GriffinLim Iteration
PaoloSani edited this page Jun 24, 2022 · 1 revision
Here we describe the steps to train the DGL model. Minor changes were made with respect to the original repo, https://github.com/Sytronik/deep-griffinlim-iteration, to accommodate the use of multiple datasets and to update legacy libraries.
Folder structure
- model: contains the DGL model
- create.py: preprocesses the dataset so it is ready for training. To process the files, run: python create.py TRAIN/TEST --num_snr YOUR_CHOICE. We tested our model with num_snr=3.
- create_result_file.py: loads the individual wav results and saves them in a single numpy array, so they can be used directly in our main Jupyter notebook.
- dataset.py: used by the other modules; a subclass of the PyTorch Dataset class.
- hparams.py: sets the paths, the training/testing conditions, and other general parameters.
- main.py: entry point for training and testing. To train or test, run: python main.py --train or python main.py --test.
- tbwriter.py: writes the training logs, as well as the resulting wav files.
- train.py: contains the code for training and testing.
- utils.py: various helper methods used by the other modules.
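As a rough sketch of what create_result_file.py does, the per-file wav results can be zero-padded to a common length and stacked into one numpy array. This is an illustrative reconstruction, not the actual script: the function name, padding strategy, and the stand-in data are assumptions (the real script would load wav files from disk, e.g. with soundfile or scipy).

```python
# Hypothetical sketch of create_result_file.py's aggregation step: collect
# individual result waveforms and stack them into a single numpy array that
# the Jupyter notebook can load directly. Names and padding are assumptions.
import numpy as np

def stack_results(waveforms):
    """Zero-pad a list of 1-D float arrays to a common length and stack them
    into a single (num_files, max_len) array."""
    max_len = max(len(w) for w in waveforms)
    out = np.zeros((len(waveforms), max_len), dtype=np.float32)
    for i, w in enumerate(waveforms):
        out[i, : len(w)] = w
    return out

if __name__ == "__main__":
    # Stand-ins for loaded wav data (the real script would read these from disk).
    fake = [np.ones(4, dtype=np.float32), np.ones(6, dtype=np.float32)]
    stacked = stack_results(fake)
    print(stacked.shape)  # (2, 6)
```

Saving the stacked array with np.save then gives one file the notebook can load in a single call.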
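For context on dataset.py: a map-style PyTorch Dataset only needs to implement __len__ and __getitem__. The sketch below shows that pattern without importing torch (the real module subclasses torch.utils.data.Dataset); the class name, file extension, and loading logic are illustrative assumptions, not the repo's actual code.

```python
# Minimal sketch of the map-style dataset pattern that dataset.py follows.
# The real module subclasses torch.utils.data.Dataset and loads preprocessed
# items produced by create.py; everything concrete here is an assumption.
from pathlib import Path

class SpectrogramDataset:
    def __init__(self, root):
        # Hypothetical: one preprocessed file per item.
        self.files = sorted(Path(root).glob("*.npz"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # The real implementation would load spectrogram data here
        # (e.g. np.load(self.files[idx])); the sketch just returns the path.
        return self.files[idx]
```

With the real torch subclass, such a dataset can be handed straight to a torch.utils.data.DataLoader for batched training.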
Authors:
- Michele Perrone: michele.perrone@mail.polimi.it
- Paolo Sani: paolo1.sani@mail.polimi.it