**README.md**

:zany_face: TensorFlowTTS provides real-time state-of-the-art speech synthesis architectures such as Tacotron-2, MelGAN, Multiband-MelGAN, FastSpeech, and FastSpeech2, based on TensorFlow 2. With TensorFlow 2 we can speed up training and inference, optimize models further using [fake quantization-aware training](https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide) and [pruning](https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_keras), make TTS models run faster than real time, and deploy them on mobile devices or embedded systems.
## What's new
- 2020/08/18 **(NEW!)** Update the [new base processor](https://github.com/TensorSpeech/TensorFlowTTS/blob/master/tensorflow_tts/processor/base_processor.py). Add [AutoProcessor](https://github.com/TensorSpeech/TensorFlowTTS/blob/master/tensorflow_tts/inference/auto_processor.py) and [pretrained processor](https://github.com/TensorSpeech/TensorFlowTTS/blob/master/tensorflow_tts/processor/pretrained/) JSON files.
- 2020/08/14 **(NEW!)** Support Chinese TTS. Please see the [colab](https://colab.research.google.com/drive/1YpSHRBRPBI7cnTkQn1UcVTWEQVbsUm1S?usp=sharing). Thanks to [@azraelkuan](https://github.com/azraelkuan).
- 2020/08/05 **(NEW!)** Support Korean TTS. Please see the [colab](https://colab.research.google.com/drive/1ybWwOS5tipgPFttNulp77P6DAB5MtiuN?usp=sharing). Thanks to [@crux153](https://github.com/crux153).
- 2020/07/17 Support multi-GPU training for all trainers.
Prepare a dataset in the following format:
```
|- [NAME_DATASET]/
| |- metadata.csv
| |- wav/
| |- file1.wav
| |- ...
```
where `metadata.csv` has the following format: `id|transcription`. This is an ljspeech-like format; if your dataset is in a different format, you can skip these preprocessing steps.
Note that `NAME_DATASET` should be one of `ljspeech`, `kss`, `baker`, or `libritts`.
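
For example, the first lines of a `metadata.csv` in this format might look like the following (utterance ids and transcriptions are illustrative):

```
LJ001-0001|Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition
LJ001-0002|in being comparatively modern.
```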
## Preprocessing
The preprocessing has two steps: feature extraction and feature normalization.
Right now we only support [`ljspeech`](https://keithito.com/LJ-Speech-Dataset/), [`kss`](https://www.kaggle.com/bryanpark/korean-single-speaker-speech-dataset), [`baker`](https://weixinxcxdb.oss-cn-beijing.aliyuncs.com/gwYinPinKu/BZNSYP.rar), and [`libritts`](http://www.openslr.org/60/) for the dataset argument. In the future, we intend to support more datasets.
**Note**: To run `libritts` preprocessing, please first read the instructions in [examples/fastspeech2_libritts](https://github.com/TensorSpeech/TensorFlowTTS/tree/master/examples/fastspeech2_libritts). The dataset needs to be reformatted before running preprocessing.
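
As a rough sketch of how preprocessing is typically invoked (the command name and flags here are assumptions; check the repository's preprocessing scripts for the exact interface):

```
tensorflow-tts-preprocess --rootdir ./[NAME_DATASET] \
  --outdir ./dump_[NAME_DATASET] \
  --config preprocess/[NAME_DATASET]_preprocess.yaml \
  --dataset [NAME_DATASET]
```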
After preprocessing, the structure of the project folder should be:
```
|- [NAME_DATASET]/
| |- metadata.csv
| |- wav/
| |- file1.wav
| |- ...
|- dump_[ljspeech/kss/baker/libritts]/
| |- train/
| |- ids/
| |- LJ001-0001-ids.npy
| |- ...
```
We use suffixes (`ids`, `raw-feats`, `raw-energy`, `raw-f0`, `norm-feats`, and `wave`) to name each feature file by its type.
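
For instance, you can sanity-check one of these dump files with NumPy (the file path below is illustrative):

```python
import numpy as np

# Load the token-id sequence produced for one utterance.
ids = np.load("dump_ljspeech/train/ids/LJ001-0001-ids.npy")
print(ids.shape, ids.dtype)  # a 1-D array of integer token ids
```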
**IMPORTANT NOTES**:
- This preprocessing step is based on [ESPnet](https://github.com/espnet/espnet), so you can combine all models here with other models from the ESPnet repository.
- Regardless of how your dataset is formatted, the final structure of the `dump` folder **SHOULD** follow the structure above to be usable with the training scripts, or you can modify them yourself 😄.
## Training models
To learn how to train a model from scratch or fine-tune with other datasets/languages, please see the tutorials below:

- For the Tacotron-2 tutorial, please see [example/tacotron2](https://github.com/dathudeptrai/TensorflowTTS/tree/master/examples/tacotron2)
- For the FastSpeech tutorial, please see [example/fastspeech](https://github.com/dathudeptrai/TensorflowTTS/tree/master/examples/fastspeech)
- For the FastSpeech2 tutorial, please see [example/fastspeech2](https://github.com/dathudeptrai/TensorflowTTS/tree/master/examples/fastspeech2)
- For the FastSpeech2 + MFA tutorial, please see [example/fastspeech2_libritts](https://github.com/dathudeptrai/TensorflowTTS/tree/master/examples/fastspeech2_libritts)
- For the MelGAN tutorial, please see [example/melgan](https://github.com/dathudeptrai/TensorflowTTS/tree/master/examples/melgan)
- For the MelGAN + STFT Loss tutorial, please see [example/melgan.stft](https://github.com/dathudeptrai/TensorflowTTS/tree/master/examples/melgan.stft)
- For the Multiband-MelGAN tutorial, please see [example/multiband_melgan](https://github.com/dathudeptrai/TensorflowTTS/tree/master/examples/multiband_melgan)
```python
import yaml

import tensorflow as tf

from tensorflow_tts.inference import AutoConfig
from tensorflow_tts.inference import TFAutoModel
from tensorflow_tts.inference import AutoProcessor

# Load a pretrained text processor; the JSON path here is illustrative --
# use the pretrained processor mapper that ships with your model/dataset.
processor = AutoProcessor.from_pretrained(pretrained_path="ljspeech_mapper.json")

ids = processor.text_to_sequence("Recent research at Harvard has shown meditating for as little as 8 weeks, can actually increase the grey matter in the parts of the brain responsible for emotional regulation, and learning.")
```
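
From here, a model config and checkpoint can be loaded to synthesize from `ids`. A minimal sketch, assuming this `AutoConfig`/`TFAutoModel` loading API and illustrative file paths:

```python
# Hypothetical continuation: load a Tacotron-2 config and checkpoint
# (both paths are illustrative), ready for text-to-mel inference.
config = AutoConfig.from_pretrained("examples/tacotron2/conf/tacotron2.v1.yaml")
tacotron2 = TFAutoModel.from_pretrained(config=config, pretrained_path="tacotron2.h5")
```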
Overall, almost all models here are licensed under [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0) for all countries in the world, except in **Viet Nam**, where this framework cannot be used for production in any way without permission from TensorFlowTTS's authors. There is one exception: Tacotron-2 can be used for any purpose. So, if you are Vietnamese and want to use this framework for production, you **must** contact the authors in advance.
**examples/fastspeech2_libritts/README.md**

## Prepare
Everything is done from the main repo folder, i.e. TensorFlowTTS/.
0. Optional: [Download](http://www.openslr.org/60/) and prepare LibriTTS (a helper to prepare it is in examples/fastspeech2_libritts/libri_experiment/prepare_libri.ipynb).
- Dataset structure after finishing this step:
```
|- TensorFlowTTS/
| |- LibriTTS/
| |- |- train-clean-100/
| |- |- SPEAKERS.txt
| |- |- ...
| |- libritts/
| |- |- 200/
| |- |- |- 200_124139_000001_000000.txt
| |- |- |- 200_124139_000001_000000.wav
| |- |- |- ...
```
1. Extract durations (use examples/mfa_extraction or a pretrained Tacotron-2).
6. Change the CharactorDurationF0EnergyMelDataset speaker mapper in fastspeech2_dataset to match your dataset; a minimal sketch of such a mapping appears after this list (if you use LibriTTS with mfa_extraction, you don't need to change anything).
7. Change train_libri.sh to match your dataset and run it.
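
As a rough illustration, a speaker mapper is just a mapping from speaker name to integer id; a minimal sketch with hypothetical speaker names (the exact structure expected by CharactorDurationF0EnergyMelDataset may differ):

```python
# Hypothetical speaker -> id mapping for a two-speaker LibriTTS subset.
# The real mapper must cover every speaker present in your dataset.
speakers_map = {"200": 0, "250": 1}
```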