> We're proud to release Parler-TTS v0.1, our first 300M parameter model, trained on 10.5K hours of audio data.
> In the coming weeks, we'll be working on scaling up to 50k hours of data, in preparation for the v1 model.
@@ -15,6 +10,15 @@ Contrary to other TTS models, Parler-TTS is a **fully open-source** release. A
This repository contains the inference and training code for Parler-TTS. It is designed to accompany the [Data-Speech](https://github.com/huggingface/dataspeech) repository for dataset annotation.
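For context, inference with the released checkpoint follows the standard 🤗 Transformers generation pattern. Below is a minimal sketch of that usage; the checkpoint name `parler-tts/parler_tts_mini_v0.1`, the description text, and the prompt are illustrative assumptions rather than a definitive recipe:

```python
# Minimal inference sketch. Assumes the v0.1 checkpoint is published on the
# Hugging Face Hub as "parler-tts/parler_tts_mini_v0.1"; adjust if it differs.
import torch
import soundfile as sf
from transformers import AutoTokenizer
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler_tts_mini_v0.1"
).to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1")

# The description conditions the speaker style; the prompt is what gets spoken.
description = "A female speaker delivers her words expressively, with clear audio quality."
prompt = "Hey, how are you doing today?"

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Generate an audio waveform and write it out at the model's sampling rate.
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```

This mirrors the two-input design the repo is built around: a natural-language style description annotated via Data-Speech, plus the transcript to synthesise.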
training/TRAINING.md: 1 addition & 1 deletion
@@ -207,5 +207,5 @@ Thus, the script generalises to any number of training datasets.
> [!IMPORTANT]
-> Starting training a new model from scratch can easily be overwhelming, here how the training of v0.01 looked like: [logs](https://api.wandb.ai/links/ylacombe/ea449l81)
+> Starting training a new model from scratch can easily be overwhelming, so here's what training looked like for v0.1: [logs](https://api.wandb.ai/links/ylacombe/ea449l81)