
Commit d51d9a9

feat: add trainer link
1 parent 85549a6 · commit d51d9a9

1 file changed: +2 -1 lines changed

README.md

Lines changed: 2 additions & 1 deletion
@@ -1,7 +1,7 @@
 <img src="./LOGO.png"></img>
 
 Unconditional audio generation using diffusion models, in PyTorch. The goal of this repository is to explore different architectures and diffusion models to generate audio (speech and music) directly from/to the waveform.
-Progress will be documented in the [experiments](#experiments) section.
+Progress will be documented in the [experiments](#experiments) section. You can use the [`audio-diffusion-pytorch-trainer`](https://github.com/archinetai/audio-diffusion-pytorch-trainer) to run your own experiments – please share your findings in the [discussions](https://github.com/archinetai/audio-diffusion-pytorch/discussions) page!
 
 ## Install
 
@@ -143,6 +143,7 @@ y_long = composer(y, keep_start=True) # [1, 1, 98304]
 - [x] Add dynamic thresholding.
 - [x] Add (variational) autoencoder option to compress audio before diffusion.
 - [x] Fix inpainting and make it work with ADPM2 sampler.
+- [x] Add trainer with experiments.
 
 ## Appreciation
 
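For context beyond this commit: the README paragraph in the diff describes generating audio directly from/to the waveform. A minimal usage sketch is shown below; it assumes the package exposes an `AudioDiffusionModel` class with an `in_channels` argument and a `sample(noise, num_steps=...)` method, and exact names or signatures may differ between versions of `audio-diffusion-pytorch`.

```py
# Hypothetical usage sketch; class and method names are assumptions based on
# the audio-diffusion-pytorch README of this era and may differ by version.
import torch
from audio_diffusion_pytorch import AudioDiffusionModel

model = AudioDiffusionModel(in_channels=1)  # mono waveforms

# Training step: feed raw waveforms, get a diffusion loss back.
x = torch.randn(2, 1, 2 ** 18)  # [batch, channels, samples]
loss = model(x)
loss.backward()  # repeat over a dataset in a real training loop

# Sampling: start from noise of the desired length and denoise it.
noise = torch.randn(2, 1, 2 ** 18)
sampled = model.sample(noise=noise, num_steps=50)  # [2, 1, 2 ** 18]
```

Per the README addition above, the linked `audio-diffusion-pytorch-trainer` repository is intended for running experiments like this end to end, with findings shared on the discussions page.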