This repository was archived by the owner on Dec 14, 2023. It is now read-only.

Commit d09d52d

Update README.md
1 parent 066a723 commit d09d52d

File tree: 1 file changed, +13 −0 lines changed


README.md

Lines changed: 13 additions & 0 deletions
@@ -13,6 +13,19 @@
 # Text-To-Video-Finetuning
 ## Finetune ModelScope's Text To Video model using Diffusers 🧨
 
+## Important Update **2023-12-14**
+First of all, a note from me. Thank you all for your support, feedback, and journey through discovering the nascent, innate potential of video diffusion models.
+
+@damo-vilab has released a repository for finetuning all things video diffusion models, and I recommend their implementation over this repository:
+https://github.com/damo-vilab/i2vgen-xl
+
+https://github.com/ExponentialML/Text-To-Video-Finetuning/assets/59846140/55608f6a-333a-458f-b7d5-94461c5da8bb
+
+This repository will no longer be updated, but will instead be archived for researchers & builders who wish to bootstrap their projects.
+I will be leaving the issues, pull requests, and all related things for posterity.
+
+Thanks again!
+
 ### Updates
 - **2023-7-12**: You can now train a LoRA that is compatible with the [webui extension](https://github.com/kabachuha/sd-webui-text2video)! See instructions [here.](https://github.com/ExponentialML/Text-To-Video-Finetuning#training-a-lora)
 - **2023-4-17**: You can now convert your trained models from diffusers to `.ckpt` format for A111 webui. Thanks @kabachuha!
