</p>
<p align="center">
📍 Visit <a href="https://chatglm.cn/video?lang=en?fr=osm_cogvideo">QingYing</a> and the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience larger-scale commercial video generation models.

We have publicly shared the Feishu <a href="https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh">technical documentation</a> on CogVideoX fine-tuning scenarios to further improve distribution flexibility. All examples in the public documentation can be fully reproduced.

CogVideoX fine-tuning covers both SFT and LoRA fine-tuning. Based on our publicly available data processing scripts, you can more easily align specific styles in vertical scenarios. We provide guidance for ablation experiments on character image (IP) and scene style, further lowering the barrier to reproducing fine-tuning tasks (see the LoRA sketch after this section).

We look forward to creative explorations and contributions.
</p>
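
For the LoRA route, loading a fine-tuned adapter into the `diffusers` CogVideoX pipeline looks roughly like the following sketch. It assumes a diffusers release with CogVideoX LoRA support; the checkpoint path, weight file name, adapter name, and prompt are placeholders rather than values from this PR.

```python
# A minimal sketch, assuming a diffusers version with CogVideoX LoRA support.
# The LoRA path, weight file name, adapter name, and prompt are placeholders.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "path/to/lora-checkpoint",                       # placeholder: your fine-tuning output dir
    weight_name="pytorch_lora_weights.safetensors",  # placeholder weight file name
    adapter_name="custom-style",                     # placeholder adapter name
)
pipe.enable_model_cpu_offload()  # lowers peak VRAM at some speed cost

video = pipe(
    prompt="A video in the fine-tuned style",  # placeholder prompt
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(video, "lora_sample.mp4", fps=8)
```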
## Project Updates

- 🔥🔥 **News**: ```2024/10/10```: We have updated our technical report, including more training details and demos.
- 🔥🔥 **News**: ```2024/10/09```: We have publicly released the Feishu [technical documentation](https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh) for CogVideoX fine-tuning, further increasing distribution flexibility. All examples in the public documentation can be fully reproduced.
- 🔥🔥 **News**: ```2024/9/25```: The CogVideoX web demo is available on Replicate. Try the text-to-video model **CogVideoX-5B** [here](https://replicate.com/chenxwh/cogvideox-t2v) and the image-to-video model **CogVideoX-5B-I2V** [here](https://replicate.com/chenxwh/cogvideox-i2v).
- 🔥🔥 **News**: ```2024/9/19```: We have open-sourced the CogVideoX series image-to-video model **CogVideoX-5B-I2V**.
  This model can take an image as a background input and generate a video combined with prompt words, offering greater controllability.
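
As a rough illustration of that input contract, image-to-video inference with `diffusers` looks like the sketch below. It assumes a diffusers release that ships `CogVideoXImageToVideoPipeline` (roughly 0.30.x or later); the image path, prompt, and sampling parameters are placeholders.

```python
# A minimal sketch, assuming a diffusers release with CogVideoXImageToVideoPipeline.
# The image path, prompt, and sampling parameters are placeholders.
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # lowers peak VRAM at some speed cost

image = load_image("input.jpg")  # the background/reference image
video = pipe(
    prompt="A gentle breeze moves through the scene",  # placeholder prompt
    image=image,
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(video, "i2v_sample.mp4", fps=8)
```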