Merge branch 'inplace_sum_and_remove_padding_and_better_memory_count' of github.com:bm-synth/diffusers into inplace_sum_and_remove_padding_and_better_memory_count
docs/source/en/api/pipelines/hunyuan_video.md (2 additions, 1 deletion)
@@ -49,7 +49,8 @@ The following models are available for the image-to-video pipeline:
  | Model name | Description |
  |:---|:---|
- |[`https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V`](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V)| Skywork's custom finetune of HunyuanVideo (de-distilled). Performs best at `97x544x960` resolution with `guidance_scale=1.0`, `true_cfg_scale=6.0`, and a negative prompt. |
+ |[`Skywork/SkyReels-V1-Hunyuan-I2V`](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V)| Skywork's custom finetune of HunyuanVideo (de-distilled). Performs best at `97x544x960` resolution with `guidance_scale=1.0`, `true_cfg_scale=6.0`, and a negative prompt. |
+ |[`hunyuanvideo-community/HunyuanVideo-I2V`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V)| Tencent's official HunyuanVideo I2V model. Performs best at resolutions of 480, 720, 960, 1280. A higher `shift` value is recommended when initializing the scheduler (good values are between 7 and 20). |
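
The table rows above encode concrete usage recommendations, so a short sketch of how they might be applied can help. The snippet below is not part of this PR: it loads the `hunyuanvideo-community/HunyuanVideo-I2V` checkpoint from the table with a higher scheduler `shift`, assuming the standard `HunyuanVideoImageToVideoPipeline` and `FlowMatchEulerDiscreteScheduler` APIs; the input image path, prompt, frame count, and output resolution are placeholder choices.

```python
# Minimal sketch (not part of this PR): HunyuanVideo I2V with a higher scheduler shift.
# Model ID and the shift recommendation come from the table above; the prompt, input
# image, frame count, and resolution below are placeholders.
import torch
from diffusers import FlowMatchEulerDiscreteScheduler, HunyuanVideoImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = HunyuanVideoImageToVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-I2V", torch_dtype=torch.bfloat16
)
# The table recommends a higher shift (roughly 7-20) for this checkpoint.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config, shift=10.0)
pipe.vae.enable_tiling()
pipe.to("cuda")

image = load_image("input_frame.png")  # placeholder input image
frames = pipe(
    image=image,
    prompt="A cat walks on the grass, realistic style.",  # placeholder prompt
    height=720,
    width=1280,
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "output.mp4", fps=15)
```

For the SkyReels checkpoint in the first row, the table instead recommends `guidance_scale=1.0`, `true_cfg_scale=6.0`, a negative prompt, and the `97x544x960` resolution.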
docs/source/en/conceptual/evaluation.md (5 additions, 0 deletions)
@@ -16,6 +16,11 @@ specific language governing permissions and limitations under the License.
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>

+ > [!TIP]
+ > This document has now grown outdated given the emergence of existing evaluation frameworks for diffusion models for image generation. Please check
+ > out works like [HEIM](https://crfm.stanford.edu/helm/heim/latest/), [T2I-Compbench](https://arxiv.org/abs/2307.06350),
+ > [GenEval](https://arxiv.org/abs/2310.11513).
+
  Evaluation of generative models like [Stable Diffusion](https://huggingface.co/docs/diffusers/stable_diffusion) is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other?

  Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision.