packages/tasks/src/tasks/image-to-video/about.md: 25 additions & 9 deletions
@@ -14,22 +14,38 @@ Expand on the narrative of an image by generating a short video that imagines wh
Use an input image as a strong visual anchor to guide the generation of a video, ensuring that the style, characters, or objects in the video remain consistent with the source image.

-## Task Variants
+### Controllable Motion

-Image-to-video models can have variants based on the specific type of transformation or control offered.
+Image-to-video models can be used to specify the direction or intensity of motion or camera control, giving more fine-grained control over the generated animation.

-### Controllable Motion
+## Inference

-Image-to-video models can be used to specify the direction or intensity of motion, giving more fine-grained control over the generated animation.
+Running the Wan 2.1 T2V 1.3B model with diffusers:

-### Loopable Videos
+```py
+import torch
+from diffusers import AutoencoderKLWan, WanPipeline
+from diffusers.utils import export_to_video

-Models can be used to create seamlessly looping videos, perfect for backgrounds or short, endlessly watchable clips.
+# Load the VAE and pipeline (setup assumed from the standard diffusers Wan 2.1 example)
+model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
+vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
+pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
+pipe.to("cuda")
+
+prompt = "A cat walks on the grass, realistic"
+negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

-Contribute an inference snippet for image-to-video here!
+output = pipe(
+    prompt=prompt,
+    negative_prompt=negative_prompt,
+    height=480,
+    width=832,
+    num_frames=81,
+    guidance_scale=5.0
+).frames[0]
+export_to_video(output, "output.mp4", fps=15)
+```
## Useful Resources

-In this area, you can insert useful resources about how to train or use a model for this task.
+To train image-to-video LoRAs, check out [finetrainers](https://github.com/a-r-r-o-w/finetrainers) and [musubi trainer](https://github.com/kohya-ss/musubi-tuner).
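As a companion to the controllable-motion paragraph above, here is a minimal sketch of steering motion intensity at inference time. It assumes the Stable Video Diffusion image-to-video checkpoint and its `motion_bucket_id` / `noise_aug_strength` parameters in diffusers; the checkpoint name, input image path, and parameter values are illustrative choices, not taken from the diff above.

```py
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Illustrative image-to-video checkpoint (Stable Video Diffusion)
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# The conditioning image anchors the video; SVD expects roughly 1024x576 input
image = load_image("input.jpg").resize((1024, 576))

# motion_bucket_id controls how much motion is generated (higher = more movement);
# noise_aug_strength loosens how strictly the video sticks to the input image
frames = pipe(
    image,
    motion_bucket_id=180,
    noise_aug_strength=0.1,
    decode_chunk_size=8,
).frames[0]
export_to_video(frames, "controlled_motion.mp4", fps=7)
```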