
Generated video quality is not up to the mark with LTX-Video 0.9.5 #11143

@nitinmukesh

Description

Is it that this model can only be used with LTXConditionPipeline, which requires an image/video condition? [The documentation at https://huggingface.co/Lightricks/LTX-Video-0.9.5 says: "LTX Video is compatible with the Diffusers Python library. It supports both text-to-video and image-to-video generation."]

Or is it an issue with the code or the model, or am I missing a parameter? Has anyone else tried this model with diffusers? (A rough sketch of the LTXConditionPipeline path is at the end of this post, for reference.)

Using the same prompt and settings, the output with 0.9.1 was good:
newgenai79/sd-diffuser-webui#8 (comment)

Here is the code I used with 0.9.5:

import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Text-to-video via LTXPipeline with the 0.9.5 checkpoint; CPU offload to limit VRAM use
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = "A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

# Same prompt and settings that produced good output with 0.9.1
video = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
Attached outputs:
output.mp4
output1.mp4
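
In case 0.9.5 is really meant to go through LTXConditionPipeline, below is roughly what the image-conditioned call would look like. This is only a sketch based on my reading of the model card, not something I have verified: it assumes a diffusers build that exposes LTXConditionPipeline and LTXVideoCondition (with an image/frame_index condition and a conditions argument on the pipeline call), and "input.png" is just a placeholder for a conditioning image.

import torch
from diffusers import LTXConditionPipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_image

pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

# Placeholder conditioning image; a real first frame would go here
image = load_image("input.png")
condition = LTXVideoCondition(image=image, frame_index=0)

# prompt and negative_prompt are the same strings as in the snippet above
video = pipe(
    conditions=[condition],
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output_conditioned.mp4", fps=24)

If plain text-to-video through LTXPipeline is supposed to work with 0.9.5, then the quality difference shown above is the actual question.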
