diff --git a/docs/source/en/api/pipelines/text_to_video_zero.md b/docs/source/en/api/pipelines/text_to_video_zero.md
index 6f157c668785..c6bf30fed7af 100644
--- a/docs/source/en/api/pipelines/text_to_video_zero.md
+++ b/docs/source/en/api/pipelines/text_to_video_zero.md
@@ -40,6 +40,7 @@ To generate a video from prompt, run the following Python code:
 ```python
 import torch
 from diffusers import TextToVideoZeroPipeline
+import imageio
 
 model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
 pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
diff --git a/docs/source/en/optimization/coreml.md b/docs/source/en/optimization/coreml.md
index 49ff1e9c9356..d090ef0ed3ba 100644
--- a/docs/source/en/optimization/coreml.md
+++ b/docs/source/en/optimization/coreml.md
@@ -95,7 +95,7 @@ print(f"Model downloaded at {model_path}")
 Once you have downloaded a snapshot of the model, you can test it using Apple's Python script.
 
 ```shell
-python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93
+python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i ./models/coreml-stable-diffusion-v1-4_original_packages/original/packages -o --compute-unit CPU_AND_GPU --seed 93
 ```
 
 Pass the path of the downloaded checkpoint with `-i` flag to the script. `--compute-unit` indicates the hardware you want to allow for inference. It must be one of the following options: `ALL`, `CPU_AND_GPU`, `CPU_ONLY`, `CPU_AND_NE`. You may also provide an optional output path, and a seed for reproducibility.
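
Note on the first hunk: the `import imageio` line matters because the rest of the snippet in that doc writes the generated frames to disk with `imageio.mimsave`, which raises a `NameError` without the import. A minimal sketch of how the completed example reads after this patch (the prompt and save step are reconstructed from the surrounding doc, so exact lines may differ; writing `.mp4` also assumes the `imageio-ffmpeg` plugin is installed):

```python
import torch
from diffusers import TextToVideoZeroPipeline
import imageio  # the import added by this patch; used by imageio.mimsave below

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images           # list of HxWx3 float arrays in [0, 1]
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)   # fails with NameError without the added import
```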
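
Note on the second hunk: the corrected `-i` path reflects how the download step earlier in that doc lays out the snapshot; the repo's `original/packages` subdirectory is preserved under the local folder, so the `.mlpackage` files sit two levels below the directory the old command pointed at. A quick sanity check before running the pipeline (a sketch under that assumption; the path comes from the patched command):

```python
from pathlib import Path

# Confirm the Core ML packages are where the corrected -i flag now points.
packages_dir = Path("./models/coreml-stable-diffusion-v1-4_original_packages/original/packages")
assert packages_dir.is_dir(), f"expected the downloaded snapshot under {packages_dir}"
print(sorted(p.name for p in packages_dir.glob("*.mlpackage")))
```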