Make 🤗 D🧨ffusers run on MindSpore

State-of-the-art diffusion models for image and audio generation in MindSpore. We aim to provide an interface and usage fully consistent with huggingface/diffusers, making only the changes necessary to run on MindSpore, so migration from PyTorch is seamless.

📦 Requirements

| mindspore | ascend driver | cann      |
|:---------:|:-------------:|:---------:|
| >=2.6.0   | >=24.1.RC2    | >=8.1.RC1 |

Quickstart

Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the Hub for 19000+ checkpoints):

```diff
- from diffusers import DiffusionPipeline
+ from mindone.diffusers import DiffusionPipeline
+ import mindspore

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
-    torch_dtype=torch.float16,
+    mindspore_dtype=mindspore.float16,
    use_safetensors=True
)

prompt = "An astronaut riding a green horse"

images = pipe(prompt=prompt)[0][0]
```
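The diff above amounts to a one-line import change. For a larger codebase with many `import diffusers` statements, one generic Python mechanism for redirecting imports at runtime is a `sys.modules` alias. A minimal sketch of that mechanism, using stand-in stdlib modules (the name `my_fake_diffusers` is purely illustrative, not part of mindone):

```python
import sys
import json

# Illustration only: register a replacement module under the name that the
# rest of the codebase imports. Here the stdlib json module stands in for
# the real backend, just to show the aliasing mechanism.
sys.modules["my_fake_diffusers"] = json

# Existing `import my_fake_diffusers` statements now resolve to the alias.
import my_fake_diffusers

print(my_fake_diffusers.dumps({"ok": True}))  # → {"ok": true}
```

In practice, editing the import line as shown in the diff is the clearer choice; the alias trick is only worth considering when you cannot modify the importing code.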

Officially supported mindone.diffusers examples (following huggingface/diffusers):

Third-party supported mindone.diffusers examples:

Tip

If you want to develop your own 🤗 Diffusers-style training script based on mindone.diffusers, you can refer to this guide.