Qwen-Image from the Qwen team is an image generation foundation model in the Qwen series that achieves significant advances in complex text rendering and precise image editing. Experiments show strong general capabilities in both image generation and editing, with exceptional performance in text rendering, especially for Chinese.

Qwen-Image comes in the following variants:

| model type | model id |
|:----------:|:--------:|
| Qwen-Image | [`Qwen/Qwen-Image`](https://huggingface.co/Qwen/Qwen-Image) |
| Qwen-Image-Edit | [`Qwen/Qwen-Image-Edit`](https://huggingface.co/Qwen/Qwen-Image-Edit) |
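
For example, the Qwen-Image-Edit variant performs instruction-based editing of an existing image. The snippet below is a minimal sketch: the input image path and the edit prompt are placeholders, and the guidance and step values are illustrative defaults.

```py
import torch
from PIL import Image

from diffusers import QwenImageEditPipeline

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")

# "input.png" is a placeholder; use any RGB image you want to edit
image = Image.open("input.png").convert("RGB")
edited = pipe(
    image=image,
    prompt="change the rabbit's color to purple",
    negative_prompt=" ",
    true_cfg_scale=4.0,
    num_inference_steps=50,
    generator=torch.manual_seed(0),
).images[0]
edited.save("qwen_image_edit.png")
```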

<Tip>

[Caching](../../optimization/cache) may speed up inference by storing and reusing intermediate outputs; see the sketch after this tip.

</Tip>

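As a rough sketch, one of the supported caching techniques, pyramid attention broadcast, could be enabled on the transformer as below. This assumes the Qwen-Image transformer supports `enable_cache()` and that the pipeline exposes `current_timestep`; the skip ranges are illustrative and may need tuning for quality.

```py
import torch

from diffusers import DiffusionPipeline, PyramidAttentionBroadcastConfig

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

# Reuse attention outputs for a few consecutive denoising steps instead of
# recomputing them within the given timestep window.
config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipe.current_timestep,
)
pipe.transformer.enable_cache(config)
```
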
## LoRA for faster inference

Use a LoRA from [`lightx2v/Qwen-Image-Lightning`](https://huggingface.co/lightx2v/Qwen-Image-Lightning) to speed up inference by reducing the number of denoising steps. Refer to the code snippet below:

<details>
<summary>Code</summary>

```py
import math

import torch
from diffusers import DiffusionPipeline, FlowMatchEulerDiscreteScheduler

ckpt_id = "Qwen/Qwen-Image"

# From
# https://github.com/ModelTC/Qwen-Image-Lightning/blob/342260e8f5468d2f24d084ce04f55e101007118b/generate_with_diffusers.py#L82C9-L97C10
scheduler_config = {
    "base_image_seq_len": 256,
    "base_shift": math.log(3),  # We use shift=3 in distillation
    "invert_sigmas": False,
    "max_image_seq_len": 8192,
    "max_shift": math.log(3),  # We use shift=3 in distillation
    "num_train_timesteps": 1000,
    "shift": 1.0,
    "shift_terminal": None,  # set shift_terminal to None
    "stochastic_sampling": False,
    "time_shift_type": "exponential",
    "use_beta_sigmas": False,
    "use_dynamic_shifting": True,
    "use_exponential_sigmas": False,
    "use_karras_sigmas": False,
}
scheduler = FlowMatchEulerDiscreteScheduler.from_config(scheduler_config)
pipe = DiffusionPipeline.from_pretrained(
    ckpt_id, scheduler=scheduler, torch_dtype=torch.bfloat16
).to("cuda")

# Lightning LoRA distilled for 8-step sampling
pipe.load_lora_weights(
    "lightx2v/Qwen-Image-Lightning", weight_name="Qwen-Image-Lightning-8steps-V1.0.safetensors"
)

prompt = "a tiny astronaut hatching from an egg on the moon, Ultra HD, 4K, cinematic composition."
negative_prompt = " "
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    num_inference_steps=8,  # matches the 8-step distilled LoRA
    true_cfg_scale=1.0,
    generator=torch.manual_seed(0),
).images[0]
image.save("qwen_fewsteps.png")
```

</details>
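
Note that `true_cfg_scale=1.0` disables true classifier-free guidance, which the distilled Lightning LoRA is designed to run without; this also skips the extra negative-prompt forward pass.
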
## QwenImagePipeline

[[autodoc]] QwenImagePipeline
  - all
  - __call__

## QwenImageImg2ImgPipeline

[[autodoc]] QwenImageImg2ImgPipeline
  - all
  - __call__

## QwenImageInpaintPipeline

[[autodoc]] QwenImageInpaintPipeline
  - all
  - __call__

## QwenImageEditPipeline

[[autodoc]] QwenImageEditPipeline
  - all
  - __call__

## QwenImagePipelineOutput

[[autodoc]] pipelines.qwenimage.pipeline_output.QwenImagePipelineOutput