Every model has a specific pipeline subclass that inherits from [`DiffusionPipeline`]. A subclass usually has a narrow focus and is task-specific. See the table below for an example.
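| pipeline | task |
|---|---|
| [`StableDiffusionPipeline`] | text-to-image |
| [`StableDiffusionImg2ImgPipeline`] | image-to-image |
| [`StableDiffusionInpaintPipeline`] | inpainting |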
The [`~QwenImagePipeline.from_pretrained`] method won't download files from the Hub when it detects a local path. But this also means it won't download and cache any updates that have been made to the model.
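For example, a checkpoint saved on disk can be loaded by passing its path directly; the directory below is a hypothetical placeholder.

```py
from diffusers import QwenImagePipeline

# a local path skips the Hub entirely, so no update check or download is performed
pipeline = QwenImagePipeline.from_pretrained("./path/to/local/qwen-image")
```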
The `device_map` argument determines where an individual model or pipeline is placed on an accelerator like a GPU. It is especially helpful when multiple GPUs are available.
Diffusers currently provides three options for `device_map`: `"cuda"`, `"balanced"`, and `"auto"`. Refer to the table below to compare the three placement strategies.
| parameter | description |
|---|---|
|`"cuda"`| places the model or pipeline on a CUDA device |
|`"balanced"`| evenly distributes the model or pipeline across all GPUs |
|`"auto"`| distributes the model or pipeline across devices, from fastest to slowest |
Use the `max_memory` argument in [`~DiffusionPipeline.from_pretrained`] to allocate a maximum amount of memory to use on each device. By default, Diffusers uses the maximum amount available.
<hfoptions id="device_map">
<hfoption id="pipeline">
```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
```
</hfoption>
<hfoption id="individual model">
```py
import torch
from diffusers import AutoModel

max_memory = {0: "16GB", 1: "16GB"}
transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    max_memory=max_memory,
)
```
</hfoption>
</hfoptions>
The `hf_device_map` attribute allows you to access and view the `device_map`.
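For example, assuming the pipeline above was loaded with a `device_map`; the exact mapping depends on the model and the available devices.

```py
print(pipeline.hf_device_map)
# example output: {'transformer': 0, 'text_encoder': 0, 'text_encoder_2': 0, 'vae': 0}
```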
Reset a pipeline's `device_map` with the [`~DiffusionPipeline.reset_device_map`] method. This is necessary if you want to use methods such as `.to()`, [`~DiffusionPipeline.enable_sequential_cpu_offload`], and [`~DiffusionPipeline.enable_model_cpu_offload`].
```py
pipeline.reset_device_map()
```
## Parallel loading
Large models are often [sharded](../training/distributed_inference#model-sharding) into smaller files so that they are easier to load. Diffusers supports loading shards in parallel to speed up the loading process.
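A minimal sketch of opting in, assuming the `HF_ENABLE_PARALLEL_LOADING` environment variable is available in your Diffusers version; set it before loading the pipeline.

```py
import os

# opt in to loading checkpoint shards in parallel
os.environ["HF_ENABLE_PARALLEL_LOADING"] = "yes"

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
```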
> Pipelines created by [`~DiffusionPipeline.from_pipe`] share the same models and *state*. Modifying the state of a model in one pipeline affects all the other pipelines that share the same model.
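For example, a minimal sketch of sharing models between pipelines with [`~DiffusionPipeline.from_pipe`]; the checkpoint and the SAG pipeline pairing are illustrative assumptions.

```py
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionSAGPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# reuses the models already in memory instead of loading them a second time
pipeline_sag = StableDiffusionSAGPipeline.from_pipe(pipeline)
```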
Some methods may not work correctly on pipelines created with [`~DiffusionPipeline.from_pipe`]. For example, [`~DiffusionPipeline.enable_model_cpu_offload`] relies on a unique model execution order, which may differ in the new pipeline. To ensure proper functionality, reapply these methods on the new pipeline.
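For example, reapply offloading on the new pipeline (continuing the hypothetical `pipeline_sag` sketch above).

```py
pipeline_sag.enable_model_cpu_offload()
```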
## Safety checker
Diffusers provides a [safety checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) for older Stable Diffusion models to prevent generating harmful content. It screens the generated output against a set of hardcoded harmful concepts.
If you want to disable the safety checker, pass `safety_checker=None` in [`~DiffusionPipeline.from_pretrained`] as shown below.
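A minimal sketch, assuming an older Stable Diffusion checkpoint that bundles a safety checker; the checkpoint name is illustrative.

```py
from diffusers import DiffusionPipeline

# explicitly disable the bundled safety checker
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", safety_checker=None
)
```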
Disabling it logs a warning like the one below.

```
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
```