`docs/source/en/using-diffusers/modular.md` (35 additions, 8 deletions)
# Modular Diffusers

Modular Diffusers is a unified pipeline that simplifies how you work with diffusion models. There are two main advantages of using modular Diffusers:

* Avoid rewriting an entire pipeline from scratch. Reuse existing blocks and only create new blocks for the functionalities you need.
* Flexibility. Compose pipeline blocks for one workflow and mix and match them for another workflow where a specific block works better.

The example below composes a pipeline with an [IP-Adapter](./loading_adapters#ip-adapter) to enable image prompting.

Create a [`ComponentsManager`] to manage the components (text encoders, UNets, VAE, etc.) in the pipeline. Add the [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) weights with [`add_from_pretrained`], and load the image encoder and feature extractor for the IP-Adapter with [`add`].

> [!TIP]
> Reduce memory usage by automatically offloading unused components to the CPU and loading them back on the GPU when they're needed.
```py
import torch
from transformers import CLIPVisionModelWithProjection, CLIPImageProcessor
from diffusers import ModularPipeline, StableDiffusionXLAutoPipeline
from diffusers.pipelines.components_manager import ComponentsManager
```
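A minimal sketch of this setup, continuing from the imports above. The IP-Adapter image encoder repository (`h94/IP-Adapter`), the exact `add_from_pretrained`/`add` signatures, and the `enable_auto_cpu_offload` helper are assumptions, not confirmed by this page:

```py
# Sketch only: repository names, argument order, and the offloading helper
# below are assumptions; adjust them to your diffusers version.
components = ComponentsManager()

# Register the Stable Diffusion XL weights.
components.add_from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Register the IP-Adapter image encoder and feature extractor individually.
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder", torch_dtype=torch.float16
)
feature_extractor = CLIPImageProcessor()
components.add("image_encoder", image_encoder)
components.add("feature_extractor", feature_extractor)

# Optionally offload unused components to the CPU (see the tip above);
# the method name here is an assumption.
components.enable_auto_cpu_offload(device="cuda")
```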
Use [`from_block`] to load the [`StableDiffusionXLAutoPipeline`] block into [`ModularPipeline`], and use [`update_states`] to update it with the components in [`ComponentsManager`].
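A short sketch of these two steps. Exactly how [`ComponentsManager`] hands its registered components to [`update_states`] (here, an assumed `components` dict attribute) may differ in your version:

```py
# Build the pipeline from the auto block.
pipe = ModularPipeline.from_block(StableDiffusionXLAutoPipeline())

# Populate the pipeline with the components registered above.
# The `.components` dict attribute is an assumption.
pipe.update_states(**components.components)
```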
[`ModularPipeline`] automatically adapts to your input (text, image, mask image, IP-Adapter, etc.). You don't need to choose a specific pipeline for a task.
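For example, the same pipeline could be called with a text prompt alone or with an additional IP-Adapter image. The argument names below (`prompt`, `ip_adapter_image`) and the `.images` output attribute follow the usual Diffusers conventions and are assumptions for the modular API:

```py
from diffusers.utils import load_image

# Text-only prompt.
image = pipe(prompt="an astronaut riding a horse on Mars").images[0]

# Text prompt plus an IP-Adapter image prompt; replace the placeholder path
# with your own reference image.
style_image = load_image("path/to/style_reference.png")
image = pipe(
    prompt="an astronaut riding a horse on Mars",
    ip_adapter_image=style_image,
).images[0]
```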