
Commit ef1014c ("init")

1 parent 58bf268

3 files changed: +24 −31 lines changed

docs/source/en/_toctree.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -9,11 +9,11 @@
   - local: stable_diffusion
     title: Basic performance
 
-- title: DiffusionPipeline
+- title: Pipelines
   isExpanded: false
   sections:
   - local: using-diffusers/loading
-    title: Load pipelines
+    title: DiffusionPipeline
   - local: tutorials/autopipeline
     title: AutoPipeline
   - local: using-diffusers/custom_pipeline_overview
```

docs/source/en/using-diffusers/loading.md

Lines changed: 5 additions & 28 deletions
```diff
@@ -10,25 +10,17 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
 specific language governing permissions and limitations under the License.
 -->
 
-# Load pipelines
-
 [[open-in-colab]]
 
-Diffusion systems consist of multiple components like parameterized models and schedulers that interact in complex ways. That is why we designed the [`DiffusionPipeline`] to wrap the complexity of the entire diffusion system into an easy-to-use API. At the same time, the [`DiffusionPipeline`] is entirely customizable so you can modify each component to build a diffusion system for your use case.
-
-This guide will show you how to load:
+# DiffusionPipeline
 
-- pipelines from the Hub and locally
-- different components into a pipeline
-- multiple pipelines without increasing memory usage
-- checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights
+Diffusion models consist of multiple components like UNets or diffusion transformers (DiTs), text encoders, variational autoencoders (VAEs), and schedulers. The [`DiffusionPipeline`] wraps all of these components into a single easy-to-use API without giving up the flexibility to modify its components.
 
-## Load a pipeline
+This guide will show you how to load a [`DiffusionPipeline`].
 
-> [!TIP]
-> Skip to the [DiffusionPipeline explained](#diffusionpipeline-explained) section if you're interested in an explanation about how the [`DiffusionPipeline`] class works.
+## Loading a pipeline
 
-There are two ways to load a pipeline for a task:
+There are two ways to load a pipeline.
 
 1. Load the generic [`DiffusionPipeline`] class and allow it to automatically detect the correct pipeline class from the checkpoint.
 2. Load a specific pipeline class for a specific task.
```
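The automatic detection in option 1 works because a checkpoint's `model_index.json` records its concrete pipeline class under a `_class_name` key, which the generic loader dispatches on. A minimal, dependency-free sketch of that dispatch idea (the registry, class body, and `auto_load` helper are illustrative stand-ins, not the actual diffusers internals):

```python
import json

# Illustrative registry mapping class names to pipeline classes.
PIPELINE_CLASSES = {}

def register(cls):
    """Record a pipeline class under its name so it can be looked up later."""
    PIPELINE_CLASSES[cls.__name__] = cls
    return cls

@register
class StableDiffusionPipeline:
    """Stand-in for a concrete, task-specific pipeline class."""

def auto_load(model_index_text):
    """Mimic generic loading: read _class_name, instantiate that class."""
    config = json.loads(model_index_text)
    return PIPELINE_CLASSES[config["_class_name"]]()

pipe = auto_load('{"_class_name": "StableDiffusionPipeline"}')
assert isinstance(pipe, StableDiffusionPipeline)
```

The real `DiffusionPipeline.from_pretrained` does much more (downloading, per-component loading, dtype handling), but the class lookup follows this shape.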
```diff
@@ -80,21 +72,6 @@ pipeline = StableDiffusionImg2ImgPipeline.from_pretrained("stable-diffusion-v1-5
 
 Use the Space below to gauge a pipeline's memory requirements before you download and load it to see if it runs on your hardware.
 
-<div class="block dark:hidden">
-  <iframe
-    src="https://diffusers-compute-pipeline-size.hf.space?__theme=light"
-    width="850"
-    height="1600"
-  ></iframe>
-</div>
-<div class="hidden dark:block">
-  <iframe
-    src="https://diffusers-compute-pipeline-size.hf.space?__theme=dark"
-    width="850"
-    height="1600"
-  ></iframe>
-</div>
-
 ### Specifying Component-Specific Data Types
 
 You can customize the data types for individual sub-models by passing a dictionary to the `torch_dtype` parameter. This allows you to load different components of a pipeline in different floating point precisions. For instance, if you want to load the transformer with `torch.bfloat16` and all other components with `torch.float16`, you can pass a dictionary mapping:
```
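The dictionary form of `torch_dtype` maps component names to dtypes, with a `"default"` key covering every component not named explicitly. A dependency-free sketch of that resolution rule, using strings in place of real `torch` dtypes (the `resolve_dtype` helper is ours for illustration, not a diffusers API):

```python
def resolve_dtype(component_name, torch_dtype):
    """Pick the dtype for one component from a dict-or-scalar torch_dtype."""
    if isinstance(torch_dtype, dict):
        # Explicitly named components win; everything else uses "default".
        return torch_dtype.get(component_name, torch_dtype.get("default"))
    return torch_dtype  # a single dtype applies to all components

dtypes = {"transformer": "bfloat16", "default": "float16"}
assert resolve_dtype("transformer", dtypes) == "bfloat16"  # named explicitly
assert resolve_dtype("vae", dtypes) == "float16"           # falls back
```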

src/diffusers/__init__.py

Lines changed: 17 additions & 1 deletion
```diff
@@ -36,6 +36,15 @@
     "configuration_utils": ["ConfigMixin"],
     "guiders": [],
     "hooks": [],
+    "image_processor": [
+        "VaeImageProcessor",
+        "VaeImageProcessorLDM3D",
+        "PixArtImageProcessor",
+        "IPAdapterMaskProcessor",
+    ],
+    "video_processor": [
+        "VideoProcessor",
+    ],
     "loaders": ["FromOriginalModelMixin"],
     "models": [],
     "modular_pipelines": [],
```
```diff
@@ -940,6 +949,13 @@
         ScoreSdeVePipeline,
         StableDiffusionMixin,
     )
+    from .image_processor import (
+        VaeImageProcessor,
+        VaeImageProcessorLDM3D,
+        PixArtImageProcessor,
+        IPAdapterMaskProcessor,
+    )
+    from .video_processor import VideoProcessor
     from .quantizers import DiffusersQuantizer
     from .schedulers import (
         AmusedScheduler,
```
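The eager `from .image_processor import (...)` block in this hunk sits in the branch of `diffusers/__init__.py` that runs only for static type checkers or when slow imports are requested, so normal imports stay lazy. A minimal sketch of that gating pattern (`SLOW_IMPORT` is an illustrative stand-in for the package's flag):

```python
from typing import TYPE_CHECKING

SLOW_IMPORT = False  # stand-in flag; assumed to mirror a slow-import setting

if TYPE_CHECKING or SLOW_IMPORT:
    # Seen by type checkers (or in slow-import mode), skipped on normal import.
    import json

def parse(text):
    import json  # on the normal path, the import happens lazily at call time
    return json.loads(text)

assert parse('{"lazy": true}') == {"lazy": True}
```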
```diff
@@ -1336,4 +1352,4 @@
     _import_structure,
     module_spec=__spec__,
     extra_objects={"__version__": __version__},
-)
+)
```

0 commit comments
