
Commit e276f08

Merge branch 'main' into parallel-shards-loading

2 parents: dca6388 + 38740dd

100 files changed (+1774, -4065 lines)


docs/source/en/_toctree.yml

Lines changed: 25 additions & 11 deletions
```diff
@@ -112,22 +112,24 @@
   sections:
   - local: modular_diffusers/overview
     title: Overview
-  - local: modular_diffusers/modular_pipeline
-    title: Modular Pipeline
-  - local: modular_diffusers/components_manager
-    title: Components Manager
+  - local: modular_diffusers/quickstart
+    title: Quickstart
   - local: modular_diffusers/modular_diffusers_states
-    title: Modular Diffusers States
+    title: States
   - local: modular_diffusers/pipeline_block
-    title: Pipeline Block
+    title: ModularPipelineBlocks
   - local: modular_diffusers/sequential_pipeline_blocks
-    title: Sequential Pipeline Blocks
+    title: SequentialPipelineBlocks
   - local: modular_diffusers/loop_sequential_pipeline_blocks
-    title: Loop Sequential Pipeline Blocks
+    title: LoopSequentialPipelineBlocks
   - local: modular_diffusers/auto_pipeline_blocks
-    title: Auto Pipeline Blocks
-  - local: modular_diffusers/end_to_end_guide
-    title: End-to-End Example
+    title: AutoPipelineBlocks
+  - local: modular_diffusers/modular_pipeline
+    title: ModularPipeline
+  - local: modular_diffusers/components_manager
+    title: ComponentsManager
+  - local: modular_diffusers/guiders
+    title: Guiders
 
 - title: Training
   isExpanded: false
@@ -282,6 +284,18 @@
       title: Outputs
     - local: api/quantization
       title: Quantization
+  - title: Modular
+    sections:
+    - local: api/modular_diffusers/pipeline
+      title: Pipeline
+    - local: api/modular_diffusers/pipeline_blocks
+      title: Blocks
+    - local: api/modular_diffusers/pipeline_states
+      title: States
+    - local: api/modular_diffusers/pipeline_components
+      title: Components and configs
+    - local: api/modular_diffusers/guiders
+      title: Guiders
   - title: Loaders
     sections:
     - local: api/loaders/ip_adapter
```
docs/source/en/api/modular_diffusers/guiders.md (new file)

Lines changed: 39 additions & 0 deletions

```diff
@@ -0,0 +1,39 @@
+# Guiders
+
+Guiders are components in Modular Diffusers that control how the diffusion process is guided during generation. They implement various guidance techniques to improve generation quality and control.
+
+## BaseGuidance
+
+[[autodoc]] diffusers.guiders.guider_utils.BaseGuidance
+
+## ClassifierFreeGuidance
+
+[[autodoc]] diffusers.guiders.classifier_free_guidance.ClassifierFreeGuidance
+
+## ClassifierFreeZeroStarGuidance
+
+[[autodoc]] diffusers.guiders.classifier_free_zero_star_guidance.ClassifierFreeZeroStarGuidance
+
+## SkipLayerGuidance
+
+[[autodoc]] diffusers.guiders.skip_layer_guidance.SkipLayerGuidance
+
+## SmoothedEnergyGuidance
+
+[[autodoc]] diffusers.guiders.smoothed_energy_guidance.SmoothedEnergyGuidance
+
+## PerturbedAttentionGuidance
+
+[[autodoc]] diffusers.guiders.perturbed_attention_guidance.PerturbedAttentionGuidance
+
+## AdaptiveProjectedGuidance
+
+[[autodoc]] diffusers.guiders.adaptive_projected_guidance.AdaptiveProjectedGuidance
+
+## AutoGuidance
+
+[[autodoc]] diffusers.guiders.auto_guidance.AutoGuidance
+
+## TangentialClassifierFreeGuidance
+
+[[autodoc]] diffusers.guiders.tangential_classifier_free_guidance.TangentialClassifierFreeGuidance
```
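For reviewers unfamiliar with the guider API, a minimal sketch of what these classes look like in use (not taken from this commit; the `guidance_scale` argument and the attachment step are assumptions based on standard classifier-free guidance):

```py
# Minimal sketch, assuming the import path shown in the autodoc entries above.
from diffusers.guiders.classifier_free_guidance import ClassifierFreeGuidance

# A guider is a standalone component that decides how conditional and
# unconditional predictions are combined at each denoising step.
guider = ClassifierFreeGuidance(guidance_scale=7.0)  # guidance_scale is an assumed kwarg

# The guider can then be registered on a ModularPipeline like any other
# component; see the Modular Diffusers guides added in this commit for the
# exact attachment API.
```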
docs/source/en/api/modular_diffusers/pipeline.md (new file)

Lines changed: 5 additions & 0 deletions

```diff
@@ -0,0 +1,5 @@
+# Pipeline
+
+## ModularPipeline
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.ModularPipeline
```
docs/source/en/api/modular_diffusers/pipeline_blocks.md (new file)

Lines changed: 17 additions & 0 deletions

```diff
@@ -0,0 +1,17 @@
+# Pipeline blocks
+
+## ModularPipelineBlocks
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.ModularPipelineBlocks
+
+## SequentialPipelineBlocks
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.SequentialPipelineBlocks
+
+## LoopSequentialPipelineBlocks
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.LoopSequentialPipelineBlocks
+
+## AutoPipelineBlocks
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.AutoPipelineBlocks
```
docs/source/en/api/modular_diffusers/pipeline_components.md (new file)

Lines changed: 17 additions & 0 deletions

```diff
@@ -0,0 +1,17 @@
+# Components and configs
+
+## ComponentSpec
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.ComponentSpec
+
+## ConfigSpec
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.ConfigSpec
+
+## ComponentsManager
+
+[[autodoc]] diffusers.modular_pipelines.components_manager.ComponentsManager
+
+## InsertableDict
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline_utils.InsertableDict
```
docs/source/en/api/modular_diffusers/pipeline_states.md (new file)

Lines changed: 9 additions & 0 deletions

```diff
@@ -0,0 +1,9 @@
+# Pipeline states
+
+## PipelineState
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.PipelineState
+
+## BlockState
+
+[[autodoc]] diffusers.modular_pipelines.modular_pipeline.BlockState
```

docs/source/en/api/pipelines/flux.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -25,6 +25,8 @@ Original model checkpoints for Flux can be found [here](https://huggingface.co/b
 
 Flux can be quite expensive to run on consumer hardware devices. However, you can perform a suite of optimizations to run it faster and in a more memory-friendly manner. Check out [this section](https://huggingface.co/blog/sd3#memory-optimizations-for-sd3) for more details. Additionally, Flux can benefit from quantization for memory efficiency with a trade-off in inference latency. Refer to [this blog post](https://huggingface.co/blog/quanto-diffusers) to learn more. For an exhaustive list of resources, check out [this gist](https://gist.github.com/sayakpaul/b664605caf0aa3bf8585ab109dd5ac9c).
 
+[Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.
+
 </Tip>
 
 Flux comes in the following variants:
```
docs/source/en/api/pipelines/hidream.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -18,7 +18,7 @@
 
 <Tip>
 
-Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
+[Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.
 
 </Tip>
 
```
docs/source/en/api/pipelines/ltx_video.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -88,7 +88,7 @@ export_to_video(video, "output.mp4", fps=24)
 </hfoption>
 <hfoption id="inference speed">
 
-[Compilation](../../optimization/fp16#torchcompile) is slow the first time but subsequent calls to the pipeline are faster.
+[Compilation](../../optimization/fp16#torchcompile) is slow the first time but subsequent calls to the pipeline are faster. [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.
 
 ```py
 import torch
````
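For context on the compilation sentence above (not part of this diff), a hedged sketch of compiling the LTX-Video transformer; the model id and compile options are assumptions:

```py
import torch
from diffusers import LTXPipeline

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

# The first call pays the torch.compile graph-capture cost;
# subsequent calls reuse the compiled kernels and run faster.
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
```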

docs/source/en/api/pipelines/qwenimage.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -20,7 +20,7 @@ Check out the model card [here](https://huggingface.co/Qwen/Qwen-Image) to learn
 
 <Tip>
 
-Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
+[Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.
 
 </Tip>
 
```