Commit 70b816a

Merge branch 'main' into torch-main-dep
2 parents: c46cefb + 5e181ed

File tree: 482 files changed, +12126 −1955 lines


.github/workflows/nightly_tests.yml

Lines changed: 3 additions & 0 deletions

@@ -340,6 +340,9 @@ jobs:
           - backend: "optimum_quanto"
             test_location: "quanto"
             additional_deps: []
+          - backend: "nvidia_modelopt"
+            test_location: "modelopt"
+            additional_deps: []
     runs-on:
       group: aws-g6e-xlarge-plus
     container:
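The matrix entry above registers the new `nvidia_modelopt` backend alongside `optimum_quanto`. As an illustration of how such a matrix include expands into per-backend jobs, here is a minimal Python sketch; the job names and pytest paths are hypothetical, not taken from the workflow itself.

```python
# Hypothetical sketch of expanding a CI backend matrix (like the one above)
# into per-backend job configurations. Names and paths are illustrative.
MATRIX = [
    {"backend": "optimum_quanto", "test_location": "quanto", "additional_deps": []},
    {"backend": "nvidia_modelopt", "test_location": "modelopt", "additional_deps": []},
]

def expand_jobs(matrix):
    """Turn each matrix entry into a job dict with a name and a test command."""
    jobs = []
    for entry in matrix:
        jobs.append({
            "name": f"test-{entry['backend']}",
            "command": f"pytest tests/quantization/{entry['test_location']}",
            "extra_deps": entry["additional_deps"],
        })
    return jobs

print([job["name"] for job in expand_jobs(MATRIX)])
```

Adding a backend to the matrix then only requires one new entry; the runner group (`aws-g6e-xlarge-plus`) is shared by every expanded job.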

README.md

Lines changed: 1 addition & 9 deletions

@@ -37,7 +37,7 @@ limitations under the License.
 
 ## Installation
 
-We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/) and [Flax](https://flax.readthedocs.io/en/latest/#installation), please refer to their official documentation.
+We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/), please refer to their official documentation.
 
 ### PyTorch
 
@@ -53,14 +53,6 @@ With `conda` (maintained by the community):
 conda install -c conda-forge diffusers
 ```
 
-### Flax
-
-With `pip` (official package):
-
-```bash
-pip install --upgrade diffusers[flax]
-```
-
 ### Apple Silicon (M1/M2) support
 
 Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggingface.co/docs/diffusers/optimization/mps) guide.

docs/source/en/_toctree.yml

Lines changed: 8 additions & 12 deletions

@@ -21,15 +21,17 @@
   - local: using-diffusers/callback
     title: Pipeline callbacks
   - local: using-diffusers/reusing_seeds
-    title: Reproducible pipelines
+    title: Reproducibility
   - local: using-diffusers/schedulers
     title: Load schedulers and models
+  - local: using-diffusers/models
+    title: Models
   - local: using-diffusers/scheduler_features
     title: Scheduler features
   - local: using-diffusers/other-formats
     title: Model files and layouts
   - local: using-diffusers/push_to_hub
-    title: Push files to the Hub
+    title: Sharing pipelines and models
 
 - title: Adapters
   isExpanded: false
@@ -58,14 +60,6 @@
     title: Batch inference
   - local: training/distributed_inference
     title: Distributed inference
-  - local: using-diffusers/scheduler_features
-    title: Scheduler features
-  - local: using-diffusers/callback
-    title: Pipeline callbacks
-  - local: using-diffusers/reusing_seeds
-    title: Reproducible pipelines
-  - local: using-diffusers/image_quality
-    title: Controlling image quality
 
 - title: Inference optimization
   isExpanded: false
@@ -94,6 +88,8 @@
     title: xDiT
   - local: optimization/para_attn
     title: ParaAttention
+  - local: using-diffusers/image_quality
+    title: FreeU
 
 - title: Hybrid Inference
   isExpanded: false
@@ -190,12 +186,12 @@
     title: torchao
   - local: quantization/quanto
     title: quanto
+  - local: quantization/modelopt
+    title: NVIDIA ModelOpt
 
 - title: Model accelerators and hardware
   isExpanded: false
   sections:
-  - local: using-diffusers/stable_diffusion_jax_how_to
-    title: JAX/Flax
   - local: optimization/onnx
     title: ONNX
   - local: optimization/open_vino
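Several of the removed entries ("Scheduler features", "Pipeline callbacks", "Reproducible pipelines") were duplicates of pages already listed in an earlier section of the toctree. A small sketch of the kind of check that catches such duplicates, assuming entries are parsed into dicts with `local` and `title` keys (a hypothetical helper, not part of the actual docs build):

```python
def validate_toctree(entries):
    """Verify each toctree item has 'local' and 'title', and no page appears twice."""
    seen = set()
    for entry in entries:
        if "local" not in entry or "title" not in entry:
            raise ValueError(f"incomplete entry: {entry!r}")
        if entry["local"] in seen:
            raise ValueError(f"duplicate page: {entry['local']}")
        seen.add(entry["local"])
    return True
```

Run over the flattened list of `local` pages, this would have flagged `using-diffusers/scheduler_features` appearing under both "Using Diffusers" and the training section before this commit deduplicated them.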

docs/source/en/api/image_processor.md

Lines changed: 6 additions & 0 deletions

@@ -20,6 +20,12 @@ All pipelines with [`VaeImageProcessor`] accept PIL Image, PyTorch tensor, or Nu
 
 [[autodoc]] image_processor.VaeImageProcessor
 
+## InpaintProcessor
+
+The [`InpaintProcessor`] accepts `mask` and `image` inputs and processes them together. Optionally, it can accept `padding_mask_crop` and apply a mask overlay.
+
+[[autodoc]] image_processor.InpaintProcessor
+
 ## VaeImageProcessorLDM3D
 
 The [`VaeImageProcessorLDM3D`] accepts RGB and depth inputs and returns RGB and depth outputs.
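The documented behavior — combining a mask with an image and optionally applying a mask overlay — can be illustrated with a toy compositing function. This is a hypothetical sketch on nested lists of pixel values, not the actual `InpaintProcessor` implementation:

```python
def apply_mask_overlay(original, inpainted, mask):
    """Composite two images: keep the original pixel where the mask is 0,
    take the inpainted pixel where the mask is 1."""
    return [
        [ip if m else op for op, ip, m in zip(orow, irow, mrow)]
        for orow, irow, mrow in zip(original, inpainted, mask)
    ]

# Only the masked region (mask == 1) is replaced by generated content.
result = apply_mask_overlay(
    [[10, 20], [30, 40]],   # original image
    [[0, 0], [0, 0]],       # inpainted output
    [[0, 1], [1, 0]],       # binary mask
)
```

The overlay step is what keeps unmasked pixels bit-identical to the input image, rather than subtly altered by the VAE round trip.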

docs/source/en/api/models/autoencoderkl.md

Lines changed: 0 additions & 12 deletions

@@ -44,15 +44,3 @@ model = AutoencoderKL.from_single_file(url)
 ## DecoderOutput
 
 [[autodoc]] models.autoencoders.vae.DecoderOutput
-
-## FlaxAutoencoderKL
-
-[[autodoc]] FlaxAutoencoderKL
-
-## FlaxAutoencoderKLOutput
-
-[[autodoc]] models.vae_flax.FlaxAutoencoderKLOutput
-
-## FlaxDecoderOutput
-
-[[autodoc]] models.vae_flax.FlaxDecoderOutput

docs/source/en/api/models/controlnet.md

Lines changed: 0 additions & 8 deletions

@@ -40,11 +40,3 @@ pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=contro
 ## ControlNetOutput
 
 [[autodoc]] models.controlnets.controlnet.ControlNetOutput
-
-## FlaxControlNetModel
-
-[[autodoc]] FlaxControlNetModel
-
-## FlaxControlNetOutput
-
-[[autodoc]] models.controlnets.controlnet_flax.FlaxControlNetOutput

docs/source/en/api/models/overview.md

Lines changed: 0 additions & 4 deletions

@@ -19,10 +19,6 @@ All models are built from the base [`ModelMixin`] class which is a [`torch.nn.Mo
 ## ModelMixin
 [[autodoc]] ModelMixin
 
-## FlaxModelMixin
-
-[[autodoc]] FlaxModelMixin
-
 ## PushToHubMixin
 
 [[autodoc]] utils.PushToHubMixin

docs/source/en/api/models/unet2d-cond.md

Lines changed: 0 additions & 6 deletions

@@ -23,9 +23,3 @@ The abstract from the paper is:
 
 ## UNet2DConditionOutput
 [[autodoc]] models.unets.unet_2d_condition.UNet2DConditionOutput
-
-## FlaxUNet2DConditionModel
-[[autodoc]] models.unets.unet_2d_condition_flax.FlaxUNet2DConditionModel
-
-## FlaxUNet2DConditionOutput
-[[autodoc]] models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput

docs/source/en/api/outputs.md

Lines changed: 0 additions & 4 deletions

@@ -54,10 +54,6 @@ To check a specific pipeline or model output, refer to its corresponding API doc
 
 [[autodoc]] pipelines.ImagePipelineOutput
 
-## FlaxImagePipelineOutput
-
-[[autodoc]] pipelines.pipeline_flax_utils.FlaxImagePipelineOutput
-
 ## AudioPipelineOutput
 
 [[autodoc]] pipelines.AudioPipelineOutput

docs/source/en/api/pipelines/cogvideox.md

Lines changed: 1 addition & 1 deletion

@@ -50,7 +50,7 @@ from diffusers.utils import export_to_video
 pipeline_quant_config = PipelineQuantizationConfig(
     quant_backend="torchao",
     quant_kwargs={"quant_type": "int8wo"},
-    components_to_quantize=["transformer"]
+    components_to_quantize="transformer"
 )
 
 # fp8 layerwise weight-casting
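This change passes `components_to_quantize` as a bare string rather than a one-element list, which suggests the config accepts either form. A hypothetical normalization helper (illustrative only, not the actual diffusers implementation) shows how both spellings can map to the same internal list:

```python
def normalize_components(components):
    """Accept a single component name or a list of names; always return a list.

    Hypothetical sketch of string-or-list argument handling, as suggested by
    the components_to_quantize change above.
    """
    if isinstance(components, str):
        return [components]
    return list(components)
```

With such normalization, `components_to_quantize="transformer"` and `components_to_quantize=["transformer"]` behave identically downstream, so docs can use the shorter single-string form.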

0 commit comments
