**docs/source/en/api/pipelines/flux.md** (47 additions, 0 deletions)
@@ -309,6 +309,53 @@ image.save("output.png")
When unloading the Control LoRA weights, call `pipe.unload_lora_weights(reset_to_overwritten_params=True)` to reset the `pipe.transformer` completely back to its original form. The resultant pipeline can then be used with methods like [`DiffusionPipeline.from_pipe`]. More details about this argument are available in [this PR](https://github.com/huggingface/diffusers/pull/10397).
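A minimal sketch of that flow (the Canny Control LoRA checkpoint named here is one illustrative choice, not something this diff prescribes):

```py
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Loading a Control LoRA expands/overwrites some transformer parameters
pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")

# ... run control-guided inference ...

# Reset pipe.transformer to its original parameters so the pipeline
# can be reused, e.g. with DiffusionPipeline.from_pipe
pipe.unload_lora_weights(reset_to_overwritten_params=True)
```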
## IP-Adapter
<Tip>
Check out [IP-Adapter](../../../using-diffusers/ip_adapter) to learn more about how IP-Adapters work.
</Tip>
An IP-Adapter lets you prompt Flux with images in addition to the text prompt. This is especially useful for complex concepts that are difficult to articulate through text alone but are easy to convey with reference images.
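The snippet below is a minimal loading sketch; the IP-Adapter checkpoint, weight file, and image encoder named here are assumptions based on the publicly released XLabs weights, not part of this diff:

```py
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Assumed checkpoint, weight file, and image-encoder names -- adjust to the release you use
pipe.load_ip_adapter(
    "XLabs-AI/flux-ip-adapter",
    weight_name="ip_adapter.safetensors",
    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
)
pipe.set_ip_adapter_scale(1.0)

reference = load_image("path/to/reference.png")  # your reference image
image = pipe(prompt="wearing sunglasses", ip_adapter_image=reference).images[0]
```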
*(Figure: IP-Adapter examples with prompt "wearing sunglasses")*
## Running FP16 inference
Flux can generate high-quality images with FP16 (e.g., to accelerate inference on Turing/Volta GPUs), but it produces different outputs than FP32/BF16 because some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing the text encoders to run in FP32 removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.
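A minimal sketch of that workaround, assuming the FLUX.1-schnell checkpoint (the prompt and generation arguments are illustrative):

```py
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.float16
)

# Keep both text encoders (CLIP and T5) in FP32 so their activations aren't clipped
pipe.text_encoder.to(torch.float32)
pipe.text_encoder_2.to(torch.float32)
pipe.to("cuda")

image = pipe(
    "a tiny astronaut hatching from an egg on the moon",
    guidance_scale=0.0,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
image.save("output.png")
```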
**docs/source/en/installation.md** (34 additions, 6 deletions)
@@ -23,32 +23,60 @@ You should install 🤗 Diffusers in a [virtual environment](https://docs.python
If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.
Create a virtual environment with Python or [uv](https://docs.astral.sh/uv/), a fast Rust-based Python package and project manager (refer to [Installation](https://docs.astral.sh/uv/getting-started/installation/) for instructions on installing uv).
<hfoptions id="install">
<hfoption id="uv">
```bash
uv venv my-env
source my-env/bin/activate
```
</hfoption>
<hfoption id="Python">
```bash
python -m venv my-env
source my-env/bin/activate
```
</hfoption>
</hfoptions>
You should also install 🤗 Transformers because 🤗 Diffusers relies on its models.
<frameworkcontent>
<pt>
PyTorch only supports Python 3.8 - 3.11 on Windows. Install Diffusers with uv.
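The hunk is truncated at this point; the install command that presumably follows would be along these lines (shown as an assumption, not the verbatim docs text):

```bash
uv pip install diffusers["torch"] transformers
```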
**examples/community/README.md**

@@ -77,6 +77,7 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
| PIXART-α Controlnet pipeline | Implementation of the ControlNet model for PixArt-α and its diffusers pipeline | [PIXART-α Controlnet pipeline](#pixart-α-controlnet-pipeline) | - | [Raul Ciotescu](https://github.com/raulc0399/) |
| HunyuanDiT Differential Diffusion Pipeline | Applies [Differential Diffusion](https://github.com/exx8/differential-diffusion) to [HunyuanDiT](https://github.com/huggingface/diffusers/pull/8240). | [HunyuanDiT with Differential Diffusion](#hunyuandit-with-differential-diffusion) | [Colab](https://colab.research.google.com/drive/1v44a5fpzyr4Ffr4v2XBQ7BajzG874N4P?usp=sharing) | [Monjoy Choudhury](https://github.com/MnCSSJ4x) |
| [🪆Matryoshka Diffusion Models](https://huggingface.co/papers/2310.15111) | A diffusion process that denoises inputs at multiple resolutions jointly and uses a NestedUNet architecture where features and parameters for small scale inputs are nested within those of the large scales. See [original codebase](https://github.com/apple/ml-mdm). | [🪆Matryoshka Diffusion Models](#matryoshka-diffusion-models) | [Space](https://huggingface.co/spaces/pcuenq/mdm) [Colab](https://colab.research.google.com/gist/tolgacangoz/1f54875fc7aeaabcf284ebde64820966/matryoshka_hf.ipynb) | [M. Tolga Cangöz](https://github.com/tolgacangoz) |
| Stable Diffusion XL Attentive Eraser Pipeline | [[AAAI2025 Oral] Attentive Eraser](https://github.com/Anonym0u3/AttentiveEraser) is a novel tuning-free method that enhances object removal capabilities in pre-trained diffusion models. | [Stable Diffusion XL Attentive Eraser Pipeline](#stable-diffusion-xl-attentive-eraser-pipeline) | - | [Wenhao Sun](https://github.com/Anonym0u3) and [Benlei Cui](https://github.com/Benny079) |
To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline` with the name of one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
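For example, a minimal sketch (the base checkpoint and the `lpw_stable_diffusion` community file are illustrative choices, not part of this diff):

```py
import torch
from diffusers import DiffusionPipeline

# Any file name from diffusers/examples/community (without the .py suffix) works here
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")
```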
A Colab notebook demonstrating all results can be found [here](https://colab.research.google.com/drive/1v44a5fpzyr4Ffr4v2XBQ7BajzG874N4P?usp=sharing). Depth maps have also been added in the same Colab.
**Stable Diffusion XL Attentive Eraser Pipeline** is an advanced object removal pipeline that leverages SDXL for precise content suppression and seamless region completion. This pipeline uses **self-attention redirection guidance** to modify the model's self-attention mechanism, allowing for effective removal and inpainting across various levels of mask precision, including semantic segmentation masks, bounding boxes, and hand-drawn masks. For more detailed information, refer to the [paper](https://arxiv.org/abs/2412.12974) and the [official implementation](https://github.com/Anonym0u3/AttentiveEraser).
#### Key features
- **Tuning-Free**: No additional training is required, making it easy to integrate and use.
- **Flexible Mask Support**: Works with different types of masks for targeted object removal.
- **High-Quality Results**: Utilizes the inherent generative power of diffusion models for realistic content completion.
#### Usage example
To use the Stable Diffusion XL Attentive Eraser Pipeline, you can initialize it as follows:
```py
import torch
from diffusers import DDIMScheduler, DiffusionPipeline
from diffusers.utils import load_image
import torch.nn.functional as F
from torchvision.transforms.functional import to_tensor, gaussian_blur

dtype = torch.float16
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# The snippet is truncated in this diff; loading presumably continues along these lines.
# The checkpoint and custom_pipeline names below are assumptions, not part of this diff.
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="pipeline_stable_diffusion_xl_attentive_eraser",
    scheduler=scheduler,
    torch_dtype=dtype,
).to(device)
```