`docs/source/en/api/pipelines/flux.md`:
**Note:** `black-forest-labs/Flux.1-Depth-dev` is _not_ a ControlNet model. [`ControlNetModel`] models are a separate component from the UNet/Transformer whose residuals are added to the underlying model's hidden states. Depth Control is an alternative architecture that achieves effectively the same result as a ControlNet model by channel-wise concatenation of the input control condition, with the transformer trained to follow the condition as closely as possible.
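To make the architectural difference concrete, here is a toy tensor sketch (illustrative only; the shapes and the `tanh` stand-in branch are invented, not the actual Flux or ControlNet code):

```python
import torch

hidden = torch.randn(1, 16, 32, 32)     # toy latent from the base model
condition = torch.randn(1, 16, 32, 32)  # toy depth-map condition latent

# ControlNet-style: a separate network produces residuals that are
# added to the base model's hidden states; channel count is unchanged.
controlnet_residual = torch.tanh(condition)      # stand-in for the ControlNet branch
controlnet_out = hidden + controlnet_residual

# Depth-Control-style: the condition is concatenated channel-wise,
# so the model's input layer must accept twice the channels.
concat_in = torch.cat([hidden, condition], dim=1)

print(controlnet_out.shape)  # torch.Size([1, 16, 32, 32])
print(concat_in.shape)       # torch.Size([1, 32, 32, 32])
```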
This pipeline uses the Reference. Refer to the [Stable Diffusion Reference](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#stable-diffusion-reference) section for more information.

```py
import torch
# from diffusers import DiffusionPipeline
from diffusers.utils import load_image
from diffusers.schedulers import UniPCMultistepScheduler
```
This pipeline uses Reference Control with ControlNet. Refer to the [Stable Diffusion ControlNet Reference](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#stable-diffusion-controlnet-reference) and [Stable Diffusion XL Reference](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#stable-diffusion-xl-reference) sections for more information.
```py
from diffusers import ControlNetModel, AutoencoderKL
from diffusers.schedulers import UniPCMultistepScheduler
from diffusers.utils import load_image
import numpy as np
import torch

import cv2
from PIL import Image

from .stable_diffusion_xl_controlnet_reference import StableDiffusionXLControlNetReferencePipeline
```
FABRIC is a training-free approach applicable to a wide range of popular diffusion models, which exploits
```
best quality, 3persons in garden, a boy blue shirt BREAK
best quality, 3persons in garden, an old man red suit
```
### Use base prompt

You can use a base prompt to apply the prompt to all areas. You can set a base prompt by adding `ADDBASE` at the end. Base prompts can also be combined with common prompts, but the base prompt must be specified first.

```
2d animation style ADDBASE
masterpiece, high quality ADDCOMM
(blue sky)++ BREAK
green hair twintail BREAK
book shelf BREAK
messy desk BREAK
orange++ dress and sofa
```
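The keyword layout above can be decomposed mechanically; here is a minimal stand-alone parser sketch of the `ADDBASE` / `ADDCOMM` / `BREAK` keywords (an illustration, not the Regional Prompter's actual code):

```python
def parse_regional_prompt(prompt: str) -> dict:
    """Split a Regional Prompter prompt into base, common, and region parts."""
    base = common = None
    if "ADDBASE" in prompt:
        base, prompt = prompt.split("ADDBASE", 1)   # the base prompt must come first
    if "ADDCOMM" in prompt:
        common, prompt = prompt.split("ADDCOMM", 1)
    regions = [p.strip() for p in prompt.split("BREAK")]
    return {
        "base": base.strip() if base else None,
        "common": common.strip() if common else None,
        "regions": regions,
    }

parts = parse_regional_prompt(
    "2d animation style ADDBASE masterpiece, high quality ADDCOMM "
    "(blue sky)++ BREAK green hair twintail BREAK book shelf"
)
print(parts["base"])     # 2d animation style
print(parts["regions"])  # ['(blue sky)++', 'green hair twintail', 'book shelf']
```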
### Negative prompt
Negative prompts are equally effective across all regions, but it is possible to set region-specific prompts for negative prompts as well. The number of BREAKs must be the same as the number of prompts. If the number of prompts does not match, the negative prompts will be used without being divided into regions.
- `save_mask`: In `Prompt` mode, choose whether to output the generated mask along with the image. The default is `False`.
- `base_ratio`: Used with `ADDBASE`. Sets the ratio of the base prompt; if `base_ratio` is set to 0.2, the resulting image will consist of `20% * BASE_PROMPT + 80% * REGION_PROMPT`.
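As an illustration of the `base_ratio` weighting described above, a toy linear blend (the embedding shapes are stand-ins, not the pipeline's actual internals):

```python
import numpy as np

def blend_with_base(base_emb: np.ndarray, region_emb: np.ndarray, base_ratio: float) -> np.ndarray:
    """Linear blend: base_ratio * base + (1 - base_ratio) * region."""
    return base_ratio * base_emb + (1.0 - base_ratio) * region_emb

base = np.ones((2, 4))     # stand-in for a base-prompt embedding
region = np.zeros((2, 4))  # stand-in for a region-prompt embedding
mixed = blend_with_base(base, region, base_ratio=0.2)
print(mixed[0, 0])  # 0.2 -> 20% base + 80% region
```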
The Pipeline supports `compel` syntax. Input prompts using the `compel` structure will be automatically applied and processed.
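For intuition about the `compel` weighting seen in prompts like `(blue sky)++`, here is a toy parser for the trailing `+`/`-` syntax (assuming compel's documented convention of 1.1x per `+` and 0.9x per `-`; this is an illustration, not the library's parser):

```python
import re

def compel_weight(fragment: str) -> tuple:
    """Strip trailing +/- from an optionally parenthesised fragment, return (text, weight)."""
    match = re.fullmatch(r"\(?([^()+-]*)\)?([+-]*)", fragment.strip())
    text, signs = match.group(1), match.group(2)
    weight = 1.0
    for s in signs:
        weight *= 1.1 if s == "+" else 0.9  # each '+' boosts, each '-' attenuates
    return text.strip(), round(weight, 4)

print(compel_weight("(blue sky)++"))  # ('blue sky', 1.21)
```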
The folder `examples/pixart` also contains a script that can be used to train new models. See the script `train_controlnet_hf_diffusers.sh` for how to start the training.