docs/source/en/quantization (1 file changed, +4 −2 lines)

@@ -27,7 +27,7 @@ The example below only quantizes the weights to int8.
 ```python
 from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig
 
-model_id = "black-forest-labs/Flux.1-Dev"
+model_id = "black-forest-labs/FLUX.1-dev"
 dtype = torch.bfloat16
 
 quantization_config = TorchAoConfig("int8wo")
@@ -45,7 +45,9 @@ pipe = FluxPipeline.from_pretrained(
 pipe.to("cuda")
 
 prompt = "A cat holding a sign that says hello world"
-image = pipe(prompt, num_inference_steps = 28, guidance_scale = 0.0).images[0]
+image = pipe(
+    prompt, num_inference_steps = 50, guidance_scale = 4.5, max_sequence_length = 512
+).images[0]
 image.save("output.png")
 ```
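For intuition about what the `"int8wo"` (int8 weight-only) setting in the patched example does, the core idea can be sketched in plain Python. This is a simplified per-tensor symmetric quantizer written for illustration only; it is not torchao's actual implementation, which operates on tensors and typically quantizes per channel or per group.

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: pick a scale so the largest
    # magnitude weight maps to the edge of the int8 range [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    # Recover approximate float weights from the int8 codes.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.01, 0.98]
q, scale = quantize_int8(weights)   # q → [50, -127, 1, 98]
recovered = dequantize_int8(q, scale)
```

In weight-only quantization, only the stored weights are int8; they are dequantized back to the compute dtype (bfloat16 in the example above) during the forward pass, which saves memory without changing the math of the matmuls much.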