docs/source/en/quantization (1 file changed: +4 -2 lines)

@@ -27,7 +27,7 @@ The example below only quantizes the weights to int8.
 ```python
 from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig

-model_id = "black-forest-labs/Flux.1-Dev"
+model_id = "black-forest-labs/FLUX.1-dev"
 dtype = torch.bfloat16

 quantization_config = TorchAoConfig("int8wo")
@@ -45,7 +45,9 @@ pipe = FluxPipeline.from_pretrained(
 pipe.to("cuda")

 prompt = "A cat holding a sign that says hello world"
-image = pipe(prompt, num_inference_steps=28, guidance_scale=0.0).images[0]
+image = pipe(
+    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
+).images[0]
 image.save("output.png")
 ```
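For context, here is a minimal sketch of how the full snippet reads after this change. The diff omits the lines between the two hunks (loading the quantized transformer and assembling the pipeline), so that part is reconstructed following the usual diffusers TorchAO pattern and is an assumption rather than a copy of the doc; the `import torch` line is likewise assumed, since the snippet uses `torch.bfloat16`. The new sampling arguments fit FLUX.1-dev, which expects a nonzero guidance scale, whereas the old `guidance_scale=0.0` matched the schnell variant.

```python
# Sketch of the full example after this change. Everything between the two
# hunks is reconstructed (assumed), not taken verbatim from the doc.
import torch  # assumed import; needed for torch.bfloat16 below
from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig

model_id = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

# int8 weight-only quantization via torchao
quantization_config = TorchAoConfig("int8wo")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=dtype,
)
pipe = FluxPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=dtype,
)
pipe.to("cuda")

prompt = "A cat holding a sign that says hello world"
# FLUX.1-dev wants a nonzero guidance scale; guidance_scale=0.0 is the
# schnell-style setting the old snippet used.
image = pipe(
    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
).images[0]
image.save("output.png")
```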