@@ -989,7 +989,8 @@ def __call__(
989989 The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
990990 `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
991991 true_cfg_scale (`float`, *optional*, defaults to 1.0):
992- When > 1.0 and a provided `negative_prompt`, enables true classifier-free guidance.
992+ True classifier-free guidance is enabled when `true_cfg_scale` > 1 and a
993+ `negative_prompt` is provided; `true_cfg_scale` acts as the guidance scale for it.
993994 height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
994995 The height in pixels of the generated image. This is set to 1024 by default for the best results.
995996 width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
@@ -1015,11 +1016,11 @@ def __call__(
10151016 their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
10161017 will be used.
10171018 guidance_scale (`float`, *optional*, defaults to 3.5):
1018- Guidance scale as defined in [Classifier-Free Diffusion
1019- Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2 .
1020- of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
1021- `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
1022- the text `prompt`, usually at the expense of lower image quality .
1019+ Embedded guidance scale is enabled by setting `guidance_scale` > 1. Higher `guidance_scale` encourages
1020+ the model to generate images more closely aligned with the text `prompt`, usually at the expense of lower image quality.
1021+
1022+ Guidance-distilled models approximate true classifier-free guidance for `guidance_scale` > 1. Refer to
1023+ the [paper](https://huggingface.co/papers/2210.03142) to learn more.
10231024 num_images_per_prompt (`int`, *optional*, defaults to 1):
10241025 The number of images to generate per prompt.
10251026 generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
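For context, here is a minimal usage sketch of how the two guidance knobs described in these docstrings interact. The pipeline class and checkpoint are assumptions (the diff only shows the docstring), so adapt them to the pipeline this `__call__` actually belongs to.

```python
import torch
from diffusers import FluxPipeline

# Assumed pipeline/checkpoint for illustration only.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Embedded (distilled) guidance only: `guidance_scale` > 1, no negative prompt needed.
image = pipe(
    prompt="a photo of a red fox in the snow",
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]

# True classifier-free guidance: enabled because `true_cfg_scale` > 1 *and* a
# `negative_prompt` is provided; this runs an additional negative/unconditional pass.
image_true_cfg = pipe(
    prompt="a photo of a red fox in the snow",
    negative_prompt="blurry, low quality",
    true_cfg_scale=4.0,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
```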