
Commit 90b9b42

Merge branch 'main' into benchmarking-overhaul
2 parents a28c881 + b975bce

30 files changed: +69 −64 lines

docs/source/en/api/pipelines/sana_sprint.md

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ image = pipe(
     height=832,
     width=480
 ).images[0]
-image[0].save("output.png")
+image.save("output.png")
 ```

 ## SanaSprintPipeline
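The fix above reflects that `.images[0]` already selects a single image from the pipeline output, so the result has a `.save` method directly; indexing it again would fail. A minimal sketch with stand-in classes (stubs for illustration, not the real diffusers API):

```python
# Stand-in objects mimicking the shape of a diffusers pipeline output;
# the real pipeline returns PIL images, these stubs only record saves.
class FakeImage:
    def __init__(self):
        self.saved_to = None

    def save(self, path):
        # Record the target path instead of writing a file.
        self.saved_to = path


class FakeOutput:
    def __init__(self, images):
        self.images = images


output = FakeOutput([FakeImage()])

image = output.images[0]   # already a single image, not a list
image.save("output.png")   # correct: call save on the image itself
print(image.saved_to)      # -> output.png
# image[0].save(...) would raise TypeError: 'FakeImage' object is not subscriptable
```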

docs/source/en/quantization/torchao.md

Lines changed: 2 additions & 2 deletions
@@ -91,7 +91,7 @@ The quantization methods supported are as follows:
 
 Some quantization methods are aliases (for example, `int8wo` is the commonly used shorthand for `int8_weight_only`). This allows using the quantization methods described in the torchao docs as-is, while also making it convenient to remember their shorthand notations.
 
-Refer to the official torchao documentation for a better understanding of the available quantization methods and the exhaustive list of configuration options available.
+Refer to the [official torchao documentation](https://docs.pytorch.org/ao/stable/index.html) for a better understanding of the available quantization methods and the exhaustive list of configuration options available.
 
 ## Serializing and Deserializing quantized models
 

@@ -155,5 +155,5 @@ transformer.load_state_dict(state_dict, strict=True, assign=True)
 
 ## Resources
 
-- [TorchAO Quantization API](https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md)
+- [TorchAO Quantization API](https://docs.pytorch.org/ao/stable/index.html)
 - [Diffusers-TorchAO examples](https://github.com/sayakpaul/diffusers-torchao)
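The alias behavior described in the doc change can be pictured as a lookup table consulted before dispatching to the full method name. A hedged sketch — only the `int8wo` → `int8_weight_only` pair comes from the doc; the other entries and the helper name are illustrative assumptions, not torchao's actual table:

```python
# Illustrative alias table: shorthand -> full quantization method name.
# Only int8wo -> int8_weight_only is taken from the doc; the rest are assumed.
QUANT_ALIASES = {
    "int8wo": "int8_weight_only",
    "int4wo": "int4_weight_only",
    "int8dq": "int8_dynamic_activation_int8_weight",
}


def resolve_quant_method(name: str) -> str:
    """Return the canonical method name, accepting either form."""
    return QUANT_ALIASES.get(name, name)


print(resolve_quant_method("int8wo"))            # -> int8_weight_only
print(resolve_quant_method("int8_weight_only"))  # -> int8_weight_only
```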

examples/cogvideo/train_cogvideox_image_to_video_lora.py

Lines changed: 1 addition & 1 deletion
@@ -555,7 +555,7 @@ def _load_dataset_from_local_path(self):
 
         if any(not path.is_file() for path in instance_videos):
             raise ValueError(
-                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found atleast one path that is not a valid file."
+                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found at least one path that is not a valid file."
             )
 
         return instance_prompts, instance_videos

examples/cogvideo/train_cogvideox_lora.py

Lines changed: 1 addition & 1 deletion
@@ -539,7 +539,7 @@ def _load_dataset_from_local_path(self):
 
         if any(not path.is_file() for path in instance_videos):
             raise ValueError(
-                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found atleast one path that is not a valid file."
+                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found at least one path that is not a valid file."
             )
 
         return instance_prompts, instance_videos
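Both training scripts validate the video list the same way: every entry read from the `--video_column` file must resolve to an existing file, otherwise the error whose message was fixed above is raised. A self-contained sketch of that check (the helper name and paths are illustrative):

```python
import tempfile
from pathlib import Path


def validate_video_paths(paths):
    # Mirrors the scripts' check: fail if at least one entry is not a real file.
    if any(not path.is_file() for path in paths):
        raise ValueError(
            "Expected '--video_column' to be a path to a file in `--instance_data_root` "
            "containing line-separated paths to video data but found at least one path "
            "that is not a valid file."
        )


root = Path(tempfile.mkdtemp())
good = root / "clip0.mp4"
good.touch()

validate_video_paths([good])  # passes silently
try:
    validate_video_paths([good, root / "missing.mp4"])
except ValueError as err:
    print("rejected:", err)
```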

examples/research_projects/multi_subject_dreambooth_inpainting/README.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ accelerate launch train_multi_subject_dreambooth_inpaint.py \
 
 ## 3. Results
 
-A [![Weights & Biases](https://img.shields.io/badge/Weights%20&%20Biases-Report-blue)](https://wandb.ai/gzguevara/uncategorized/reports/Multi-Subject-Dreambooth-for-Inpainting--Vmlldzo2MzY5NDQ4?accessToken=y0nya2d7baguhbryxaikbfr1203amvn1jsmyl07vk122mrs7tnph037u1nqgse8t) is provided showing the training progress by every 50 steps. Note, the reported weights & baises run was performed on a A100 GPU with the following stetting:
+A [![Weights & Biases](https://img.shields.io/badge/Weights%20&%20Biases-Report-blue)](https://wandb.ai/gzguevara/uncategorized/reports/Multi-Subject-Dreambooth-for-Inpainting--Vmlldzo2MzY5NDQ4?accessToken=y0nya2d7baguhbryxaikbfr1203amvn1jsmyl07vk122mrs7tnph037u1nqgse8t) is provided showing the training progress every 50 steps. Note, the reported Weights & Biases run was performed on an A100 GPU with the following settings:
 
 ```bash
 accelerate launch train_multi_subject_dreambooth_inpaint.py \

src/diffusers/hooks/faster_cache.py

Lines changed: 1 addition & 1 deletion
@@ -146,7 +146,7 @@ class FasterCacheConfig:
     alpha_low_frequency: float = 1.1
     alpha_high_frequency: float = 1.1
 
-    # n as described in CFG-Cache explanation in the paper - dependant on the model
+    # n as described in CFG-Cache explanation in the paper - dependent on the model
     unconditional_batch_skip_range: int = 5
     unconditional_batch_timestep_skip_range: Tuple[int, int] = (-1, 641)
 

src/diffusers/hooks/hooks.py

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ def initialize_hook(self, module: torch.nn.Module) -> torch.nn.Module:
 
     def deinitalize_hook(self, module: torch.nn.Module) -> torch.nn.Module:
         r"""
-        Hook that is executed when a model is deinitalized.
+        Hook that is executed when a model is deinitialized.
 
         Args:
             module (`torch.nn.Module`):

src/diffusers/hooks/layerwise_casting.py

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ def initialize_hook(self, module: torch.nn.Module):
 
     def deinitalize_hook(self, module: torch.nn.Module):
         raise NotImplementedError(
-            "LayerwiseCastingHook does not support deinitalization. A model once enabled with layerwise casting will "
+            "LayerwiseCastingHook does not support deinitialization. A model once enabled with layerwise casting will "
             "have casted its weights to a lower precision dtype for storage. Casting this back to the original dtype "
             "will lead to precision loss, which might have an impact on the model's generation quality. The model should "
             "be re-initialized and loaded in the original dtype."
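The error message above exists because a low-precision cast is lossy: once weights are stored in a narrower dtype, casting back cannot recover the discarded bits. A small illustration of that one-way loss using Python's built-in half-precision `struct` codec (`"e"` format); the helper name is our own:

```python
import struct


def roundtrip_half(x: float) -> float:
    # Cast float -> float16 -> float, mimicking a downcast for storage
    # followed by an attempted "deinitialization" back to full precision.
    return struct.unpack("e", struct.pack("e", x))[0]


original = 0.1234567
restored = roundtrip_half(original)
print(original, restored)  # the restored value differs: precision was lost
```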

src/diffusers/loaders/peft.py

Lines changed: 1 addition & 1 deletion
@@ -251,7 +251,7 @@ def load_lora_adapter(
 
         rank = {}
         for key, val in state_dict.items():
-            # Cannot figure out rank from lora layers that don't have atleast 2 dimensions.
+            # Cannot figure out rank from lora layers that don't have at least 2 dimensions.
             # Bias layers in LoRA only have a single dimension
             if "lora_B" in key and val.ndim > 1:
                 # Check out https://github.com/huggingface/peft/pull/2419 for the `^` symbol.
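The comment fixed above guards the rank detection: a `lora_B` weight is 2-D with the LoRA rank on its second axis, while 1-D bias entries carry no rank information and must be skipped. A sketch of that loop with stand-in tensors (a tiny stub with only `.shape` and `.ndim`, not torch; the key names and shapes are illustrative):

```python
from types import SimpleNamespace


def fake_tensor(shape):
    # Minimal stand-in for a torch tensor: only .shape and .ndim are needed here.
    return SimpleNamespace(shape=shape, ndim=len(shape))


# Illustrative LoRA state dict: lora_B weights are (out_features, rank),
# bias entries are 1-D and carry no rank information.
state_dict = {
    "layer.lora_B.weight": fake_tensor((768, 16)),
    "layer.lora_B.bias": fake_tensor((768,)),
}

rank = {}
for key, val in state_dict.items():
    # Cannot figure out rank from lora layers that don't have at least 2 dimensions.
    # Bias layers in LoRA only have a single dimension.
    if "lora_B" in key and val.ndim > 1:
        rank[key] = val.shape[1]

print(rank)  # -> {'layer.lora_B.weight': 16}
```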

src/diffusers/models/autoencoders/autoencoder_kl.py

Lines changed: 2 additions & 2 deletions
@@ -63,8 +63,8 @@ class AutoencoderKL(ModelMixin, ConfigMixin, FromOriginalModelMixin, PeftAdapter
             Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
         force_upcast (`bool`, *optional*, default to `True`):
             If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without loosing too much precision in which case
-            `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
+            can be fine-tuned / trained to a lower range without losing too much precision in which case `force_upcast`
+            can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
         mid_block_add_attention (`bool`, *optional*, default to `True`):
             If enabled, the mid_block of the Encoder and Decoder will have attention blocks. If set to false, the
             mid_block will only have resnet blocks
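The `force_upcast` behavior documented above amounts to a dtype decision: run the VAE in float32 unless the flag is disabled for a VAE known to be stable at lower precision. A toy sketch of that decision (the helper is ours, not the diffusers implementation):

```python
def maybe_upcast(pipeline_dtype: str, force_upcast: bool = True) -> str:
    # With force_upcast set (the default), the VAE runs in float32 regardless
    # of the pipeline dtype; fine-tuned VAEs such as sdxl-vae-fp16-fix that
    # are stable in fp16 can disable it and keep the lower-precision dtype.
    return "float32" if force_upcast else pipeline_dtype


print(maybe_upcast("float16"))                      # -> float32
print(maybe_upcast("float16", force_upcast=False))  # -> float16
```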
