
Commit cca6a08: Fix typos in strings and comments
Signed-off-by: co63oc <[email protected]>
1 parent: 86294d3

24 files changed: +34 additions, -34 deletions

examples/cogvideo/train_cogvideox_image_to_video_lora.py
Lines changed: 1 addition & 1 deletion

@@ -555,7 +555,7 @@ def _load_dataset_from_local_path(self):
 
         if any(not path.is_file() for path in instance_videos):
             raise ValueError(
-                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found atleast one path that is not a valid file."
+                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found at least one path that is not a valid file."
             )
 
         return instance_prompts, instance_videos

examples/cogvideo/train_cogvideox_lora.py
Lines changed: 1 addition & 1 deletion

@@ -539,7 +539,7 @@ def _load_dataset_from_local_path(self):
 
         if any(not path.is_file() for path in instance_videos):
             raise ValueError(
-                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found atleast one path that is not a valid file."
+                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found at least one path that is not a valid file."
            )
 
         return instance_prompts, instance_videos

examples/research_projects/multi_subject_dreambooth_inpainting/README.md
Lines changed: 2 additions & 2 deletions

@@ -2,7 +2,7 @@
 
 Please note that this project is not actively maintained. However, you can open an issue and tag @gzguevara.
 
-[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like stable diffusion given just a few(3~5) images of a subject. This project consists of **two parts**. Training Stable Diffusion for inpainting requieres prompt-image-mask pairs. The Unet of inpainiting models have 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself).
+[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like stable diffusion given just a few(3~5) images of a subject. This project consists of **two parts**. Training Stable Diffusion for inpainting requires prompt-image-mask pairs. The Unet of inpainting models have 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself).
 
 **The first part**, the `multi_inpaint_dataset.ipynb` notebook, demonstrates how make a 🤗 dataset of prompt-image-mask pairs. You can, however, skip the first part and move straight to the second part with the example datasets in this project. ([cat toy dataset masked](https://huggingface.co/datasets/gzguevara/cat_toy_masked), [mr. potato head dataset masked](https://huggingface.co/datasets/gzguevara/mr_potato_head_masked))
 
@@ -73,7 +73,7 @@ accelerate launch train_multi_subject_dreambooth_inpaint.py \
 
 ## 3. Results
 
-A [![Weights & Biases](https://img.shields.io/badge/Weights%20&%20Biases-Report-blue)](https://wandb.ai/gzguevara/uncategorized/reports/Multi-Subject-Dreambooth-for-Inpainting--Vmlldzo2MzY5NDQ4?accessToken=y0nya2d7baguhbryxaikbfr1203amvn1jsmyl07vk122mrs7tnph037u1nqgse8t) is provided showing the training progress by every 50 steps. Note, the reported weights & baises run was performed on a A100 GPU with the following stetting:
+A [![Weights & Biases](https://img.shields.io/badge/Weights%20&%20Biases-Report-blue)](https://wandb.ai/gzguevara/uncategorized/reports/Multi-Subject-Dreambooth-for-Inpainting--Vmlldzo2MzY5NDQ4?accessToken=y0nya2d7baguhbryxaikbfr1203amvn1jsmyl07vk122mrs7tnph037u1nqgse8t) is provided showing the training progress by every 50 steps. Note, the reported weights & biases run was performed on a A100 GPU with the following stetting:
 
 ```bash
 accelerate launch train_multi_subject_dreambooth_inpaint.py \

src/diffusers/hooks/faster_cache.py
Lines changed: 1 addition & 1 deletion

@@ -146,7 +146,7 @@ class FasterCacheConfig:
     alpha_low_frequency: float = 1.1
     alpha_high_frequency: float = 1.1
 
-    # n as described in CFG-Cache explanation in the paper - dependant on the model
+    # n as described in CFG-Cache explanation in the paper - dependent on the model
     unconditional_batch_skip_range: int = 5
     unconditional_batch_timestep_skip_range: Tuple[int, int] = (-1, 641)
 

src/diffusers/hooks/hooks.py
Lines changed: 3 additions & 3 deletions

@@ -43,9 +43,9 @@ def initialize_hook(self, module: torch.nn.Module) -> torch.nn.Module:
         """
         return module
 
-    def deinitalize_hook(self, module: torch.nn.Module) -> torch.nn.Module:
+    def deinitialize_hook(self, module: torch.nn.Module) -> torch.nn.Module:
         r"""
-        Hook that is executed when a model is deinitalized.
+        Hook that is executed when a model is deinitialized.
 
         Args:
             module (`torch.nn.Module`):
@@ -192,7 +192,7 @@ def remove_hook(self, name: str, recurse: bool = True) -> None:
         else:
             self._fn_refs[index + 1].forward = old_forward
 
-        self._module_ref = hook.deinitalize_hook(self._module_ref)
+        self._module_ref = hook.deinitialize_hook(self._module_ref)
         del self.hooks[name]
         self._hook_order.pop(index)
         self._fn_refs.pop(index)
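The renamed pair is easiest to read as a lifecycle contract: `initialize_hook` runs when a hook is attached and may modify the module, and `deinitialize_hook` runs on removal and must undo that change. A torch-free sketch of the idea (the classes here are simplified stand-ins, not the diffusers implementation):

```python
class ModelHook:
    # Simplified stand-in for the diffusers base hook class.
    def initialize_hook(self, module):
        # Executed when the hook is attached; may wrap or modify the module.
        return module

    def deinitialize_hook(self, module):
        # Executed when the hook is removed; undoes initialize_hook's changes.
        return module


class TagHook(ModelHook):
    # Toy hook: tags the module on attach, removes the tag on removal.
    def initialize_hook(self, module):
        module.tag = "hooked"
        return module

    def deinitialize_hook(self, module):
        del module.tag
        return module


class Module:
    pass


m = Module()
hook = TagHook()
m = hook.initialize_hook(m)
print(hasattr(m, "tag"))  # True
m = hook.deinitialize_hook(m)
print(hasattr(m, "tag"))  # False
```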

src/diffusers/hooks/layerwise_casting.py
Lines changed: 2 additions & 2 deletions

@@ -60,9 +60,9 @@ def initialize_hook(self, module: torch.nn.Module):
         module.to(dtype=self.storage_dtype, non_blocking=self.non_blocking)
         return module
 
-    def deinitalize_hook(self, module: torch.nn.Module):
+    def deinitialize_hook(self, module: torch.nn.Module):
         raise NotImplementedError(
-            "LayerwiseCastingHook does not support deinitalization. A model once enabled with layerwise casting will "
+            "LayerwiseCastingHook does not support deinitialization. A model once enabled with layerwise casting will "
             "have casted its weights to a lower precision dtype for storage. Casting this back to the original dtype "
             "will lead to precision loss, which might have an impact on the model's generation quality. The model should "
             "be re-initialized and loaded in the original dtype."
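The hook refuses to deinitialize because a down-cast is lossy: most float32 values have no exact low-precision representation, so casting back cannot recover them. A stdlib-only illustration, using `struct`'s IEEE 754 half-precision format as a stand-in for the storage dtype:

```python
import struct

def roundtrip_fp16(x: float) -> float:
    # Pack a Python float into IEEE 754 half precision ("e") and unpack it,
    # mimicking a cast to a low-precision storage dtype and back.
    return struct.unpack("e", struct.pack("e", x))[0]

w = 0.1234567
print(roundtrip_fp16(w) == w)      # False: the extra mantissa bits are gone
print(roundtrip_fp16(0.5) == 0.5)  # True: exactly representable values survive
```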

src/diffusers/loaders/peft.py
Lines changed: 1 addition & 1 deletion

@@ -250,7 +250,7 @@ def load_lora_adapter(
 
         rank = {}
         for key, val in state_dict.items():
-            # Cannot figure out rank from lora layers that don't have atleast 2 dimensions.
+            # Cannot figure out rank from lora layers that don't have at least 2 dimensions.
             # Bias layers in LoRA only have a single dimension
             if "lora_B" in key and val.ndim > 1:
                 # Check out https://github.com/huggingface/peft/pull/2419 for the `^` symbol.
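The corrected comment sits next to rank-inference logic: a `lora_B` weight matrix has shape `(out_features, rank)`, so the rank can be read off its second dimension, while 1-D bias entries carry no rank information. A torch-free sketch of that idea (the helper name and shape-dict format are illustrative, not the diffusers implementation):

```python
def infer_lora_ranks(shapes):
    # shapes maps state-dict keys to weight shape tuples.
    ranks = {}
    for key, shape in shapes.items():
        # Rank cannot be inferred from entries with fewer than 2 dimensions;
        # bias entries in LoRA only have a single dimension.
        if "lora_B" in key and len(shape) > 1:
            ranks[key] = shape[1]  # lora_B is (out_features, rank)
    return ranks

shapes = {
    "attn.to_q.lora_A.weight": (4, 320),  # (rank, in_features)
    "attn.to_q.lora_B.weight": (320, 4),  # (out_features, rank)
    "attn.to_q.lora_B.bias": (320,),      # 1-D: skipped
}
print(infer_lora_ranks(shapes))  # {'attn.to_q.lora_B.weight': 4}
```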

src/diffusers/models/autoencoders/autoencoder_kl.py
Lines changed: 1 addition & 1 deletion

@@ -63,7 +63,7 @@ class AutoencoderKL(ModelMixin, ConfigMixin, FromOriginalModelMixin, PeftAdapter
             Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper.
         force_upcast (`bool`, *optional*, default to `True`):
             If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without loosing too much precision in which case
+            can be fine-tuned / trained to a lower range without losing too much precision in which case
             `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
         mid_block_add_attention (`bool`, *optional*, default to `True`):
             If enabled, the mid_block of the Encoder and Decoder will have attention blocks. If set to false, the

src/diffusers/models/autoencoders/autoencoder_kl_allegro.py
Lines changed: 1 addition & 1 deletion

@@ -715,7 +715,7 @@ class AutoencoderKLAllegro(ModelMixin, ConfigMixin):
             Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper.
         force_upcast (`bool`, default to `True`):
             If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without loosing too much precision in which case
+            can be fine-tuned / trained to a lower range without losing too much precision in which case
             `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
     """
 

src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py
Lines changed: 1 addition & 1 deletion

@@ -983,7 +983,7 @@ class AutoencoderKLCogVideoX(ModelMixin, ConfigMixin, FromOriginalModelMixin):
             Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper.
         force_upcast (`bool`, *optional*, default to `True`):
             If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without loosing too much precision in which case
+            can be fine-tuned / trained to a lower range without losing too much precision in which case
             `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
     """
 
