Merged
Commits
37 commits
25033da
add; utility to check if attn_procs,norms,acts are properly documented.
sayakpaul Apr 24, 2024
9398e0f
add support listing to the workflows.
sayakpaul Apr 24, 2024
132c68b
Merge branch 'main' into feat/check-doc-listing
sayakpaul Apr 24, 2024
57ca5be
change to 2024.
sayakpaul Apr 24, 2024
8532285
Merge branch 'main' into feat/check-doc-listing
sayakpaul Apr 24, 2024
b5c9aeb
small fixes.
sayakpaul Apr 24, 2024
40128ac
Merge branch 'main' into feat/check-doc-listing
sayakpaul Apr 25, 2024
c625166
does adding detailed docstrings help?
sayakpaul Apr 25, 2024
8b58696
Merge branch 'main' into feat/check-doc-listing
sayakpaul Apr 25, 2024
80d0a7f
Merge branch 'main' into feat/check-doc-listing
sayakpaul Apr 29, 2024
d064b11
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 1, 2024
45daa98
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 2, 2024
5663ba5
fix
sayakpaul May 10, 2024
0653e2d
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 10, 2024
dac63dd
uncomment image processor check
sayakpaul May 10, 2024
900cd1c
quality
sayakpaul May 10, 2024
6dc3d19
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 14, 2024
8449186
fix, thanks to @mishig.
sayakpaul May 14, 2024
af2370b
Apply suggestions from code review
sayakpaul May 15, 2024
c4c9fc4
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 15, 2024
15b2f57
style
sayakpaul May 15, 2024
12c9ac4
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 21, 2024
f3443d0
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 22, 2024
b8b0fd1
Merge branch 'main' into feat/check-doc-listing
sayakpaul May 28, 2024
63989af
resolve conflicts.
sayakpaul Dec 8, 2024
4227392
JointAttnProcessor2_0
sayakpaul Dec 8, 2024
0034db2
fixes
sayakpaul Dec 8, 2024
eb5a8b2
resolve conflicts.
sayakpaul Dec 16, 2024
a2aa752
fixes
sayakpaul Dec 17, 2024
005a2e9
fixes
sayakpaul Dec 17, 2024
b653eaa
fixes
sayakpaul Dec 17, 2024
75136e6
fixes
sayakpaul Dec 17, 2024
7eb617a
fixes
sayakpaul Dec 17, 2024
80be186
Merge branch 'main' into feat/check-doc-listing
sayakpaul Feb 19, 2025
ef03777
Merge branch 'main' into feat/check-doc-listing
sayakpaul Feb 20, 2025
53a3361
fixes
sayakpaul Feb 20, 2025
be989a6
Update docs/source/en/api/normalization.md
sayakpaul Feb 20, 2025
1 change: 1 addition & 0 deletions .github/workflows/pr_test_peft_backend.yml
@@ -55,6 +55,7 @@ jobs:
run: |
python utils/check_copies.py
python utils/check_dummies.py
python utils/check_support_list.py
make deps_table_check_updated
- name: Check if failure
if: ${{ failure() }}
1 change: 1 addition & 0 deletions .github/workflows/pr_tests.yml
@@ -63,6 +63,7 @@ jobs:
run: |
python utils/check_copies.py
python utils/check_dummies.py
python utils/check_support_list.py
make deps_table_check_updated
- name: Check if failure
if: ${{ failure() }}
25 changes: 14 additions & 11 deletions docs/source/en/api/attnprocessor.md
@@ -20,12 +20,21 @@ An attention processor is a class for applying different types of attention mechanisms.
## AttnProcessor2_0
[[autodoc]] models.attention_processor.AttnProcessor2_0

## FusedAttnProcessor2_0
[[autodoc]] models.attention_processor.FusedAttnProcessor2_0

## XFormersAttnProcessor
[[autodoc]] models.attention_processor.XFormersAttnProcessor

## AttnAddedKVProcessor
[[autodoc]] models.attention_processor.AttnAddedKVProcessor

## AttnAddedKVProcessor2_0
[[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0

## XFormersAttnAddedKVProcessor
[[autodoc]] models.attention_processor.XFormersAttnAddedKVProcessor

## CrossFrameAttnProcessor
[[autodoc]] pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor

@@ -38,23 +47,17 @@ An attention processor is a class for applying different types of attention mechanisms.
## CustomDiffusionXFormersAttnProcessor
[[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor

## FusedAttnProcessor2_0
[[autodoc]] models.attention_processor.FusedAttnProcessor2_0

## LoRAAttnAddedKVProcessor
[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor

## LoRAXFormersAttnProcessor
[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor

## SlicedAttnProcessor
[[autodoc]] models.attention_processor.SlicedAttnProcessor

## SlicedAttnAddedKVProcessor
[[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor

## XFormersAttnProcessor
[[autodoc]] models.attention_processor.XFormersAttnProcessor
## IPAdapterAttnProcessor
[[autodoc]] models.attention_processor.IPAdapterAttnProcessor

## IPAdapterAttnProcessor2_0
[[autodoc]] models.attention_processor.IPAdapterAttnProcessor2_0

## AttnProcessorNPU
[[autodoc]] models.attention_processor.AttnProcessorNPU
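
For readers skimming this reordered list, here is a minimal sketch of how one of these processors is plugged into a model. It assumes the public `set_attn_processor` API on `UNet2DConditionModel`; constructing a default-config UNet is purely illustrative.

```python
# Minimal sketch (assumption: the public set_attn_processor API on UNet2DConditionModel).
# Building a default-config UNet here is purely illustrative.
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0, SlicedAttnProcessor

unet = UNet2DConditionModel()

# Use the PyTorch 2.0 scaled-dot-product-attention processor everywhere.
unet.set_attn_processor(AttnProcessor2_0())

# Or switch to the memory-friendly sliced variant.
unet.set_attn_processor(SlicedAttnProcessor(slice_size=2))
```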
16 changes: 16 additions & 0 deletions docs/source/en/api/normalization.md
@@ -29,3 +29,19 @@ Customized normalization layers for supporting various models in 🤗 Diffusers.
## AdaGroupNorm

[[autodoc]] models.normalization.AdaGroupNorm

## AdaLayerNormContinuous

[[autodoc]] models.normalization.AdaLayerNormContinuous

## LayerNorm

[[autodoc]] models.normalization.LayerNorm
mishig25 (Contributor) commented on May 14, 2024:
LayerNorm is the reason why doc-builder is failing to build.

I've inspected a bit.
All the other norms are classes defined in diffusers:

>>> import diffusers
>>> diffusers.models.normalization.GlobalResponseNorm
<class 'diffusers.models.normalization.GlobalResponseNorm'>

whereas LayerNorm seems to be an alias for a PyTorch-defined class (and doc-builder's autodoc does not work with PyTorch-defined classes):

>>> import diffusers
>>> diffusers.models.normalization.LayerNorm
<class 'torch.nn.modules.normalization.LayerNorm'>

LayerNorm = nn.LayerNorm

Therefore, as a simple fix, I'd suggest replacing `[[autodoc]] models.normalization.LayerNorm` with something like "You can also use LayerNorm" plus a markdown link to the PyTorch LayerNorm docs.

sayakpaul (Member, PR author) replied:
Thanks @mishig25. I think it'd be okay to remove LayerNorm from our doc because our implementation is really a special case and is already supported in the latest versions of PyTorch.

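
As a side note on the point above, here is a quick, hypothetical check (not part of this PR) for seeing which classes in `diffusers.models.normalization` are re-exported from PyTorch rather than defined in diffusers, which is what trips up doc-builder's autodoc:

```python
# Hypothetical helper, not part of this PR: report where each normalization
# class is actually defined. Re-exports of torch.nn classes show a torch module path.
import diffusers

for name in ("GlobalResponseNorm", "RMSNorm", "LayerNorm"):
    cls = getattr(diffusers.models.normalization, name)
    print(f"{name}: {cls.__module__}")

# Depending on the installed torch/diffusers versions, LayerNorm may print
# 'torch.nn.modules.normalization', matching the behavior described above.
```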

## RMSNorm

[[autodoc]] models.normalization.RMSNorm

## GlobalResponseNorm

[[autodoc]] models.normalization.GlobalResponseNorm
38 changes: 38 additions & 0 deletions src/diffusers/models/normalization.py
@@ -151,6 +151,18 @@ def forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:


class AdaLayerNormContinuous(nn.Module):
r"""
Adaptive normalization layer with a norm layer (layer_norm or rms_norm).

Args:
embedding_dim (`int`): Embedding dimension to use during projection.
conditioning_embedding_dim (`int`): Dimension of the input condition.
elementwise_affine (`bool`): Boolean flag to denote if affine transformation should be applied.
eps (`float`): Epsilon value for numerical stability.
bias (`bool`): Boolean flag to denote if bias should be used.
norm_type (`str`): Normalization layer to use. Values supported: "layer_norm", "rms_norm".
"""

def __init__(
self,
embedding_dim: int,
@@ -188,6 +200,16 @@ def forward(self, x: torch.Tensor, conditioning_embedding: torch.Tensor) -> torch.Tensor:
# Has optional bias parameter compared to torch layer norm
# TODO: replace with torch layernorm once min required torch version >= 2.1
class LayerNorm(nn.Module):
r"""
LayerNorm with an optional bias parameter.

Args:
dim (`int`): Dimensionality to use for the parameters.
eps (`float`): Epsilon value for numerical stability.
elementwise_affine (`bool`): Boolean flag to denote if affine transformation should be applied.
bias (`bool`): Boolean flag to denote if bias should be used.
"""

def __init__(self, dim, eps: float = 1e-5, elementwise_affine: bool = True, bias: bool = True):
super().__init__()

@@ -210,6 +232,15 @@ def forward(self, input):


class RMSNorm(nn.Module):
r"""
RMS Norm as introduced in https://arxiv.org/abs/1910.07467 by Zhang et al.

Args:
dim (`int`): Size of the learnable `weight` parameter (the last dimension of the input). Only effective when `elementwise_affine` is True.
eps (`float`): Small value to use when calculating the reciprocal of the square-root.
elementwise_affine (`bool`): Boolean flag to denote if affine transformation should be applied.
"""

def __init__(self, dim, eps: float, elementwise_affine: bool = True):
super().__init__()

@@ -242,6 +273,13 @@ def forward(self, hidden_states):


class GlobalResponseNorm(nn.Module):
r"""
Global response normalization as introduced in ConvNeXt-v2 (https://arxiv.org/abs/2301.00808).

Args:
dim (`int`): Size of the learnable `gamma` and `beta` parameters (the number of channels).
"""

# Taken from https://github.com/facebookresearch/ConvNeXt-V2/blob/3608f67cc1dae164790c5d0aead7bf2d73d9719b/models/utils.py#L105
def __init__(self, dim):
super().__init__()
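
To make the new RMSNorm docstring concrete, here is a small standalone sketch of the operation it describes, in plain PyTorch; the function and variable names are illustrative and this is not the diffusers RMSNorm class itself.

```python
# Standalone sketch of RMS normalization (https://arxiv.org/abs/1910.07467).
# Illustrative only; not the diffusers RMSNorm class itself.
import torch


def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Normalize by the root mean square over the last dimension,
    # then rescale with a learnable per-channel weight.
    variance = x.pow(2).mean(dim=-1, keepdim=True)
    return x * torch.rsqrt(variance + eps) * weight


hidden = torch.randn(2, 4, 8)
gamma = torch.ones(8)
print(rms_norm(hidden, gamma).shape)  # torch.Size([2, 4, 8])
```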
106 changes: 106 additions & 0 deletions utils/check_support_list.py
@@ -0,0 +1,106 @@
# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that checks that modules like attention processors, image processors, activations, and normalization layers are listed in their documentation files.

```bash
python utils/check_support_list.py
```

It has no auto-fix mode.
"""
import os
import re


# All paths are set with the intent that you should run this script from the root of the repo with the command
# python utils/check_support_list.py
REPO_PATH = "."


def check_attention_processors():
with open(os.path.join(REPO_PATH, "docs/source/en/api/attnprocessor.md"), "r") as f:
doctext = f.read()
matches = re.findall(r"\[\[autodoc\]\]\s([^\n]+)", doctext)
documented_attention_processors = [match.split(".")[-1] for match in matches]

with open(os.path.join(REPO_PATH, "src/diffusers/models/attention_processor.py"), "r") as f:
doctext = f.read()
processor_classes = re.findall(r"class\s+(\w+Processor(?:\d*_?\d*))[(:]", doctext)
processor_classes = [proc for proc in processor_classes if "LoRA" not in proc and proc != "Attention"]

for processor in processor_classes:
if processor not in documented_attention_processors:
raise ValueError(
f"{processor} should be in listed in the attention processor documentation but is not. Please update the documentation."
)


def check_image_processors():
with open(os.path.join(REPO_PATH, "docs/source/en/api/image_processor.md"), "r") as f:
doctext = f.read()
matches = re.findall(r"\[\[autodoc\]\]\s([^\n]+)", doctext)
documented_image_processors = [match.split(".")[-1] for match in matches]

with open(os.path.join(REPO_PATH, "src/diffusers/image_processor.py"), "r") as f:
doctext = f.read()
processor_classes = re.findall(r"class\s+(\w+Processor(?:\d*_?\d*))[(:]", doctext)

for processor in processor_classes:
if processor not in documented_image_processors:
raise ValueError(
f"{processor} should be in listed in the image processor documentation but is not. Please update the documentation."
)


def check_activations():
with open(os.path.join(REPO_PATH, "docs/source/en/api/activations.md"), "r") as f:
doctext = f.read()
matches = re.findall(r"\[\[autodoc\]\]\s([^\n]+)", doctext)
documented_activations = [match.split(".")[-1] for match in matches]

with open(os.path.join(REPO_PATH, "src/diffusers/models/activations.py"), "r") as f:
doctext = f.read()
activation_classes = re.findall(r"class\s+(\w+)\s*\(.*?nn\.Module.*?\):", doctext)

for activation in activation_classes:
if activation not in documented_activations:
raise ValueError(
f"{activation} should be in listed in the activations documentation but is not. Please update the documentation."
)


def check_normalizations():
with open(os.path.join(REPO_PATH, "docs/source/en/api/normalization.md"), "r") as f:
doctext = f.read()
matches = re.findall(r"\[\[autodoc\]\]\s([^\n]+)", doctext)
documented_normalizations = [match.split(".")[-1] for match in matches]

with open(os.path.join(REPO_PATH, "src/diffusers/models/normalization.py"), "r") as f:
doctext = f.read()
normalization_classes = re.findall(r"class\s+(\w+)\s*\(.*?nn\.Module.*?\):", doctext)

for norm in normalization_classes:
if norm not in documented_normalizations:
raise ValueError(
f"{norm} should be in listed in the normalizations documentation but is not. Please update the documentation."
)


if __name__ == "__main__":
check_attention_processors()
check_image_processors()
check_activations()
check_normalizations()
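
To see what the regexes above actually capture, here is a self-contained snippet run against made-up sample strings; the doc and source text are fabricated for illustration.

```python
# Self-contained illustration of the extraction logic in check_support_list.py.
# The sample_doc and sample_source strings are made up for demonstration.
import re

sample_doc = """
## AttnProcessor2_0
[[autodoc]] models.attention_processor.AttnProcessor2_0
"""

sample_source = """
class AttnProcessor2_0:
    ...

class FusedAttnProcessor2_0:
    ...
"""

documented = {m.split(".")[-1] for m in re.findall(r"\[\[autodoc\]\]\s([^\n]+)", sample_doc)}
defined = set(re.findall(r"class\s+(\w+Processor(?:\d*_?\d*))[(:]", sample_source))

# FusedAttnProcessor2_0 is defined but not documented, so the real check would raise.
print(defined - documented)  # {'FusedAttnProcessor2_0'}
```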