[Utils] add utilities for checking if certain utilities are properly documented #7763
Conversation
@stevhliu any idea why the doc build fails?

I'm not sure, the formatting all looks correct to me, but for some reason the build fails.

@mishig25 a gentle ping.

let me check
docs/source/en/api/normalization.md (outdated)
```md
## LayerNorm

[[autodoc]] models.normalization.LayerNorm
```
LayerNorm is the reason why doc-builder is not building. I've inspected a bit.

All the other norms are diffusers-defined classes:

```python
>>> import diffusers
>>> diffusers.models.normalization.GlobalResponseNorm
<class 'diffusers.models.normalization.GlobalResponseNorm'>
```

whereas LayerNorm is just an alias for the PyTorch-defined class (`LayerNorm = nn.LayerNorm`), and doc-builder's autodoc does not work with PyTorch-defined classes:

```python
>>> import diffusers
>>> diffusers.models.normalization.LayerNorm
<class 'torch.nn.modules.normalization.LayerNorm'>
```

Therefore, as a simple fix, I'd suggest replacing `[[autodoc]] models.normalization.LayerNorm` with something like "You can also use LayerNorm" plus a markdown link to the PyTorch LayerNorm documentation.
Thanks @mishig25. I think it'd be okay to remove LayerNorm from our doc, because our implementation really just covers a special case that is already supported in the latest versions of PyTorch.
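For context, here is a minimal sketch of what "already supported" most likely refers to (this reading is an assumption, not stated explicitly in the thread): newer PyTorch releases expose a `bias` argument directly on `nn.LayerNorm`, which is the option the custom diffusers wrapper used to provide.

```python
# Assumption: the custom diffusers LayerNorm mainly offered an optional bias on older
# PyTorch versions; recent PyTorch (>= 2.1) accepts bias= on nn.LayerNorm directly.
import torch
import torch.nn as nn

norm = nn.LayerNorm(64, elementwise_affine=True, bias=False)
out = norm(torch.randn(2, 16, 64))
print(out.shape)  # torch.Size([2, 16, 64])
```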
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
LGTM, thanks!
Co-authored-by: Steven Liu <[email protected]>
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
can you take a look here @DN6?
super cool! thanks!
Nice!
Co-authored-by: hlky <[email protected]>
What does this PR do?
We have a bunch of attention processors, normalization layers, and activation layers in the codebase, and they have thin documentation. This is fine because users can always see their usage in the context of a model or a pipeline to get the fuller picture.
But since we also document them on separate pages (such as this, this, and this), I think it makes sense to ensure these docs properly reflect everything we support.
This PR adds utilities to do that. @stevhliu I think you will like it :-)
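To illustrate the idea, here is a hypothetical sketch of such a check (not the actual utility added in this PR; the module names exist in diffusers, but the doc-page paths are assumptions based on the pages referenced above). It cross-references the classes defined in a module against the markdown page that is supposed to document them.

```python
# Hypothetical sketch, not the utility from this PR: check that every class defined
# in a diffusers module is at least mentioned on its corresponding docs page.
import inspect
from pathlib import Path

from diffusers.models import activations, attention_processor, normalization

# Doc-page paths are assumptions based on the pages referenced in this thread.
MODULES_TO_DOC_PAGES = {
    attention_processor: "docs/source/en/api/attnprocessor.md",
    normalization: "docs/source/en/api/normalization.md",
    activations: "docs/source/en/api/activations.md",
}


def undocumented_classes(module, doc_path):
    """Return names of classes defined in `module` that never appear in the doc page."""
    doc_text = Path(doc_path).read_text()
    missing = []
    for name, obj in inspect.getmembers(module, inspect.isclass):
        # Skip classes merely imported into the module (e.g. aliases of torch classes).
        if obj.__module__ != module.__name__:
            continue
        if name not in doc_text:
            missing.append(name)
    return missing


for module, doc_path in MODULES_TO_DOC_PAGES.items():
    for name in undocumented_classes(module, doc_path):
        print(f"{doc_path}: missing `{name}`")
```

Run from the repo root, a check along these lines would print any class that exists in the code but is absent from its doc page, which is the kind of drift these utilities are meant to catch.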