2 parents 64b84dc + 2b62203 commit 7dcaabb
pytorch_optimizer/base/optimizer.py
@@ -200,7 +200,7 @@ def get_adanorm_gradient(
     ) -> torch.Tensor:
         r"""Get AdaNorm gradient.
 
-        :param grad. torch.Tensor. gradient.
+        :param grad: torch.Tensor. gradient.
         :param adanorm: bool. whether to apply AdaNorm.
         :param exp_grad_norm: Optional[torch.Tensor]. exp_grad_norm.
         :param r: float. Optional[float]. momentum (ratio).
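
For context, the docstring being fixed belongs to a helper that applies AdaNorm-style gradient scaling: it tracks an exponential moving average of the gradient norm (with momentum `r`) and scales the raw gradient down when its norm exceeds that average. The following is a minimal sketch of that behavior, assuming only the signature shown in the hunk header and the parameters named in the docstring; the actual implementation in pytorch_optimizer/base/optimizer.py may differ in detail.

    # Hypothetical sketch of get_adanorm_gradient, inferred from the
    # docstring parameters above; not copied from the repository.
    from typing import Optional

    import torch

    def get_adanorm_gradient_sketch(
        grad: torch.Tensor,
        adanorm: bool,
        exp_grad_norm: Optional[torch.Tensor] = None,
        r: Optional[float] = 0.95,
    ) -> torch.Tensor:
        # Pass the gradient through unchanged when AdaNorm is disabled
        # or no running-norm state is available.
        if not adanorm or exp_grad_norm is None:
            return grad

        # Update the exponential moving average of the gradient norm,
        # with `r` as the momentum (ratio) from the docstring.
        grad_norm = torch.linalg.norm(grad)
        exp_grad_norm.mul_(r).add_(grad_norm, alpha=1.0 - r)

        # Scale the gradient down to the smoothed norm when the current
        # norm exceeds it; otherwise leave the gradient untouched.
        if exp_grad_norm > grad_norm:
            return grad * exp_grad_norm / grad_norm
        return grad

Note that `exp_grad_norm` is updated in place, which matches how optimizer state tensors are typically carried between steps in PyTorch.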