
Commit f5027b1

refactor: MADGRAD

1 parent 370169c commit f5027b1

File tree

1 file changed: +1 −1 lines changed


pytorch_optimizer/madgrad.py

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ def __init__(
     ):
         """A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic (slightly modified)
         :param params: PARAMETERS. iterable of parameters to optimize or dicts defining parameter groups
-        :param lr: float. learning rate.
+        :param lr: float. learning rate
         :param eps: float. term added to the denominator to improve numerical stability
         :param weight_decay: float. weight decay (L2 penalty)
             MADGRAD optimizer requires less weight decay than other methods, often as little as zero
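The docstring touched by this commit documents MADGRAD's constructor parameters (lr, eps, weight_decay). As a rough illustration of how those parameters enter the update, here is a minimal pure-Python sketch of the scalar MADGRAD update rule from the Defazio & Jelassi paper ("Adaptivity without Compromise") — the function `madgrad_step` and the driver loop are illustrative, not the library's API, and weight decay is omitted, in line with the docstring's note that it can often be zero.

```python
import math

def madgrad_step(x, x0, s, nu, grad, k, lr=0.1, eps=1e-6, momentum=0.9):
    """One scalar MADGRAD step (illustrative sketch, not the library code)."""
    lamb = lr * math.sqrt(k + 1)             # step-size schedule lambda_k = lr * sqrt(k + 1)
    s = s + lamb * grad                      # dual average of (scaled) gradients
    nu = nu + lamb * grad * grad             # accumulated (scaled) squared gradients
    z = x0 - s / (nu ** (1.0 / 3.0) + eps)   # dual-averaged iterate with cube-root adaptivity
    x = momentum * x + (1.0 - momentum) * z  # "momentumized" interpolation toward z
    return x, s, nu

# Minimize f(x) = (x - 3)^2 with the sketch above.
x = x0 = 0.0
s = nu = 0.0
for k in range(300):
    grad = 2.0 * (x - 3.0)                   # gradient of f at the current iterate
    x, s, nu = madgrad_step(x, x0, s, nu, grad, k)
```

The `eps` term plays the same role as in the docstring: it keeps the denominator of the dual-averaged iterate away from zero before any squared gradient has accumulated.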
