1 parent cdc33f3 · commit 62ff084
pytorch_optimizer/optimizer/adai.py
@@ -18,8 +18,7 @@ class Adai(Optimizer, BaseOptimizer):
     :param weight_decay: float. weight decay (L2 penalty).
     :param weight_decouple: bool. the optimizer uses decoupled weight decay as in AdamW.
     :param use_stable_weight_decay: bool. perform stable weight decay.
-    :param dampening: float. dampening for momentum. where dampening < 1,
-        it will show some adaptive-moment behavior.
+    :param dampening: float. dampening for momentum. where dampening < 1, it will show some adaptive-moment behavior.
     :param use_gc: bool. use gradient centralization.
     :param eps: float. term added to the denominator to improve numerical stability.
     """
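To illustrate what the `dampening` parameter in the docstring controls, here is a minimal plain-Python sketch of a PyTorch-style momentum buffer update (not Adai's actual implementation, which additionally adapts the momentum coefficient per parameter): the buffer is updated as `buf = momentum * buf + (1 - dampening) * grad`, so with `dampening < 1` each new gradient still contributes and the buffer accumulates moment-like state.

```python
def momentum_step(buf: float, grad: float, momentum: float = 0.9, dampening: float = 0.0) -> float:
    """PyTorch-style momentum update: buf <- momentum * buf + (1 - dampening) * grad."""
    return momentum * buf + (1.0 - dampening) * grad


# With dampening = 1.0 the gradient term vanishes and the buffer only decays;
# with dampening < 1.0 gradients keep feeding the buffer (moment-like behavior).
buf = 0.0
for grad in [1.0, 1.0, 1.0]:
    buf = momentum_step(buf, grad, momentum=0.9, dampening=0.5)
print(buf)  # 0.5 -> 0.95 -> 1.355
```

This only shows the role of `dampening` in a generic momentum rule; Adai's adaptive-moment behavior comes from how it modulates the effective momentum per coordinate.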