1 parent 522dd44 commit 83edf01
pytorch_optimizer/optimizer/kate.py
@@ -11,7 +11,7 @@ class Kate(Optimizer, BaseOptimizer):
     :param params: PARAMETERS. iterable of parameters to optimize or dicts defining parameter groups.
     :param lr: float. learning rate.
-    :param delta: float. delta.
+    :param delta: float. delta. 0.0 or 1e-8.
     :param weight_decay: float. weight decay (L2 penalty).
     :param weight_decouple: bool. the optimizer uses decoupled weight decay as in AdamW.
     :param fixed_decay: bool. fix weight decay.
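For context on the `delta` parameter documented above, below is a minimal pure-Python sketch of a KATE-style update step as described in the KATE paper ("Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad"). This is an illustrative assumption, not the actual implementation in `pytorch_optimizer/optimizer/kate.py`, which may differ in details; roughly, `delta` scales an extra squared-gradient term added to the numerator accumulator, and values such as 0.0 or 1e-8 keep that term small.

```python
def kate_step(x, grad, b_sq, m, lr=0.1, delta=0.0):
    """One scalar KATE-style update (sketch, not the library's code).

    b_sq: AdaGrad-style accumulator of squared gradients.
    m:    KATE numerator accumulator; `delta` controls the extra g^2 term.
    """
    g_sq = grad * grad
    b_sq = b_sq + g_sq                       # accumulate squared gradients
    m = m + delta * g_sq + g_sq / b_sq       # delta-weighted extra term
    x = x - lr * (m ** 0.5) * grad / (b_sq ** 0.5)
    return x, b_sq, m


# Usage: minimize f(x) = x^2 (gradient 2x) from x = 5.0 with delta = 0.0.
x, b_sq, m = 5.0, 0.0, 0.0
for _ in range(200):
    x, b_sq, m = kate_step(x, 2.0 * x, b_sq, m, lr=0.1, delta=0.0)
```

With `delta=0.0` the step size is driven entirely by the `g_sq / b_sq` ratio, which is what makes the method scale-invariant; a small positive `delta` such as 1e-8 adds a slight AdaGrad-like contribution.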