1 parent ad230b8 · commit 862ec9d
docs/changelogs/v3.0.2.md
```diff
@@ -5,6 +5,8 @@
 * Implement `WSD` LR Scheduler. (#247, #248)
   * [Warmup-Stable-Decay LR Scheduler](https://arxiv.org/abs/2404.06395)
 * Add more Pytorch built-in lr schedulers. (#248)
+* Implement `Kate` optimizer. (#249, #251)
+  * [Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad](https://arxiv.org/abs/2403.02648)

 ### Refactor
```