## Change Log

### Feature

* Implement A2Grad optimizer (#136)
  * [Optimal Adaptive and Accelerated Stochastic Gradient Descent](https://arxiv.org/abs/1810.00553)
* Implement Accelerated SGD optimizer (#137)
  * [Accelerating Stochastic Gradient Descent For Least Squares Regression](https://arxiv.org/abs/1704.08227)
* Implement Adaptive SGD optimizer (#139)
  * [Adaptive Gradient Descent without Descent](https://arxiv.org/abs/1910.09529)
* Implement SGDW optimizer (#139)
  * [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101)
* Implement Yogi optimizer (#140)
  * [Adaptive Methods for Nonconvex Optimization](https://papers.nips.cc/paper_files/paper/2018/hash/90365351ccc7437a1309dc64e4db32a3-Abstract.html)
* Implement SWATS optimizer (#141)
  * [Improving Generalization Performance by Switching from Adam to SGD](https://arxiv.org/abs/1712.07628)
* Implement Fromage optimizer (#142)
  * [On the distance between two neural networks and the stability of learning](https://arxiv.org/abs/2002.03432)
* Implement MSVAG optimizer (#143)
  * [Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients](https://arxiv.org/abs/1705.07774)
* Implement AdaMod optimizer (#144)
  * [An Adaptive and Momental Bound Method for Stochastic Learning](https://arxiv.org/abs/1910.12249)
* Implement AggMo optimizer (#145)
  * [Aggregated Momentum: Stability Through Passive Damping](https://arxiv.org/abs/1804.00325)
* Implement QHAdam, QHM optimizers (#146)
  * [Quasi-hyperbolic momentum and Adam for deep learning](https://arxiv.org/abs/1810.06801)
* Implement PID optimizer (#147)
  * [A PID Controller Approach for Stochastic Optimization of Deep Networks](http://www4.comp.polyu.edu.hk/~cslzhang/paper/CVPR18_PID.pdf)

### Bug

* Fix `update` in Lion optimizer (#135)
* Fix `momentum_buffer` in SGDP optimizer (#139)

### Diff

[2.7.0...2.8.0](https://github.com/kozistr/pytorch_optimizer/compare/v2.7.0...v2.8.0)