Commit 3c02a48

[Feature] Implement EmoNavi, EmoFact, and EmoLynx optimizers (#400)
* docs: EmoNavi optimizer
* feature: EmoNavi optimizer
* chore: keyword
* update: codes
* docs: v3.6.2 changelog
* fix: alpha to weight
* update: closure
* update: recipes
* fix: closure
* update: loss
* refactor: shadow_weight
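For orientation, a minimal usage sketch for the new optimizers. The import path reflects what this commit exports; the model, the `lr` value, and the closure-based call to `step` are assumptions (the closure-related entries above suggest the optimizers read the loss returned by a closure), not code from this diff:

```python
import torch

from pytorch_optimizer import EmoNavi  # EmoFact and EmoLynx are used the same way

model = torch.nn.Linear(10, 2)
optimizer = EmoNavi(model.parameters(), lr=1e-3)  # lr value is a placeholder


def closure() -> torch.Tensor:
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()
    loss.backward()
    return loss


# the "update: closure" / "update: loss" entries suggest the optimizer can
# consume the loss returned by a closure, as other closure-aware optimizers do
optimizer.step(closure)
```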
1 parent 22abf5a · commit 3c02a48

File tree: 11 files changed (+483, -13 lines)

README.md
Lines changed: 1 addition & 0 deletions

@@ -218,6 +218,7 @@ get_supported_optimizers(['adam*', 'ranger*'])
 | AdamC | *Why Gradients Rapidly Increase Near the End of Training* | | <https://arxiv.org/abs/2506.02285> | [cite](https://ui.adsabs.harvard.edu/abs/2025arXiv250602285D/exportcitation) |
 | AdaMuon | *Adaptive Muon Optimizer* | | <https://arxiv.org/abs/2507.11005v1> | [cite](https://ui.adsabs.harvard.edu/abs/2025arXiv250711005S/exportcitation) |
 | SPlus | *A Stable Whitening Optimizer for Efficient Neural Network Training* | [github](https://github.com/kvfrans/splus) | <https://arxiv.org/abs/2506.07254> | [cite](https://ui.adsabs.harvard.edu/abs/2025arXiv250607254F/exportcitation) |
+| EmoNavi | *An emotion-driven optimizer that feels loss and navigates accordingly.* | [github](https://github.com/muooon/EmoNavi) | | |
 
 ## Supported LR Scheduler
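The README row gives only the one-line description. Purely as an illustration of the idea, here is a sketch of what an "emotion-driven" shadow-weight update could look like; every name, constant, and the blending rule below are assumptions for exposition (loosely informed by the commit's `shadow_weight`, loss, and closure entries), not the actual EmoNavi algorithm:

```python
import torch


@torch.no_grad()
def emotional_step(param: torch.Tensor, shadow: torch.Tensor, loss: float, state: dict) -> None:
    """Blend live weights toward an EMA 'shadow' copy, scaled by a loss-derived signal."""
    # fast and slow EMAs of the loss act as short- and long-term "mood"
    state['ema_fast'] = 0.9 * state.get('ema_fast', loss) + 0.1 * loss
    state['ema_slow'] = 0.99 * state.get('ema_slow', loss) + 0.01 * loss

    # bounded "emotion" signal: how sharply the recent loss deviates from its trend
    emotion = abs(torch.tanh(torch.tensor(state['ema_fast'] - state['ema_slow'])).item())

    # maintain the shadow copy and pull the live weights toward it when the loss is volatile
    shadow.mul_(0.999).add_(param, alpha=0.001)
    param.lerp_(shadow, weight=emotion)  # Tensor.lerp_ takes `weight`, cf. "fix: alpha to weight"
```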

docs/changelogs/v3.6.2.md
Lines changed: 2 additions & 0 deletions

@@ -6,6 +6,8 @@
   * [Adaptive Muon Optimizer](https://arxiv.org/abs/2507.11005v1)
 * Implement `SPlus` optimizer. (#396, #399)
   * [A Stable Whitening Optimizer for Efficient Neural Network Training](https://arxiv.org/abs/2506.07254)
+* Implement `EmoNavi`, `EmoFact`, and `EmoLynx` optimizers. (#393, #400)
+  * [An emotion-driven optimizer that feels loss and navigates accordingly](https://github.com/muooon/EmoNavi)
 
 ### Fix

docs/index.md
Lines changed: 1 addition & 0 deletions

@@ -218,6 +218,7 @@ get_supported_optimizers(['adam*', 'ranger*'])
 | AdamC | *Why Gradients Rapidly Increase Near the End of Training* | | <https://arxiv.org/abs/2506.02285> | [cite](https://ui.adsabs.harvard.edu/abs/2025arXiv250602285D/exportcitation) |
 | AdaMuon | *Adaptive Muon Optimizer* | | <https://arxiv.org/abs/2507.11005v1> | [cite](https://ui.adsabs.harvard.edu/abs/2025arXiv250711005S/exportcitation) |
 | SPlus | *A Stable Whitening Optimizer for Efficient Neural Network Training* | [github](https://github.com/kvfrans/splus) | <https://arxiv.org/abs/2506.07254> | [cite](https://ui.adsabs.harvard.edu/abs/2025arXiv250607254F/exportcitation) |
+| EmoNavi | *An emotion-driven optimizer that feels loss and navigates accordingly.* | [github](https://github.com/muooon/EmoNavi) | | |
 
 ## Supported LR Scheduler

docs/optimizer.md
Lines changed: 12 additions & 0 deletions

@@ -184,6 +184,18 @@
     :docstring:
     :members:
 
+::: pytorch_optimizer.EmoFact
+    :docstring:
+    :members:
+
+::: pytorch_optimizer.EmoLynx
+    :docstring:
+    :members:
+
+::: pytorch_optimizer.EmoNavi
+    :docstring:
+    :members:
+
 ::: pytorch_optimizer.EXAdam
     :docstring:
     :members:

pyproject.toml
Lines changed: 9 additions & 9 deletions

@@ -14,15 +14,15 @@ keywords = [
     "AdaBound", "AdaDelta", "AdaFactor", "AdaGC", "AdaMax", "AdaMuon", "AdamG", "AdaMod", "AdaNorm", "AdaPNM",
     "AdaSmooth", "AdEMAMix", "Simplified-AdEMAMix", "ADOPT", "AdaHessian", "Adai", "Adalite", "AdaLomo", "AdamMini",
     "AdamP", "AdamS", "Adan", "AggMo", "Aida", "AliG", "Amos", "Apollo", "APOLLO", "AvaGrad", "bSAM", "CAME",
-    "DAdaptAdaGrad", "DAdaptAdam", "DAdaptAdan", "DAdaptSGD", "DAdaptLion", "DeMo", "DiffGrad", "EXAdam", "FAdam",
-    "Fira", "FOCUS", "Fromage", "FTRL", "GaLore", "Grams", "Gravity", "GrokFast", "GSAM", "Kate", "Lamb", "LaProp",
-    "LARS", "Lion", "LOMO", "Lookahead", "MADGRAD", "MARS", "MSVAG", "Muno", "Nero", "NovoGrad", "OrthoGrad", "PAdam",
-    "PCGrad", "PID", "PNM", "Prodigy", "PSGD", "QHAdam", "QHM", "RACS", "RAdam", "Ranger", "Ranger21", "RotoGrad",
-    "SAM", "GCSAM", "LookSAM", "ScheduleFreeSGD", "ScheduleFreeAdamW", "ScheduleFreeRAdam", "SCION", "SGDP", "Shampoo",
-    "ScalableShampoo", "SGDW", "SignSGD", "SM3", "SOAP", "SopihaH", "SPAM", "StableSPAM", "SPlus", "SRMM",
-    "StableAdamW", "SWATS", "TAM", "Tiger", "TRAC", "VSGD", "WSAM", "Yogi", "BCE", "BCEFocal", "Focal", "FocalCosine",
-    "SoftF1", "Dice", "LDAM", "Jaccard", "Bi-Tempered", "Tversky", "FocalTversky", "LovaszHinge", "bitsandbytes", "WSD",
-    "QGaLore",
+    "DAdaptAdaGrad", "DAdaptAdam", "DAdaptAdan", "DAdaptSGD", "DAdaptLion", "DeMo", "DiffGrad", "EmoFact", "EmoLynx",
+    "EmoNavi", "EXAdam", "FAdam", "Fira", "FOCUS", "Fromage", "FTRL", "GaLore", "Grams", "Gravity", "GrokFast", "GSAM",
+    "Kate", "Lamb", "LaProp", "LARS", "Lion", "LOMO", "Lookahead", "MADGRAD", "MARS", "MSVAG", "Muno", "Nero",
+    "NovoGrad", "OrthoGrad", "PAdam", "PCGrad", "PID", "PNM", "Prodigy", "PSGD", "QHAdam", "QHM", "RACS", "RAdam",
+    "Ranger", "Ranger21", "RotoGrad", "SAM", "GCSAM", "LookSAM", "ScheduleFreeSGD", "ScheduleFreeAdamW",
+    "ScheduleFreeRAdam", "SCION", "SGDP", "Shampoo", "ScalableShampoo", "SGDW", "SignSGD", "SM3", "SOAP", "SopihaH",
+    "SPAM", "StableSPAM", "SPlus", "SRMM", "StableAdamW", "SWATS", "TAM", "Tiger", "TRAC", "VSGD", "WSAM", "Yogi",
+    "BCE", "BCEFocal", "Focal", "FocalCosine", "SoftF1", "Dice", "LDAM", "Jaccard", "Bi-Tempered", "Tversky",
+    "FocalTversky", "LovaszHinge", "bitsandbytes", "WSD", "QGaLore",
 ]
 classifiers = [
     "License :: OSI Approved :: Apache Software License",

pytorch_optimizer/__init__.py
Lines changed: 3 additions & 0 deletions

@@ -114,6 +114,9 @@
     DeMo,
     DiffGrad,
     DynamicLossScaler,
+    EmoFact,
+    EmoLynx,
+    EmoNavi,
     EXAdam,
     FAdam,
     Fira,

pytorch_optimizer/optimizer/__init__.py
Lines changed: 4 additions & 0 deletions

@@ -43,6 +43,7 @@
 from pytorch_optimizer.optimizer.dadapt import DAdaptAdaGrad, DAdaptAdam, DAdaptAdan, DAdaptLion, DAdaptSGD
 from pytorch_optimizer.optimizer.demo import DeMo
 from pytorch_optimizer.optimizer.diffgrad import DiffGrad
+from pytorch_optimizer.optimizer.emonavi import EmoFact, EmoLynx, EmoNavi
 from pytorch_optimizer.optimizer.exadam import EXAdam
 from pytorch_optimizer.optimizer.experimental.ranger25 import Ranger25
 from pytorch_optimizer.optimizer.fadam import FAdam
@@ -325,6 +326,9 @@ def load_optimizer(optimizer: str) -> OPTIMIZER:
     VSGD,
     AdaMuon,
     SPlus,
+    EmoFact,
+    EmoLynx,
+    EmoNavi,
 ]
 OPTIMIZERS: Dict[str, OPTIMIZER] = {str(optimizer.__name__).lower(): optimizer for optimizer in OPTIMIZER_LIST}
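Because `OPTIMIZERS` keys each class by its lower-cased `__name__`, the new optimizers can also be resolved by string. A small sketch, assuming `load_optimizer` remains re-exported from the package root as in prior releases:

```python
from pytorch_optimizer import load_optimizer

# keys are lower-cased class names, per the OPTIMIZERS dict comprehension above
emonavi_cls = load_optimizer('emonavi')
emofact_cls = load_optimizer('emofact')
emolynx_cls = load_optimizer('emolynx')
```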
