Fix LR not being correctly set after using LearningRateFinder callback (#21068)
* fix(tuner/lr_finder): apply LR suggestion after checkpoint restore when used as callback
Previously, LearningRateFinder applied the suggested LR before restoring the
checkpoint, so the optimizer LR was reverted by the restore step. This caused
the callback to print “Learning rate set to …” without persisting the change.
Change:
- Move LR application to after the checkpoint restore, and update both the LightningModule attribute and the active optimizer param groups so the LR persists for training (see the sketch below).
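
The reordering, schematically — a sketch only, not the actual private implementation. `restore_initial_state` is a hypothetical stand-in for the internal checkpoint-restore step, while `suggestion()` is the finder's real public method:

```python
# Sketch of the corrected ordering described above (illustrative names).
def apply_suggestion_after_restore(trainer, lr_finder, restore_initial_state):
    restore_initial_state()          # 1. restore the pre-search state FIRST
    new_lr = lr_finder.suggestion()  # 2. only then apply the suggested LR
    if new_lr is not None:
        # update the module attribute the finder scans (`lr` or `learning_rate`)
        setattr(trainer.lightning_module, "lr", new_lr)
        # and the live optimizer param groups, so the LR survives into training
        for optimizer in trainer.optimizers:
            for group in optimizer.param_groups:
                group["lr"] = new_lr
```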
Tests:
- Add unit test `test_lr_finder_callback_applies_lr_after_restore` to assert the optimizer LR matches the LR Finder suggestion after the search completes.
* changelog
* Apply suggestions from code review
---------
Co-authored-by: Nicki Skafte Detlefsen <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 3ed9d4e)
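
For users, the observable effect is that the suggested LR now survives into training when the finder runs as a callback. A minimal sketch of the fixed behavior, assuming the `lightning.pytorch` 2.x import layout (the model, data, and sizes here are arbitrary):

```python
import torch
from torch.utils.data import DataLoader
from lightning.pytorch import LightningModule, Trainer
from lightning.pytorch.callbacks import LearningRateFinder


class TinyModel(LightningModule):
    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.lr = lr  # the attribute the finder reads and overwrites
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).pow(2).mean()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr)


model = TinyModel()
trainer = Trainer(
    max_epochs=1,
    callbacks=[LearningRateFinder()],  # runs the LR search at the start of fit
    logger=False,
    enable_checkpointing=False,
)
trainer.fit(model, DataLoader(torch.randn(4096, 32), batch_size=32))

# With the fix, the suggestion persists: the module attribute and the live
# optimizer param group agree after fit (no LR scheduler in this sketch).
assert trainer.optimizers[0].param_groups[0]["lr"] == model.lr
```

Before this fix, the final assertion could fail even though "Learning rate set to …" was printed, because the restore step wrote the original LR back into the param groups.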
src/lightning/pytorch/CHANGELOG.md — 29 additions & 4 deletions
@@ -6,7 +6,33 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ---
 
-## [2.5.3] - 2025-08-DD
+## [unreleased] - YYYY-MM-DD
+
+### Added
+
+-
+
+
+### Changed
+
+-
+
+
+### Removed
+
+-
+
+
+### Fixed
+
+-
+
+
+- Fixed learning rate not being correctly set after using `LearningRateFinder` callback ([#21068](https://github.com/Lightning-AI/pytorch-lightning/pull/21068))
+
+---
+
+## [2.5.3] - 2025-08-13
 
 ### Changed
 
@@ -57,14 +83,13 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Allow LightningCLI to use a customized argument parser class ([#20596](https://github.com/Lightning-AI/pytorch-lightning/pull/20596))
 - Change `wandb` default x-axis to `tensorboard`'s `global_step` when `sync_tensorboard=True` ([#20611](https://github.com/Lightning-AI/pytorch-lightning/pull/20611))
-- Added a new `checkpoint_path_prefix` parameter to the MLflow logger which can control the path to where the MLflow artifacts for the model checkpoints are stored ([#20538](https://github.com/Lightning-AI/pytorch-lightning/pull/20538))
 - CometML logger was updated to support the recent Comet SDK ([#20275](https://github.com/Lightning-AI/pytorch-lightning/pull/20275))
 - bump: testing with latest `torch` 2.6 ([#20509](https://github.com/Lightning-AI/pytorch-lightning/pull/20509))
 
 ### Fixed
 
-- Fixed `CSVLogger` logging hyperparameter at every write which increase latency ([#20594](https://github.com/Lightning-AI/pytorch-lightning/pull/20594))
-- Fixed `OverflowError` when resuming from checkpoint with an iterable dataset ([#20565](https://github.com/Lightning-AI/pytorch-lightning/issues/20565))
+- Fixed CSVLogger logging hyperparameter at every write which increase latency ([#20594](https://github.com/Lightning-AI/pytorch-lightning/pull/20594))
+- Fixed OverflowError when resuming from checkpoint with an iterable dataset ([#20565](https://github.com/Lightning-AI/pytorch-lightning/issues/20565))
 - Fixed swapped _R_co and _P to prevent type error ([#20508](https://github.com/Lightning-AI/pytorch-lightning/issues/20508))
 - Always call `WandbLogger.experiment` first in `_call_setup_hook` to ensure `tensorboard` logs can sync to `wandb` ([#20610](https://github.com/Lightning-AI/pytorch-lightning/pull/20610))
 - Fixed TBPTT example ([#20528](https://github.com/Lightning-AI/pytorch-lightning/pull/20528))