docs/source-pytorch/common/early_stopping.rst (33 additions, 1 deletion)
@@ -1,6 +1,7 @@
 .. testsetup:: *
 
-    from lightning.pytorch.callbacks.early_stopping import EarlyStopping
+    from lightning.pytorch.callbacks.early_stopping import EarlyStopping, EarlyStoppingReason
+    from lightning.pytorch import Trainer, LightningModule
 
 .. _early_stopping:
 
@@ -71,6 +72,37 @@ Additional parameters that stop training at extreme points:
 - ``check_on_train_epoch_end``: When turned on, it checks the metric at the end of a training epoch. Use this only when you are monitoring any metric logged within
   training-specific hooks on epoch-level.
 
+After training completes, you can programmatically check why early stopping occurred using the ``stopping_reason``
+attribute, which returns an ``EarlyStoppingReason`` enum value.
+
+.. code-block:: python
+
+    from lightning.pytorch.callbacks import EarlyStopping
+    from lightning.pytorch.callbacks.early_stopping import EarlyStoppingReason
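The documented code block above is cut off after its imports in this view. As a rough sketch of how the attribute could be queried after training, assuming the `stopping_reason` attribute and `EarlyStoppingReason` enum added in #21188 (the member name `NOT_STOPPED` below is an illustrative assumption, not taken from the diff):

```python
# Minimal sketch, not part of the diff: query EarlyStopping.stopping_reason after fit().
# The enum member name NOT_STOPPED is an assumption for illustration.
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import EarlyStopping
from lightning.pytorch.callbacks.early_stopping import EarlyStoppingReason

early_stopping = EarlyStopping(monitor="val_loss", patience=3, mode="min")
trainer = Trainer(max_epochs=100, callbacks=[early_stopping])
trainer.fit(model)  # `model` is your LightningModule

# After training, the callback records why (or whether) it stopped training early.
reason = early_stopping.stopping_reason
if reason == EarlyStoppingReason.NOT_STOPPED:  # assumed member name
    print("Training ran to max_epochs; early stopping never fired.")
else:
    print(f"Early stopping reason: {reason!r}")
```

Exposing the reason as an enum on the callback, rather than only logging a message, lets downstream scripts branch on the outcome of a run.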
src/lightning/pytorch/CHANGELOG.md (17 additions, 2 deletions)
@@ -19,12 +19,21 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Added time-based validation support though `val_check_interval` ([#21071](https://github.com/Lightning-AI/pytorch-lightning/pull/21071))
 
 
+- Added attributes to access stopping reason in `EarlyStopping` callback ([#21188](https://github.com/Lightning-AI/pytorch-lightning/pull/21188))
+
+
+- Added support for variable batch size in `ThroughputMonitor` ([#20236](https://github.com/Lightning-AI/pytorch-lightning/pull/20236))
+
+
 ### Changed
 
 - Default to `weights_only=True` for `torch>=2.6` when loading checkpoints. ([#21072](https://github.com/Lightning-AI/pytorch-lightning/pull/21072))
 
 
--
+- Default to `RichProgressBar` and `RichModelSummary` if the rich package is available. Fallback to TQDMProgressBar and ModelSummary otherwise ([#20896](https://github.com/Lightning-AI/pytorch-lightning/pull/20896))
@@ -34,7 +43,13 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Fixed
 
--
+- Fixed edgecase when `max_trials` is reached in `Tuner.scale_batch_size` ([#21187](https://github.com/Lightning-AI/pytorch-lightning/pull/21187))
+
+
+- Fixed case where `LightningCLI` could not be initialized with `trainer_default` containing callbacks ([#21192](https://github.com/Lightning-AI/pytorch-lightning/pull/21192))
+
+
+- Fixed missing reset when `ModelPruning` is applied with lottery ticket hypothesis ([#21191](https://github.com/Lightning-AI/pytorch-lightning/pull/21191))
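For the `### Changed` entry above about defaulting to `RichProgressBar` and `RichModelSummary` ([#20896]), a minimal sketch of keeping the previous defaults explicitly, using only the existing `TQDMProgressBar` and `ModelSummary` callbacks:

```python
# Sketch: opt out of the rich-based defaults introduced by #20896 by passing
# the TQDM progress bar and plain model summary callbacks explicitly.
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelSummary, TQDMProgressBar

trainer = Trainer(callbacks=[TQDMProgressBar(), ModelSummary()])
```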