
Commit 3d16a68

astariul authored and williamFalcon committed
Add EarlyStop documentation (#245)
* Update Training Loop.md
* Update index.md
* Update README.md
* Update Training Loop.md
* Update Training Loop.md
1 parent eb268c4 commit 3d16a68

File tree: 3 files changed (+20 −0 lines changed)


README.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -324,6 +324,7 @@ tensorboard --logdir /some/path
 
 - [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
 - [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
+- [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
 - [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
 - [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
 - [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
```

docs/Trainer/Training Loop.md

Lines changed: 18 additions & 0 deletions
````diff
@@ -19,6 +19,24 @@ It can be useful to force training for a minimum number of epochs or limit to a
 trainer = Trainer(min_nb_epochs=1, max_nb_epochs=1000)
 ```
 
+---
+#### Early stopping
+To enable early stopping, define the callback and give it to the trainer.
+``` {.python}
+from pytorch_lightning.callbacks import EarlyStopping
+
+# DEFAULTS
+early_stop_callback = EarlyStopping(
+    monitor='val_loss',
+    min_delta=0.00,
+    patience=0,
+    verbose=False,
+    mode='auto'
+)
+
+trainer = Trainer(early_stop_callback=early_stop_callback)
+```
+
 ---
 #### Force disable early stop
 Use this to turn off early stopping and run training to the [max_epoch](#force-training-for-min-or-max-epochs)
````
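For the callback to have anything to act on, the monitored key must be reported by the model's validation loop. Below is a minimal usage sketch with non-default thresholds chosen purely for illustration; `MyLightningModule` is a hypothetical stand-in, not part of this commit:

``` {.python}
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# MyLightningModule is a hypothetical placeholder for any LightningModule
# whose validation loop reports a 'val_loss' value for the callback to monitor.
model = MyLightningModule()

# Stop once val_loss has failed to improve by at least 0.01
# for 3 validation checks in a row.
early_stop_callback = EarlyStopping(
    monitor='val_loss',   # key produced by the validation loop
    min_delta=0.01,       # smallest change that counts as an improvement
    patience=3,           # checks with no improvement before stopping
    verbose=True,
    mode='min'            # lower val_loss is better
)

trainer = Trainer(early_stop_callback=early_stop_callback)
trainer.fit(model)
```

With the defaults shown in the diff (`patience=0`, `min_delta=0.00`), training stops at the first validation check that does not improve on the best `val_loss` seen so far.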

docs/index.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -120,6 +120,7 @@ Notice a few things about this flow:
 
 - [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
 - [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
+- [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
 - [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
 - [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
 - [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
```
