3 files changed: +20 −0 lines changed

@@ -324,6 +324,7 @@ tensorboard --logdir /some/path
- [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
- [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
+ - [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
- [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
- [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
- [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
@@ -19,6 +19,24 @@ It can be useful to force training for a minimum number of epochs or limit to a
trainer = Trainer(min_nb_epochs=1, max_nb_epochs=1000)
```

+ ---
+ #### Early stopping
+ To enable early stopping, define the callback and pass it to the trainer.
+ ``` {.python}
+ from pytorch_lightning.callbacks import EarlyStopping
+
+ # DEFAULTS
+ early_stop_callback = EarlyStopping(
+     monitor='val_loss',
+     min_delta=0.00,
+     patience=0,
+     verbose=False,
+     mode='auto'
+ )
+
+ trainer = Trainer(early_stop_callback=early_stop_callback)
+ ```
+

---
#### Force disable early stop
Use this to turn off early stopping and run training to the [max_epoch](#force-training-for-min-or-max-epochs)
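For context alongside the diff, here is a brief usage sketch of the callback added above, configured with non-default thresholds and run through a fit. It is illustrative only: `CoolModel` and its `val_loss` reporting are assumed stand-ins, while `EarlyStopping`, `Trainer(early_stop_callback=..., max_nb_epochs=...)`, and `trainer.fit()` follow the documented snippets.

``` {.python}
# Illustrative sketch only (not part of this PR).
# CoolModel is a hypothetical LightningModule whose validation loop
# reports 'val_loss'; substitute your own module.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

from my_project import CoolModel  # hypothetical module definition

# Stop once val_loss fails to improve by at least 0.01 for 3 checks in a row.
early_stop_callback = EarlyStopping(
    monitor='val_loss',
    min_delta=0.01,
    patience=3,
    verbose=True,
    mode='min'
)

trainer = Trainer(early_stop_callback=early_stop_callback, max_nb_epochs=1000)
trainer.fit(CoolModel())
```

With settings like these the run still respects `max_nb_epochs`, but can end earlier once the monitored metric plateaus.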
@@ -120,6 +120,7 @@ Notice a few things about this flow:
- [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
- [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
+ - [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
- [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
- [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
- [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)