How many epochs? #729
Replies: 2 comments
-
So you are worried about overfitting your model with a large number of epochs. There is no single right answer to this question. Generally, if you use a small dataset, around 100 epochs will be fine; if you use a large dataset, 1000 epochs can also work. Also remember that 1000 epochs is quite a large number for CNNs (convolutional neural networks), so for computer vision you will often use 4-7 epochs, and if that doesn't give you good results in one go, run it again and experiment with it. The data scientist's motto: experiment, experiment; visualize, visualize.
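The point above is that the epoch count is just a hyperparameter of the training loop, so it is cheap to experiment with. A minimal sketch in PyTorch (toy data and model chosen for illustration, not from the course material):

```python
import torch
from torch import nn

torch.manual_seed(42)

# Toy regression data: y = 2x + 1 (assumed for illustration)
X = torch.arange(0, 1, 0.02).unsqueeze(dim=1)
y = 2 * X + 1

model = nn.Linear(in_features=1, out_features=1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

EPOCHS = 100  # try ~100 for small datasets, more for larger ones
losses = []
for epoch in range(EPOCHS):
    model.train()
    y_pred = model(X)          # forward pass
    loss = loss_fn(y_pred, y)  # compute loss
    optimizer.zero_grad()
    loss.backward()            # backpropagation
    optimizer.step()           # update weights
    losses.append(loss.item())

print(f"first loss: {losses[0]:.4f} -> last loss: {losses[-1]:.4f}")
```

Plotting `losses` against epoch number is the simplest way to "visualize, visualize": if the curve has flattened, more epochs are unlikely to help.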
-
Thank you for your answer. I have since moved on and now employ early-stopping strategies.
Thank you for helping,
Renato
-
Hello,
I just concluded Module 01. It is not clear to me whether there is a method to optimize the training loop. The material mentions that the best moment to stop is when the gradient is zero (or close to zero). Is there a general criterion for deciding when to stop? Now that we play with small datasets it does not make much of a difference, but I speculate that when performing an analysis with millions of parameters and examples, having a general criterion for skipping unnecessary training iterations would be welcome.
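One common general criterion is early stopping: track the validation loss and stop when it has not improved for a set number of epochs ("patience"), rather than waiting for gradients to reach zero. A minimal sketch, where `EarlyStopper` is a hypothetical helper (not part of PyTorch itself) and the loss values are made up to show the mechanism:

```python
class EarlyStopper:
    """Stop training when validation loss stops improving.

    Hypothetical helper for illustration; PyTorch has no built-in
    equivalent, though libraries such as PyTorch Lightning do.
    """

    def __init__(self, patience: int = 5, min_delta: float = 0.0):
        self.patience = patience    # epochs to wait without improvement
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.best_loss = float("inf")
        self.counter = 0

    def should_stop(self, val_loss: float) -> bool:
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # improvement: reset the counter
            self.counter = 0
        else:
            self.counter += 1          # no improvement this epoch
        return self.counter >= self.patience


stopper = EarlyStopper(patience=3)

# Simulated validation losses: improving, then plateauing
stopped_at = None
for epoch, val_loss in enumerate([1.0, 0.8, 0.7, 0.71, 0.72, 0.70, 0.73]):
    if stopper.should_stop(val_loss):
        stopped_at = epoch  # training would break out of the loop here
        break
```

In a real training loop you would call `stopper.should_stop(...)` once per epoch with the loss computed on a held-out validation set, which also guards against the overfitting concern raised above: the model stops before it starts memorizing the training data.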