Confusion regarding metric logging #7722
Replies: 2 comments
-
Dear @ajinkyaambatwar, thanks for the excellent questions. Imagine the following example: internally, we store all of those values every time `training_step` is called. If you specify `on_step=True` and `on_epoch=True`, we log both the per-step value and the epoch-level aggregate, adding the `_step` and `_epoch` suffixes to the metric name. I hope it is more clear. Best,
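To illustrate the reduction behaviour described above, here is a plain-Python simulation (an illustrative sketch with assumed names, not Lightning's actual internals): with `on_step=True, on_epoch=True`, the raw value is logged at every step under a `_step`-suffixed key, while the cached values are reduced (mean by default) at epoch end under an `_epoch`-suffixed key.

```python
# Simulation of logging a metric with on_step=True and on_epoch=True.
# Lightning logs the raw value each batch under "<name>_step" and the
# epoch aggregate (mean by default) under "<name>_epoch".
# Illustrative sketch only, not Lightning's actual code.

def simulate_logging(step_values):
    logged = {"train_loss_step": [], "train_loss_epoch": None}
    cache = []
    for v in step_values:
        logged["train_loss_step"].append(v)  # logged at every batch
        cache.append(v)                      # accumulated for the epoch
    # at epoch end, the cached values are reduced (mean by default)
    logged["train_loss_epoch"] = sum(cache) / len(cache)
    return logged

logs = simulate_logging([4.0, 2.0, 3.0])
print(logs["train_loss_step"])   # per-batch values: [4.0, 2.0, 3.0]
print(logs["train_loss_epoch"])  # epoch mean: 3.0
```

So both questions have a "yes" component: the per-batch value and the epoch average are each logged, just under different keys.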
-
Thank you @tchaton for your explanation. From the docs, and as you also mentioned, if we put …
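On the early-stopping part of the thread, the patience mechanism can be sketched in plain Python (a simulation under assumed names, not Lightning's implementation): the callback compares the monitored value once per validation epoch and stops after `patience` epochs without improvement.

```python
# Plain-Python sketch of patience-based early stopping on an
# epoch-level validation loss. Simulation only, not Lightning's code.

def early_stop_epoch(val_losses_per_epoch, patience):
    """Return the index of the epoch at which training would stop,
    or None if the metric keeps improving within the patience window."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses_per_epoch):
        if loss < best:        # improvement resets the counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch   # no improvement for `patience` epochs
    return None

# losses improve, then plateau; with patience=2 we stop at epoch 3
print(early_stop_epoch([1.0, 0.8, 0.9, 0.85, 0.95], patience=2))  # -> 3
```

The key point is that the comparison happens once per validation epoch, not once per batch.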
-
Hi, I am a bit confused about metric logging in `training_step`/`validation_step`. Suppose I have a standard `training_step` that logs metrics. My doubt is: is the `train_loss` that I am logging here the train loss for this particular batch, or the loss averaged over the entire epoch? As I have set both `on_step` and `on_epoch` to True, what actually gets logged, and when (after each batch, or at the epoch end)? About
`training_acc`: when I have set `on_step` to True, does it only log the per-batch accuracy during training, and not the overall epoch accuracy? Now with this
`training_step`, if I add a custom `training_epoch_end`, is the `train_epoch_acc` there the same as the average of the per-batch `training_acc`? I intend to put an
`EarlyStopping` callback that monitors the validation loss of the epoch, defined in the same fashion as `train_loss`. If I just put `early_stop_callback = pl.callbacks.EarlyStopping(monitor="val_loss", patience=p)`, will it monitor the per-batch `val_loss` or the epoch-wise `val_loss`, given that `val_loss` is logged both at batch end and at epoch end? Sorry if my questions are a little too silly, but I am confused about this!
Thank you!
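One subtlety in the epoch-accuracy question above is worth checking numerically (a plain-Python sketch with made-up counts, not Lightning code): the mean of per-batch accuracies equals the overall epoch accuracy only when all batches have the same size.

```python
# Whether "average of per-batch accuracy" equals "overall epoch accuracy"
# depends on batch sizes; with unequal batches the two can differ.
# Illustrative sketch with made-up counts, not Lightning code.

def batch_acc(correct, total):
    return correct / total

# equal-sized batches: mean of per-batch accuracies == overall accuracy
batches = [(3, 4), (2, 4)]  # (num correct, batch size)
per_batch = [batch_acc(c, n) for c, n in batches]
mean_acc = sum(per_batch) / len(per_batch)                         # 0.625
overall = sum(c for c, _ in batches) / sum(n for _, n in batches)  # 5/8
assert mean_acc == overall

# unequal batches: the two disagree
batches = [(3, 4), (1, 2)]
per_batch = [batch_acc(c, n) for c, n in batches]
mean_acc = sum(per_batch) / len(per_batch)                         # 0.625
overall = sum(c for c, _ in batches) / sum(n for _, n in batches)  # 4/6
assert mean_acc != overall
print(mean_acc, overall)
```

So a hand-rolled `train_epoch_acc` that naively averages per-batch values may drift from the true epoch accuracy whenever the last batch is smaller.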