v1.3.0 queries #7482
Darshan-Ramesh started this conversation in General
Replies: 1 comment
-
Got it: "checkpoint_callback" is a bool now, which it wasn't in v1.1.1, so verbose is working for me. I am still waiting for an answer as to why epochs are plotted in a step-by-step way and not as a straight line.
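For anyone hitting the same thing, the pattern looks roughly like this. This is a minimal sketch, assuming PyTorch Lightning 1.3.x; the monitored metric name "val_loss" is a placeholder, not anything from this thread. The point is that `checkpoint_callback` on the `Trainer` is now a plain bool that enables or disables checkpointing, while the configured `ModelCheckpoint` (where `verbose=True` lives) is passed through `callbacks`:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# The ModelCheckpoint instance carries monitor/verbose; "val_loss" is just a
# placeholder for whatever key you log from validation_step.
checkpoint_cb = ModelCheckpoint(monitor="val_loss", verbose=True)

trainer = pl.Trainer(
    max_epochs=10,
    checkpoint_callback=True,   # bool in v1.3.x: enable/disable checkpointing
    callbacks=[checkpoint_cb],  # the configured callback goes here instead
)
```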
-
Hello Team,
I recently moved from v1.1.1 to v1.3.0 and have observed a few things.
a. I use wandb as the logger.
b. I use the ModelCheckpoint callback to monitor my validation loss, with verbose set to True.
c. I log the training and validation losses and F1 scores only at epoch level, like the following call (a fuller sketch of this setup appears after the questions below):
self.log('train_loss', loss, prog_bar=True, on_step=False, on_epoch=True)
2. I now see a new metric, global_step, being logged in wandb, as seen in the attached images, which is not really clear to me.
3. Also, why is epoch being logged in this manner?
Could someone help me understand what might be the reasons for all the above observations?
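For completeness, here is a minimal sketch of the setup described in points a to c above. It is not my actual module: the tiny linear model, the metric keys "train_loss"/"val_loss", and the wandb project name are placeholders, and the F1 logging is omitted; the losses are logged the same way the F1 scores are.

```python
import torch
from torch import nn
import torch.nn.functional as F
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import WandbLogger


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)  # placeholder model

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        # point c: log only at epoch level (on_step=False, on_epoch=True)
        self.log("train_loss", loss, prog_bar=True, on_step=False, on_epoch=True)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("val_loss", loss, prog_bar=True, on_step=False, on_epoch=True)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


trainer = pl.Trainer(
    max_epochs=10,
    logger=WandbLogger(project="my-project"),                       # point a
    callbacks=[ModelCheckpoint(monitor="val_loss", verbose=True)],  # point b
)
```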