
I got it. By default the logger logs every 50 steps (https://pytorch-lightning.readthedocs.io/en/1.3.0/extensions/logging.html#control-logging-frequency). It so happened that my dataset was very small, so each epoch contained far fewer than 50 training steps. If I increase the logging frequency with `trainer = Trainer(log_every_n_steps=1)`, then I do get a log entry for every step in the training process.
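To see why nothing showed up with a tiny dataset, here is a minimal sketch of a step-based logging condition like Lightning's (`(global_step + 1) % log_every_n_steps == 0` is an assumption about the exact check, not the library's actual code; `logged_steps` is a hypothetical helper for illustration):

```python
import math

def logged_steps(dataset_size, batch_size, epochs, log_every_n_steps=50):
    """Return the global step numbers at which metrics would be logged.

    Illustrative only: assumes logging fires when
    (global_step + 1) % log_every_n_steps == 0.
    """
    steps_per_epoch = math.ceil(dataset_size / batch_size)
    total_steps = steps_per_epoch * epochs
    return [s for s in range(total_steps) if (s + 1) % log_every_n_steps == 0]

# 20 samples with batch size 4 -> 5 steps per epoch; 8 epochs = 40 steps total.
# With the default of 50, no step ever reaches the threshold:
print(logged_steps(20, 4, 8))                            # []
# With log_every_n_steps=1, every one of the 40 steps is logged:
print(len(logged_steps(20, 4, 8, log_every_n_steps=1)))  # 40
```

So with the default setting, an entire training run shorter than 50 global steps produces no log points at all, which matches the behavior above.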

Answer selected by JackCaster