Can you print the model summary at an arbitrary time during training? #9689
mattwarkentin started this conversation in General
Replies: 1 comment 3 replies
Does this work?

```python
from pytorch_lightning.utilities.model_summary import summarize

class UnfreezeLayers(pl.Callback):
    def on_train_epoch_start(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule"):
        print(summarize(pl_module))
```
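For the unfreezing side of the question, here is a torch-only sketch of the underlying mechanic: toggling `requires_grad` and counting trainable parameters. The model architecture and the `trainable_params` helper are illustrative, not part of Lightning's API; `summarize` reports these counts for you inside a `LightningModule`.

```python
import torch.nn as nn

def trainable_params(module: nn.Module) -> int:
    """Count parameters that will receive gradient updates."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

# Illustrative stand-in for a LightningModule's network.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

# Freeze the first layer, as a transfer-learning setup might.
for p in model[0].parameters():
    p.requires_grad = False
print(trainable_params(model))  # 42: only the second Linear (20*2 + 2)

# Later, unfreeze it; the trainable count grows accordingly.
for p in model[0].parameters():
    p.requires_grad = True
print(trainable_params(model))  # 262: both Linear layers
```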
Hi,

The model summary (and soon the rich model summary) only shows up once, at the very beginning of the `trainer.fit()` procedure when `trainer.is_global_zero`, but I am wondering if there is a straightforward way to print the model summary at some other point during training.

My use case is transfer learning: at the beginning of the fitting procedure some number of layers are "frozen", and then, based on either (1) a certain number of epochs, or (2) some metric that has stopped improving, I unfreeze layers so they start updating.

I have a simple example of the callback shown here. Instead of printing `pl_module.print("Training all parameters now!")`, I would like to show the model summary again to see how many trainable parameters there are after unfreezing.