Confused about Logging hyper parameters #10400
Replies: 2 comments 1 reply
-
I'm not sure if this is what you're looking for, but PL ships with a number of external loggers that save hyperparameters. I work for W&B, and our logger can be used to track hyperparameters, log metrics and models, and even log media such as images or text.
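For context, a minimal sketch of wiring the WandbLogger into a Trainer; the project name and the hyperparameter values are placeholders, not details from this thread:

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# "my-project" is a placeholder project name
wandb_logger = WandbLogger(project="my-project")
# anything stored via self.save_hyperparameters() in the LightningModule is
# picked up automatically; extra values can be logged explicitly
wandb_logger.log_hyperparams({"learning_rate": 1e-3, "batch_size": 32})
trainer = Trainer(logger=wandb_logger)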
-
Hey @MinWang1997, Prince Canuma here, a Data Scientist at Neptune.ai. There are multiple ways and tools to help you manually or automatically log your experiments (metrics, hyperparameters, model checkpoints, etc.) that work with PTL. Are you currently using any experiment tracker/logger in your workflow? Here is a list of experiment trackers/loggers that are supported by PTL: https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#supported-loggers Check it out and let me know which one you are using 😃!
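As an illustration, a minimal sketch of attaching one of those supported loggers (NeptuneLogger here) to a Trainer; the api_key and project values are placeholders, and the exact keyword names can vary between PTL versions:

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import NeptuneLogger

# placeholder credentials; use your own API token and workspace/project
neptune_logger = NeptuneLogger(
    api_key="ANONYMOUS",
    project="my-workspace/my-project",
)
neptune_logger.log_hyperparams({"learning_rate": 1e-3, "batch_size": 32})
trainer = Trainer(logger=neptune_logger)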
-
Does anyone know how to log hyperparameters? The documentation is a little unclear: https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html
When training a model, it’s useful to know what hyperparams went into that model. When Lightning creates a checkpoint, it stores a key “hyper_parameters” with the hyperparams.
import torch  # needed for torch.load
# map_location keeps everything on CPU; "filepath" is the checkpoint path
lightning_checkpoint = torch.load(filepath, map_location=lambda storage, loc: storage)
hyperparams = lightning_checkpoint["hyper_parameters"]
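That snippet only works if the LightningModule saved its hyperparameters in the first place. A minimal sketch of how that usually happens, with purely illustrative argument names:

import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self, learning_rate=1e-3, hidden_dim=128):
        super().__init__()
        # stores the __init__ arguments under self.hparams and writes them
        # into the "hyper_parameters" key of every checkpoint; attached
        # loggers also receive them via log_hyperparams()
        self.save_hyperparameters()

Any checkpoint produced while training this module will contain the "hyper_parameters" key read above.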