MLFlow logging using PyTorch Lightning, how to log by steps instead of epochs? #9804
Unanswered
FeryET asked this question in: Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Hi,
Reading MLflow's PyTorch integration, I see there is an autolog option that lets you log with MLflow easily. But a few problems arise:
1 - Apparently I cannot log by steps; it can only log metrics after every N epochs (N a natural number).
2 - I cannot log the Trainer's parameters, such as gradient_clip_val or the early-stopping patience. This is extremely important in hyperparameter optimization.
How can I achieve these? Thank you.
P.S.: Overall, Lightning + MLflow need more documentation. Both are very good tools and apparently they integrate well enough, but the documentation is lacking a bit. I know this is a community endeavor, but I still find it a bit behind the curve.