Train one optimizer/model each batch #8481
Unanswered
sachinruk asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment, 2 replies
-
I am wondering what the default behaviour of Lightning is when it comes to training with multiple optimizers. I am not entirely sure, but it feels like Lightning may be optimizing all available parameters from all available models on every backward step.

Here is the desired behaviour:
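Roughly something like this (a minimal plain-PyTorch sketch of the intent; the toy models `model_a`/`model_b`, the random data, and the even/odd alternation rule are illustrative assumptions):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Two independent models, each with its own optimizer (illustrative toy models).
model_a = nn.Linear(10, 1)
model_b = nn.Linear(10, 1)
opt_a = torch.optim.Adam(model_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(model_b.parameters(), lr=1e-3)

dataloader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=8)

for batch_idx, (x, y) in enumerate(dataloader):
    # Desired behaviour: train only ONE model/optimizer per batch, alternating between them.
    model, opt = (model_a, opt_a) if batch_idx % 2 == 0 else (model_b, opt_b)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()  # gradients only reach the selected model's parameters
    opt.step()
```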
I am running PL version 1.1.8. Happy to upgrade if it helps.
-
You can check out the optimization docs. We also make sure only the gradients of the current optimizer's parameters are calculated in the training step, to prevent dangling gradients in multiple-optimizer setups.
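For training only one optimizer/model per batch specifically, one way this could look is with manual optimization (a sketch, assuming a Lightning version around 1.3 or newer where `self.automatic_optimization = False` and `self.manual_backward(loss)` behave as shown; the module and sub-models are made up for illustration):

```python
import torch
from torch import nn
import pytorch_lightning as pl


class AlternatingModule(pl.LightningModule):
    """Illustrative module: trains model_a on even batches and model_b on odd batches."""

    def __init__(self):
        super().__init__()
        self.model_a = nn.Linear(10, 1)
        self.model_b = nn.Linear(10, 1)
        # Take full control over which optimizer is stepped on each batch.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        x, y = batch
        opt_a, opt_b = self.optimizers()

        # Pick exactly one model/optimizer for this batch.
        if batch_idx % 2 == 0:
            model, opt = self.model_a, opt_a
        else:
            model, opt = self.model_b, opt_b

        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        self.manual_backward(loss)  # gradients only reach the chosen model's parameters
        opt.step()
        self.log("train_loss", loss)

    def configure_optimizers(self):
        return [
            torch.optim.Adam(self.model_a.parameters(), lr=1e-3),
            torch.optim.Adam(self.model_b.parameters(), lr=1e-3),
        ]
```

The quoted remark about dangling gradients refers to the `toggle_optimizer` / `untoggle_optimizer` helpers on `LightningModule` (their exact signatures differ across versions), which temporarily disable `requires_grad` on the parameters owned by the other optimizers so that only the current optimizer's parameters accumulate gradients.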