-
I am afraid there is no clean way to do this, since by default Lightning only sets up the optimizers once at the start of training. One lazy hack is to flip the optimizer and scheduler inside `configure_optimizers` and force the optimizer setup to run again at the switch epoch:

```python
import torch
from pytorch_lightning import LightningModule


class LitModel(LightningModule):
    def on_train_epoch_start(self):
        # Force Lightning to re-run configure_optimizers() at the switch epoch.
        if self.current_epoch == 100:
            self.trainer.accelerator.setup_optimizers(self.trainer)

    def configure_optimizers(self):
        if self.current_epoch == 100:
            # `optim.get_optimizer` is the helper from your own code.
            optimizer = optim.get_optimizer(self.parameters(), 'sgd', self.lr)
            lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
            # Only if you want to carry over the current state of the old scheduler.
            lr_scheduler.load_state_dict(self.trainer.lr_schedulers[0].state_dict())
        else:
            optimizer = optim.get_optimizer(self.parameters(), 'adam', self.lr)
            lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
        return {'optimizer': optimizer, 'lr_scheduler': lr_scheduler, 'monitor': 'val_loss'}
```
Do you want the new optimizer to be initialized with the most recently updated learning rate?
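If the answer is yes, here is a minimal sketch of one way to do it (not part of the original snippet): read the last learning rate off the currently active optimizer via `self.trainer.optimizers[0]` and pass it to the new one. `optim.get_optimizer` is again your own helper.

```python
def configure_optimizers(self):
    if self.current_epoch == 100:
        # Sketch: start SGD from the learning rate Adam most recently reached,
        # instead of from the original self.lr. `self.trainer.optimizers[0]`
        # is the optimizer that has been active so far.
        last_lr = self.trainer.optimizers[0].param_groups[0]['lr']
        optimizer = optim.get_optimizer(self.parameters(), 'sgd', last_lr)
    else:
        optimizer = optim.get_optimizer(self.parameters(), 'adam', self.lr)
    lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
    return {'optimizer': optimizer, 'lr_scheduler': lr_scheduler, 'monitor': 'val_loss'}
```

If you carry the learning rate over like this, you may not also need to load the old scheduler's state, since the new `ReduceLROnPlateau` starts fresh from that learning rate anyway.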
-
This is my code. I want to use Adam for the first 100 epochs and then switch to SGD. Can I use a single, global learning rate scheduler to adjust the learning rate across both optimizers?
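For reference, a PyTorch scheduler object is bound to one specific optimizer, so a literal global scheduler that survives the switch is not possible. A hedged sketch of one workaround, assuming the `LitModel` from the reply above: compute the learning rate from a single schedule function of the epoch and write it into whichever optimizer is currently active (the cosine schedule and the 200-epoch horizon below are illustrative assumptions, not part of the original code).

```python
import math


def global_lr(epoch, base_lr=1e-3, total_epochs=200):
    # One schedule shared by both the Adam and the SGD phase (cosine decay here).
    return base_lr * 0.5 * (1 + math.cos(math.pi * epoch / total_epochs))


class LitModelWithGlobalSchedule(LitModel):
    def on_train_epoch_start(self):
        super().on_train_epoch_start()  # keep the Adam -> SGD switch from above
        lr = global_lr(self.current_epoch)
        # Write the globally scheduled learning rate into the active optimizer.
        for param_group in self.trainer.optimizers[0].param_groups:
            param_group['lr'] = lr
```

If you set the learning rate by hand like this, drop the `ReduceLROnPlateau` entry from `configure_optimizers`, otherwise the two mechanisms will fight over the same learning rate.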