Trainer: loss stagnates, whereas custom training implementation continues converging? #12667
-
Hi! Background on the issue: my loss stagnates when training with the Lightning Trainer, but the same model keeps converging with a hand-written PyTorch loop. Can you help me understand where my mistake is? Did I implement the PLModule incorrectly?
Trainer:
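Here is a minimal sketch of the setup; the network, data, and hyperparameters are illustrative stand-ins for my real code:

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class PLModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # illustrative stand-in for the real network
        self.model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
        self.criterion = nn.MSELoss()

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = self.criterion(y, y_hat)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", self.criterion(y, self(x)))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

class DataModule(pl.LightningDataModule):
    # illustrative random regression data standing in for the real datamodule
    def train_dataloader(self):
        x = torch.randn(256, 16)
        return DataLoader(TensorDataset(x, x.sum(1, keepdim=True)), batch_size=32)

    def val_dataloader(self):
        x = torch.randn(64, 16)
        return DataLoader(TensorDataset(x, x.sum(1, keepdim=True)), batch_size=32)

lightningmodule = PLModule()
datamodule = DataModule()
trainer = pl.Trainer(max_epochs=10)
trainer.fit(
    lightningmodule,
    train_dataloaders=datamodule.train_dataloader(),
    val_dataloaders=datamodule.val_dataloader(),
)
```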
vs. plain PyTorch training:
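And the hand-written loop I'm comparing against (same illustrative model and data as above):

```python
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for x, y in datamodule.train_dataloader():
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```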
-
Update: change `loss = self.criterion(y, y_hat)` to `loss = self.criterion(y_hat, y)` everywhere. Also: `trainer.fit(lightningmodule, train_dataloaders=datamodule.train_dataloader(), val_dataloaders=datamodule.val_dataloader())` can be just `trainer.fit(lightningmodule, datamodule=datamodule)`.
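i.e., a sketch of what the corrected parts would look like, using the same names as above:

```python
# inside PLModule:
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    # nn.MSELoss follows the (input, target) convention: prediction first
    loss = self.criterion(y_hat, y)
    self.log("train_loss", loss)
    return loss

# The Trainer pulls train/val dataloaders from the LightningDataModule itself:
trainer.fit(lightningmodule, datamodule=datamodule)
```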
-
Thanks for the swift reply! But I have difficulty understanding why. It is mean squared error, meaning that the order of the two arguments should not change the value: `(y - y_hat)**2 == (y_hat - y)**2`.
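For example, a quick numeric check (random tensors as stand-ins):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
y = torch.randn(8, 1)
y_hat = torch.randn(8, 1)

# mean((a - b)**2) is symmetric in a and b, so both argument orders
# produce the same loss value
print(torch.allclose(criterion(y_hat, y), criterion(y, y_hat)))  # True
```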