Error when implementing Mixed Precision (16-bit) Training #12871
oschan77 asked this question (unanswered) in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment, 2 replies
-
Hi @oschan77! Could you share your script or reproduce it with the BoringModel? https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples/bug_report
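For reference, a minimal bug-report-style reproduction with precision=16 might look like the sketch below. It assumes the pytorch_lightning 1.6-era Trainer API; the module and dataset here are illustrative stand-ins written from scratch, not imports from the linked template.

    import torch
    from torch.utils.data import DataLoader, Dataset
    import pytorch_lightning as pl


    class RandomDataset(Dataset):
        # Random features, just enough to drive a training loop.
        def __init__(self, size=32, length=64):
            self.data = torch.randn(length, size)

        def __len__(self):
            return len(self.data)

        def __getitem__(self, idx):
            return self.data[idx]


    class BoringModel(pl.LightningModule):
        # Tiny stand-in model to isolate the precision=16 behaviour.
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            # Any scalar works as a loss for reproduction purposes.
            return self.layer(batch).sum()

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)


    if __name__ == "__main__":
        trainer = pl.Trainer(max_epochs=1, gpus=1, precision=16, limit_train_batches=4)
        trainer.fit(BoringModel(), DataLoader(RandomDataset(), batch_size=8))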
-
Hello there,
I am having trouble implementing Mixed Precision (16-bit) training. I am trying to use precision=16 in the Trainer to reduce GPU memory usage:

    trainer = pl.Trainer(
        profiler='pytorch',
        max_epochs=1000,
        num_nodes=1,
        gpus=8,
        logger=tb_logger,
        strategy='ddp',
        precision=16,
    )
    trainer.fit(stylegan2_model)
However, I encountered this error:

    File "/scratch/PI/cemclo/stylegan2ada_pl.py", line 193, in forward
        out = F.linear(x, self.weight(), bias=self.bias)
    RuntimeError: expected scalar type Float but found Half
What causes this issue, and how could I solve it? Thanks a lot.
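For illustration: this error typically means F.linear received tensors of mixed dtypes, for example a half-precision activation together with float32 parameters, which can happen when a custom layer's forward runs outside the autocast region that precision=16 sets up. The sketch below shows such a layer and one common workaround, casting the parameters to the input's dtype; the layer is an illustrative stand-in, not the original stylegan2ada_pl.py code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class EqualizedLinear(nn.Module):
        # Illustrative StyleGAN2-style linear layer (hypothetical names).
        def __init__(self, in_features, out_features):
            super().__init__()
            self._weight = nn.Parameter(torch.randn(out_features, in_features))
            self.bias = nn.Parameter(torch.zeros(out_features))
            self.scale = 1.0 / (in_features ** 0.5)

        def weight(self):
            # Parameters are stored in float32; outside an autocast region this
            # returns a float32 tensor even when the input is float16.
            return self._weight * self.scale

        def forward(self, x):
            # Workaround (assumption): cast parameters to the input dtype so
            # F.linear sees consistent dtypes under 16-bit training.
            w = self.weight().to(x.dtype)
            b = self.bias.to(x.dtype)
            return F.linear(x, w, bias=b)

Alternatively, wrapping the offending call in torch.cuda.amp.autocast(), or keeping that particular module in full precision, are common ways to handle custom ops that do not play well with autocast.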