Tensor type conflicts #8640
Unanswered
thistlillo
asked this question in
Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment
Dear @thistlillo, the behaviour is easiest to see with a small example:

```python
import torch
from torch.cuda.amp import autocast

x = torch.tensor(0, device="cuda").float()
y = torch.tensor(0, device="cuda").double()
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")

with autocast(enabled=True):
    # Careful: not all operations are covered by autocast.
    # torch.add is not on the list, so ordinary type promotion applies.
    z = torch.add(x, y)
    print(f"X dtype {x.dtype}")
    print(f"Y dtype {y.dtype}")
    print(f"Z dtype {z.dtype}")

    # torch.mm is on autocast's list of ops that should run in float16.
    # float32 inputs are cast automatically and the op produces float16
    # output; no manual casts are required.
    e_float16 = torch.mm(a_float32, b_float32)
    # float64 inputs, however, are left untouched: this runs in float64.
    e_float16_2 = torch.mm(a_float32.double(), b_float32.double())
    # autocast also handles mixed input types.
    f_float16 = torch.mm(d_float32, e_float16)
    # Reassigned here: with float64 inputs the result stays float64.
    e_float16 = torch.mm(a_float32.double(), b_float32.double())

    print(f"b_float32 dtype {b_float32.dtype}")
    print(f"f_float16 dtype {f_float16.dtype}")
    print(f"e_float16 dtype {e_float16.dtype}")
    print(f"f_float16 dtype {f_float16.dtype}")
```

This prints:

```
X dtype torch.float32
Y dtype torch.float64
Z dtype torch.float64
b_float32 dtype torch.float32
f_float16 dtype torch.float16
e_float16 dtype torch.float64
f_float16 dtype torch.float16
```
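The example above needs a CUDA device. As a sketch of the same point on CPU (assuming PyTorch ≥ 1.10, where `torch.autocast(device_type="cpu")` is available and defaults to bfloat16): float32 inputs to a listed op are cast down, while float64 inputs are left alone.

```python
import torch

a = torch.rand(4, 4)   # float32
b = torch.rand(4, 4)   # float32
d = a.double()         # float64

# CPU autocast casts float32 matmul inputs to bfloat16,
# but never touches float64 tensors.
with torch.autocast(device_type="cpu"):
    low = torch.mm(a, b)   # runs in bfloat16
    high = torch.mm(d, d)  # stays in float64

print(low.dtype)   # torch.bfloat16
print(high.dtype)  # torch.float64
```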
I get a type error when backpropagating the loss in my model. The reason is that I never specify the data type explicitly for any tensor/layer.
It is easy to solve, but it is not clear to me what will happen when I ask to run the model in mixed precision. I could not find any documentation on this, though I did search, so forgive me if the question is very basic. In mixed precision, what happens to tensors whose type has been explicitly fixed to float/double?
In this case, for example, it is enough to specify the data type of the target tensor as torch.float before providing it to the trainer. Will this simple solution still be OK when I run the model in mixed precision?
I know I will find out soon, but maybe there are modifications I can make now that would speed up future development.
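A minimal sketch of the fix described above (the tensor values here are hypothetical): cast the target to float32 before handing it over. Since autocast leaves float64 tensors untouched, a double-precision target would still cause a dtype mismatch under mixed precision, so the explicit cast remains safe either way.

```python
import torch

# A target that accidentally ended up in float64 (e.g. built from numpy).
targets = torch.tensor([0.0, 1.0, 1.0], dtype=torch.float64)

# Cast once, up front, to match the model's float32 parameters.
targets = targets.float()
print(targets.dtype)  # torch.float32
```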