Option to disable TF32 #12601
-
Hi, is there a way in the Trainer to disable TF32 on Ampere-architecture GPUs? It's motivated by this discussion: https://discuss.pytorch.org/t/numerical-error-on-a100-gpus/148032/2
Replies: 5 comments 1 reply
-
Hi @dnnspark! Does simply setting the flags in your script not work?

```python
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```
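For context, a minimal sketch of how these flags are typically applied: set them once at the top of the training script, before any CUDA work or Trainer is constructed. The `disable_tf32` helper name here is hypothetical, not a PyTorch or Lightning API.

```python
import torch


def disable_tf32() -> None:
    """Force full FP32 precision for matmuls and cuDNN convolutions.

    Hypothetical helper: call once at the top of the training script,
    before the Trainer (or any CUDA kernels) are created.
    """
    # Disable TF32 for cuBLAS matrix multiplications (Ampere+ GPUs).
    torch.backends.cuda.matmul.allow_tf32 = False
    # Disable TF32 for cuDNN convolutions.
    torch.backends.cudnn.allow_tf32 = False


disable_tf32()
```

These flags are process-global, so setting them once affects all subsequent CUDA math in the run.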
-
Hi @akihironitta, yup, that works. I wanted to check if there's a built-in way (e.g. a flag of the Trainer).
-
Hi @dnnspark, thanks for creating the issue and the question. From what I know, there is currently no such flag in the Trainer. Hope you're enjoying using PyTorch Lightning ⚡ ! :)
-
Just leaving my two cents here: when manually setting these flags, it might not work in every setup. Not sure how to properly do it either; for a start, a hardcoded list of properties may be sufficient.
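The "hardcoded list of properties" idea could be sketched as a small context manager that toggles a fixed list of TF32 flags and restores the previous values afterwards. This is an illustrative assumption, not part of PyTorch or Lightning; `tf32_disabled` and `_TF32_FLAGS` are hypothetical names.

```python
from contextlib import contextmanager

import torch

# Hardcoded list of (owner object, attribute name) pairs to toggle,
# as suggested above. Extend as new TF32 knobs appear.
_TF32_FLAGS = [
    (torch.backends.cuda.matmul, "allow_tf32"),
    (torch.backends.cudnn, "allow_tf32"),
]


@contextmanager
def tf32_disabled():
    """Temporarily disable TF32, restoring previous values on exit.

    Hypothetical sketch, not an official API.
    """
    previous = [getattr(owner, name) for owner, name in _TF32_FLAGS]
    try:
        for owner, name in _TF32_FLAGS:
            setattr(owner, name, False)
        yield
    finally:
        # Restore whatever the user had configured before entering.
        for (owner, name), value in zip(_TF32_FLAGS, previous):
            setattr(owner, name, value)
```

Restoring the prior values (rather than unconditionally re-enabling TF32) keeps the helper safe to nest inside user code that has its own precision settings.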
-
@Borda, just FYI: I think this should have been an issue. It's a lot easier to track there, especially if it surfaces a possible limitation :)