torch.set_grad_enabled fails to work in test_step #15326
-
I wrote the following code in test_step:

    with torch.set_grad_enabled(True):
        x = torch.zeros(4, requires_grad=True)
        print(x * 2)

but only got a regular tensor (without grad_fn): tensor([0., 0., 0., 0.]). This seems very weird. However, the same code works fine in validation_step, giving tensor([0., 0., 0., 0.], grad_fn=<MulBackward0>). I'm using the latest version, 1.7.7; note that 1.6.1 works fine without this problem. I think there were some changes to the test routine, but I can't find anything related in the changelog. Is this the expected behavior? Thanks!
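For reference, a minimal sketch of the setup I mean (the class name here is made up just for illustration; the snippet is the same as above):

    import torch
    import pytorch_lightning as pl

    class DebugModule(pl.LightningModule):  # hypothetical module, for illustration only
        def validation_step(self, batch, batch_idx):
            with torch.set_grad_enabled(True):
                x = torch.zeros(4, requires_grad=True)
                print(x * 2)  # 1.7.7: tensor([0., 0., 0., 0.], grad_fn=<MulBackward0>)

        def test_step(self, batch, batch_idx):
            with torch.set_grad_enabled(True):
                x = torch.zeros(4, requires_grad=True)
                print(x * 2)  # 1.7.7: tensor([0., 0., 0., 0.])  <- no grad_fn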
-
@bennyguo It may be related to #14497. With the Lightning 1.8 release, you will be able to disable PyTorch's inference mode (which Lightning enables by default):

    # default used by the Trainer
    trainer = Trainer(inference_mode=True)

    # use `torch.no_grad` instead
    trainer = Trainer(inference_mode=False)
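To see the difference in plain PyTorch (a minimal standalone sketch, assuming the test loop wraps each step in torch.inference_mode): torch.set_grad_enabled(True) cannot override inference mode, but it can override torch.no_grad:

    import torch

    with torch.inference_mode():             # what the test loop runs under by default
        with torch.set_grad_enabled(True):   # no effect: inference mode still applies
            x = torch.zeros(4, requires_grad=True)
            print(x * 2)                     # tensor([0., 0., 0., 0.])

    with torch.no_grad():                    # what inference_mode=False falls back to
        with torch.set_grad_enabled(True):   # grad recording is re-enabled here
            x = torch.zeros(4, requires_grad=True)
            print(x * 2)                     # tensor([0., 0., 0., 0.], grad_fn=<MulBackward0>)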
-
Thanks @akihironitta! This makes sense. Is there any workaround for the current stable release, or should I just stick to the older PL version temporarily?