Remove parameters from autograd backward hook #11835
-
Hello, I am trying to remove some layers from being synchronized by DDP. I spent the last 6 hours googling and found out that there's an attribute, `_ddp_params_and_buffers_to_ignore`, that can be used for this. Is there some simpler way to remove a layer / module from being synchronized between devices in the DDP strategy? Thank you in advance for any help!
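For reference, this is roughly what I mean, as a minimal plain-PyTorch sketch (the model and layer names are made up; as far as I can tell the attribute can be set via the private `_set_params_and_buffers_to_ignore_for_model` helper on `DistributedDataParallel`, or assigned directly):

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel


class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.frozen_head = nn.Linear(8, 2)  # should NOT be synchronized by DDP


model = ToyModel()

# fully-qualified names of the parameters to exclude from DDP synchronization
ignored = [name for name, _ in model.named_parameters()
           if name.startswith("frozen_head")]

# private API: stores the names on the module so DDP skips them when wrapping it
DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(model, ignored)
# equivalently: model._ddp_params_and_buffers_to_ignore = ignored
```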
-
hey @Honzys ! Can you confirm whether the list of param names you have set to be ignored is exactly the same as the names your model's parameters get when it is initialized inside the LightningModule?

```python
class LitModel(LightningModule):
    def __init__(self, ...):
        super().__init__()
        self.model = ...
        self.named_parameters()  # extract the key names from this and check that your ignore list matches them exactly
```

For reference, you can check out the test where this feature was integrated: https://github.com/pytorch/pytorch/pull/44826/files
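For illustration, a rough sketch of that check (the module layout and the ignore list here are just examples, not your code):

```python
import torch.nn as nn
from pytorch_lightning import LightningModule


class LitModel(LightningModule):
    def __init__(self):
        super().__init__()
        # hypothetical layout: the actual network lives under self.model
        self.model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))


lit = LitModel()

# keys from the LightningModule are prefixed with the attribute name,
# e.g. "model.0.weight"; the ignore list must use these exact names
param_names = [name for name, _ in lit.named_parameters()]
print(param_names)

ignored = ["model.1.weight", "model.1.bias"]  # hypothetical ignore list
missing = [n for n in ignored if n not in param_names]
assert not missing, f"names not found on the LightningModule: {missing}"
```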
-
Okay, it was my mistake, I deeply apologize for wasting your time there. The layer indeed gets removed from the DistributedDataParallel (or rather, it never even gets there). But I've found another error when trying to set the `_ddp_params_and_buffers_to_ignore` attribute inside the `LightningModule`, so I've created an issue here: #11844. Thank you anyway!
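For anyone landing here later, this is roughly what I was trying (a sketch with made-up names; as noted above, setting the attribute this way currently runs into the error tracked in #11844):

```python
import torch.nn as nn
from pytorch_lightning import LightningModule


class LitModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.model = nn.Linear(8, 2)
        # DDP reads this attribute from the module it wraps; under the DDP
        # strategy that module is the LightningModule itself, so the names
        # must be fully qualified relative to it
        self._ddp_params_and_buffers_to_ignore = ["model.bias"]
```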