Thanks for sharing the code.
I wanted to run the code on multiple GPUs, so I used torch.nn.DataParallel(). However, I found it hard to adapt the code, because you use F.conv2d(), which accepts the weight as a plain tensor argument, and I would need to replicate that tensor on all the GPUs. How can I get the gradients of these input tensors, and how can I ensure all the replicas share (or correctly accumulate into) the same underlying parameters?
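One common way to handle this (a hedged sketch, not the repository's actual code: the module name `ManualConv` and the shapes are made up for illustration) is to register the tensor passed to F.conv2d() as an nn.Parameter inside an nn.Module. nn.DataParallel then replicates the parameter to each GPU automatically during forward, and the backward pass accumulates the gradients back into the single original parameter on the source device, so nothing needs to be duplicated or synchronized by hand:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManualConv(nn.Module):
    """Calls F.conv2d with an explicitly registered weight parameter."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        # Registering the tensor as an nn.Parameter lets DataParallel
        # scatter a replica to each GPU and reduce gradients back to
        # the original parameter on the source device.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=1)

model = ManualConv(3, 8, 3)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()

out = model(torch.randn(4, 3, 16, 16))
out.sum().backward()

# The gradient lives on the single underlying parameter; with
# DataParallel the real module sits behind the .module attribute.
base = model.module if isinstance(model, nn.DataParallel) else model
print(base.weight.grad.shape)
```

In short: keep the tensors as registered parameters of a module and let DataParallel manage replication; the replicas are discarded after each forward, and only the original parameter accumulates .grad.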