About using DataParallel #3

@czb2133

Description

Thanks for sharing the code.
I wanted to run the code on multiple GPUs, so I used torch.nn.DataParallel(). However, I found it hard to adapt the code, because you use F.conv2d(), which accepts plain tensors as its weight arguments, and I would need to duplicate those tensors on all the GPUs. How can I get the gradients of these input tensors, and how can I make sure all the replicas share the same underlying parameters?
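One way to address this (a sketch, not the repo author's code — the class name and shapes below are illustrative assumptions): wrap the functional call in an nn.Module and register the tensor as an nn.Parameter. DataParallel then replicates the parameter to each GPU on every forward pass and reduces the gradients back onto the source device, so there is a single shared copy of the weights and `.grad` is populated after `backward()`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FunctionalConv(nn.Module):
    """Illustrative module wrapping F.conv2d with an explicit weight tensor."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        # Registering the tensor as a Parameter (rather than passing a plain
        # tensor into F.conv2d) lets DataParallel replicate it to every GPU
        # and accumulate its gradient on the source (default) device.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=1)

model = FunctionalConv(3, 8, 3)
if torch.cuda.device_count() > 1:
    # Replicas are views of the same source parameters; per-replica grads
    # are summed onto device 0 during backward.
    model = nn.DataParallel(model).cuda()

x = torch.randn(4, 3, 16, 16)
if torch.cuda.is_available() and isinstance(model, nn.DataParallel):
    x = x.cuda()
out = model(x)
out.sum().backward()

# The gradient lives on the underlying module's parameter:
base = model.module if isinstance(model, nn.DataParallel) else model
print(base.weight.grad.shape)  # torch.Size([8, 3, 3, 3])
```

This runs unchanged on a single device (DataParallel is simply skipped), and on multiple GPUs the shared `base.weight` receives the combined gradient.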
