Dataloader pickle torchio #10898
I am trying to get multi-GPU training working; on a single GPU it all works fine. However, when I increase the number of GPUs I get a pickling error, and I don't know what to do about it. For the dataloader I am using the patch-based approach from TorchIO, which creates a Queue; maybe that is the cause? Does anyone have experience with the TorchIO Queue and Lightning multi-GPU? Or is something else going on? The error I am getting is as follows:
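For context, a minimal sketch of the patch-based TorchIO pipeline described above might look like this (file paths and all parameter values are placeholders, not taken from the original question):

```python
import torch
import torchio as tio

# Placeholder subject list; real paths would point to your own volumes.
subjects = [
    tio.Subject(
        image=tio.ScalarImage('subject1_t1.nii.gz'),
        label=tio.LabelMap('subject1_seg.nii.gz'),
    ),
]
subjects_dataset = tio.SubjectsDataset(subjects)

# The Queue extracts patches from whole volumes in background workers.
patches_queue = tio.Queue(
    subjects_dataset,
    max_length=300,
    samples_per_volume=10,
    sampler=tio.data.UniformSampler(patch_size=64),
    num_workers=4,
)

# The DataLoader wrapping the Queue must use num_workers=0,
# because the Queue already handles its own multiprocessing.
patches_loader = torch.utils.data.DataLoader(
    patches_queue,
    batch_size=16,
    num_workers=0,
)
```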
How are you launching your training? Can you try setting
strategy='ddp'
instead of the default 'ddp_spawn' for multiple GPUs? That works for me, and unlike ddp_spawn it does not pickle your dataloader.
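A minimal sketch of the suggested change, assuming the usual Lightning `Trainer` setup (the model and dataloader names are placeholders):

```python
import pytorch_lightning as pl

# strategy='ddp' launches one process per GPU via subprocesses,
# so the dataloader (and the TorchIO Queue inside it) is never
# pickled, unlike the default 'ddp_spawn' for multiple GPUs.
trainer = pl.Trainer(
    gpus=2,          # on newer versions: accelerator='gpu', devices=2
    strategy='ddp',
)

# trainer.fit(model, train_dataloaders=patches_loader)
```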