Replies: 1 comment
-
Hi @chaoscls, the typical usage is to accelerate lightweight preprocessing (usually with all the deterministic transforms cached and no IO operations), because it runs the preprocessing in a separate thread, which avoids the unnecessary inter-process communication (IPC) between the multiple workers of DataLoader. Also, CUDA may not work well with DataLoader's multiprocessing. Hope this helps, thanks!
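As a rough illustration of the idea (a minimal sketch, not MONAI's actual `ThreadDataLoader` implementation), a thread-based loader fetches items in a background thread and hands them to the consumer through a bounded buffer queue, so preprocessing overlaps with the training loop without spawning worker processes:

```python
import queue
import threading

class SimpleThreadLoader:
    """Minimal sketch of a thread-based loader: one background thread
    fetches items and puts them into a bounded buffer queue, so the
    consumer overlaps with preprocessing without any IPC.
    (Illustrative only; not MONAI's ThreadDataLoader.)"""

    _SENTINEL = object()  # marks the end of iteration

    def __init__(self, dataset, buffer_size=2):
        self.dataset = dataset
        self.buffer_size = buffer_size

    def __iter__(self):
        buf = queue.Queue(maxsize=self.buffer_size)

        def producer():
            for item in self.dataset:
                buf.put(item)          # blocks while the buffer is full
            buf.put(self._SENTINEL)

        threading.Thread(target=producer, daemon=True).start()
        while (item := buf.get()) is not self._SENTINEL:
            yield item

# The consumer simply iterates; fetching happens in the background thread.
loader = SimpleThreadLoader(range(5), buffer_size=2)
print(list(loader))  # -> [0, 1, 2, 3, 4]
```

Because everything stays in one process, transforms that touch CUDA tensors can run here safely, which is one reason this pattern pairs well with cached deterministic preprocessing.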
-
I think ThreadDataloader just uses another thread to fetch data and put it into a buffer queue. I don't know whether that really helps, or whether ThreadDataloader is actually faster, because I found a similar buffer queue in PyTorch's Dataloader (https://github.com/pytorch/pytorch/blob/f4228e7037c614ed10b4d5c6fbd1467999e8d3e9/torch/utils/data/dataloader.py#LL1015C4-L1015C4). So I'm really confused about when I should use ThreadDataloader?