iterations/gpu don't scale when using custom sampler #3716

All you need here is a custom sampler plus setting replace_sampler_ddp=False in the Trainer. WeightedRandomSampler simply draws samples from the dataset according to weights you define, similar to how boosting algorithms sample. For your use case you need some kind of DistributedBalancedSampler that can do either oversampling or undersampling. There are ongoing discussions that may cover this in the future, but for now neither Lightning nor PyTorch ships such a sampler. In the meantime, a custom sampler along these lines might help.
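A minimal sketch of what such a sampler could look like. The class name DistributedBalancedSampler and its constructor arguments are assumptions, not an existing API; in real use it would subclass torch.utils.data.Sampler and draw with torch.multinomial, but plain Python's random.choices stands in here so the logic is self-contained:

```python
import random

class DistributedBalancedSampler:
    """Hypothetical sketch: each rank draws num_samples // world_size
    indices, weighted by per-sample weights (oversampling rare classes
    by giving them larger weights)."""

    def __init__(self, weights, num_samples, rank, world_size, seed=0):
        self.weights = weights          # one weight per dataset index
        self.num_per_rank = num_samples // world_size
        self.rank = rank
        self.world_size = world_size
        self.seed = seed
        self.epoch = 0

    def set_epoch(self, epoch):
        # Call once per epoch so every rank reshuffles deterministically
        # from the same seed and stays in sync.
        self.epoch = epoch

    def __iter__(self):
        rng = random.Random(self.seed + self.epoch)
        # Draw the whole epoch's indices identically on all ranks,
        # then keep every world_size-th index starting at this rank,
        # so ranks see disjoint slices of the same weighted draw.
        all_indices = rng.choices(
            range(len(self.weights)),
            weights=self.weights,
            k=self.num_per_rank * self.world_size,
        )
        return iter(all_indices[self.rank :: self.world_size])

    def __len__(self):
        return self.num_per_rank
```

You would then pass it to your DataLoader (sampler=DistributedBalancedSampler(...)) and construct the Trainer with replace_sampler_ddp=False so Lightning does not swap it out for its own DistributedSampler.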

Answer selected by Borda
This discussion was converted from issue #3716 on December 23, 2020 19:50.