Seed NumPy using `np.random.SeedSequence()` in `pl_worker_init_function()` to robustly seed NumPy-dependent dataloader workers #20369
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##           master   #20369   +/-   ##
=======================================
  Coverage      88%      88%
=======================================
  Files         267      267
  Lines       23203    23203
=======================================
  Hits        20313    20313
  Misses       2890     2890
Nice catch! Can you please add a test that verifies the desired behavior so we know it won't regress in the future?
@amorehead Does this mean that previously `seed_everything()` was not correctly seeding NumPy in the dataloader workers?
@adosar, if your multiprocessing dataloaders (plural) use NumPy's random module for dataset index sampling, then yes, until this PR was merged in, Lightning was not handling random seed definitions correctly.
@amorehead If I understand correctly, this affects any dataset whose dataloader uses NumPy's `random` module?
@adosar, yes, any dataset loaded through multi-process dataloader workers that rely on NumPy's `random` module was affected.
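For concreteness, here is a minimal, hypothetical sketch of the kind of dataset this discussion is about; the class name and cropping logic are made up, but any `__getitem__` (or sampler) that draws from NumPy's global random state inside a worker process depends on the per-worker NumPy seed set by `pl_worker_init_function()`:

```python
import numpy as np
from torch.utils.data import Dataset, DataLoader


class RandomCropDataset(Dataset):
    """Hypothetical example: __getitem__ draws from NumPy's global RNG."""

    def __init__(self, data, crop_size):
        self.data = data
        self.crop_size = crop_size

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        sample = self.data[idx]
        # The NumPy state used here is whatever pl_worker_init_function()
        # seeded in this worker process, so it should change whenever the
        # user passes a different seed to seed_everything(..., workers=True).
        start = np.random.randint(0, len(sample) - self.crop_size + 1)
        return sample[start : start + self.crop_size]


# With num_workers > 0, each worker's NumPy seed is derived in
# pl_worker_init_function(); before this PR, small changes to the base seed
# could leave that derived seed unchanged.
# loader = DataLoader(RandomCropDataset(data, crop_size=16), num_workers=4)
```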
What does this PR do?
Fixes an issue where small changes to a user's random seed (specified via `seed_everything()`, e.g., incrementing `seed` by 1) would not result in a new (unique) random seed for NumPy (set within `pl_worker_init_function()`). This could cause multi-process dataloader workers to not change their random state based on a user's desired seed set via `seed_everything()` before training. In other words, if users were relying on `seed_everything()` to change, e.g., the order of the indices sampled by each dataloader worker (if their dataloader(s) use NumPy's `random` module for index sampling), this would not work until now.

Below is an example of a `seed_sequence` that previously would not have generated a distinct random seed for NumPy. For example, `7768447584330995212 & 0xFFFFFFFF` and `13249712275147347468 & 0xFFFFFFFF` yield the same seed even though the two numbers were generated using different `base_seed`s. To reproduce more of these issues, set `base_seed=42, worker_id=0, global_rank=0` within `seed_sequence = _generate_seed_sequence(base_seed, worker_id, global_rank, count=4)` and then try incrementing or decrementing `base_seed`. Most of these small changes to `base_seed` yield the same random seed for NumPy.
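As an illustration (not necessarily the exact code merged in this PR), the snippet below first reproduces the 32-bit truncation collision using the two values quoted above, and then sketches how deriving the NumPy seed with `np.random.SeedSequence()` over `(base_seed, worker_id, global_rank)` avoids it; the helper name `numpy_seed_for_worker` is made up for this example.

```python
import numpy as np

# The two 64-bit values quoted above collapse to the same 32-bit NumPy seed
# once truncated with a bit mask, even though they came from different base_seeds.
a = 7768447584330995212
b = 13249712275147347468
assert (a & 0xFFFFFFFF) == (b & 0xFFFFFFFF)


def numpy_seed_for_worker(base_seed: int, worker_id: int, global_rank: int) -> np.ndarray:
    """Hypothetical helper: derive a per-worker NumPy seed via np.random.SeedSequence()."""
    # SeedSequence mixes the full entropy of all three inputs, so nearby
    # base_seeds no longer collapse onto the same NumPy seed.
    ss = np.random.SeedSequence([base_seed, worker_id, global_rank])
    return ss.generate_state(4)  # four uint32 words, accepted by np.random.seed()


# Incrementing base_seed by 1 now yields a different derived seed:
print(numpy_seed_for_worker(42, worker_id=0, global_rank=0))
print(numpy_seed_for_worker(43, worker_id=0, global_rank=0))

# Seed NumPy's global RNG in the worker with the derived state.
np.random.seed(numpy_seed_for_worker(42, worker_id=0, global_rank=0))
```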
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:
Reviewer checklist
📚 Documentation preview 📚: https://pytorch-lightning--20369.org.readthedocs.build/en/20369/