Make sharding plan explicit in torchrec dlrm example #243
SungMinCho wants to merge 1 commit into facebookresearch:main
Conversation
In the previous code, DistributedModelParallel was responsible for creating the sharding plan for DLRM. It relied on hard-coded constants to build the topology used for planning (e.g., batch size = 512, HBM capacity = 32 GB, and so on). This was problematic because it did not reflect the true system topology. In addition, testing different constraints for sharding types and compute kernels was inconvenient in the previous code. Thus, we now create the sharding plan for DLRM explicitly before constructing DMP and provide some simple options to make life easier.
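For reference, the change looks roughly like the sketch below: a plan is built with torchrec's `EmbeddingShardingPlanner` from a `Topology` describing the actual hardware, optionally restricted by `ParameterConstraints`, and then passed to `DistributedModelParallel`. This is a minimal illustration rather than the exact diff; the helper name `shard_dlrm`, the table name `"t_cat_0"`, and the exact placement of arguments such as `batch_size` and `hbm_cap` are assumptions and may differ between torchrec versions.

```python
import torch
import torch.distributed as dist

from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.model_parallel import DistributedModelParallel
from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology
from torchrec.distributed.planner.types import ParameterConstraints
from torchrec.distributed.types import ShardingType


def shard_dlrm(model: torch.nn.Module, device: torch.device, batch_size: int):
    # Describe the real system instead of relying on the planner's defaults.
    topology = Topology(
        world_size=dist.get_world_size(),
        local_world_size=torch.cuda.device_count(),
        compute_device=device.type,
        hbm_cap=torch.cuda.get_device_properties(device).total_memory,
    )

    # Optional per-table constraints; "t_cat_0" is a hypothetical table name.
    constraints = {
        "t_cat_0": ParameterConstraints(
            sharding_types=[ShardingType.TABLE_WISE.value],
        ),
    }

    planner = EmbeddingShardingPlanner(
        topology=topology,
        batch_size=batch_size,  # the batch size actually used in training
        constraints=constraints,
    )

    # Plan collectively so every rank agrees on the same sharding plan.
    plan = planner.collective_plan(
        model, [EmbeddingBagCollectionSharder()], dist.GroupMember.WORLD
    )

    # Pass the explicit plan to DMP instead of letting it plan implicitly.
    return DistributedModelParallel(module=model, device=device, plan=plan)
```

Keeping the plan outside DMP also makes it easy to expose command-line options for the sharding types and compute kernels used in the constraints.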
Hi @SungMinCho! Thank you for your pull request and welcome to our community.

**Action Required:** In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

**Process:** In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged. If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!