How to call torch.distributed.get_rank() in model building phase #12017
I implemented PyTorch Lightning-based training roughly as follows: `dm = build_datamodule(config)`, then `trainer = Trainer(...)`, then `trainer.fit(model, dm)`. In this situation, in order to set different model parameters for each GPU process, `torch.distributed.get_rank()` must be called at the model-building stage, i.e. before `trainer.fit()` is reached. How can this be done?
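To make the failure mode concrete, here is a minimal, self-contained check (independent of the snippet above): in the main script, before `Trainer.fit()` has launched and initialized the DDP workers, no default process group exists, so the rank cannot be queried while the model is being constructed.

```python
# Minimal illustration: before Trainer.fit() has set up DDP,
# torch.distributed has no default process group in the main script.
import torch.distributed as dist

print(dist.is_initialized())  # False at model-building time
# dist.get_rank()  # would raise a RuntimeError about the default process
#                  # group not being initialized
```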
Replies: 1 comment
Hello, you are right that this currently isn't supported. I am working on adding this feature as part of this issue: #11922
Could you confirm that the issue and proposed solution meet your needs?
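Until that lands, one possible interim pattern (not the feature tracked in #11922, and only a sketch) is to defer the rank-dependent construction to `LightningModule.setup()`, which with the DDP strategies runs in every process after the distributed environment has been initialized, so the rank is already known there. The per-rank hidden sizes below are purely illustrative.

```python
# Sketch of a possible workaround, assuming setup() runs after the process
# group is initialized (the case for the DDP strategies): build the
# rank-dependent parts there instead of in __init__.
from torch import nn
from pytorch_lightning import LightningModule


class RankAwareModel(LightningModule):
    def __init__(self, hidden_sizes=(64, 128)):
        super().__init__()
        self.hidden_sizes = hidden_sizes  # illustrative per-rank choices
        self.layer = None  # created in setup(), once the rank is known

    def setup(self, stage=None):
        # self.global_rank matches torch.distributed.get_rank() under DDP
        hidden = self.hidden_sizes[self.global_rank % len(self.hidden_sizes)]
        self.layer = nn.Linear(32, hidden)

    def forward(self, x):
        return self.layer(x)
```

Note that vanilla DDP expects identical parameters on every rank (it broadcasts the rank-0 state and all-reduces gradients), so whether per-rank differences like this are safe depends on how gradients are, or are not, synchronized in your setup.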