Multi-GPU resume error #12458
Unanswered
minhoooo1
asked this question in DDP / multi-GPU / multi-node
Replies: 2 comments
-
@minhoooo1 did you find the solution?
-
Nope, I haven't paid attention to this issue in a long time.
-
When I use multiple GPUs to resume training, the following error is raised: `Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!` Maybe I should change `map_location`, but where can I change it in pytorch-lightning?
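For anyone hitting the same error, here is a minimal sketch of two ways to control where checkpoint tensors land. The model class and checkpoint path are hypothetical placeholders, and the API names match recent pytorch-lightning releases, so check against your installed version:

```python
import pytorch_lightning as pl

from my_project import MyLightningModule  # hypothetical model class

# Option 1: load only the model weights, mapping every tensor to CPU first
# so nothing stays pinned to the GPU it was saved from.
# `load_from_checkpoint` forwards `map_location` to `torch.load`.
model = MyLightningModule.load_from_checkpoint(
    "checkpoints/last.ckpt",  # hypothetical path
    map_location="cpu",
)

# Option 2: to resume the full training state (optimizer, epoch, etc.),
# pass the checkpoint path to `fit`; the Trainer then restores tensors
# onto each process's own device under DDP.
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp")
trainer.fit(model, ckpt_path="checkpoints/last.ckpt")
```

Note this is only a sketch of the usual workaround for the `cuda:0` / `cuda:1` mismatch, not a confirmed fix from the maintainers; older Lightning versions used `Trainer(resume_from_checkpoint=...)` instead of the `ckpt_path` argument.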