First of all, many thanks to the authors for generously open-sourcing this work. And thank you in advance for clearing up my confusion!
Answered by a710128, Dec 8, 2022
With multiple GPUs, you need to pay attention to the communication bandwidth between the GPUs. For multi-machine training, you additionally need to watch the network bandwidth between the machines.
Answer selected by hmzo
If multi-GPU training is noticeably slower than single-GPU training, it is very likely that the communication bandwidth between the two GPUs is too low, so inter-GPU communication has become the bottleneck of training.
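To make the bandwidth bottleneck concrete, here is a back-of-the-envelope sketch of the per-step communication cost of a ring all-reduce. The gradient size, GPU count, and bandwidth figures below are illustrative assumptions, not numbers from this discussion:

```python
def allreduce_time_s(grad_bytes: float, n_gpus: int, bandwidth_bytes_per_s: float) -> float:
    """Lower bound on the time for one ring all-reduce of the gradients.

    In a ring all-reduce, each GPU sends (and receives) roughly
    2 * (n - 1) / n * grad_bytes over the interconnect.
    """
    traffic = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    return traffic / bandwidth_bytes_per_s

GB = 1e9
grad_bytes = 2 * GB  # e.g. a 1B-parameter model with fp16 gradients (assumption)

# ~12 GB/s effective over PCIe 3.0 x16 vs. ~250 GB/s over NVLink (rough figures)
t_pcie = allreduce_time_s(grad_bytes, n_gpus=2, bandwidth_bytes_per_s=12 * GB)
t_nvlink = allreduce_time_s(grad_bytes, n_gpus=2, bandwidth_bytes_per_s=250 * GB)

print(f"PCIe:   {t_pcie * 1000:.1f} ms of communication per step")
print(f"NVLink: {t_nvlink * 1000:.1f} ms of communication per step")
```

Under these assumptions the low-bandwidth link adds on the order of 100+ ms of communication to every step, which can easily exceed the compute time saved by the second GPU. You can inspect how your GPUs are actually connected with `nvidia-smi topo -m`.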