What is the NUMA situation for 4090 P2P? #717
Unanswered · scientific-coder asked this question in Q&A
The next tinybox (tinybox pro) will have 8 GPUs and 2 CPUs, so I'm wondering about the P2P situation, presuming that each CPU will have its own PCIe root complex.
On one hand, 8 GPUs is better than 6; on the other, I'm worried that for training tasks that six 4090s can handle but four cannot, the tinybox pro could end up less performant than the tinybox.
Surprisingly, it seems that P2P can work across NUMA nodes, but I wonder whether this driver supports it and what the performance implications for training would be.
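For context, one way to see which GPU pairs would cross NUMA nodes is `nvidia-smi topo -m`: in its legend, "SYS" means the path traverses the SMP interconnect between CPU sockets. A minimal sketch of scanning such a matrix for cross-NUMA pairs (the 4-GPU sample layout below is hypothetical, not tinybox pro's actual topology):

```python
# Sketch: find GPU pairs whose interconnect is "SYS" in a
# `nvidia-smi topo -m`-style matrix, i.e. pairs whose P2P traffic
# would cross the inter-socket (NUMA) link.

def cross_numa_pairs(topo_text):
    rows = [line.split() for line in topo_text.strip().splitlines()]
    header = rows[0]  # column labels: GPU0 GPU1 ...
    pairs = []
    for row in rows[1:]:
        src, links = row[0], row[1:]
        if not src.startswith("GPU"):
            continue
        for dst, link in zip(header, links):
            # src < dst keeps each unordered pair once
            if dst.startswith("GPU") and link == "SYS" and src < dst:
                pairs.append((src, dst))
    return pairs

# Hypothetical 4-GPU, 2-socket matrix for illustration:
sample = """\
     GPU0 GPU1 GPU2 GPU3
GPU0 X    PHB  SYS  SYS
GPU1 PHB  X    SYS  SYS
GPU2 SYS  SYS  X    PHB
GPU3 SYS  SYS  PHB  X
"""
print(cross_numa_pairs(sample))
# Every GPU0/GPU1 <-> GPU2/GPU3 pair crosses sockets here.
```

On a machine like the one described, running this over the real `nvidia-smi topo -m` output would show which pairs a 5- or 6-GPU training job forces across the socket link.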
Could a clarification about the NUMA situation be added to the documentation?
Thanks!