Replies: 1 comment
-
The idea is that you would have a loader for every GPU you are training on. Here are two examples:
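Below is a minimal sketch of that per-GPU-loader pattern (it is not the code from the examples linked in the reply): one process per GPU via `DistributedDataParallel`, each process building its own `NeighborLoader` over its shard of the training nodes. The `GraphSAGE` model, the Cora dataset, and all hyperparameters are placeholders chosen just to make the snippet self-contained.

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import GraphSAGE


def run(rank, world_size, data, train_idx):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    device = torch.device(f'cuda:{rank}')

    # Each GPU gets its own slice of the training nodes and its own loader.
    local_idx = train_idx.chunk(world_size)[rank]
    train_loader = NeighborLoader(
        data,
        num_neighbors=[25, 10],
        input_nodes=local_idx,
        batch_size=1024,
        shuffle=True,
    )

    num_classes = int(data.y.max()) + 1
    model = GraphSAGE(data.num_features, hidden_channels=64,
                      num_layers=2, out_channels=num_classes).to(device)
    model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

    for epoch in range(1, 11):
        model.train()
        for batch in train_loader:
            batch = batch.to(device)
            optimizer.zero_grad()
            out = model(batch.x, batch.edge_index)
            # Only the seed nodes (the first `batch_size` rows) carry the loss.
            loss = F.cross_entropy(out[:batch.batch_size],
                                   batch.y[:batch.batch_size])
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == '__main__':
    # Small stand-in dataset; replace with your own single large graph.
    dataset = Planetoid(root='data/Planetoid', name='Cora')
    data = dataset[0]
    train_idx = data.train_mask.nonzero(as_tuple=False).view(-1)
    world_size = torch.cuda.device_count()
    mp.spawn(run, args=(world_size, data, train_idx),
             nprocs=world_size, join=True)
```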
-
I am operating on a single graph, so I first split the graph into partitions using NeighborLoader. Next, I want to know how to use multiple GPUs. Here is my code. How can I modify it to run on multiple GPUs? And do I also need to use a NeighborLoader for my test_loader? When I evaluate the model via the test_loader (on the single full graph), I worry it may cause an out-of-memory error. Thanks a lot.
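For the test_loader concern, one common option (a sketch under assumptions, not necessarily what the linked examples do) is to evaluate in mini-batches with a NeighborLoader as well, so the full graph never has to pass through the model in one forward step. Here `model` is assumed to be the trained (unwrapped) model, and `data` and `test_idx` to come from the same setup as above.

```python
import torch
from torch_geometric.loader import NeighborLoader


@torch.no_grad()
def evaluate(model, data, test_idx, device):
    model.eval()
    # Sample neighborhoods only around the test nodes instead of running
    # the whole graph through the model at once.
    test_loader = NeighborLoader(
        data,
        num_neighbors=[25, 10],  # use [-1, -1] for full, un-sampled neighborhoods
        input_nodes=test_idx,
        batch_size=1024,
        shuffle=False,
    )
    correct = total = 0
    for batch in test_loader:
        batch = batch.to(device)
        out = model(batch.x, batch.edge_index)
        pred = out[:batch.batch_size].argmax(dim=-1)
        correct += int((pred == batch.y[:batch.batch_size]).sum())
        total += batch.batch_size
    return correct / total
```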