Hello.
I've been experimenting with PyTorch Geometric's GCNConv layer on CPUs, and it seems like it does not leverage much parallelism when given multiple threads.
Here is the setup I use: 3 GCNConv layers back to back with a hidden feature size of 2000.
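Roughly, the model looks like this (a minimal sketch; the `in_channels`/`out_channels` values and class name are placeholders, and only the three stacked `GCNConv` layers with a hidden size of 2000 reflect what I described):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """3 GCNConv layers back to back with a hidden feature size of 2000."""
    def __init__(self, in_channels, out_channels, hidden_channels=2000):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        self.conv3 = GCNConv(hidden_channels, out_channels)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.conv3(x, edge_index)
```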
If I run this with 1 thread (confirmed with `torch.get_num_threads()`), the forward pass takes 68 seconds.
The data batch printout shows the number of rows and edges in the batch, for reference.
If I run with 32 threads (the number of cores on the machine I am running on), the time improves, but not by much (note that the data batch is different since sampling occurs randomly, but it's in the same ballpark).
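For reference, the thread setup and timing amount to something like this (a sketch; `model` is the module above, and `batch` stands in for a mini-batch produced by my sampler, e.g. a `torch_geometric.loader.NeighborLoader`):

```python
import time
import torch

torch.set_num_threads(32)       # 1 for the single-thread run
print(torch.get_num_threads())  # confirm the effective thread count

model.eval()
with torch.no_grad():
    print(batch)                # the data batch print with rows/edges
    start = time.time()
    out = model(batch.x, batch.edge_index)
    print(f"forward pass: {time.time() - start:.1f} s")
```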
How much parallelism is expected from the GCNConv layer? Going from 1 thread to 32 threads doesn't improve the runtime much, and from my observations with `top`, most of the time seems to be spent not using all the allocated threads (i.e. there doesn't seem to be much parallel execution going on).
Thank you.