Hi there! I want to train a node embedding for each node, so I'm following the example here: https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_sage_unsup.py. The only things I changed are the data and the `batch_size` of the `LinkNeighborLoader`, which I increased from 256 to 1024. I moved the data and the model to my GPU (Tesla T4) for training, but it's still slow: each batch takes around 10 seconds, and I have 17331 batches in my training data. I also found that the training process only uses about 1 GB of GPU memory, and GPU utilization is around 1%.
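For reference, this is roughly the setup described above, following the structure of the linked graph_sage_unsup.py example. It is a minimal sketch, not my exact script: the dataset is a stand-in (the real data is my own), and the `num_neighbors`, `num_workers`, and model sizes are assumptions.

```python
import torch
from torch_geometric.datasets import Planetoid  # stand-in dataset; I use my own data
from torch_geometric.loader import LinkNeighborLoader
from torch_geometric.nn import GraphSAGE

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

data = Planetoid('/tmp/Cora', name='Cora')[0]

# Same loader as in the example, with batch_size raised from 256 to 1024.
# num_neighbors / num_workers values here are assumptions, not from my script.
train_loader = LinkNeighborLoader(
    data,
    batch_size=1024,
    shuffle=True,
    neg_sampling_ratio=1.0,
    num_neighbors=[10, 10],
    num_workers=6,
    persistent_workers=True,
)

model = GraphSAGE(
    data.num_node_features,
    hidden_channels=64,
    num_layers=2,
).to(device)

for batch in train_loader:
    batch = batch.to(device)  # data and model are both moved to the GPU
    # ... forward pass / loss / backward as in the example ...
```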
Replies:
I increased the `batch_size` in the `LinkNeighborLoader` to 1048576, and it's fast now.
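The very low GPU utilization suggests the per-batch overhead (CPU-side neighbor sampling and host-to-device transfer) was dominating, so a much larger batch gives the GPU more work per iteration. The change amounts to a single argument; a minimal sketch, assuming the same loader setup as above:

```python
# Only the batch size changes relative to the example; other arguments
# (num_neighbors, neg_sampling_ratio, etc.) stay as before and are assumptions here.
train_loader = LinkNeighborLoader(
    data,
    batch_size=1048576,  # was 256 in the example, 1024 in my first attempt
    shuffle=True,
    neg_sampling_ratio=1.0,
    num_neighbors=[10, 10],
)
```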