NotImplementedError #8390
Unanswered
onlyonewater asked this question in Q&A
Hi, PyG team, I am new to the PyG package. When I run my code, I get the following error:
File "/mnt/workspace/duhao/.conda/envs/diffdock_pp/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
File "/mnt/workspace/duhao/.conda/envs/diffdock_pp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "/mnt/workspace/duhao/.conda/envs/diffdock_pp/lib/python3.10/site-packages/torch_geometric/nn/data_parallel.py", line 69, in forward
    inputs = self.scatter(data_list, self.device_ids)
File "/mnt/workspace/duhao/.conda/envs/diffdock_pp/lib/python3.10/site-packages/torch_geometric/nn/data_parallel.py", line 77, in scatter
    count = torch.tensor([data.num_nodes for data in data_list])
File "/mnt/workspace/duhao/.conda/envs/diffdock_pp/lib/python3.10/site-packages/torch_geometric/data/feature_store.py", line 515, in __iter__
    raise NotImplementedError
I do not know why this happens, since I am using `DataParallel` with a batch size of 4 on 4 GPUs.