🐛 Describe the bug
The RandomNodeSplit transform only seems to handle homogeneous graphs (i.e. Data), not heterogeneous graphs (i.e. HeteroData). It may also just be a documentation issue; I followed the data split guide.
In my tests, RandomNodeSplit fails on the MovieLens1M and OGB_MAG datasets (which are heterogeneous) but works flawlessly on Amazon (a homogeneous graph).
The error message:
Traceback (most recent call last):
  File "/home/devbox/PycharmProjects/torchgeometrics-bug/movie_test.py", line 41, in <module>
    node_splits.train_mask
  File "/home/devbox/PycharmProjects/torchgeometrics-bug/.venv/lib/python3.13/site-packages/torch_geometric/data/hetero_data.py", line 162, in __getattr__
    raise AttributeError(f"'{self.__class__.__name__}' has no "
                         f"attribute '{key}'")
AttributeError: 'HeteroData' has no attribute 'train_mask'
Example Code:
from torch_geometric.transforms import RandomNodeSplit
from torch_geometric.datasets import MovieLens1M, OGB_MAG, Amazon
from torch_geometric.data.data import DataEdgeAttr, DataTensorAttr
from torch_geometric.data.storage import GlobalStorage
from torch.serialization import add_safe_globals

# Allowlist PyG classes so the dataset files can be loaded under
# PyTorch's safe deserialization.
add_safe_globals([
    DataEdgeAttr,
    DataTensorAttr,
    GlobalStorage,
])

# works with RandomNodeSplit (homogeneous)
# dataset = Amazon(root="dataset", name="Computers")

# fails with RandomNodeSplit (heterogeneous)
# dataset = OGB_MAG(root="dataset", preprocess="TransE")

# fails with RandomNodeSplit (heterogeneous)
dataset = MovieLens1M(root="./dataset/")
data = dataset[0]

node_transform = RandomNodeSplit(
    split="random",
    num_train_per_class=10_000,
    num_val=0.3,
    num_test=0.2,
)
node_splits = node_transform(data)
node_splits.train_mask  # raises AttributeError on HeteroData
pyproject.toml for reproduction:
[project]
name = "torchgeometrics-bug"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "ogb>=1.3.6",
    "torch==2.8.0",
    "torch-geometric>=2.7.0",
    "torch-scatter>=2.1.2",
    "torch-sparse>=0.6.18",
]
Versions
Collecting environment information...
PyTorch version: 2.8.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04.1) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.5 (main, Jul 23 2025, 00:37:22) [Clang 20.1.4 ] (64-bit runtime)
Python platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 580.126.09
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 60%
CPU max MHz: 4572.0000
CPU min MHz: 550.0000
BogoMIPS: 7585.67
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsa: Not affected
Vulnerability Tsx async abort: Not affected
Vulnerability Vmscape: Mitigation; IBPB before exit to userspace
Versions of relevant libraries:
[pip3] Could not collect
[conda] _anaconda_depends 2024.10 py312_mkl_0
[conda] blas 1.0 mkl
[conda] intel-openmp 2023.1.0 hdb19cb5_46306
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.10 py312h5eee18b_0
[conda] mkl_random 1.2.7 py312h526ad5a_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] numpydoc 1.7.0 py312h06a4308_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.26.2 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] tbb 2021.8.0 hdb19cb5_0
[conda] torch 2.7.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt28cu128 pypi_0 pypi
[conda] torch-sparse 0.6.18 pypi_0 pypi
[conda] triton 3.3.1 pypi_0 pypi