-
I'm new to graph models and am working on a problem in energy system prediction. I model the system as a graph, with each node representing a certain component. I can place sensors on some nodes but not all, so in real life I only know the behavior of a subset of nodes through sensor measurements. My goal is to train a (static) graph model that reads in the sensor measurements of that subset of nodes and predicts the behavior of all nodes. To generate training data, I ran a few hundred simulations and obtained the behavior of all nodes under various conditions. I want to train on these results so the model can be applied to unseen conditions, but I haven't found an example similar to my problem.

I read the Colab tutorial for node classification, where `train_mask` is defined so that the model reads in the features of all nodes and computes the loss only on the labels of a subset of nodes. That looks like the opposite of my case, and when I tried to apply the mask to the input (`data.x[data.train_mask]`), an error showed up, since `x[data.train_mask]` is no longer consistent with the given graph. Is there any possible solution to my problem? Would padding the unknown nodes with zeros in the input be a viable option? Any suggestions/examples would be highly appreciated! Thank you in advance!
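For reference, a minimal sketch of the zero-padding idea mentioned above, assuming a PyG `Data` object `data` with node features `data.x`, simulation targets `data.y` for every node, some GNN `model`, and a boolean sensor mask; the names `sensor_mask` and `model` are placeholders, not from the tutorial:

```python
import torch.nn.functional as F

# Hypothetical boolean mask of shape [num_nodes]: True where a sensor exists.
sensor_mask = data.train_mask

# Indexing the input (data.x[sensor_mask]) shrinks the node dimension and no
# longer matches data.edge_index. Instead, keep every node and zero out the
# features of nodes without sensors.
x_in = data.x.clone()
x_in[~sensor_mask] = 0.0

out = model(x_in, data.edge_index)   # predictions for all nodes
loss = F.mse_loss(out, data.y)       # simulations provide targets for every node
```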
-
Shapes of `x` and `train_mask` need to match in the first dimension, that is, both should hold values for every node in the graph. If you only want to learn on a subgraph but apply the model on the full graph later on, how about you generate your sub-data via `train_data = data.subgraph(train_mask_or_indices)` and use it to train your model in a fully inductive mode. WDYT?