This is similar to our standard mini-batch training loop:

```python
for batch in loader:
    batch = batch.to(device)
    optimizer.zero_grad()  # clear gradients accumulated in the previous step
    out = model(batch.x, batch.edge_index)
    # compute the loss only on the labeled (masked) nodes
    loss = loss_fn(out[batch.train_mask], batch.y[batch.train_mask])
    loss.backward()
    optimizer.step()
```
Hello, I'm working with multiple graphs where each node is assigned a label of either 0 or 1. I'm interested in implementing semi-supervised learning, where I train a Graph Convolutional Network (GCN) using only a subset of labeled nodes in each graph.
The approach I'm considering is to select a portion of nodes in each graph and train the GCN to predict the labels of the remaining unlabeled nodes, repeating this across multiple graphs until roughly 70% of the dataset has been used for training. Once the model is trained, I would evaluate it on held-out test data to measure how accurately it predicts node labels.
Do you have any examples or resources demonstrating how to implement this semi-supervised learning approach using GCNs?
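A minimal sketch of this setup, using a dense implementation of the Kipf & Welling propagation rule in plain PyTorch so it stays self-contained (the toy ring graph, the 70/30 split, and all tensor names here are illustrative assumptions, not your data):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy graph: 10 nodes in a ring, 4 features each, binary labels.
num_nodes, num_feats = 10, 4
src = torch.arange(num_nodes)
dst = (src + 1) % num_nodes
edge_index = torch.stack([torch.cat([src, dst]), torch.cat([dst, src])])
x = torch.randn(num_nodes, num_feats)
y = (torch.arange(num_nodes) % 2).long()  # labels in {0, 1}

# Semi-supervised mask: train on ~70% of nodes, evaluate on the rest.
perm = torch.randperm(num_nodes)
train_mask = torch.zeros(num_nodes, dtype=torch.bool)
train_mask[perm[:7]] = True

# GCN propagation matrix: A_hat = D^{-1/2} (A + I) D^{-1/2}.
A = torch.zeros(num_nodes, num_nodes)
A[edge_index[0], edge_index[1]] = 1.0
A = A + torch.eye(num_nodes)
deg_inv_sqrt = A.sum(dim=1).pow(-0.5)
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]

class GCN(nn.Module):
    """Two-layer GCN: each layer multiplies by A_hat, then a linear map."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, A_hat):
        x = torch.relu(A_hat @ self.lin1(x))
        return A_hat @ self.lin2(x)

model = GCN(num_feats, 16, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    out = model(x, A_hat)
    # Loss is computed on labeled nodes only; unlabeled nodes still
    # contribute through message passing over A_hat.
    loss = loss_fn(out[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()

pred = model(x, A_hat).argmax(dim=1)
test_acc = (pred[~train_mask] == y[~train_mask]).float().mean().item()
```

With PyTorch Geometric you would replace the dense `A_hat @ ...` layers with `GCNConv` and batch multiple graphs with a `DataLoader`, but the masking pattern (loss on `train_mask`, evaluation on its complement) stays the same.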