Replies: 1 comment
-
Do you observe the same issue when reducing the number of …
-
Hi, dear community.
I'm facing an issue with the outputs of a network based on the GCNConv layer. I'm trying to build a graph classification network heavily based on the official Colab notebook. My dataset (whose construction was discussed in this previous post) is a list of data objects, classified into three categories with 10 features per node. The class of the network is as follows:
In the training loop, I'm using the labels as one-hot encoded vectors, where the function one_hot(np_array, n_classes) takes an array of labels and returns a torch tensor whose rows are one-hot encoded vectors ({1,0,0}, {0,1,0}, {0,0,1}).
The issue is that after training, when I apply the network to the test data, the only label it gives me is {1,0,0}. This is confusing to me, since I wrote a simple MLP that concatenates all 10 features of each node, and it was able to return some results, so I think there is some detectable structure in the data. I appreciate any help on this matter; this is driving me crazy!
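For context, a possible reconstruction of the one_hot helper described above. The exact implementation in the post is not shown, so this is an assumed sketch with the same signature and behavior:

```python
# Assumed reconstruction of the one_hot helper described in the post:
# maps an array of integer labels to a float tensor of one-hot rows.
import numpy as np
import torch

def one_hot(np_array, n_classes):
    # np_array: integer class labels, shape (N,)
    # returns: float tensor of shape (N, n_classes), one one-hot row per label
    eye = np.eye(n_classes, dtype=np.float32)
    return torch.from_numpy(eye[np_array])
```

Note that PyTorch also ships a built-in torch.nn.functional.one_hot, and that torch.nn.CrossEntropyLoss is typically used with integer class indices rather than one-hot targets, which matters when pairing the labels with a loss function.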