Replies: 1 comment 1 reply
-
How about:

```python
# Assumes tokens is a pre-allocated list of length 271378
# (e.g. tokens = [None] * 271378) and j is the column index being encoded.
for i in range(0, 271378):
    tokens[i] = tokenizer.tokenize(tokens_df.loc[i].iat[j])
    out = torch.tensor(tokenizer.convert_tokens_to_ids(tokens[i]))
    tokens[i] = torch.zeros((9,), dtype=torch.long)  # pad to fixed length 9
    tokens[i][:out.size(0)] = out
    print(i, out.shape, tokens[i])

tn = torch.stack(tokens)  # (271378, 9) LongTensor of padded token IDs
```
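If you would rather not hard-code the maximum length (9 above), the same padding can be done with `torch.nn.utils.rnn.pad_sequence`, which right-pads every row to the longest sequence in the batch. A minimal, self-contained sketch with hypothetical token-ID lists standing in for the real `convert_tokens_to_ids` output:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Hypothetical token-ID sequences of mismatched lengths, standing in for
# the per-row output of tokenizer.convert_tokens_to_ids(...).
id_lists = [[101, 2023, 102], [101, 2003, 1037, 3231, 102], [101, 102]]

# Convert each list to a 1-D LongTensor, then right-pad with zeros to the
# longest length, producing one rectangular tensor usable as node features.
tensors = [torch.tensor(ids, dtype=torch.long) for ids in id_lists]
features = pad_sequence(tensors, batch_first=True, padding_value=0)
print(features.shape)  # torch.Size([3, 5])
```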
-
I have probably spent more time than I should trying to bundle this Book-Crossing data into a Data object so I can start processing it. I have an edge index and (probably) a label index. The problem I'm running into: when I encode the column data for the node features, the encoded sequences come out with mismatched lengths, which makes turning them into a tensor very difficult.
The raw data is attached below:
Book-Crossing.zip