Yeah, the standard approach to solving this (as is common in NLP) is to pad the label dimension to a common length before the torch.cat call. You can implement that as a transform on your data, e.g.:

import torch

def my_pad_transform(data):
    # Zero-pad `data.y` along the last dim up to the dataset-wide `max_seq`.
    size = [data.y.size(0), max_seq - data.y.size(1)]
    data.y = torch.cat([data.y, data.y.new_zeros(size)], dim=-1)
    return data  # transforms must return the (modified) data object
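For concreteness, here's a quick check of the transform on a toy `Data` object; the `max_seq` value of 8 is just an assumed dataset-wide maximum:

import torch
from torch_geometric.data import Data

max_seq = 8  # assumption: longest label sequence in the dataset

data = Data(x=torch.randn(4, 16), y=torch.zeros(1, 5))  # label of length 5
data = my_pad_transform(data)
print(data.y.size())  # torch.Size([1, 8])

In practice you'd pass the function to your dataset via its `transform` argument so the padding is applied on the fly as each example is loaded.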

Does that work for you?
