
Dealing with Start and Stop Tokens #2

@vumaasha


Hi,
It is difficult to understand how you have dealt with the start and stop tokens. I see that you append the stop token (2) to the end of decoder_output, and that decoder_output is only used to compute the loss.
https://github.com/lancopku/SRB/blob/master/DataLoader.py#L51
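For concreteness, my understanding of that step is roughly the following sketch (assuming token id 2 is the stop token, as in DataLoader.py; the helper name is mine, not from the repo):

```python
EOS = 2  # stop-token id, per DataLoader.py

def make_decoder_output(target_ids):
    """Loss targets: the reference sequence with EOS appended at the end."""
    return target_ids + [EOS]

# e.g. a target of [5, 7, 9] becomes [5, 7, 9, 2]
```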

You do not seem to append any stop token to decoder_input, which is what is fed to the decoder during training. You only use a start token at training time.
https://github.com/lancopku/SRB/blob/master/SeqUnit.py#L156
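In other words, the teacher-forcing input appears to be built like the sketch below: a start token is prepended, but no stop token is appended (the GO id of 1 here is hypothetical, just for illustration; the helper name is mine):

```python
GO = 1  # hypothetical start-token id; the actual id depends on the vocab

def make_decoder_input(target_ids):
    """Teacher-forcing inputs: GO prepended, no EOS appended."""
    return [GO] + target_ids

# e.g. a target of [5, 7, 9] becomes [1, 5, 7, 9]
```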

However, at generation time, you check whether the predicted token equals the stop token.
https://github.com/lancopku/SRB/blob/master/SeqUnit.py#L199
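That check corresponds, as I understand it, to a greedy decoding loop along these lines (a minimal sketch; `step_fn` is a stand-in for one decoder step, not a function from the repo):

```python
EOS = 2  # stop-token id, per DataLoader.py

def greedy_decode(step_fn, start_token, max_len=100):
    """Generate tokens one at a time, stopping as soon as EOS is predicted.

    step_fn(prev_token) -> next_token stands in for running the decoder
    cell plus argmax over the vocabulary for one step.
    """
    out = []
    tok = start_token
    for _ in range(max_len):
        tok = step_fn(tok)
        if tok == EOS:  # the generation-time stop check in question
            break
        out.append(tok)
    return out
```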

How do you expect the model to learn to predict a stop token when one is never used at training time? Is this a bug, or am I missing something obvious? I would appreciate your response on this.
