The source code uses nn.Embedding to load the pretrained word embeddings:

self.word_embedding = nn.Embedding(self.config.data_word_vec.shape[0], self.config.data_word_vec.shape[1])
self.word_embedding.weight.data.copy_(torch.from_numpy(self.config.data_word_vec))

With this setup the optimizer will also update the embedding layer's weights during training. Is that a bug, or is it on purpose?
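For context: if the intent were to keep the pretrained vectors fixed, the usual PyTorch pattern is to set `requires_grad = False` on the weight, or to use `nn.Embedding.from_pretrained`, which freezes the weights by default. A minimal sketch (the random `data_word_vec` matrix stands in for `config.data_word_vec`):

```python
import numpy as np
import torch
import torch.nn as nn

# Stand-in for the pretrained matrix config.data_word_vec
data_word_vec = np.random.rand(100, 50).astype(np.float32)

# Option 1: copy the pretrained weights, then freeze them so the
# optimizer skips this parameter
word_embedding = nn.Embedding(data_word_vec.shape[0], data_word_vec.shape[1])
word_embedding.weight.data.copy_(torch.from_numpy(data_word_vec))
word_embedding.weight.requires_grad = False

# Option 2: from_pretrained freezes the weights by default (freeze=True)
word_embedding2 = nn.Embedding.from_pretrained(torch.from_numpy(data_word_vec))
```

As written in the source, neither step is taken, so the embeddings are fine-tuned along with the rest of the model; whether that is intended is exactly the question here.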