Conversation

@nihilistsumo

The PyTorch/main.py script was throwing the following error:
```
Traceback (most recent call last):
  File "main.py", line 63, in <module>
    run_iterations.train_iters()
  File "/home/sumanta/Manhattan-LSTM/PyTorch/run_iterations.py", line 58, in train_iters
    loss, _ = self.model.train(input_variables, similarity_scores, self.criterion, model_optimizer)
  File "/home/sumanta/Manhattan-LSTM/PyTorch/train_network.py", line 32, in train
    output_scores = self.manhattan_lstm((sequences_1, sequences_2), hidden).view(-1)
  File "/home/sumanta/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sumanta/Manhattan-LSTM/PyTorch/manhattan_lstm.py", line 42, in forward
    outputs_1, hidden_1 = self.lstm_1(embedded_1, hidden)
  File "/home/sumanta/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sumanta/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 559, in forward
    self.dropout, self.training, self.bidirectional, self.batch_first)
TypeError: lstm() received an invalid combination of arguments - got (Tensor, Tensor, list, bool, int, float, bool, bool, bool), but expected one of:
 * (Tensor data, Tensor batch_sizes, tuple of Tensors hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional)
      didn't match because some of the arguments have invalid types: (Tensor, Tensor, list, bool, int, float, bool, bool, bool)
 * (Tensor input, tuple of Tensors hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first)
      didn't match because some of the arguments have invalid types: (Tensor, Tensor, list, bool, int, float, bool, bool, bool)
```

The root cause was erroneous initialization of the hidden state: it was built as a Python list rather than the tuple of Tensors `(h_0, c_0)` that `nn.LSTM` expects, which is why the error reports a `list` in the position where both `lstm()` overloads want `tuple of Tensors hx`.
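
For reference, a minimal sketch of the fix (the layer sizes and names here are hypothetical, not taken from the repo): `nn.LSTM` accepts the initial hidden state only as a tuple `(h_0, c_0)`, so the list-based initialization has to become a tuple.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only; the repo's actual
# sizes may differ.
num_layers, batch_size, input_size, hidden_size = 1, 32, 100, 50

lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers)
embedded = torch.randn(10, batch_size, input_size)  # (seq_len, batch, input_size)

# Wrong: a list is passed through to the backend unchanged and triggers
# the TypeError above, since every lstm() overload expects hx as a
# tuple of Tensors.
# hidden = [torch.zeros(num_layers, batch_size, hidden_size),
#           torch.zeros(num_layers, batch_size, hidden_size)]

# Correct: a tuple (h_0, c_0), each of shape (num_layers, batch, hidden_size).
hidden = (torch.zeros(num_layers, batch_size, hidden_size),
          torch.zeros(num_layers, batch_size, hidden_size))

outputs, (h_n, c_n) = lstm(embedded, hidden)
```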
