
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation #2

@blldd

Description


Hi, I'm very glad to study your repo. Unfortunately, I encountered the following error while running the code:

```
parse with gowalla default settings
use device: cpu
Split.TRAIN load 7768 users with max_seq_count 72 batches: 345
Split.TEST load 7768 users with max_seq_count 18 batches: 76
Use flashback training. Use pytorch RNN implementation.
Warning: Error detected in AddmmBackward. Traceback of forward call that caused the error:
  File "/home/dedong/pycharmProjects/traj_pred/Flashback_code/train.py", line 68, in <module>
    loss, h = trainer.loss(x, t, s, y, y_t, y_s, h, active_users)
  File "/home/dedong/pycharmProjects/traj_pred/Flashback_code/trainer.py", line 50, in loss
    out, h = self.model(x, t, s, y_t, y_s, h, active_users)
  File "/home/dedong/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dedong/pycharmProjects/traj_pred/Flashback_code/network.py", line 70, in forward
    out, h = self.rnn(x_emb, h)
  File "/home/dedong/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dedong/anaconda3/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 228, in forward
    self.dropout, self.training, self.bidirectional, self.batch_first)
 (print_stack at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:60)
Traceback (most recent call last):
  File "/home/dedong/pycharmProjects/traj_pred/Flashback_code/train.py", line 69, in <module>
    loss.backward(retain_graph=True)
  File "/home/dedong/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/dedong/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10, 10]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Process finished with exit code 1
```


So, would you please help me figure it out? Thanks a lot!
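For context, this error pattern typically appears when a recurrent model's hidden state is carried across training iterations without being detached: `loss.backward(retain_graph=True)` then tries to backpropagate through the previous batch's graph, but the optimizer step has already modified the RNN weights in place, so autograd finds them "at version 2; expected version 1". Below is a minimal sketch of the usual fix (detach the hidden state each batch and drop `retain_graph`). The shapes and module here are hypothetical placeholders, not the repo's actual `network.py`:

```python
import torch

# Hypothetical stand-in for the Flashback RNN: shapes are illustrative only.
rnn = torch.nn.RNN(input_size=4, hidden_size=10)
opt = torch.optim.SGD(rnn.parameters(), lr=0.1)

h = torch.zeros(1, 3, 10)  # (num_layers, batch, hidden_size)
for step in range(2):
    x = torch.randn(5, 3, 4)  # (seq_len, batch, input_size)
    h = h.detach()            # cut the graph to the previous batch's ops
    out, h = rnn(x, h)
    loss = out.sum()
    opt.zero_grad()
    loss.backward()           # no retain_graph needed once h is detached
    opt.step()                # in-place weight update is now safe
```

Without the `h.detach()` call, the second `backward()` would need the graph from step 0, whose weights `opt.step()` already rewrote in place, reproducing exactly this RuntimeError.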
