-
Hello, I've noticed that the values produced by the GCNConv layer stay the same across runs, both for the normal model and for an already-converted TorchScript model. However, every time I convert the model to TorchScript again, the GCNConv layer produces different values, and those differences propagate all the way to my model's final results. Is that something to be expected? Values produced by the normal model:
After TorchScript conversion:
Now those values stay the same for every execution, but if I convert the same model again I get different values. Am I missing something? Thanks!
-
Do you have a small example to reproduce? The set of parameters after calling `torch.jit.script` should stay the same.
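One way to verify this is to compare the state dicts of the original and the scripted model directly. A minimal sketch, using a plain `torch.nn.Linear` as a stand-in for the actual GCN model (an assumption; any `nn.Module` behaves the same way here):

```python
import torch

# Stand-in model (assumption: any nn.Module, including one containing
# GCNConv layers, behaves the same with respect to scripting).
model = torch.nn.Linear(4, 2)
model.eval()

ts_model = torch.jit.script(model)

# torch.jit.script preserves the module's parameters, so every tensor
# in the scripted model's state dict should equal the original's.
for name, param in model.state_dict().items():
    assert torch.equal(param, ts_model.state_dict()[name]), name

# Identical parameters imply identical outputs for the same input.
x = torch.randn(3, 4)
with torch.no_grad():
    assert torch.allclose(model(x), ts_model(x))
print("parameters and outputs match")
```

If this check fails for your model, that would point at the conversion itself; if it passes, the difference must come from somewhere before the conversion.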
-
Well, I had started working on a minimal example to reproduce the issue, which is when I realized that the values were changing on every run even without the TorchScript conversion. The reason was that I wasn't actually loading my previously saved model before converting... So I had

```python
model = ModelHeterogeneous(input_dim=args.input_dim, hidden_dim=args.hidden_dim, ngcn=args.ngcn, nmlp=args.nmlp)
ts_model = torch.jit.script(model)
```

instead of

```python
model = ModelHeterogeneous(input_dim=args.input_dim, hidden_dim=args.hidden_dim, ngcn=args.ngcn, nmlp=args.nmlp)
model.load_state_dict(torch.load(args.model_path))
model.eval()
ts_model = torch.jit.script(model)
```

Sorry for the trouble!
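That explains the symptom: each fresh `__init__` draws new random weights, so converting a freshly constructed model produces different values every time. A minimal sketch, using `torch.nn.Linear` as a hypothetical stand-in for `ModelHeterogeneous`, showing that two fresh constructions differ while loading a saved state dict makes them agree:

```python
import io
import torch

# Two freshly constructed models get independent random initializations,
# which is why converting a fresh model gives different values each time.
model_a = torch.nn.Linear(4, 2)
model_b = torch.nn.Linear(4, 2)
assert not torch.equal(model_a.weight, model_b.weight)

# Saving and loading the state dict (the role of args.model_path above)
# makes the second model reproduce the first exactly.
buffer = io.BytesIO()  # in-memory stand-in for a checkpoint file
torch.save(model_a.state_dict(), buffer)
buffer.seek(0)
model_b.load_state_dict(torch.load(buffer))
model_b.eval()

# Scripting the properly loaded model now matches the original.
ts_model = torch.jit.script(model_b)
x = torch.randn(3, 4)
with torch.no_grad():
    assert torch.allclose(model_a(x), ts_model(x))
print("loaded model matches the original")
```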