-
Hi, I think what's confusing you is the difference between `tn += 1` and `tn = tn + 1`. The former does an in-place increment on `tn`, while the latter, as you said, binds the old name to a new object. See this thread for details: https://stackoverflow.com/questions/41446833/what-is-the-difference-between-i-i-1-and-i-1-in-a-for-loop Hope this helps.
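A quick way to see the difference (a minimal sketch, not from the thread; `id()` just reports object identity):

```python
import torch

tn = torch.tensor([1., 2., 3.])
before = id(tn)

tn += 1                  # in-place: the same tensor object is mutated
print(id(tn) == before)  # True

tn = tn + 1              # rebinding: a brand-new tensor gets the old name
print(id(tn) == before)  # False
```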
-
That's really interesting :D

```python
import torch
import numpy as np

device = 'cuda' if torch.cuda.is_available() else 'cpu'
with torch.no_grad():
    tn = torch.tensor([1., 2., 3.], device=device)
    # Manual conversion: detach, move to CPU, materialize conj/neg views
    ar1 = tn.detach().cpu().resolve_conj().resolve_neg().numpy()
    # numpy(force=True) is documented as equivalent to the line above
    ar2 = tn.numpy(force=True)
    # ar3 = tn.detach().cpu().clone().numpy(force=False)
    print(tn, ar1, ar2)
    tn += 1
    print(tn, ar1, ar2)
```

Output:
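The result here depends on the device: on CPU, `detach().cpu()` and `numpy(force=True)` both avoid a copy, so `ar1` and `ar2` share memory with `tn` and the second print shows all three incremented; on CUDA, `.cpu()` must copy, so the arrays keep the original values. A quick way to confirm the sharing on CPU (a sketch, not from the post; `np.shares_memory` compares underlying buffers):

```python
import numpy as np
import torch

tn = torch.tensor([1., 2., 3.])          # CPU tensor for illustration
ar = tn.numpy(force=True)                # no copy is needed on CPU
print(np.shares_memory(ar, tn.numpy()))  # True: same underlying buffer
```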
-
Following your lecture, I did something like this:
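The code itself isn't shown, but a minimal sketch that reproduces these outputs (assuming a CPU tensor and a plain `tn.numpy()` conversion) would be:

```python
import torch

tn = torch.tensor([1, 2, 3])
ar = tn.numpy()    # on CPU, the array shares memory with the tensor
print((tn, ar))

tn = tn + 1        # rebinding: `tn` now names a brand-new tensor
print((tn, ar))    # `ar` still tracks the old tensor's buffer
```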
Output:
(tensor([1, 2, 3]), array([1, 2, 3]))
(tensor([2, 3, 4]), array([1, 2, 3]))
It is clear why: we created a new object `tn`, but with the old name. However, if we slightly modify the statement:
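Presumably swapping the reassignment for an in-place add, something like:

```python
import torch

tn = torch.tensor([1, 2, 3])
ar = tn.numpy()
tn += 1            # in-place add mutates the shared buffer
print((tn, ar))
```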
Output:
(tensor([2, 3, 4]), array([2, 3, 4]))
Now, let's try modifying one particular item:
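Again a sketch (resetting the tensor first so the output below matches):

```python
import torch

tn = torch.tensor([1, 2, 3])
ar = tn.numpy()
tn[0] = -10        # item assignment is also an in-place modification
print((tn, ar))
```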
Output:
(tensor([-10, 2, 3]), array([-10, 2, 3]))
In short, methinks, they share memory.
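If an independent copy is wanted, cloning before conversion (like the commented-out `ar3` line in the earlier comment) breaks the sharing; a minimal sketch:

```python
import torch

tn = torch.tensor([1, 2, 3])
ar = tn.detach().clone().numpy()  # clone() copies the storage
tn += 1
print((tn, ar))                   # (tensor([2, 3, 4]), array([1, 2, 3]))
```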