RELEASENOTES.md (3 additions, 1 deletion):

__API Changes__:

- #1291 `Tensor.grad()` and `Tensor.set_grad()` have been replaced by a new property `Tensor.grad`.
- A potential memory leak caused by `set_grad` has been resolved.
- `Include` method of dispose scopes has been removed. Use `Attach` instead.
- Two more `Attach` methods that accept `IEnumerable<IDisposable>`s and arrays as parameters have been added to dispose scopes.
- A new property `torch.CurrentDisposeScope` has been added to provide the ability to get the current dispose scope (see the sketch after these notes).

__Bug Fixes__:

- `TensorDataset` will now keep its aliases detached from dispose scopes, to avoid unexpected disposal.
- `DataLoaderEnumerator` has been completely rewritten to resolve the unexpected shuffler disposal, the ignored `drop_last` setting, the incorrect worker count, and a potential leak caused by multithreading.
- #1303 Allow dispose scopes to be disposed out of LIFO order.
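
The dispose-scope and autograd changes above might be exercised roughly like this. This is only a sketch: the tensor shapes are arbitrary, and the exact overloads (`requires_grad:` as a named argument, the particular `Attach` overload chosen) are assumptions rather than confirmed signatures.

```csharp
using System.Linq;
using TorchSharp;

using var scope = torch.NewDisposeScope();

var x = torch.ones(3, requires_grad: true);
(x * 2).sum().backward();
var grad = x.grad;                      // property access, replacing x.grad() / x.set_grad()

// The new Attach overloads take a whole collection of disposables at once.
var detached = Enumerable.Range(0, 3)
                         .Select(_ => torch.zeros(2).DetachFromDisposeScope())
                         .ToList();
scope.Attach(detached);

// The current scope can now be read back anywhere via the new property.
var current = torch.CurrentDisposeScope;
```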

In this case, you may notice that `batch` (at least the first batch) is created outside the dispose scope, which could cause a potential memory leak.

Of course, we could dispose of them manually. But we actually don't have to care about that, because the data loader will automatically dispose each batch before the next iteration.
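
For reference, the kind of loop being discussed might look roughly like this. It is only a sketch: `ConstantDataset`, the `"tensor"` key, and the `DataLoader` factory call are illustrative placeholders, not the article's original example.

```csharp
using System.Collections.Generic;
using TorchSharp;

using var dataLoader = torch.utils.data.DataLoader(new ConstantDataset(), 4);

foreach (var batch in dataLoader)
{
    // `batch` was produced by the loader before this scope was created,
    // so its tensors are not attached to `scope`...
    using var scope = torch.NewDisposeScope();

    // ...while tensors created inside the loop body are.
    var doubled = batch["tensor"] * 2;
}

// A tiny dataset, just to make the sketch self-contained; its shape follows the
// custom-dataset example further below.
class ConstantDataset : torch.utils.data.Dataset
{
    public override Dictionary<string, torch.Tensor> GetTensor(long index)
        => new() { ["tensor"] = torch.ones(2) };

    public override long Count => 8;
}
```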

However, this might cause another problem: for example, we would get disposed tensors when using LINQ. The behavior can be changed by setting `disposeBatch` to `false`:

But those tensors will be detached from all dispose scopes, even if the whole process is wrapped in a scope. (Otherwise it could lead to confusion, since the iterations may not all happen in the same dispose scope.) So don't forget to dispose them later, or manually attach them to a scope. Also, be aware that enumerating the same `IEnumerable` twice could produce different instances:

```csharp
// `DoSomeThing` stands for any method that consumes a tensor.
var tensors = dataLoader.Select(batch => batch["tensor"]);
DoSomeThing(tensors.First());

var tensor = tensors.First();
// The tensor is not the one you have passed into `DoSomeThing`.
```

Meanwhile, when writing a dataset of your own, note that the data loaders will dispose the tensors created in `GetTensor` after collation. So a dataset like this will not work, because the saved tensor will be disposed:

```csharp
// The class name and the arithmetic are illustrative; the point is that `previous` is reused across calls.
class MyDataset : torch.utils.data.Dataset
{
    private torch.Tensor previous = torch.zeros(2);

    public override Dictionary<string, torch.Tensor> GetTensor(long index)
    {
        var tensor = previous + 1;
        previous.Dispose(); // Don't forget to dispose the previous one.
        previous = tensor;  // But the data loader disposes `tensor` after collation, so the saved reference dies too.
        return new() { ["tensor"] = tensor };
    }

    public override long Count => 3;
}
```
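
By contrast, a dataset that builds each item from scratch and keeps no reference to what it returns is safe. A sketch under the same assumed `Dataset` shape:

```csharp
class RandomDataset : torch.utils.data.Dataset
{
    public override Dictionary<string, torch.Tensor> GetTensor(long index)
        => new() { ["tensor"] = torch.rand(2) };  // a fresh tensor every call; the loader may freely dispose it

    public override long Count => 3;
}
```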

Also, if you want a "`Lazy`" collate function, do not directly save the tensors that are passed in. `DetachFromDisposeScope` does not help here either, because the passed-in tensors are kept in another list rather than in dispose scopes, due to some multithreading issues. Instead, you can create aliases for them.
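
For example, something along these lines. It is a sketch: the collate delegate's exact signature, the `"tensor"` key, and the use of `Lazy<T>` are assumptions; the essential part is storing `alias()`es instead of the tensors that were passed in.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using TorchSharp;

static class Collation
{
    // A "lazy" collate step that defers the actual stacking until the batch is used.
    // Storing aliases keeps the data alive even though the data loader disposes the
    // original tensors once collation has finished.
    public static Lazy<torch.Tensor> LazyCollate(IEnumerable<Dictionary<string, torch.Tensor>> items)
    {
        var aliases = items.Select(item => item["tensor"].alias()).ToArray();
        return new Lazy<torch.Tensor>(() => torch.stack(aliases));
    }
}
```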