1.3.1 Release Notes
gchanan edited this page Nov 5, 2019 · 5 revisions
Type Promotion: fixed a bug where type promotion combined with non-contiguous tensors could compute incorrect results. (28253)
<table>
<tr><th>Version 1.3.0</th><th>Version 1.3.1</th></tr>
<tr>
<td><pre>
>>> a = torch.tensor([[True, True],
                      [False, True]])
# get a non-contiguous tensor
>>> a_transpose = a.t()
# type promote by comparing across dtypes (bool -> long)
>>> a_transpose == 0
# POTENTIALLY INCORRECT VALUES
</pre></td>
<td><pre>
>>> a = torch.tensor([[True, True],
                      [False, True]])
# get a non-contiguous tensor
>>> a_transpose = a.t()
# type promote by comparing across dtypes (bool -> long)
>>> a_transpose == 0
tensor([[False,  True],
        [False, False]])
</pre></td>
</tr>
</table>
Type Promotion / Indexing: fixed a bug where mixed-dtype indexing and assignment could lead to incorrect results. Mixed-dtype operations of this form are currently disabled, as they were in 1.2. (28231)
<table>
<tr><th>Version 1.3.0</th><th>Version 1.3.1</th></tr>
<tr>
<td><pre>
>>> a = torch.ones(5, 2, dtype=torch.float)
>>> b = torch.zeros(5, dtype=torch.long)
>>> a[:, [1]] = b.unsqueeze(-1)
>>> a
# POTENTIALLY INCORRECT VALUES
</pre></td>
<td><pre>
>>> a = torch.ones(5, 2, dtype=torch.float)
>>> b = torch.zeros(5, dtype=torch.long)
>>> a[:, [1]] = b.unsqueeze(-1)
RuntimeError: expected dtype Float but got dtype Long
</pre></td>
</tr>
</table>
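A hedged sketch of how to express the same assignment under 1.3.1 (our suggestion, not from the release notes): cast the right-hand side to the destination dtype so the indexing assignment stays single-dtype.

```python
import torch

# Sketch (assumption): an explicit .to(a.dtype) cast keeps the indexed
# assignment single-dtype, so it succeeds where the mixed-dtype form
# now raises a RuntimeError.
a = torch.ones(5, 2, dtype=torch.float)
b = torch.zeros(5, dtype=torch.long)
a[:, [1]] = b.unsqueeze(-1).to(a.dtype)  # cast long -> float first
print(a)
```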
torch.where(condition, x, y): fixed a bug on CPU where incorrect results could be returned if x and y were of different dtypes. Mixed dtype operations of this form are currently disabled, as they were in version 1.2. (29078)
<table>
<tr><th>Version 1.3.0</th><th>Version 1.3.1</th></tr>
<tr>
<td><pre>
>>> x = torch.randn(2, 3)
>>> y = torch.randint(0, 10, (2, 3))
>>> torch.where(x < 0, x, y)
tensor(...)
# POTENTIALLY INCORRECT VALUES
</pre></td>
<td><pre>
>>> x = torch.randn(2, 3)
>>> y = torch.randint(0, 10, (2, 3))
>>> torch.where(x < 0, x, y)
RuntimeError: expected scalar type Float but found Long
</pre></td>
</tr>
</table>
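The equivalent single-dtype call on 1.3.1 can be sketched as follows (our suggested workaround, not from the release notes): cast `y` to `x`'s dtype before the call.

```python
import torch

# Sketch (assumption): casting y up front makes torch.where a
# single-dtype call, avoiding the disabled mixed-dtype path.
x = torch.randn(2, 3)
y = torch.randint(0, 10, (2, 3))
out = torch.where(x < 0, x, y.to(x.dtype))
```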
- torch.argmax: fix regression on CUDA that disabled support for `torch.float16` inputs. (28915)
- NamedTensor: fix Python refcounting bug with `Tensor.names`. (28922)
- Quantization: support `deepcopy` for quantized tensors. (28612)
- Quantization: support `nn.quantized.ReLU` with `inplace=True`. (28710)
- Documentation: `torch.lgamma` and `torch.polygamma` are now documented. (28964)
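As a quick illustration of the newly documented functions (our own example, not from the release notes; the expected values follow from Γ(1) = Γ(2) = 1, Γ(3) = 2, and the trigamma identity ψ₁(1) = π²/6):

```python
import math
import torch

# torch.lgamma: log of the absolute value of the gamma function.
vals = torch.lgamma(torch.tensor([1.0, 2.0, 3.0]))  # -> [0, 0, ln 2]

# torch.polygamma(n, x): n-th derivative of the digamma function;
# polygamma(1, 1) is the trigamma value pi**2 / 6.
tri = torch.polygamma(1, torch.tensor([1.0]))
print(vals, tri)
```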