Commit 386a541

In ux_limitations: use "elements" instead of "memory"
This avoids any confusion: previously, the messaging about in-place operations was ambiguous and could have been taken to mean "you need more memory", which is not the case. Fixes #604
1 parent ba5dcd3 commit 386a541
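
The distinction the new wording draws can be checked directly in plain PyTorch. The following is a minimal sketch (not part of the commit) using the same shapes as the doc's first example; it shows that the obstacle is the element count of the destination Tensor, not how much memory is available:

    import torch

    x = torch.randn(1)
    y = torch.randn(3)

    # `x + y` broadcasts to 3 elements, while `x` itself only holds 1 element,
    # so an in-place write of the sum back into `x` has nowhere to go.
    print(x.numel())        # 1
    print((x + y).numel())  # 3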

1 file changed, +7 -7 lines changed

docs/source/ux_limitations.rst

Lines changed: 7 additions & 7 deletions
@@ -107,8 +107,8 @@ Mutation: in-place PyTorch Operations
 
 :func:`vmap` will raise an error if it encounters an unsupported PyTorch
 in-place operation and it will succeed otherwise. Unsupported operations
-are those that would cause a Tensor with more memory to be written to a
-Tensor with less memory. Here's an example of how this can occur:
+are those that would cause a Tensor with more elements to be written to a
+Tensor with fewer elements. Here's an example of how this can occur:
 
 ::
 
@@ -119,16 +119,16 @@ Tensor with less memory. Here's an example of how this can occur:
 x = torch.randn(1)
 y = torch.randn(3)
 
-# Raises an error
+# Raises an error because `y` has fewer elements than `x`.
 vmap(f, in_dims=(None, 0))(x, y)
 
 ``x`` is a Tensor with one element, ``y`` is a Tensor with three elements.
 ``x + y`` has three elements (due to broadcasting), but attempting to write
 three elements back into ``x``, which only has one element, raises an error
-due to there not being enough memory to hold three elements.
+due to attempting to write three elements into a Tensor with a single element.
 
-There is no problem if there is sufficient memory for the in-place operations
-to occur:
+There is no problem if the Tensor being written to has the same number of
+elements (or more):
 
 ::
 
@@ -140,7 +140,7 @@ to occur:
 y = torch.randn(3)
 expected = x + y
 
-# Does not raise an error
+# Does not raise an error because x and y have the same number of elements.
 vmap(f, in_dims=(0, 0))(x, y)
 assert torch.allclose(x, expected)
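
For reference, the example these hunks touch can be exercised end to end. The definition of `f` and the success case's `x` sit outside the diff context, so the sketch below fills them in as assumptions (an `x.add_(y)` body, matching the doc's statement that `x + y` is written back into `x`, and a 3-element `x`); treat it as an illustration rather than the file's exact code:

    import torch
    from functorch import vmap

    # Assumed definition: the doc says the result of `x + y` is written back
    # into `x`, which an in-place add does.
    def f(x, y):
        x.add_(y)
        return x

    # Unsupported case (second hunk): the 3-element broadcast result cannot be
    # written into the 1-element `x`, so vmap raises an error.
    x = torch.randn(1)
    y = torch.randn(3)
    try:
        vmap(f, in_dims=(None, 0))(x, y)
    except RuntimeError as err:
        print("vmap raised:", err)

    # Supported case (last hunk): `x` and `y` have the same number of elements,
    # so the in-place write succeeds and mutates `x`.
    x = torch.randn(3)  # shape assumed; only `y` appears in the hunk
    y = torch.randn(3)
    expected = x + y
    vmap(f, in_dims=(0, 0))(x, y)
    assert torch.allclose(x, expected)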
