Commit 25ec3f2

tv_tensor -> TVTensor where it matters (#7904)

Authored by NicolasHug and pmeier
Co-authored-by: Philip Meier <[email protected]>
1 parent d5f4cc3 commit 25ec3f2

File tree

9 files changed: +84 −65 lines changed


docs/source/transforms.rst

Lines changed: 1 addition & 1 deletion
@@ -183,7 +183,7 @@ Transforms are available as classes like
 This is very much like the :mod:`torch.nn` package which defines both classes
 and functional equivalents in :mod:`torch.nn.functional`.
 
-The functionals support PIL images, pure tensors, or :ref:`tv_tensors
+The functionals support PIL images, pure tensors, or :ref:`TVTensors
 <tv_tensors>`, e.g. both ``resize(image_tensor)`` and ``resize(bboxes)`` are
 valid.
 
docs/source/tv_tensors.rst

Lines changed: 5 additions & 4 deletions
@@ -5,10 +5,11 @@ TVTensors
 
 .. currentmodule:: torchvision.tv_tensors
 
-TVTensors are tensor subclasses which the :mod:`~torchvision.transforms.v2` v2 transforms use under the hood to
-dispatch their inputs to the appropriate lower-level kernels. Most users do not
-need to manipulate tv_tensors directly and can simply rely on dataset wrapping -
-see e.g. :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`.
+TVTensors are :class:`torch.Tensor` subclasses which the v2 :ref:`transforms
+<transforms>` use under the hood to dispatch their inputs to the appropriate
+lower-level kernels. Most users do not need to manipulate TVTensors directly and
+can simply rely on dataset wrapping - see e.g.
+:ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py`.
 
 .. autosummary::
     :toctree: generated/
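The dispatch-by-class mechanism this hunk describes can be sketched in plain Python with ``functools.singledispatch``. This is a hypothetical illustration only: the classes ``Image`` and ``BoundingBoxes`` below are stand-in list subclasses, not the real :class:`torch.Tensor` subclasses from torchvision.

```python
from functools import singledispatch

# Hypothetical stand-ins for TVTensors: the real ones subclass
# torch.Tensor, but the dispatch-on-class idea is the same.
class Image(list): ...
class BoundingBoxes(list): ...

@singledispatch
def resize(inp, scale):
    raise TypeError(f"unsupported input type {type(inp).__name__}")

@resize.register
def _(inp: Image, scale):
    # an "image kernel" would interpolate pixels; here we just scale values
    return Image(x * scale for x in inp)

@resize.register
def _(inp: BoundingBoxes, scale):
    # a "bbox kernel" rescales coordinates instead
    return BoundingBoxes(x * scale for x in inp)

print(type(resize(Image([1, 2]), 2)).__name__)          # Image
print(type(resize(BoundingBoxes([3, 4]), 2)).__name__)  # BoundingBoxes
```

The same call site handles both classes, with the input's class selecting the lower-level kernel, which is the behavior the rewritten docstring above attributes to the v2 transforms.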

gallery/transforms/plot_custom_transforms.py

Lines changed: 2 additions & 2 deletions
@@ -74,7 +74,7 @@ def forward(self, img, bboxes, label):  # we assume inputs are always structured
 print(f"Output image shape: {out_img.shape}\nout_bboxes = {out_bboxes}\n{out_label = }")
 # %%
 # .. note::
-#     While working with tv_tensor classes in your code, make sure to
+#     While working with TVTensor classes in your code, make sure to
 #     familiarize yourself with this section:
 #     :ref:`tv_tensor_unwrapping_behaviour`
 #
@@ -111,7 +111,7 @@ def forward(self, img, bboxes, label):  # we assume inputs are always structured
 # In brief, the core logic is to unpack the input into a flat list using `pytree
 # <https://github.com/pytorch/pytorch/blob/main/torch/utils/_pytree.py>`_, and
 # then transform only the entries that can be transformed (the decision is made
-# based on the **class** of the entries, as all tv_tensors are
+# based on the **class** of the entries, as all TVTensors are
 # tensor-subclasses) plus some custom logic that is out of scope here - check the
 # code for details. The (potentially transformed) entries are then repacked and
 # returned, in the same structure as the input.
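The unpack/transform/repack flow this hunk describes can be sketched in plain Python. This is a simplified stand-in for ``torch.utils._pytree``, handling only lists, tuples, and dicts, not the actual torchvision code path.

```python
def tree_flatten(obj):
    """Flatten nested lists/tuples/dicts into (leaves, spec)."""
    if isinstance(obj, (list, tuple)):
        leaves, specs = [], []
        for item in obj:
            sub_leaves, sub_spec = tree_flatten(item)
            leaves.extend(sub_leaves)
            specs.append(sub_spec)
        return leaves, (type(obj), specs)
    if isinstance(obj, dict):
        leaves, specs = [], {}
        for key, value in obj.items():
            sub_leaves, sub_spec = tree_flatten(value)
            leaves.extend(sub_leaves)
            specs[key] = sub_spec
        return leaves, (dict, specs)
    return [obj], None  # a leaf

def tree_unflatten(leaves, spec):
    """Rebuild the original structure from leaves + a spec."""
    it = iter(leaves)

    def build(spec):
        if spec is None:
            return next(it)
        container, subspecs = spec
        if container is dict:
            return {k: build(s) for k, s in subspecs.items()}
        return container(build(s) for s in subspecs)

    return build(spec)

# Transform only the entries whose *class* qualifies (here: ints),
# mirroring how v2 transforms only touch TVTensor/tensor/PIL entries.
sample = {"img": 1, "label": "cat", "boxes": [2, 3]}
leaves, spec = tree_flatten(sample)
out = tree_unflatten([x * 10 if isinstance(x, int) else x for x in leaves], spec)
print(out)  # {'img': 10, 'label': 'cat', 'boxes': [20, 30]}
```

The key point matches the hunk: the decision of what to transform is made per leaf, by class, and the original container structure is restored afterwards.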

gallery/transforms/plot_custom_tv_tensors.py

Lines changed: 3 additions & 3 deletions
@@ -1,14 +1,14 @@
 """
-=====================================
+====================================
 How to write your own TVTensor class
-=====================================
+====================================
 
 .. note::
     Try on `collab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_custom_tv_tensors.ipynb>`_
     or :ref:`go to the end <sphx_glr_download_auto_examples_transforms_plot_custom_tv_tensors.py>` to download the full example code.
 
 This guide is intended for advanced users and downstream library maintainers. We explain how to
-write your own tv_tensor class, and how to make it compatible with the built-in
+write your own TVTensor class, and how to make it compatible with the built-in
 Torchvision v2 transforms. Before continuing, make sure you have read
 :ref:`sphx_glr_auto_examples_transforms_plot_tv_tensors.py`.
 """

gallery/transforms/plot_transforms_getting_started.py

Lines changed: 7 additions & 7 deletions
@@ -115,7 +115,7 @@
 # segmentation, or videos (:class:`torchvision.tv_tensors.Video`), we could have
 # passed them to the transforms in exactly the same way.
 #
-# By now you likely have a few questions: what are these tv_tensors, how do we
+# By now you likely have a few questions: what are these TVTensors, how do we
 # use them, and what is the expected input/output of those transforms? We'll
 # answer these in the next sections.
 
@@ -126,15 +126,15 @@
 # What are TVTensors?
 # --------------------
 #
-# TVTensors are :class:`torch.Tensor` subclasses. The available tv_tensors are
+# TVTensors are :class:`torch.Tensor` subclasses. The available TVTensors are
 # :class:`~torchvision.tv_tensors.Image`,
 # :class:`~torchvision.tv_tensors.BoundingBoxes`,
 # :class:`~torchvision.tv_tensors.Mask`, and
 # :class:`~torchvision.tv_tensors.Video`.
 #
 # TVTensors look and feel just like regular tensors - they **are** tensors.
 # Everything that is supported on a plain :class:`torch.Tensor` like ``.sum()``
-# or any ``torch.*`` operator will also work on a tv_tensor:
+# or any ``torch.*`` operator will also work on a TVTensor:
 
 img_dp = tv_tensors.Image(torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8))
 
@@ -146,7 +146,7 @@
 # transform a given input, the transforms first look at the **class** of the
 # object, and dispatch to the appropriate implementation accordingly.
 #
-# You don't need to know much more about tv_tensors at this point, but advanced
+# You don't need to know much more about TVTensors at this point, but advanced
 # users who want to learn more can refer to
 # :ref:`sphx_glr_auto_examples_transforms_plot_tv_tensors.py`.
 #
@@ -234,9 +234,9 @@
 # Torchvision also supports datasets for object detection or segmentation like
 # :class:`torchvision.datasets.CocoDetection`. Those datasets predate
 # the existence of the :mod:`torchvision.transforms.v2` module and of the
-# tv_tensors, so they don't return tv_tensors out of the box.
+# TVTensors, so they don't return TVTensors out of the box.
 #
-# An easy way to force those datasets to return tv_tensors and to make them
+# An easy way to force those datasets to return TVTensors and to make them
 # compatible with v2 transforms is to use the
 # :func:`torchvision.datasets.wrap_dataset_for_transforms_v2` function:
 #
@@ -246,7 +246,7 @@
 #
 #     dataset = CocoDetection(..., transforms=my_transforms)
 #     dataset = wrap_dataset_for_transforms_v2(dataset)
-#     # Now the dataset returns tv_tensors!
+#     # Now the dataset returns TVTensors!
 #
 # Using your own datasets
 # ^^^^^^^^^^^^^^^^^^^^^^^
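The dataset-wrapping idea in the hunk above (a wrapper intercepts ``__getitem__`` and converts raw samples into TVTensors before user transforms run) can be sketched in plain Python. Everything here is illustrative: ``FakeImage``, ``RawDataset``, and ``wrap_dataset`` are invented stand-ins, not the ``wrap_dataset_for_transforms_v2`` implementation.

```python
class FakeImage(list):
    """Stand-in for tv_tensors.Image (really a torch.Tensor subclass)."""

class RawDataset:
    """A dataset that returns plain data, like the pre-v2 CocoDetection."""
    def __init__(self, samples):
        self.samples = samples
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx]  # no wrapper types here

def wrap_dataset(dataset, transforms=None):
    """Return a view of `dataset` whose samples are wrapped for dispatch."""
    class Wrapped:
        def __len__(self):
            return len(dataset)
        def __getitem__(self, idx):
            img, label = dataset[idx]
            img = FakeImage(img)  # wrap so class-based dispatch can work
            if transforms is not None:
                img, label = transforms(img, label)
            return img, label
    return Wrapped()

ds = wrap_dataset(RawDataset([([0, 1], "cat"), ([1, 0], "dog")]))
img, label = ds[0]
print(type(img).__name__, label)  # FakeImage cat
```

The wrapper leaves the underlying dataset untouched; only the returned samples gain the subclass type, which is what makes them compatible with class-dispatched transforms.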

gallery/transforms/plot_tv_tensors.py

Lines changed: 21 additions & 21 deletions
@@ -9,18 +9,18 @@
 
 
 TVTensors are Tensor subclasses introduced together with
-``torchvision.transforms.v2``. This example showcases what these tv_tensors are
+``torchvision.transforms.v2``. This example showcases what these TVTensors are
 and how they behave.
 
 .. warning::
 
-    **Intended Audience** Unless you're writing your own transforms or your own tv_tensors, you
+    **Intended Audience** Unless you're writing your own transforms or your own TVTensors, you
     probably do not need to read this guide. This is a fairly low-level topic
     that most users will not need to worry about: you do not need to understand
-    the internals of tv_tensors to efficiently rely on
+    the internals of TVTensors to efficiently rely on
     ``torchvision.transforms.v2``. It may however be useful for advanced users
     trying to implement their own datasets, transforms, or work directly with
-    the tv_tensors.
+    the TVTensors.
 """
 
 # %%
@@ -31,8 +31,8 @@
 
 
 # %%
-# What are tv_tensors?
-# --------------------
+# What are TVTensors?
+# -------------------
 #
 # TVTensors are zero-copy tensor subclasses:
 
@@ -46,31 +46,31 @@
 # Under the hood, they are needed in :mod:`torchvision.transforms.v2` to correctly dispatch to the appropriate function
 # for the input data.
 #
-# :mod:`torchvision.tv_tensors` supports four types of tv_tensors:
+# :mod:`torchvision.tv_tensors` supports four types of TVTensors:
 #
 # * :class:`~torchvision.tv_tensors.Image`
 # * :class:`~torchvision.tv_tensors.Video`
 # * :class:`~torchvision.tv_tensors.BoundingBoxes`
 # * :class:`~torchvision.tv_tensors.Mask`
 #
-# What can I do with a tv_tensor?
-# -------------------------------
+# What can I do with a TVTensor?
+# ------------------------------
 #
 # TVTensors look and feel just like regular tensors - they **are** tensors.
 # Everything that is supported on a plain :class:`torch.Tensor` like ``.sum()`` or
-# any ``torch.*`` operator will also work on tv_tensors. See
+# any ``torch.*`` operator will also work on TVTensors. See
 # :ref:`tv_tensor_unwrapping_behaviour` for a few gotchas.
 
 # %%
 # .. _tv_tensor_creation:
 #
-# How do I construct a tv_tensor?
-# -------------------------------
+# How do I construct a TVTensor?
+# ------------------------------
 #
 # Using the constructor
 # ^^^^^^^^^^^^^^^^^^^^^
 #
-# Each tv_tensor class takes any tensor-like data that can be turned into a :class:`~torch.Tensor`
+# Each TVTensor class takes any tensor-like data that can be turned into a :class:`~torch.Tensor`
 
 image = tv_tensors.Image([[[[0, 1], [1, 0]]]])
 print(image)
@@ -92,7 +92,7 @@
 print(image.shape, image.dtype)
 
 # %%
-# Some tv_tensors require additional metadata to be passed in order to be constructed. For example,
+# Some TVTensors require additional metadata to be passed in order to be constructed. For example,
 # :class:`~torchvision.tv_tensors.BoundingBoxes` requires the coordinate format as well as the size of the
 # corresponding image (``canvas_size``) alongside the actual values. These
 # metadata are required to properly transform the bounding boxes.
@@ -109,7 +109,7 @@
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 #
 # You can also use the :func:`~torchvision.tv_tensors.wrap` function to wrap a tensor object
-# into a tv_tensor. This is useful when you already have an object of the
+# into a TVTensor. This is useful when you already have an object of the
 # desired type, which typically happens when writing transforms: you just want
 # to wrap the output like the input.
 
@@ -125,7 +125,7 @@
 # .. _tv_tensor_unwrapping_behaviour:
 #
 # I had a TVTensor but now I have a Tensor. Help!
-# ------------------------------------------------
+# -----------------------------------------------
 #
 # By default, operations on :class:`~torchvision.tv_tensors.TVTensor` objects
 # will return a pure Tensor:
@@ -151,7 +151,7 @@
 # But I want a TVTensor back!
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 #
-# You can re-wrap a pure tensor into a tv_tensor by just calling the tv_tensor
+# You can re-wrap a pure tensor into a TVTensor by just calling the TVTensor
 # constructor, or by using the :func:`~torchvision.tv_tensors.wrap` function
 # (see more details above in :ref:`tv_tensor_creation`):
 
@@ -164,7 +164,7 @@
 # as a global config setting for the whole program, or as a context manager
 # (read its docs to learn more about caveats):
 
-with tv_tensors.set_return_type("tv_tensor"):
+with tv_tensors.set_return_type("TVTensor"):
     new_bboxes = bboxes + 3
     assert isinstance(new_bboxes, tv_tensors.BoundingBoxes)
 
@@ -203,17 +203,17 @@
 # There are a few exceptions to this "unwrapping" rule:
 # :meth:`~torch.Tensor.clone`, :meth:`~torch.Tensor.to`,
 # :meth:`torch.Tensor.detach`, and :meth:`~torch.Tensor.requires_grad_` retain
-# the tv_tensor type.
+# the TVTensor type.
 #
-# Inplace operations on tv_tensors like ``obj.add_()`` will preserve the type of
+# Inplace operations on TVTensors like ``obj.add_()`` will preserve the type of
 # ``obj``. However, the **returned** value of inplace operations will be a pure
 # tensor:
 
 image = tv_tensors.Image([[[0, 1], [1, 0]]])
 
 new_image = image.add_(1).mul_(2)
 
-# image got transformed in-place and is still an Image tv_tensor, but new_image
+# image got transformed in-place and is still a TVTensor Image, but new_image
 # is a Tensor. They share the same underlying data and they're equal, just
 # different classes.
 assert isinstance(image, tv_tensors.Image)
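The unwrapping rule exercised in this last hunk (operations return the plain base class unless you re-wrap, while in-place methods keep mutating the subclass instance) can be mimicked with any subclass of a common base type. Here is a toy sketch using a plain ``list`` subclass rather than a real ``torch.Tensor``; ``TrackedList`` is an invented name for illustration.

```python
class TrackedList(list):
    """Toy stand-in for a TVTensor: a subclass of a common base type."""

# Binary operators defined on the base class return the *base* type,
# just like most torch ops on a TVTensor return a plain Tensor.
tl = TrackedList([1, 2, 3])
combined = tl + [4]
print(type(combined).__name__)  # list

# Re-wrapping restores the subclass, analogous to calling the TVTensor
# constructor (or tv_tensors.wrap) on a plain tensor.
rewrapped = TrackedList(combined)
print(type(rewrapped).__name__)  # TrackedList

# In-place mutation keeps the original object's class, just as
# image.add_(1) above leaves image a tv_tensors.Image.
tl.append(4)
print(type(tl).__name__, tl)  # TrackedList [1, 2, 3, 4]
```

The analogy is loose (lists have no ``__torch_function__`` machinery), but it shows why "I had a subclass, now I have the base class" is the default outcome of ordinary operations, and why explicit re-wrapping is the fix.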
