@@ -201,7 +201,7 @@ Transforms are common image transforms. They can be chained together
using ``transforms.Compose``

``transforms.Compose``
- ~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^

One can compose several transforms together. For example.

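A sketch of such a composition, built only from the transforms documented below; the crop sizes and normalization statistics here are placeholders, not recommended values::

    from torchvision import transforms

    # A typical preprocessing pipeline; the transforms run in order.
    preprocess = transforms.Compose([
        transforms.Scale(256),           # smaller edge -> 256, aspect ratio kept
        transforms.CenterCrop(224),      # 224x224 centre crop
        transforms.ToTensor(),           # PIL.Image -> C x H x W FloatTensor
        transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    ])
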
@@ -216,10 +216,10 @@ One can compose several transforms together. For example.
])

Transforms on PIL.Image
- -----------------------
+ ~~~~~~~~~~~~~~~~~~~~~~~

``Scale(size, interpolation=Image.BILINEAR)``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Rescales the input PIL.Image to the given 'size'. 'size' will be the
size of the smaller edge.
@@ -229,14 +229,14 @@ height / width, size) - size: size of the smaller edge - interpolation:
Default: PIL.Image.BILINEAR

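A minimal sketch of the rescaling behaviour; the input image size below is arbitrary::

    from PIL import Image
    from torchvision import transforms

    img = Image.new("RGB", (400, 200))   # width x height; smaller edge is 200

    scaled = transforms.Scale(100)(img)  # smaller edge becomes 100
    print(scaled.size)                   # (200, 100) -- aspect ratio preserved
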
``CenterCrop(size)`` - center-crops the image to the given size
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Crops the given PIL.Image at the center to have a region of the given
size. size can be a tuple (target\_height, target\_width) or an integer,
in which case the target will be of a square shape (size, size)

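A short illustrative sketch of both forms of ``size``; the image dimensions are chosen arbitrarily::

    from PIL import Image
    from torchvision import transforms

    img = Image.new("RGB", (300, 200))

    square = transforms.CenterCrop(100)(img)       # integer -> 100x100 square crop
    rect = transforms.CenterCrop((100, 150))(img)  # (height, width) tuple -> 150x100 crop
    print(square.size, rect.size)                  # (100, 100) (150, 100)
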
``RandomCrop(size, padding=0)``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Crops the given PIL.Image at a random location to have a region of the
given size. size can be a tuple (target\_height, target\_width) or an
@@ -245,13 +245,13 @@ If ``padding`` is non-zero, then the image is first zero-padded on each
side with ``padding`` pixels.

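A sketch of a common use with padding; the 32x32 input is a placeholder::

    from PIL import Image
    from torchvision import transforms

    img = Image.new("RGB", (32, 32))

    # Zero-pad 4 pixels on every side (32x32 -> 40x40), then take a random
    # 32x32 crop: a common augmentation for CIFAR-style training.
    out = transforms.RandomCrop(32, padding=4)(img)
    print(out.size)                      # (32, 32)
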
``RandomHorizontalFlip()``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^

Randomly horizontally flips the given PIL.Image with a probability of
0.5

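A minimal sketch of the behaviour; the input image is a placeholder::

    from PIL import Image
    from torchvision import transforms

    img = Image.new("RGB", (64, 64))

    flip = transforms.RandomHorizontalFlip()
    # Each call flips left-right with probability 0.5, so repeated calls on
    # the same input return a mix of flipped and unflipped images.
    outputs = [flip(img) for _ in range(4)]
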
``RandomSizedCrop(size, interpolation=Image.BILINEAR)``
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Randomly crops the given PIL.Image to a random size of (0.08 to 1.0) of the
original size and a random aspect ratio of 3/4 to 4/3 of the
@@ -261,23 +261,23 @@ This is popularly used to train the Inception networks - size: size of
the smaller edge - interpolation: Default: PIL.Image.BILINEAR

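A short sketch, assuming the 224 output size commonly used for Inception-style training; the input image size is arbitrary::

    from PIL import Image
    from torchvision import transforms

    img = Image.new("RGB", (500, 375))

    # Crop a random area (8%-100% of the original) with a random aspect ratio
    # between 3/4 and 4/3, then rescale the crop to 224x224.
    out = transforms.RandomSizedCrop(224)(img)
    print(out.size)                      # (224, 224)
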
``Pad(padding, fill=0)``
- ~~~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^^^

Pads the given image on each side with ``padding`` number of pixels, and
the padding pixels are filled with pixel value ``fill``. If a ``5x5``
image is padded with ``padding=1`` then it becomes ``7x7``

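A sketch of the ``5x5`` to ``7x7`` case described above::

    from PIL import Image
    from torchvision import transforms

    img = Image.new("RGB", (5, 5))

    # padding=1 adds one pixel on every side, so 5x5 becomes 7x7;
    # fill=0 pads with black pixels.
    padded = transforms.Pad(1, fill=0)(img)
    print(padded.size)                   # (7, 7)
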
Transforms on torch.\* Tensor
- ----------------------------
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``Normalize(mean, std)``
- ~~~~~~~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^^^^^^^

Given mean: (R, G, B) and std: (R, G, B), will normalize each channel of
the torch.\* Tensor, i.e. channel = (channel - mean) / std

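A minimal sketch; the mean and std values here are placeholders, not recommended statistics::

    import torch
    from torchvision import transforms

    # Placeholder per-channel statistics for a 3-channel (R, G, B) tensor.
    normalize = transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))

    tensor = torch.rand(3, 4, 4)         # C x H x W
    out = normalize(tensor)              # each channel: (channel - mean) / std
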
Conversion Transforms
- ---------------------
+ ~~~~~~~~~~~~~~~~~~~~~

- ``ToTensor()`` - Converts a PIL.Image (RGB) or numpy.ndarray (H x W x
  C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W)
@@ -287,10 +287,10 @@ Conversion Transforms
  shape H x W x C to a PIL.Image of range [0, 255]

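A short round-trip sketch using these conversion transforms; the tiny blank image is a placeholder::

    from PIL import Image
    from torchvision import transforms

    img = Image.new("RGB", (4, 4))

    tensor = transforms.ToTensor()(img)     # C x H x W FloatTensor in [0, 1]
    back = transforms.ToPILImage()(tensor)  # back to a PIL.Image in [0, 255]
    print(tensor.size(), back.size)         # 3 x 4 x 4 tensor, (4, 4) image
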
Generic Transforms
- ------------------
+ ~~~~~~~~~~~~~~~~~~

``Lambda(lambda)``
- ~~~~~~~~~~~~~~~~~~
+ ^^^^^^^^^^^^^^^^^^

Given a Python lambda, applies it to the input ``img`` and returns it.
For example:
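A minimal illustrative sketch; the particular lambda here is only an arbitrary example of wrapping a function as a transform::

    from PIL import Image
    from torchvision import transforms

    # Wrap an arbitrary function as a transform; this particular lambda simply
    # converts the input image to grayscale using PIL itself.
    to_gray = transforms.Lambda(lambda img: img.convert("L"))

    out = to_gray(Image.new("RGB", (8, 8)))
    print(out.mode)                      # "L"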