Commit f9a9a5f

Authored by irene50288 (Iryna Koroliuk) and a co-author
[CI SKIP] Sequential data & TPU support docs fix (#3956)
* sequential data docs fix
* TPU support docs fix

Co-authored-by: Iryna Koroliuk <[email protected]>
1 parent: 3aef67c · commit: f9a9a5f

2 files changed (+2, -2 lines)


docs/source/sequences.rst

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ Packed sequences as inputs
--------------------------
When using PackedSequence, do 2 things:

- 1. return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example above shows the list implementation).
+ 1. Return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example shows the list implementation).
2. Pack the sequence in forward or training and validation steps depending on use case.

.. testcode::
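
For context, a minimal sketch of the pattern this hunk describes, not taken from the changed docs themselves: the dataloader's collate_fn returns a list of variable-length tensors, and the sequence is packed inside forward. The module name, sizes, and collate_fn are hypothetical.

    import torch
    from torch import nn
    from torch.nn.utils.rnn import pack_sequence, pad_packed_sequence

    # Hypothetical collate_fn: return a list of variable-length tensors
    # (option 1 in the doc), plus stacked labels.
    def collate_fn(batch):
        sequences = [seq for seq, _ in batch]           # each seq: (seq_len, input_size)
        labels = torch.stack([label for _, label in batch])
        return sequences, labels

    class SimpleLSTM(nn.Module):
        def __init__(self, input_size=8, hidden_size=16):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size)

        def forward(self, sequences):
            # Pack the variable-length sequences in forward (option 2 in the doc).
            packed = pack_sequence(sequences, enforce_sorted=False)
            packed_out, _ = self.lstm(packed)
            # Unpack back to a padded tensor plus the original lengths.
            output, lengths = pad_packed_sequence(packed_out)
            return output, lengths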

docs/source/tpu.rst

Lines changed: 1 addition & 1 deletion
@@ -163,7 +163,7 @@ on how to set up the instance groups and VMs needed to run TPU Pods.
16 bit precision
-----------------
Lightning also supports training in 16-bit precision with TPUs.
- By default, TPU training will use 32-bit precision. To enable 16-bit, also
+ By default, TPU training will use 32-bit precision. To enable 16-bit,
set the 16-bit flag.

.. code-block:: python
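
As a usage sketch, not part of this commit's diff: the 16-bit flag referred to here is the Trainer's precision argument, assuming the tpu_cores and precision Trainer arguments from the Lightning API of this era; my_model is a placeholder LightningModule.

    import pytorch_lightning as pl

    # Assumed Trainer arguments: tpu_cores selects the TPU cores,
    # precision=16 enables 16-bit training (default is 32-bit).
    trainer = pl.Trainer(tpu_cores=8, precision=16)
    trainer.fit(my_model)  # my_model is a placeholder LightningModule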
