
Commit 9984eae

docs: fix typos
1 parent dd2912a commit 9984eae

10 files changed: +13 -13 lines changed

docs/source-fabric/advanced/model_parallel/tp_fsdp.rst

Lines changed: 1 addition & 1 deletion
@@ -276,7 +276,7 @@ Next steps

 .. displayitem::
    :header: Pipeline Parallelism
-   :description: Coming sooon
+   :description: Coming soon
    :col_css: col-md-4
    :height: 160
    :tag: advanced

docs/source-pytorch/advanced/compile.rst

Lines changed: 2 additions & 2 deletions
@@ -262,7 +262,7 @@ Avoid graph breaks
 When ``torch.compile`` looks at the code in your model's ``forward()`` or ``*_step()`` method, it will try to compile as much of the code as possible.
 If there are regions in the code that it doesn't understand, it will introduce a so-called "graph break" that essentially splits the code in optimized and unoptimized parts.
 Graph breaks aren't a deal breaker, since the optimized parts should still run faster.
-But if you want to get the most out of ``torch.compile``, you might want to invest rewriting the problematic section of the code that produce the breaks.
+But if you want to get the most out of ``torch.compile``, you might want to invest rewriting the problematic section of the code that produces the breaks.

 You can check whether your model produces graph breaks by calling ``torch.compile`` with ``fullgraph=True``:
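For context, a minimal runnable sketch of the ``fullgraph=True`` check this passage refers to (the toy model and shapes are made up for illustration):

    import torch
    import torch.nn as nn

    # Any nn.Module works here; this toy model is only for demonstration.
    model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

    # With fullgraph=True, torch.compile errors out at the first graph break
    # instead of silently splitting the program into compiled and eager parts.
    compiled_model = torch.compile(model, fullgraph=True)

    out = compiled_model(torch.randn(8, 64))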

@@ -332,7 +332,7 @@ Enabling CUDA Graphs often results in a significant speedup, but sometimes also

 **Shape padding:** The specific shape/size of the tensors involved in the computation of your model (input, activations, weights, gradients, etc.) can have an impact on the performance.
 With shape padding enabled, ``torch.compile`` can extend the tensors by padding to a size that gives a better memory alignment.
-Naturally, the tradoff here is that it will consume a bit more memory.
+Naturally, the tradeoff here is that it will consume a bit more memory.

 .. code-block:: python

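As a hedged illustration of the two ``torch.compile`` features this hunk touches, CUDA Graphs and shape padding (the ``options`` key is an inductor setting and may vary between PyTorch versions):

    import torch
    import torch.nn as nn

    model = nn.Linear(128, 128)

    # mode="reduce-overhead" enables CUDA Graphs in the inductor backend.
    compiled_cudagraphs = torch.compile(model, mode="reduce-overhead")

    # shape_padding asks inductor to pad tensor sizes for better memory alignment,
    # at the cost of a bit more memory (the tradeoff the fixed sentence mentions).
    compiled_padded = torch.compile(model, options={"shape_padding": True})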

docs/source-pytorch/advanced/model_parallel/tp_fsdp.rst

Lines changed: 1 addition & 1 deletion
@@ -282,7 +282,7 @@ Next steps

 .. displayitem::
    :header: Pipeline Parallelism
-   :description: Coming sooon
+   :description: Coming soon
    :col_css: col-md-4
    :height: 160
    :tag: advanced

docs/source-pytorch/advanced/post_training_quantization.rst

Lines changed: 2 additions & 2 deletions
@@ -106,7 +106,7 @@ The "approach" parameter in PostTrainingQuantConfig is defined by the user to ma
 Quantize the model
 ==================

-The model can be qutized by Intel® Neural Compressor with:
+The model can be quantized by Intel® Neural Compressor with:

 .. code-block:: python

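The ``.. code-block:: python`` following the corrected sentence is not part of this diff; as a rough, hedged sketch of what a post-training quantization call with Intel Neural Compressor looks like (API names per neural-compressor 2.x; ``model`` and ``calib_dataloader`` are placeholders the caller must provide):

    from neural_compressor.config import PostTrainingQuantConfig
    from neural_compressor.quantization import fit

    # Static post-training quantization driven by a user-supplied calibration dataloader.
    conf = PostTrainingQuantConfig(approach="static")
    q_model = fit(model=model, conf=conf, calib_dataloader=calib_dataloader)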
@@ -126,7 +126,7 @@ At last, the quantized model can be saved by:
 Hands-on Examples
 *****************

-Based on the `given example code <https://lightning.ai/docs/pytorch/2.1.0/notebooks/lightning_examples/text-transformers.html>`_, we show how Intel Neural Compressor conduct model quantization on PyTorch Lightning. We first define the basic config of the quantization process.
+Based on the `given example code <https://lightning.ai/docs/pytorch/2.1.0/notebooks/lightning_examples/text-transformers.html>`_, we show how Intel Neural Compressor conducts model quantization on PyTorch Lightning. We first define the basic config of the quantization process.

 .. code-block:: python


docs/source-pytorch/advanced/pruning_quantization.rst

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ You can also perform iterative pruning, apply the `lottery ticket hypothesis <ht
 .. code-block:: python

     def compute_amount(epoch):
-        # the sum of all returned values need to be smaller than 1
+        # the sum of all returned values needs to be smaller than 1
         if epoch == 10:
             return 0.5

@@ -43,7 +43,7 @@ You can also perform iterative pruning, apply the `lottery ticket hypothesis <ht
             return 0.01


-    # the amount can be also be a callable
+    # the amount can also be a callable
     trainer = Trainer(callbacks=[ModelPruning("l1_unstructured", amount=compute_amount)])


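Putting the snippet from this hunk into a self-contained form (the epoch schedule below is illustrative, not the full one from the docs page):

    from lightning.pytorch import Trainer
    from lightning.pytorch.callbacks import ModelPruning

    def compute_amount(epoch):
        # the sum of all returned values needs to stay below 1
        if epoch == 10:
            return 0.5
        if epoch > 20:
            return 0.01

    trainer = Trainer(callbacks=[ModelPruning("l1_unstructured", amount=compute_amount)])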

docs/source-pytorch/advanced/training_tricks.rst

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ If the Trainer's ``gradient_clip_algorithm`` is set to ``'value'`` (``'norm'`` b
     # clip gradients' maximum magnitude to <=0.5
     trainer = Trainer(gradient_clip_val=0.5, gradient_clip_algorithm="value")

-Read more about :ref:`Configuring Gradient Clipping <configure_gradient_clipping>` for advanced use-cases.
+Read more about :ref:`Configuring Gradient Clipping <configure_gradient_clipping>` for advanced use cases.

 ----------

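The :ref: link in the corrected line points at the ``configure_gradient_clipping`` hook; a hedged sketch of overriding it (the clip-every-other-step rule is purely illustrative):

    import lightning.pytorch as pl

    class LitModel(pl.LightningModule):
        def configure_gradient_clipping(self, optimizer, gradient_clip_val=None, gradient_clip_algorithm=None):
            # Only clip on even global steps; otherwise skip clipping for this step.
            if self.global_step % 2 == 0:
                self.clip_gradients(
                    optimizer,
                    gradient_clip_val=gradient_clip_val,
                    gradient_clip_algorithm=gradient_clip_algorithm,
                )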

src/lightning/fabric/connector.py

Lines changed: 1 addition & 1 deletion
@@ -239,7 +239,7 @@ def _check_config_and_set_final_flags(
             else:
                 raise TypeError(
                     f"Found invalid type for plugin {plugin}. Expected one of: Precision, "
-                    "CheckpointIO, ClusterEnviroment."
+                    "CheckpointIO, ClusterEnvironment."
                 )

         duplicated_plugin_key = [k for k, v in plugins_flags_types.items() if v > 1]
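For context on the corrected error message, a small sketch of passing a valid plugin to Fabric (``SLURMEnvironment`` is just one ``ClusterEnvironment`` implementation):

    from lightning.fabric import Fabric
    from lightning.fabric.plugins.environments import SLURMEnvironment

    # Precision, CheckpointIO, and ClusterEnvironment instances are accepted here;
    # any other type hits the TypeError whose wording this commit fixes.
    fabric = Fabric(plugins=[SLURMEnvironment()])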

src/lightning/fabric/plugins/precision/bitsandbytes.py

Lines changed: 1 addition & 1 deletion
@@ -403,7 +403,7 @@ class _NF4DQLinear(_Linear4bit):
         def __init__(self, *args: Any, **kwargs: Any) -> None:
             super().__init__(*args, quant_type="nf4", compress_statistics=True, **kwargs)

-    # these classes are defined programatically like this to avoid importing bitsandbytes in environments that have
+    # these classes are defined programmatically like this to avoid importing bitsandbytes in environments that have
     # it available but will not use it
     classes = {
         "_Linear8bitLt": _Linear8bitLt,

src/lightning/pytorch/trainer/connectors/accelerator_connector.py

Lines changed: 1 addition & 1 deletion
@@ -248,7 +248,7 @@ def _check_config_and_set_final_flags(
             else:
                 raise MisconfigurationException(
                     f"Found invalid type for plugin {plugin}. Expected one of: Precision, "
-                    "CheckpointIO, ClusterEnviroment, or LayerSync."
+                    "CheckpointIO, ClusterEnvironment, or LayerSync."
                 )

         duplicated_plugin_key = [k for k, v in plugins_flags_types.items() if v > 1]
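The Trainer-side connector accepts the same plugin categories plus LayerSync; a minimal sketch, assuming ``TorchSyncBatchNorm`` is importable from ``lightning.pytorch.plugins`` as in recent releases:

    from lightning.pytorch import Trainer
    from lightning.pytorch.plugins import TorchSyncBatchNorm

    # TorchSyncBatchNorm is a LayerSync plugin; unsupported plugin types raise
    # the MisconfigurationException whose message this commit corrects.
    trainer = Trainer(plugins=[TorchSyncBatchNorm()])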

tests/tests_fabric/utilities/test_data.py

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ def test_has_len():
 def test_replace_dunder_methods_multiple_loaders_without_init():
     """In case of a class, that inherits from a class that we are patching, but doesn't define its own `__init__`
     method (the one we are wrapping), it can happen, that `hasattr(cls, "__old__init__")` is True because of parent
-    class, but it is impossible to delete, because that method is owned by parent class. Furthermore, the error occured
+    class, but it is impossible to delete, because that method is owned by parent class. Furthermore, the error occurred
     only sometimes because it depends on the order in which we are iterating over a set of classes we are patching.

     This test simulates the behavior by generating sufficient number of dummy classes, which do not define `__init__`
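A plain-Python illustration of the situation the docstring describes (the class names are hypothetical, not the ones the test generates):

    class Parent:
        pass

    # Simulate what the patching utility does: stash the original __init__ on the parent.
    Parent.__old__init__ = Parent.__init__

    class Child(Parent):  # defines no __init__ of its own
        pass

    print(hasattr(Child, "__old__init__"))  # True, inherited from Parent
    try:
        delattr(Child, "__old__init__")  # fails: the attribute lives on Parent, not Child
    except AttributeError as err:
        print(err)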
