
Commit b4a8c7a

Merge branch 'Lightning-AI:master' into master
2 parents: 6a86186 + 64b2b6a


52 files changed: +612 −87 lines

.azure/gpu-tests-pytorch.yml

Lines changed: 0 additions & 1 deletion
@@ -117,7 +117,6 @@ jobs:
   set -e
   extra=$(python -c "print({'lightning': 'pytorch-'}.get('$(PACKAGE_NAME)', ''))")
   pip install -e ".[${extra}dev]" pytest-timeout -U --extra-index-url="${TORCH_URL}"
-  pip install setuptools==75.6.0 jsonargparse==4.35.0
   displayName: "Install package & dependencies"

 - bash: pip uninstall -y lightning

.github/dependabot.yml

Lines changed: 0 additions & 4 deletions
@@ -19,8 +19,6 @@ updates:
     separator: "-"
   # Allow up to 5 open pull requests for pip dependencies
   open-pull-requests-limit: 10
-  reviewers:
-    - "Lightning-AI/teams/core-lightning"

 # Enable version updates for GitHub Actions
 - package-ecosystem: "github-actions"
@@ -37,5 +35,3 @@ updates:
     separator: "-"
   # Allow up to 5 open pull requests for GitHub Actions
   open-pull-requests-limit: 10
-  reviewers:
-    - "Lightning-AI/teams/core-lightning"

.github/workflows/docker-build.yml

Lines changed: 2 additions & 2 deletions
@@ -97,7 +97,7 @@ jobs:
   # adding dome more images as Thunder mainly using python 3.10,
   # and we need to support integrations as for example LitGPT
   python_version: ["3.10"]
-  pytorch_version: ["2.6.0", "2.7.0"]
+  pytorch_version: ["2.6.0", "2.7.1"]
   cuda_version: ["12.6.3"]
   include:
     # These are the base images for PL release docker images.
@@ -108,7 +108,7 @@ jobs:
   - { python_version: "3.11", pytorch_version: "2.4.1", cuda_version: "12.1.1" }
   - { python_version: "3.12", pytorch_version: "2.5.1", cuda_version: "12.1.1" }
   - { python_version: "3.12", pytorch_version: "2.6.0", cuda_version: "12.4.1" }
-  - { python_version: "3.12", pytorch_version: "2.7.0", cuda_version: "12.6.3" }
+  - { python_version: "3.12", pytorch_version: "2.7.1", cuda_version: "12.6.3" }
   steps:
   - uses: actions/checkout@v4
   - uses: docker/setup-buildx-action@v3

dockers/base-cuda/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ ENV \
     MAKEFLAGS="-j2"

 RUN \
-    apt-get update && apt-get install -y wget && \
+    apt-get update --fix-missing && apt-get install -y wget && \
     apt-get update -qq --fix-missing && \
     NCCL_VER=$(dpkg -s libnccl2 | grep '^Version:' | awk -F ' ' '{print $2}' | awk -F '-' '{print $1}' | grep -ve '^\s*$') && \
     CUDA_VERSION_MM=${CUDA_VERSION%.*} && \

docs/source-fabric/advanced/model_parallel/tp_fsdp.rst

Lines changed: 1 addition & 1 deletion
@@ -276,7 +276,7 @@ Next steps

 .. displayitem::
    :header: Pipeline Parallelism
-   :description: Coming sooon
+   :description: Coming soon
    :col_css: col-md-4
    :height: 160
    :tag: advanced

docs/source-pytorch/advanced/compile.rst

Lines changed: 2 additions & 2 deletions
@@ -262,7 +262,7 @@ Avoid graph breaks
 When ``torch.compile`` looks at the code in your model's ``forward()`` or ``*_step()`` method, it will try to compile as much of the code as possible.
 If there are regions in the code that it doesn't understand, it will introduce a so-called "graph break" that essentially splits the code in optimized and unoptimized parts.
 Graph breaks aren't a deal breaker, since the optimized parts should still run faster.
-But if you want to get the most out of ``torch.compile``, you might want to invest rewriting the problematic section of the code that produce the breaks.
+But if you want to get the most out of ``torch.compile``, you might want to invest rewriting the problematic section of the code that produces the breaks.

 You can check whether your model produces graph breaks by calling ``torch.compile`` with ``fullgraph=True``:

@@ -332,7 +332,7 @@ Enabling CUDA Graphs often results in a significant speedup, but sometimes also

 **Shape padding:** The specific shape/size of the tensors involved in the computation of your model (input, activations, weights, gradients, etc.) can have an impact on the performance.
 With shape padding enabled, ``torch.compile`` can extend the tensors by padding to a size that gives a better memory alignment.
-Naturally, the tradoff here is that it will consume a bit more memory.
+Naturally, the tradeoff here is that it will consume a bit more memory.

 .. code-block:: python

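The compile.rst change above mentions checking for graph breaks by calling ``torch.compile`` with ``fullgraph=True``. A minimal sketch of that check, assuming PyTorch 2.x is installed (``ToyModel`` is a hypothetical stand-in for your real module; ``backend="eager"`` is used so only the tracing step runs, without inductor codegen):

```python
import torch

# Hypothetical stand-in for the forward() you want to audit
class ToyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) * 2

model = ToyModel()
# fullgraph=True makes torch.compile raise on the first graph break
# instead of silently splitting the program into optimized/unoptimized parts
compiled = torch.compile(model, fullgraph=True, backend="eager")
out = compiled(torch.ones(3))  # raises here if the forward can't be captured whole
```

If a graph break exists, the error message points at the offending line, which is the place to start rewriting.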
docs/source-pytorch/advanced/model_parallel/tp_fsdp.rst

Lines changed: 1 addition & 1 deletion
@@ -282,7 +282,7 @@ Next steps

 .. displayitem::
    :header: Pipeline Parallelism
-   :description: Coming sooon
+   :description: Coming soon
    :col_css: col-md-4
    :height: 160
    :tag: advanced

docs/source-pytorch/advanced/post_training_quantization.rst

Lines changed: 2 additions & 2 deletions
@@ -106,7 +106,7 @@ The "approach" parameter in PostTrainingQuantConfig is defined by the user to ma
 Quantize the model
 ==================

-The model can be qutized by Intel® Neural Compressor with:
+The model can be quantized by Intel® Neural Compressor with:

 .. code-block:: python

@@ -126,7 +126,7 @@ At last, the quantized model can be saved by:
 Hands-on Examples
 *****************

-Based on the `given example code <https://lightning.ai/docs/pytorch/2.1.0/notebooks/lightning_examples/text-transformers.html>`_, we show how Intel Neural Compressor conduct model quantization on PyTorch Lightning. We first define the basic config of the quantization process.
+Based on the `given example code <https://lightning.ai/docs/pytorch/2.1.0/notebooks/lightning_examples/text-transformers.html>`_, we show how Intel Neural Compressor conducts model quantization on PyTorch Lightning. We first define the basic config of the quantization process.

 .. code-block:: python

docs/source-pytorch/advanced/pruning_quantization.rst

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ You can also perform iterative pruning, apply the `lottery ticket hypothesis <ht
 .. code-block:: python

     def compute_amount(epoch):
-        # the sum of all returned values need to be smaller than 1
+        # the sum of all returned values needs to be smaller than 1
         if epoch == 10:
             return 0.5

@@ -43,7 +43,7 @@ You can also perform iterative pruning, apply the `lottery ticket hypothesis <ht
         return 0.01


-    # the amount can be also be a callable
+    # the amount can also be a callable
     trainer = Trainer(callbacks=[ModelPruning("l1_unstructured", amount=compute_amount)])

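The corrected comment in the hunk above ("the sum of all returned values needs to be smaller than 1") is an invariant worth sanity-checking before training. Only the epoch-10 branch appears in this diff, so the schedule below is a hypothetical reconstruction in the same style, with a plain-Python check of the invariant:

```python
# Hypothetical pruning schedule; only the epoch == 10 branch is in the
# diff above, the other branch and the epoch count are illustrative.
def compute_amount(epoch):
    # the sum of all returned values needs to be smaller than 1
    if epoch == 10:
        return 0.5
    elif epoch == 20:
        return 0.25
    # implicitly returns None: no pruning at this epoch

# Sum the amounts over a hypothetical 30-epoch run and verify the invariant
amounts = [compute_amount(e) for e in range(30)]
total = sum(a for a in amounts if a is not None)
assert total < 1, f"cumulative pruning fraction {total} must stay below 1"
```

A schedule whose amounts sum to 1 or more would eventually try to prune away every parameter, so a one-line assertion like this catches the mistake early.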
docs/source-pytorch/advanced/training_tricks.rst

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ If the Trainer's ``gradient_clip_algorithm`` is set to ``'value'`` (``'norm'`` b
     # clip gradients' maximum magnitude to <=0.5
     trainer = Trainer(gradient_clip_val=0.5, gradient_clip_algorithm="value")

-Read more about :ref:`Configuring Gradient Clipping <configure_gradient_clipping>` for advanced use-cases.
+Read more about :ref:`Configuring Gradient Clipping <configure_gradient_clipping>` for advanced use cases.

----------

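For the ``gradient_clip_algorithm="value"`` setting touched in the hunk above: clipping by value caps each gradient component at ±``gradient_clip_val``, while ``"norm"`` rescales the whole gradient vector. A dependency-free sketch of the two semantics (plain Python for illustration, not Lightning's actual implementation):

```python
import math

def clip_by_value(grads, clip_val):
    # cap each component's magnitude at clip_val, independently
    return [max(-clip_val, min(clip_val, g)) for g in grads]

def clip_by_norm(grads, max_norm):
    # rescale the whole vector only if its L2 norm exceeds max_norm,
    # preserving the gradient's direction
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        return [g * max_norm / norm for g in grads]
    return list(grads)

value_clipped = clip_by_value([0.2, -0.9, 1.4], 0.5)  # [0.2, -0.5, 0.5]
norm_clipped = clip_by_norm([3.0, 4.0], 1.0)          # [0.6, 0.8]
```

Note the design difference: value clipping can change the gradient's direction (each component saturates separately), whereas norm clipping keeps the direction and only shrinks the magnitude.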
0 commit comments