
Commit e2fa8f5
Merge branch 'master' into toggle_optimizer
2 parents: 018a50e + 989b759

File tree: 38 files changed, +293 / -95 lines

.github/dependabot.yml (0 additions, 4 deletions)

@@ -19,8 +19,6 @@ updates:
       separator: "-"
     # Allow up to 5 open pull requests for pip dependencies
     open-pull-requests-limit: 10
-    reviewers:
-      - "Lightning-AI/teams/core-lightning"

   # Enable version updates for GitHub Actions
   - package-ecosystem: "github-actions"
@@ -37,5 +35,3 @@ updates:
       separator: "-"
     # Allow up to 5 open pull requests for GitHub Actions
     open-pull-requests-limit: 10
-    reviewers:
-      - "Lightning-AI/teams/core-lightning"
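For orientation, the two hunks above leave each dependabot entry without a `reviewers` block. A minimal sketch of the resulting shape follows; only the fields visible in the diff are taken from the repository, everything else (ecosystem name of the first entry, `directory`, `schedule`) is an assumption for illustration:

```yaml
# Sketch only: fields not shown in the diff above are assumptions.
version: 2
updates:
  - package-ecosystem: "pip"            # assumed for the first entry
    directory: "/"                      # assumed
    schedule:
      interval: "weekly"                # assumed
    # Allow up to 5 open pull requests for pip dependencies
    open-pull-requests-limit: 10
    # `reviewers:` was removed in this commit

  # Enable version updates for GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"                      # assumed
    schedule:
      interval: "weekly"                # assumed
    open-pull-requests-limit: 10
```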

.github/workflows/_legacy-checkpoints.yml (1 addition, 1 deletion)

@@ -113,7 +113,7 @@ jobs:

       - run: pip install -r requirements/ci.txt
       - name: Upload checkpoints to S3
-        if: secrets.AWS_REGION != ''
+        if: ${{ secrets[AWS_REGION] != '' }}
        working-directory: ${{ env.LEGACY_FOLDER }}
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}
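The fix above gates the upload step on a secret being non-empty. As a hedged aside (this variant is not part of the commit), a commonly used alternative when an expression referencing the `secrets` context is rejected is to map the secret into the step's `env` and test the environment variable instead:

```yaml
# Hypothetical variant, not from this commit: expose the secret as an
# environment variable and gate the step on the env context.
- name: Upload checkpoints to S3
  env:
    AWS_REGION: ${{ secrets.AWS_REGION }}
  if: ${{ env.AWS_REGION != '' }}
  working-directory: ${{ env.LEGACY_FOLDER }}
  run: echo "would upload to S3 in region $AWS_REGION"
```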

.github/workflows/docker-build.yml (5 additions, 0 deletions)

@@ -94,6 +94,11 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
+        # adding dome more images as Thunder mainly using python 3.10,
+        # and we need to support integrations as for example LitGPT
+        python_version: ["3.10"]
+        pytorch_version: ["2.6.0", "2.7.0"]
+        cuda_version: ["12.6.3"]
         include:
           # These are the base images for PL release docker images.
           # Make sure the matrix here matches the one above.
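The added axes expand combinatorially: ignoring the `include:` entries, GitHub Actions builds one job per combination of `python_version`, `pytorch_version`, and `cuda_version`. A small Python sketch, with the values copied from the diff above, shows the expansion:

```python
from itertools import product

# Matrix axes added to docker-build.yml in this commit.
python_version = ["3.10"]
pytorch_version = ["2.6.0", "2.7.0"]
cuda_version = ["12.6.3"]

# GitHub Actions expands a matrix into the cartesian product of its axes,
# so these three lists yield 1 * 2 * 1 = 2 build jobs.
jobs = [
    {"python": py, "pytorch": pt, "cuda": cu}
    for py, pt, cu in product(python_version, pytorch_version, cuda_version)
]

for job in jobs:
    print(job)
```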

.github/workflows/release-pkg.yml (10 additions, 10 deletions)

@@ -13,7 +13,7 @@ on:
       - "requirements/ci.txt"
       - ".github/actions/pkg-check/*"
       - ".github/actions/pkg-publish/*"
-      - ".github/workflows/_legacy-checkpoints.yml.yml"
+      - ".github/workflows/_legacy-checkpoints.yml"
       - ".github/workflows/_build-packages.yml"
       - ".github/workflows/release-pypi.yml"

@@ -179,12 +179,12 @@ jobs:
     with:
       pkg-folder: dist/${{ steps.folder.outputs.pkg }}
       pypi-token: ${{ secrets[format('PYPI_TOKEN_{0}', matrix.name)] }}
-
-  legacy-checkpoints:
-    needs: [build-packages]
-    uses: ./.github/workflows/_legacy-checkpoints.yml
-    with:
-      push_to_s3: ${{ startsWith(github.event.ref, 'refs/tags') || github.event_name == 'release' }}
-      upload_local: ${{ startsWith(github.event.ref, 'refs/tags') || github.event_name == 'release' }}
-      create_pr: ${{ startsWith(github.event.ref, 'refs/tags') || github.event_name == 'release' }}
-    secrets: inherit
+  # FIXME: this is not working suddenly, Unrecognized named-value: 'secrets'
+  # legacy-checkpoints:
+  #   needs: [build-packages]
+  #   uses: ./.github/workflows/_legacy-checkpoints.yml
+  #   with:
+  #     push_to_s3: ${{ startsWith(github.event.ref, 'refs/tags') || github.event_name == 'release' }}
+  #     upload_local: ${{ startsWith(github.event.ref, 'refs/tags') || github.event_name == 'release' }}
+  #     create_pr: ${{ startsWith(github.event.ref, 'refs/tags') || github.event_name == 'release' }}
+  #   secrets: inherit

.gitignore (1 addition, 1 deletion)

@@ -198,7 +198,7 @@ node_modules/
 **/events.out.tfevents.*
 examples/**/*.png

-# instalation artifacts
+# installation artifacts
 requirements/base.txt

 # CI

docs/source-fabric/advanced/model_parallel/tp_fsdp.rst (1 addition, 1 deletion)

@@ -276,7 +276,7 @@ Next steps

 .. displayitem::
    :header: Pipeline Parallelism
-   :description: Coming sooon
+   :description: Coming soon
    :col_css: col-md-4
    :height: 160
    :tag: advanced

docs/source-pytorch/advanced/compile.rst (2 additions, 2 deletions)

@@ -262,7 +262,7 @@ Avoid graph breaks
 When ``torch.compile`` looks at the code in your model's ``forward()`` or ``*_step()`` method, it will try to compile as much of the code as possible.
 If there are regions in the code that it doesn't understand, it will introduce a so-called "graph break" that essentially splits the code in optimized and unoptimized parts.
 Graph breaks aren't a deal breaker, since the optimized parts should still run faster.
-But if you want to get the most out of ``torch.compile``, you might want to invest rewriting the problematic section of the code that produce the breaks.
+But if you want to get the most out of ``torch.compile``, you might want to invest rewriting the problematic section of the code that produces the breaks.

 You can check whether your model produces graph breaks by calling ``torch.compile`` with ``fullgraph=True``:

@@ -332,7 +332,7 @@ Enabling CUDA Graphs often results in a significant speedup, but sometimes also

 **Shape padding:** The specific shape/size of the tensors involved in the computation of your model (input, activations, weights, gradients, etc.) can have an impact on the performance.
 With shape padding enabled, ``torch.compile`` can extend the tensors by padding to a size that gives a better memory alignment.
-Naturally, the tradoff here is that it will consume a bit more memory.
+Naturally, the tradeoff here is that it will consume a bit more memory.

 .. code-block:: python

docs/source-pytorch/advanced/model_parallel/tp_fsdp.rst (1 addition, 1 deletion)

@@ -282,7 +282,7 @@ Next steps

 .. displayitem::
    :header: Pipeline Parallelism
-   :description: Coming sooon
+   :description: Coming soon
    :col_css: col-md-4
    :height: 160
    :tag: advanced

docs/source-pytorch/advanced/post_training_quantization.rst (2 additions, 2 deletions)

@@ -106,7 +106,7 @@ The "approach" parameter in PostTrainingQuantConfig is defined by the user to ma
 Quantize the model
 ==================

-The model can be qutized by Intel® Neural Compressor with:
+The model can be quantized by Intel® Neural Compressor with:

 .. code-block:: python

@@ -126,7 +126,7 @@ At last, the quantized model can be saved by:
 Hands-on Examples
 *****************

-Based on the `given example code <https://lightning.ai/docs/pytorch/2.1.0/notebooks/lightning_examples/text-transformers.html>`_, we show how Intel Neural Compressor conduct model quantization on PyTorch Lightning. We first define the basic config of the quantization process.
+Based on the `given example code <https://lightning.ai/docs/pytorch/2.1.0/notebooks/lightning_examples/text-transformers.html>`_, we show how Intel Neural Compressor conducts model quantization on PyTorch Lightning. We first define the basic config of the quantization process.

 .. code-block:: python

0 commit comments