
Commit 850fb0e

Merge branch 'master' into bump/pt-2.5
2 parents: da21ae9 + 2110a39

12 files changed: +17 −17 lines

.github/workflows/call-clear-cache.yml

Lines changed: 4 additions & 4 deletions

@@ -23,18 +23,18 @@ on:
 jobs:
   cron-clear:
     if: github.event_name == 'schedule' || github.event_name == 'pull_request'
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
-      scripts-ref: v0.11.7
+      scripts-ref: v0.11.8
       dry-run: ${{ github.event_name == 'pull_request' }}
       pattern: "latest|docs"
       age-days: 7

   direct-clear:
     if: github.event_name == 'workflow_dispatch' || github.event_name == 'pull_request'
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
-      scripts-ref: v0.11.7
+      scripts-ref: v0.11.8
       dry-run: ${{ github.event_name == 'pull_request' }}
       pattern: ${{ inputs.pattern || 'pypi_wheels' }} # setting str in case of PR / debugging
       age-days: ${{ fromJSON(inputs.age-days) || 0 }} # setting 0 in case of PR / debugging
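Both jobs pin the Lightning utilities release in two places: the tag after `@` in `uses:` (which selects the reusable workflow definition) and the `scripts-ref` input (which selects the helper scripts that workflow checks out at run time), so a version bump has to touch all four lines together. A minimal sketch of the pattern, with hypothetical `my-org/my-utils` and `reusable-clear-cache.yml` names standing in for the real (obfuscated) path:

```yaml
name: call-clear-cache

on:
  schedule:
    - cron: "0 0 * * 0" # hypothetical weekly trigger
  pull_request: {}      # lets PRs exercise the dry-run path

jobs:
  cron-clear:
    if: github.event_name == 'schedule' || github.event_name == 'pull_request'
    # The tag after `@` pins the reusable workflow definition itself.
    uses: my-org/my-utils/.github/workflows/[email protected]
    with:
      # Pins the scripts the reusable workflow checks out at run time;
      # keep it on the same release as the `uses:` tag.
      scripts-ref: v0.11.8
      dry-run: ${{ github.event_name == 'pull_request' }}
      pattern: "latest|docs"
      age-days: 7
```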

.github/workflows/ci-check-md-links.yml

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ on:
 
 jobs:
   check-md-links:
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
       config-file: ".github/markdown-links-config.json"
       base-branch: "master"

.github/workflows/ci-schema.yml

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ on:
 
 jobs:
   check:
-    uses: Lightning-AI/utilities/.github/workflows/[email protected].7
+    uses: Lightning-AI/utilities/.github/workflows/[email protected].8
     with:
       # skip azure due to the wrong schema file by MSFT
       # https://github.com/Lightning-AI/lightning-flash/pull/1455#issuecomment-1244793607

.pre-commit-config.yaml

Lines changed: 1 addition & 1 deletion

@@ -58,7 +58,7 @@ repos:
       #args: ["--write-changes"] # uncomment if you want to get automatic fixing
 
   - repo: https://github.com/PyCQA/docformatter
-    rev: v1.7.5
+    rev: 06907d0267368b49b9180eed423fae5697c1e909 # todo: fix for docformatter after last 1.7.5
     hooks:
       - id: docformatter
         additional_dependencies: [tomli]
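The `todo` comment suggests the needed docformatter fix landed after the last 1.7.5 release, so the hook is pinned to a bare commit SHA instead of a tag. pre-commit accepts any git revision in `rev:`, which makes this a useful escape hatch when a fix exists upstream but has not shipped; note that `pre-commit autoupdate` will rewrite the SHA back to the latest tag unless run with `--bleeding-edge`. A sketch of the pattern (the SHA is the one from the diff; the rest is illustrative):

```yaml
repos:
  # Pinning a hook to an exact commit instead of a release tag: useful
  # when the needed fix exists upstream but has not been tagged yet.
  - repo: https://github.com/PyCQA/docformatter
    rev: 06907d0267368b49b9180eed423fae5697c1e909 # todo: revert to a tag once released
    hooks:
      - id: docformatter
        additional_dependencies: [tomli] # allows reading config from pyproject.toml
```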

_notebooks

docs/source-pytorch/accelerators/tpu_advanced.rst

Lines changed: 2 additions & 2 deletions

@@ -52,7 +52,7 @@ Example:
     model = WeightSharingModule()
     trainer = Trainer(max_epochs=1, accelerator="tpu")
 
-See `XLA Documentation <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks>`_
+See `XLA Documentation <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#xla-tensor-quirks>`_
 
 ----
 
@@ -61,4 +61,4 @@ XLA
 XLA is the library that interfaces PyTorch with the TPUs.
 For more information check out `XLA <https://github.com/pytorch/xla>`_.
 
-Guide for `troubleshooting XLA <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md>`_
+Guide for `troubleshooting XLA <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md>`_

docs/source-pytorch/accelerators/tpu_basic.rst

Lines changed: 2 additions & 2 deletions

@@ -108,7 +108,7 @@ There are cases in which training on TPUs is slower when compared with GPUs, for
 - XLA Graph compilation during the initial steps `Reference <https://github.com/pytorch/xla/issues/2383#issuecomment-666519998>`_
 - Some tensor ops are not fully supported on TPU, or not supported at all. These operations will be performed on CPU (context switch).
 
-The official PyTorch XLA `performance guide <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#known-performance-caveats>`_
+The official PyTorch XLA `performance guide <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#known-performance-caveats>`_
 has more detailed information on how PyTorch code can be optimized for TPU. In particular, the
-`metrics report <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#get-a-metrics-report>`_ allows
+`metrics report <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#get-a-metrics-report>`_ allows
 one to identify operations that lead to context switching.

docs/source-pytorch/accelerators/tpu_faq.rst

Lines changed: 1 addition & 1 deletion

@@ -78,7 +78,7 @@ A lot of PyTorch operations aren't lowered to XLA, which could lead to significa
 These operations are moved to the CPU memory and evaluated, and then the results are transferred back to the XLA device(s).
 By using the `xla_debug` Strategy, users could create a metrics report to diagnose issues.
 
-The report includes things like (`XLA Reference <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#troubleshooting>`_):
+The report includes things like (`XLA Reference <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#troubleshooting>`_):
 
 * how many times we issue XLA compilations and time spent on issuing.
 * how many times we execute and time spent on execution

docs/source-pytorch/upgrade/sections/2_0_regular.rst

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@
      - Then
      - Ref
 
-   * - used PyTorch 3.11
+   * - used PyTorch 1.11
      - upgrade to PyTorch 2.1 or higher
      - `PR18691`_

src/lightning/fabric/strategies/deepspeed.py

Lines changed: 1 addition & 1 deletion

@@ -598,7 +598,7 @@ def _initialize_engine(
 ) -> Tuple["DeepSpeedEngine", Optimizer]:
     """Initialize one model and one optimizer with an optional learning rate scheduler.
 
-    This calls :func:`deepspeed.initialize` internally.
+    This calls ``deepspeed.initialize`` internally.
 
     """
     import deepspeed
