
Commit 25fc4da

Merge branch 'master' into flake8-password-rules
2 parents: 733026b + 0943546

File tree: 11 files changed, +33 -14 lines

.github/workflows/probot-check-group.yml

Lines changed: 2 additions & 2 deletions
@@ -12,14 +12,14 @@ jobs:
   required-jobs:
     runs-on: ubuntu-latest
     if: github.event.pull_request.draft == false
-    timeout-minutes: 61 # in case something is wrong with the internal timeout
+    timeout-minutes: 71 # in case something is wrong with the internal timeout
     steps:
       - uses: Lightning-AI/[email protected]
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
           job: check-group
           interval: 180 # seconds
-          timeout: 60 # minutes
+          timeout: 70 # minutes
           maintainers: "Lightning-AI/lai-frameworks"
           owner: "carmocca"
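The two values move together: the job-level `timeout-minutes` stays one minute above the action's internal `timeout`, so the runner-level limit only fires as a backstop, as the inline comment notes. A trivial sketch of that invariant, with the values taken from the diff above:

    # The job-level limit is a deliberate backstop over the action's own timeout
    job_timeout_minutes = 71     # `timeout-minutes` on the job
    action_timeout_minutes = 70  # `timeout` input passed to the check-group action
    assert job_timeout_minutes > action_timeout_minutes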

.lightning/workflows/fabric.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ trigger:
   pull_request:
     branches: ["master", "release/stable"]

-timeout: "55" # minutes
+timeout: "60" # minutes
 parametrize:
   matrix: {}
   include:

.lightning/workflows/pytorch.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ trigger:
   pull_request:
     branches: ["master", "release/stable"]

-timeout: "55" # minutes
+timeout: "60" # minutes
 parametrize:
   matrix: {}
   include:

docs/source-pytorch/common/checkpointing_intermediate.rst

Lines changed: 7 additions & 1 deletion
@@ -21,7 +21,13 @@ For fine-grained control over checkpointing behavior, use the :class:`~lightning
     checkpoint_callback = ModelCheckpoint(dirpath="my/path/", save_top_k=2, monitor="val_loss")
     trainer = Trainer(callbacks=[checkpoint_callback])
     trainer.fit(model)
-    checkpoint_callback.best_model_path
+
+    # Access best and last model checkpoint directly from the callback
+    print(checkpoint_callback.best_model_path)
+    print(checkpoint_callback.last_model_path)
+    # Or via the trainer
+    print(trainer.checkpoint_callback.best_model_path)
+    print(trainer.checkpoint_callback.last_model_path)


 Any value that has been logged via *self.log* in the LightningModule can be monitored.
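For reference, a minimal runnable sketch of the pattern this doc change documents, using Lightning's demo model (assumed importable from `lightning.pytorch.demos.boring_classes`):

    from lightning.pytorch import Trainer
    from lightning.pytorch.callbacks import ModelCheckpoint
    from lightning.pytorch.demos.boring_classes import BoringModel

    # No monitored metric here, so the callback tracks the most recent checkpoint as "best"
    checkpoint_callback = ModelCheckpoint(dirpath="my/path/", save_last=True)
    trainer = Trainer(max_epochs=2, callbacks=[checkpoint_callback])
    trainer.fit(BoringModel())

    # Both attributes hold filesystem paths once a checkpoint has been written
    print(checkpoint_callback.best_model_path)
    print(checkpoint_callback.last_model_path)
    print(trainer.checkpoint_callback.best_model_path)  # same callback, reached via the trainer

    # The saved weights can then be restored with the usual classmethod
    restored = BoringModel.load_from_checkpoint(checkpoint_callback.best_model_path)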

requirements/docs.txt

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ myst-parser >=0.18.1, <5.0.0
 nbsphinx >=0.8.5, <=0.9.7
 nbconvert >7.14, <7.17
 pandoc >=1.0, <=2.4
-docutils>=0.18.1,<=0.22
+docutils>=0.18.1,<=0.22.2
 sphinxcontrib-fulltoc >=1.0, <=1.2.0
 sphinxcontrib-mockautodoc
 sphinx-autobuild
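The bump matters because PEP 440 treats `0.22.2` as greater than `0.22`, so the old cap excluded the patch release. A quick check with the `packaging` library (an assumption; it is not part of this diff) confirms it:

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    old = SpecifierSet(">=0.18.1,<=0.22")
    new = SpecifierSet(">=0.18.1,<=0.22.2")

    print(Version("0.22.2") in old)  # False: 0.22.2 > 0.22, rejected by the old cap
    print(Version("0.22.2") in new)  # True: the widened cap admits the patch release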

requirements/pytorch/test.txt

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ numpy >1.20.0, <1.27.0
 onnx >1.12.0, <1.20.0
 onnxruntime >=1.12.0, <1.23.0
 onnxscript >= 0.1.0, < 0.5.0
-psutil <7.0.1 # for `DeviceStatsMonitor`
+psutil <7.1.1 # for `DeviceStatsMonitor`
 pandas >2.0, <2.4.0 # needed in benchmarks
 fastapi # for `ServableModuleValidator` # not setting version as re-defined in App
 uvicorn # for `ServableModuleValidator` # not setting version as re-defined in App
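As the inline comment says, the pin exists because `DeviceStatsMonitor` collects host metrics through psutil. A hedged sketch of that relationship:

    import psutil
    from lightning.pytorch import Trainer
    from lightning.pytorch.callbacks import DeviceStatsMonitor

    # The kind of host metric psutil exposes and the callback logs on CPU runs
    print(psutil.virtual_memory().percent)

    trainer = Trainer(callbacks=[DeviceStatsMonitor(cpu_stats=True)])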

requirements/typing.txt

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-mypy==1.18.1
+mypy==1.18.2
 torch==2.8.0

 types-Markdown

src/lightning/pytorch/CHANGELOG.md

Lines changed: 3 additions & 0 deletions
@@ -30,6 +30,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Default to `RichProgressBar` and `RichModelSummary` if the rich package is available. Fallback to TQDMProgressBar and ModelSummary otherwise ([#20896](https://github.com/Lightning-AI/pytorch-lightning/pull/20896))


+- Add MPS accelerator support for mixed precision ([#21209](https://github.com/Lightning-AI/pytorch-lightning/pull/21209))
+
+
 ### Removed

 -
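The new entry corresponds to user-facing behavior along these lines; a guarded sketch, assuming a PyTorch build with MPS support:

    import torch
    from lightning.pytorch import Trainer

    if torch.backends.mps.is_available():
        # 16-bit automatic mixed precision on Apple silicon, enabled by #21209
        trainer = Trainer(accelerator="mps", devices=1, precision="16-mixed")
    else:
        trainer = Trainer(accelerator="cpu", precision="bf16-mixed")  # CPU AMP prefers bfloat16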

src/lightning/pytorch/callbacks/model_checkpoint.py

Lines changed: 5 additions & 5 deletions
@@ -204,11 +204,11 @@ class ModelCheckpoint(Checkpoint):
         ... )

         # retrieve the best checkpoint after training
-        checkpoint_callback = ModelCheckpoint(dirpath='my/path/')
-        trainer = Trainer(callbacks=[checkpoint_callback])
-        model = ...
-        trainer.fit(model)
-        checkpoint_callback.best_model_path
+        >>> checkpoint_callback = ModelCheckpoint(dirpath='my/path/')
+        >>> trainer = Trainer(callbacks=[checkpoint_callback])
+        >>> model = ... # doctest: +SKIP
+        >>> trainer.fit(model) # doctest: +SKIP
+        >>> print(checkpoint_callback.best_model_path) # doctest: +SKIP

     .. tip:: Saving and restoring multiple checkpoint callbacks at the same time is supported under variation in the
         following arguments:
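For context on the directive used above: `# doctest: +SKIP` is a standard-library doctest option that keeps the line visible in rendered docs but never executes it, which is what lets the docstring call `trainer.fit(model)` without running training during doc tests. A self-contained illustration (`train_for_hours` is a hypothetical name):

    import doctest

    def demo():
        """
        >>> train_for_hours()  # doctest: +SKIP
        >>> 1 + 1
        2
        """

    # Only `1 + 1` is executed; the skipped call never runs, so no NameError is raised
    doctest.run_docstring_examples(demo, {"demo": demo}, verbose=False)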

src/lightning/pytorch/trainer/connectors/accelerator_connector.py

Lines changed: 1 addition & 1 deletion
@@ -515,7 +515,7 @@ def _check_and_init_precision(self) -> Precision:
             rank_zero_info(
                 f"Using {'16bit' if self._precision_flag == '16-mixed' else 'bfloat16'} Automatic Mixed Precision (AMP)"
             )
-            device = "cpu" if self._accelerator_flag == "cpu" else "cuda"
+            device = self._accelerator_flag if self._accelerator_flag in ("cpu", "mps") else "cuda"
             return MixedPrecision(self._precision_flag, device)  # type: ignore[arg-type]

         raise RuntimeError("No precision set")
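This one-line change is the core of the MPS AMP support: the accelerator flag now passes through for both "cpu" and "mps" instead of collapsing everything non-CPU to "cuda". A standalone restatement of that selection logic:

    def pick_amp_device(accelerator_flag: str) -> str:
        # Mirrors the ternary above: cpu and mps pass through, everything else maps to cuda
        return accelerator_flag if accelerator_flag in ("cpu", "mps") else "cuda"

    assert pick_amp_device("mps") == "mps"   # previously "cuda", which broke AMP on Apple silicon
    assert pick_amp_device("gpu") == "cuda"
    assert pick_amp_device("cpu") == "cpu"

    # The returned string is the device MixedPrecision autocasts on, roughly:
    #   torch.autocast(device_type=pick_amp_device(flag), dtype=...)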
