Commit d454e6e

Merge branch 'master' into nitpick/add-make-command

2 parents: 658328d + b36edc4

File tree: 9 files changed (+10 / -10 lines)


.github/CONTRIBUTING.md (1 addition, 1 deletion)

```diff
@@ -3,7 +3,7 @@
 Welcome to the PyTorch Lightning community! We're building the most advanced research platform on the planet to implement the latest, best practices
 and integrations that the amazing PyTorch team and other research organization rolls out!
 
-If you are new to open source, check out [this blog to get started with your first Open Source contribution](https://devblog.pytorchlightning.ai/quick-contribution-guide-86d977171b3a).
+If you are new to open source, check out [this blog to get started with your first Open Source contribution](https://medium.com/pytorch-lightning/quick-contribution-guide-86d977171b3a).
 
 ## Main Core Value: One less thing to remember
 
```

README.md (1 addition, 1 deletion)

```diff
@@ -57,7 +57,7 @@ ______________________________________________________________________
 
 # Why PyTorch Lightning?
 
-Training models in plain PyTorch is tedious and error-prone - you have to manually handle things like backprop, mixed precision, multi-GPU, and distributed training, often rewriting code for every new project. PyTorch Lightning organizes PyTorch code to automate those complexities so you can focus on your model and data, while keeping full control and scaling from CPU to multi-node without changing your core code.
+Training models in plain PyTorch is tedious and error-prone - you have to manually handle things like backprop, mixed precision, multi-GPU, and distributed training, often rewriting code for every new project. PyTorch Lightning organizes PyTorch code to automate those complexities so you can focus on your model and data, while keeping full control and scaling from CPU to multi-node without changing your core code. But if you want control of those things, you can still opt into more DIY.
 
 Fun analogy: If PyTorch is Javascript, PyTorch Lightning is ReactJS or NextJS.
 
```

requirements/fabric/base.txt (1 addition, 1 deletion)

```diff
@@ -2,7 +2,7 @@
 # in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
 
 torch >=2.1.0, <2.8.0
-fsspec[http] >=2022.5.0, <2025.6.0
+fsspec[http] >=2022.5.0, <2025.8.0
 packaging >=20.0, <=25.0
 typing-extensions >=4.5.0, <4.15.0
 lightning-utilities >=0.10.0, <0.15.0
```
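The version pins above are half-open ranges: the bump relaxes the upper bound so newer fsspec releases are accepted. The effect can be sketched with a hypothetical `in_range` helper that compares dotted numeric versions (a simplified stand-in for full PEP 440 specifier matching, which the pinned `packaging` library implements for real):

```python
def in_range(version: str, lower: str, upper: str) -> bool:
    """Check lower <= version < upper for dotted numeric versions.

    Simplified sketch: real requirement parsing should use
    packaging.specifiers.SpecifierSet, which handles pre-releases etc.
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(lower) <= as_tuple(version) < as_tuple(upper)

# fsspec 2025.7.0 was rejected by the old cap but passes the new one:
print(in_range("2025.7.0", "2022.5.0", "2025.6.0"))  # False: above the old <2025.6.0 cap
print(in_range("2025.7.0", "2022.5.0", "2025.8.0"))  # True: inside the widened range
```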

requirements/pytorch/base.txt (1 addition, 1 deletion)

```diff
@@ -4,7 +4,7 @@
 torch >=2.1.0, <=2.8.0
 tqdm >=4.57.0, <4.68.0
 PyYAML >5.4, <6.1.0
-fsspec[http] >=2022.5.0, <2025.6.0
+fsspec[http] >=2022.5.0, <2025.8.0
 torchmetrics >0.7.0, <1.8.0
 packaging >=20.0, <=25.0
 typing-extensions >=4.5.0, <4.15.0
```

requirements/pytorch/test.txt (1 addition, 1 deletion)

```diff
@@ -16,4 +16,4 @@ pandas >2.0, <2.4.0 # needed in benchmarks
 fastapi # for `ServableModuleValidator` # not setting version as re-defined in App
 uvicorn # for `ServableModuleValidator` # not setting version as re-defined in App
 
-tensorboard >=2.9.1, <2.20.0 # for `TensorBoardLogger`
+tensorboard >=2.9.1, <2.21.0 # for `TensorBoardLogger`
```

requirements/typing.txt (1 addition, 1 deletion)

```diff
@@ -1,4 +1,4 @@
-mypy==1.16.1
+mypy==1.17.0
 torch==2.7.1
 
 types-Markdown
```

src/lightning/pytorch/core/module.py (2 additions, 2 deletions)

```diff
@@ -466,10 +466,10 @@ def log(
         )
 
         # make sure user doesn't introduce logic for multi-dataloaders
-        if "/dataloader_idx_" in name:
+        if add_dataloader_idx and "/dataloader_idx_" in name:
             raise MisconfigurationException(
                 f"You called `self.log` with the key `{name}`"
-                " but it should not contain information about `dataloader_idx`"
+                " but it should not contain information about `dataloader_idx` when `add_dataloader_idx=True`"
             )
 
         value = apply_to_collection(value, (Tensor, numbers.Number), self.__to_tensor, name)
```
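The behavior change above can be sketched in isolation (the `check_log_key` helper below is hypothetical; the real guard lives inside `LightningModule.log`): a user-supplied metric key that already embeds `/dataloader_idx_` now only raises when Lightning itself would also append that suffix, i.e. when the two would clash.

```python
def check_log_key(name: str, add_dataloader_idx: bool = True) -> None:
    """Standalone sketch of the guard patched in this commit.

    Before: any key containing "/dataloader_idx_" raised.
    After: it raises only when add_dataloader_idx=True, since only then
    would Lightning's auto-appended suffix collide with the user's key.
    """
    if add_dataloader_idx and "/dataloader_idx_" in name:
        raise ValueError(
            f"You called `self.log` with the key `{name}`"
            " but it should not contain information about `dataloader_idx`"
            " when `add_dataloader_idx=True`"
        )

# A manually suffixed key is now accepted when auto-suffixing is off:
check_log_key("val_loss/dataloader_idx_0", add_dataloader_idx=False)

# The clashing case still raises, as before:
try:
    check_log_key("val_loss/dataloader_idx_0", add_dataloader_idx=True)
except ValueError:
    print("raised")  # prints "raised"
```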

src/pytorch_lightning/README.md (1 addition, 1 deletion)

```diff
@@ -370,7 +370,7 @@ The PyTorch Lightning community is maintained by
 - [10+ core contributors](https://lightning.ai/docs/pytorch/stable/community/governance.html) who are all a mix of professional engineers, Research Scientists, and Ph.D. students from top AI labs.
 - 680+ active community contributors.
 
-Want to help us build Lightning and reduce boilerplate for thousands of researchers? [Learn how to make your first contribution here](https://devblog.pytorchlightning.ai/quick-contribution-guide-86d977171b3a)
+Want to help us build Lightning and reduce boilerplate for thousands of researchers? [Learn how to make your first contribution here](https://medium.com/pytorch-lightning/quick-contribution-guide-86d977171b3a)
 
 PyTorch Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support.
```

tests/README.md (1 addition, 1 deletion)

````diff
@@ -26,7 +26,7 @@ Additionally, for testing backward compatibility with older versions of PyTorch
 bash .actions/pull_legacy_checkpoints.sh
 ```
 
-Note: These checkpoints are generated to set baselines for maintaining backward compatibility with legacy versions of PyTorch Lightning. Details of checkpoints for back-compatibility can be found [here](https://github.com/Lightning-AI/pytorch-lightning/blob/master/tests/legacy/README.md).
+Note: These checkpoints are generated to set baselines for maintaining backward compatibility with legacy versions of PyTorch Lightning. Details of checkpoints for back-compatibility can be found [here](https://github.com/Lightning-AI/pytorch-lightning/tree/master/tests/legacy/README.md).
 
 You can run the full test suite in your terminal via this make script:
````