Commit 819d3c8

Merge branch 'master' into feat/dynamo_export_onnx

# Conflicts:
#	.github/workflows/ci-tests-fabric.yml
#	.github/workflows/ci-tests-pytorch.yml

2 parents: 9491953 + ca3880a

File tree

21 files changed: +69 −46 lines changed

.azure/gpu-tests-fabric.yml

Lines changed: 1 addition & 1 deletion
@@ -130,7 +130,7 @@ jobs:
   - bash: |
       set -e
       extra=$(python -c "print({'lightning': 'fabric-'}.get('$(PACKAGE_NAME)', ''))")
-      pip install -e ".[${extra}dev]" -U --extra-index-url="${TORCH_URL}"
+      pip install -e ".[${extra}dev]" -U --upgrade-strategy=eager --extra-index-url="${TORCH_URL}"
     displayName: "Install package & dependencies"

   - bash: |
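The hunk above switches pip's dependency resolution from its default to the eager strategy. A quick way to inspect what the flag does, assuming a pip version that documents `--upgrade-strategy` in its help text:

```shell
# Show pip's own documentation for the flag used in the CI change:
# "only-if-needed" (default) upgrades a dependency only when the installed
# version no longer satisfies the requirement; "eager" upgrades every
# dependency of the upgraded package to the newest allowed version.
python -m pip install --help | grep -A 3 -- "--upgrade-strategy"
```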

.azure/gpu-tests-pytorch.yml

Lines changed: 1 addition & 1 deletion
@@ -134,7 +134,7 @@ jobs:
   - bash: |
       set -e
       extra=$(python -c "print({'lightning': 'pytorch-'}.get('$(PACKAGE_NAME)', ''))")
-      pip install -e ".[${extra}dev]" -U --extra-index-url="${TORCH_URL}"
+      pip install -e ".[${extra}dev]" -U --upgrade-strategy=eager --extra-index-url="${TORCH_URL}"
     displayName: "Install package & dependencies"

   - bash: pip uninstall -y lightning

.github/workflows/probot-check-group.yml

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ jobs:
     if: github.event.pull_request.draft == false
     timeout-minutes: 61 # in case something is wrong with the internal timeout
     steps:
-      - uses: Lightning-AI/probot@v5.4
+      - uses: Lightning-AI/probot@v5.5
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:

docs/source-fabric/api/fabric_methods.rst

Lines changed: 4 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -40,13 +40,17 @@ Moves the model and optimizer to the correct device automatically.
4040
4141
model = nn.Linear(32, 64)
4242
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
43+
scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=1.0, end_factor=0.3, total_iters=10)
4344
4445
# Set up model and optimizer for accelerated training
4546
model, optimizer = fabric.setup(model, optimizer)
4647
4748
# If you don't want Fabric to set the device
4849
model, optimizer = fabric.setup(model, optimizer, move_to_device=False)
4950
51+
# If you want to additionally register a learning rate scheduler with compatible strategies such as DeepSpeed
52+
model, optimizer, scheduler = fabric.setup(model, optimizer, scheduler)
53+
5054
5155
The setup method also prepares the model for the selected precision choice so that operations during ``forward()`` get
5256
cast automatically. Advanced users should read :doc:`the notes on models wrapped by Fabric <../api/wrappers>`.

docs/source-fabric/api/wrappers.rst

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ If you were to run this model in Fabric with multiple devices (DDP or FSDP), you
     # OK: Calling the model directly
     output = model(torch.randn(10))

-    # OK: Calling the model's forward (equivalent to the abvoe)
+    # OK: Calling the model's forward (equivalent to the above)
     output = model.forward(torch.randn(10))

     # ERROR: Calling another method that calls forward indirectly
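The docs passage this typo fix belongs to distinguishes direct and indirect ``forward`` calls. A minimal sketch, using a hypothetical ``Wrapper`` class rather than Fabric's actual wrapper, of why only calls made through the wrapper are intercepted:

```python
class Net:
    def forward(self, x):
        return x * 2

    def generate(self, x):
        # Indirect call: self.forward here is the plain module's forward,
        # so anything wrapped around the module is bypassed.
        return self.forward(x) + 1


class Wrapper:
    # Hypothetical stand-in for a strategy wrapper (e.g. DDP): it can only
    # intercept calls routed through the wrapper object itself.
    def __init__(self, module):
        self.module = module
        self.intercepted = 0

    def forward(self, x):
        self.intercepted += 1  # communication hooks would run here
        return self.module.forward(x)

    def __call__(self, x):
        return self.forward(x)


wrapped = Wrapper(Net())
wrapped(3)                  # direct call: goes through the wrapper
wrapped.module.generate(3)  # indirect call: the wrapper never sees it
```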

docs/source-fabric/conf.py

Lines changed: 1 addition & 0 deletions
@@ -287,6 +287,7 @@
     ("py:class", "torch.distributed.fsdp.wrap.ModuleWrapPolicy"),
     ("py:class", "torch.distributed.fsdp.sharded_grad_scaler.ShardedGradScaler"),
     ("py:class", "torch.amp.grad_scaler.GradScaler"),
+    ("py:class", "torch.optim.lr_scheduler._LRScheduler"),
     # Mocked optional packages
     ("py:class", "deepspeed.*"),
     ("py:.*", "torch_xla.*"),
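The added tuple feeds Sphinx's ``nitpick_ignore`` configuration option. A minimal ``conf.py`` fragment showing the shape of such entries, with the one entry taken from the diff above:

```python
# Each entry is a (domain:role, target) pair that Sphinx's nitpicky mode
# ("-n") will skip when the cross-reference cannot be resolved, instead of
# emitting a warning.
nitpick_ignore = [
    ("py:class", "torch.optim.lr_scheduler._LRScheduler"),
]
```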

docs/source-pytorch/advanced/transfer_learning.rst

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ Let's use the `AutoEncoder` as a feature extractor in a separate model.
     class CIFAR10Classifier(LightningModule):
         def __init__(self):
             # init the pretrained LightningModule
-            self.feature_extractor = AutoEncoder.load_from_checkpoint(PATH)
+            self.feature_extractor = AutoEncoder.load_from_checkpoint(PATH).encoder
             self.feature_extractor.freeze()

             # the autoencoder outputs a 100-dim representation and CIFAR-10 has 10 classes
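The fix above reuses only the encoder half of the loaded checkpoint. A pure-Python sketch of the pattern, with hypothetical classes standing in for the docs' torch modules:

```python
class AutoEncoder:
    # Hypothetical stand-in for the docs' LightningModule: the full model
    # holds an encoder and a decoder, but transfer learning only needs the
    # encoder that produces the 100-dim representation.
    def __init__(self):
        self.encoder = "encoder: input -> 100-dim features"
        self.decoder = "decoder: 100-dim features -> reconstruction"

    @classmethod
    def load_from_checkpoint(cls, path):
        # A real implementation would restore trained weights from `path`.
        return cls()


# Grab just the encoder, as the corrected docs line does:
feature_extractor = AutoEncoder.load_from_checkpoint("ckpt.pt").encoder
```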

docs/source-pytorch/common/precision_intermediate.rst

Lines changed: 1 addition & 1 deletion
@@ -165,7 +165,7 @@ Under the hood, we use `transformer_engine.pytorch.fp8_autocast <https://docs.nv
 Quantization via Bitsandbytes
 *****************************

-`bitsandbytes <https://github.com/TimDettmers/bitsandbytes>`__ (BNB) is a library that supports quantizing :class:`torch.nn.Linear` weights.
+`bitsandbytes <https://github.com/bitsandbytes-foundation/bitsandbytes>`__ (BNB) is a library that supports quantizing :class:`torch.nn.Linear` weights.

 Both 4-bit (`paper reference <https://arxiv.org/abs/2305.14314v1>`__) and 8-bit (`paper reference <https://arxiv.org/abs/2110.02861>`__) quantization is supported.
 Specifically, we support the following modes:
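For intuition about the weight quantization these docs refer to, here is a pure-Python sketch of plain absmax int8 quantization; this is a deliberate simplification, since bitsandbytes itself uses more refined block-wise and mixed-precision schemes:

```python
def absmax_quantize(weights):
    # Scale so the largest-magnitude weight maps to +/-127, then round
    # every weight to the nearest int8 value.
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale


def dequantize(quantized, scale):
    # Recover approximate float weights from the int8 values.
    return [q * scale for q in quantized]


q, s = absmax_quantize([0.5, -1.0, 0.25])
restored = dequantize(q, s)  # close to, but not exactly, the originals
```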

docs/source-pytorch/glossary/index.rst

Lines changed: 1 addition & 1 deletion
@@ -209,7 +209,7 @@ Glossary

 .. displayitem::
    :header: LightningModule
-   :description: A base class organizug your neural network module
+   :description: A base class organizing your neural network module
    :col_css: col-md-12
    :button_link: ../common/lightning_module.html
    :height: 100

docs/source-pytorch/versioning.rst

Lines changed: 7 additions & 1 deletion
@@ -79,10 +79,16 @@ The table below indicates the coverage of tested versions in our CI. Versions ou
      - ``torch``
      - ``torchmetrics``
      - Python
+   * - 2.5
+     - 2.5
+     - 2.5
+     - ≥2.1, ≤2.7
+     - ≥0.7.0
+     - ≥3.9, ≤3.12
    * - 2.4
      - 2.4
      - 2.4
-     - ≥2.1, ≤2.4
+     - ≥2.1, ≤2.6
      - ≥0.7.0
      - ≥3.9, ≤3.12
    * - 2.3
