
Commit ea89133

Rename fabric run model to fabric run (#19527)
1 parent e461e90 commit ea89133

18 files changed: 60 additions, 72 deletions

docs/source-fabric/fundamentals/launch.rst

Lines changed: 6 additions & 6 deletions

@@ -67,7 +67,7 @@ An alternative way to launch your Python script in multiple processes is to use

 .. code-block:: bash

-    fabric run model path/to/your/script.py
+    fabric run path/to/your/script.py


 This is essentially the same as running ``python path/to/your/script.py``, but it also lets you configure the following settings externally without changing your code:

@@ -80,9 +80,9 @@ This is essentially the same as running ``python path/to/your/script.py``, but i

 .. code-block:: bash

-    fabric run model --help
+    fabric run --help

-    Usage: fabric run model [OPTIONS] SCRIPT [SCRIPT_ARGS]...
+    Usage: fabric run [OPTIONS] SCRIPT [SCRIPT_ARGS]...

     Run a Lightning Fabric script.

@@ -128,7 +128,7 @@ Here is how you run DDP with 8 GPUs and `torch.bfloat16 <https://pytorch.org/doc

 .. code-block:: bash

-    fabric run model ./path/to/train.py \
+    fabric run ./path/to/train.py \
         --strategy=ddp \
         --devices=8 \
         --accelerator=cuda \

@@ -138,7 +138,7 @@ Or `DeepSpeed Zero3 <https://www.deepspeed.ai/2021/03/07/zero3-offload.html>`_ w

 .. code-block:: bash

-    fabric run model ./path/to/train.py \
+    fabric run ./path/to/train.py \
         --strategy=deepspeed_stage_3 \
         --devices=8 \
         --accelerator=cuda \

@@ -148,7 +148,7 @@ Or `DeepSpeed Zero3 <https://www.deepspeed.ai/2021/03/07/zero3-offload.html>`_ w

 .. code-block:: bash

-    fabric run model ./path/to/train.py \
+    fabric run ./path/to/train.py \
         --devices=auto \
         --accelerator=auto \
         --precision=16
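
For orientation (editorial note, not part of the diff): the commands above assume an ordinary Fabric script at the given path. A minimal sketch of such a script is shown below; the model, data, and file name are placeholders, and accelerator, strategy, devices, and precision are intentionally left to the `fabric run` CLI rather than hard-coded.

```python
# Hypothetical minimal script launched via `fabric run path/to/your/script.py`.
# The CLI supplies accelerator/strategy/devices/precision, so Fabric() takes
# no arguments here; the model and batch below are placeholders.
import torch
from lightning.fabric import Fabric


def main():
    fabric = Fabric()  # picks up the settings passed to `fabric run`
    model = torch.nn.Linear(32, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    model, optimizer = fabric.setup(model, optimizer)

    batch = torch.randn(8, 32, device=fabric.device)
    loss = model(batch).sum()
    fabric.backward(loss)  # instead of loss.backward()
    optimizer.step()


if __name__ == "__main__":
    main()
```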

docs/source-fabric/fundamentals/precision.rst

Lines changed: 1 addition & 1 deletion

@@ -66,7 +66,7 @@ The same values can also be set through the :doc:`command line interface <launch

 .. code-block:: bash

-    lightning run model train.py --precision=bf16-mixed
+    fabric run train.py --precision=bf16-mixed


 .. note::
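
For context (not part of the diff): the surrounding docs say the same precision values can be set either on the command line or in code, so the flag above maps to the same-named constructor argument. A one-line sketch:

```python
from lightning.fabric import Fabric

# Equivalent in effect to launching with `fabric run train.py --precision=bf16-mixed`
fabric = Fabric(precision="bf16-mixed")
```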

docs/source-fabric/guide/multi_node/barebones.rst

Lines changed: 4 additions & 4 deletions

@@ -72,7 +72,7 @@ Log in to the **first node** and run this command:
 .. code-block:: bash
     :emphasize-lines: 2,3

-    lightning run model \
+    fabric run \
         --node-rank=0 \
         --main-address=10.10.10.16 \
         --accelerator=cuda \

@@ -85,7 +85,7 @@ Log in to the **second node** and run this command:
 .. code-block:: bash
     :emphasize-lines: 2,3

-    lightning run model \
+    fabric run \
         --node-rank=1 \
         --main-address=10.10.10.16 \
         --accelerator=cuda \

@@ -129,7 +129,7 @@ The most likely reasons and how to fix it:

         export GLOO_SOCKET_IFNAME=eno1
         export NCCL_SOCKET_IFNAME=eno1
-        lightning run model ...
+        fabric run ...

 You can find the interface name by parsing the output of the ``ifconfig`` command.
 The name of this interface **may differ on each node**.

@@ -152,7 +152,7 @@ Launch your command by prepending ``NCCL_DEBUG=INFO`` to get more info.

 .. code-block:: bash

-    NCCL_DEBUG=INFO lightning run model ...
+    NCCL_DEBUG=INFO fabric run ...


 ----
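
Aside (hedged, editorial): once both per-node `fabric run` commands have been issued, one quick sanity check that the nodes joined the same job is to print the rank information Fabric exposes near the top of the training script. A small sketch, assuming 2 nodes with N devices each:

```python
from lightning.fabric import Fabric

# Prints once per process; across 2 nodes with N devices each you should see
# world_size == 2 * N, and node_rank 0 or 1 on the respective machines.
fabric = Fabric()
print(f"global_rank={fabric.global_rank} node_rank={fabric.node_rank} world_size={fabric.world_size}")
```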

examples/fabric/image_classifier/README.md

Lines changed: 3 additions & 3 deletions

@@ -27,11 +27,11 @@ This script shows you how to scale the pure PyTorch code to enable GPU and multi

 ```bash
 # CPU
-lightning run model train_fabric.py
+fabric run train_fabric.py

 # GPU (CUDA or M1 Mac)
-lightning run model train_fabric.py --accelerator=gpu
+fabric run train_fabric.py --accelerator=gpu

 # Multiple GPUs
-lightning run model train_fabric.py --accelerator=gpu --devices=4
+fabric run train_fabric.py --accelerator=gpu --devices=4
 ```

examples/fabric/image_classifier/train_fabric.py

Lines changed: 4 additions & 4 deletions

@@ -20,10 +20,10 @@
 3. Apply ``setup`` over each model and optimizers pair, ``setup_dataloaders`` on all your dataloaders,
    and replace ``loss.backward()`` with ``self.backward(loss)``.

-4. Run the script from the terminal using ``lightning run model path/to/train.py``
+4. Run the script from the terminal using ``fabric run path/to/train.py``

 Accelerate your training loop by setting the ``--accelerator``, ``--strategy``, ``--devices`` options directly from
-the command line. See ``lightning run model --help`` or learn more from the documentation:
+the command line. See ``fabric run --help`` or learn more from the documentation:
 https://lightning.ai/docs/fabric.

 """

@@ -71,7 +71,7 @@ def forward(self, x):

 def run(hparams):
     # Create the Lightning Fabric object. The parameters like accelerator, strategy, devices etc. will be proided
-    # by the command line. See all options: `lightning run model --help`
+    # by the command line. See all options: `fabric run --help`
     fabric = Fabric()

     seed_everything(hparams.seed)  # instead of torch.manual_seed(...)

@@ -168,7 +168,7 @@ def run(hparams):
 if __name__ == "__main__":
     # Arguments can be passed in through the CLI as normal and will be parsed here
     # Example:
-    # lightning run model image_classifier.py accelerator=cuda --epochs=3
+    # fabric run image_classifier.py accelerator=cuda --epochs=3
     parser = argparse.ArgumentParser(description="Fabric MNIST Example")
     parser.add_argument(
         "--batch-size", type=int, default=64, metavar="N", help="input batch size for training (default: 64)"

examples/fabric/kfold_cv/README.md

Lines changed: 3 additions & 3 deletions

@@ -14,13 +14,13 @@ This script shows you how to scale the pure PyTorch code to enable GPU and multi

 ```bash
 # CPU
-lightning run model train_fabric.py
+fabric run train_fabric.py

 # GPU (CUDA or M1 Mac)
-lightning run model train_fabric.py --accelerator=gpu
+fabric run train_fabric.py --accelerator=gpu

 # Multiple GPUs
-lightning run model train_fabric.py --accelerator=gpu --devices=4
+fabric run train_fabric.py --accelerator=gpu --devices=4
 ```

 ### References

examples/fabric/kfold_cv/train_fabric.py

Lines changed: 2 additions & 2 deletions

@@ -107,7 +107,7 @@ def validate_dataloader(model, data_loader, fabric, hparams, fold, acc_metric):

 def run(hparams):
     # Create the Lightning Fabric object. The parameters like accelerator, strategy, devices etc. will be proided
-    # by the command line. See all options: `lightning run model --help`
+    # by the command line. See all options: `fabric run --help`
     fabric = Fabric()

     seed_everything(hparams.seed)  # instead of torch.manual_seed(...)

@@ -171,7 +171,7 @@ def run(hparams):
 if __name__ == "__main__":
     # Arguments can be passed in through the CLI as normal and will be parsed here
     # Example:
-    # lightning run model image_classifier.py accelerator=cuda --epochs=3
+    # fabric run image_classifier.py accelerator=cuda --epochs=3
     parser = argparse.ArgumentParser(description="Fabric MNIST K-Fold Cross Validation Example")
     parser.add_argument(
         "--batch-size", type=int, default=64, metavar="N", help="input batch size for training (default: 64)"

examples/fabric/language_model/README.md

Lines changed: 3 additions & 3 deletions

@@ -7,11 +7,11 @@ It is a simplified version of the [official PyTorch example](https://github.com/

 ```bash
 # CPU
-lightning run model --accelerator=cpu train.py
+fabric run --accelerator=cpu train.py

 # GPU (CUDA or M1 Mac)
-lightning run model --accelerator=gpu train.py
+fabric run --accelerator=gpu train.py

 # Multiple GPUs
-lightning run model --accelerator=gpu --devices=4 train.py
+fabric run --accelerator=gpu --devices=4 train.py
 ```

examples/fabric/meta_learning/README.md

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ torchrun --nproc_per_node=2 --standalone train_torch.py
 **Accelerated using Lightning Fabric:**

 ```bash
-lightning run model train_fabric.py --devices 2 --strategy ddp --accelerator cpu
+fabric run train_fabric.py --devices 2 --strategy ddp --accelerator cpu
 ```

 ### References

examples/fabric/meta_learning/train_fabric.py

Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@
 - gym<=0.22

 Run it with:
-    lightning run model train_fabric.py --accelerator=cuda --devices=2 --strategy=ddp
+    fabric run train_fabric.py --accelerator=cuda --devices=2 --strategy=ddp
 """

 import cherry

@@ -59,7 +59,7 @@ def main(
     seed=42,
 ):
     # Create the Fabric object
-    # Arguments get parsed from the command line, see `lightning run model --help`
+    # Arguments get parsed from the command line, see `fabric run --help`
     fabric = Fabric()

     meta_batch_size = meta_batch_size // fabric.world_size
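
Aside (editorial): the last context line above splits the global meta-batch evenly across processes. A worked example of that arithmetic, assuming a launch with ``--devices=2`` on a single node (values are illustrative):

```python
# With `fabric run --devices=2 ...`, world_size is 2, so a global
# meta_batch_size of 32 becomes 16 tasks per process.
meta_batch_size = 32
world_size = 2  # assumed: --devices=2 on one node
per_process = meta_batch_size // world_size
print(per_process)  # 16
```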
