
Commit eb4623b

Use the CLI in the tutorials (#1094)

carmocca authored and rasbt committed
1 parent c46bb16 · commit eb4623b

13 files changed: +90 −90 lines

README.md

Lines changed: 5 additions & 5 deletions
@@ -102,7 +102,7 @@ To generate text predictions, you need to download the model weights. **If you d
 Run inference:
 
 ```bash
-python litgpt/generate/base.py --prompt "Hello, my name is"
+litgpt generate base --prompt "Hello, my name is"
 ```
 
 This will run the 3B pretrained model and require ~7 GB of GPU memory using the `bfloat16` datatype.
@@ -112,7 +112,7 @@ This will run the 3B pretrained model and require ~7 GB of GPU memory using the
 You can also chat with the model interactively:
 
 ```bash
-python litgpt/chat/base.py
+litgpt chat
 ```
 
 
@@ -131,19 +131,19 @@ For example, you can either use
 Adapter ([Zhang et al. 2023](https://arxiv.org/abs/2303.16199)):
 
 ```bash
-python litgpt/finetune/adapter.py
+litgpt finetune adapter
 ```
 
 or Adapter v2 ([Gao et al. 2023](https://arxiv.org/abs/2304.15010)):
 
 ```bash
-python litgpt/finetune/adapter_v2.py
+litgpt finetune adapter_v2
 ```
 
 or LoRA ([Hu et al. 2021](https://arxiv.org/abs/2106.09685)):
 
 ```bash
-python litgpt/finetune/lora.py
+litgpt finetune lora
 ```
 
 (Please see the [tutorials/finetune_adapter](tutorials/finetune_adapter.md) for details on the differences between the two adapter methods.)
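
With this change the README quotes the unified `litgpt` entry point instead of per-script paths. A minimal quick-start sketch combining the updated commands, assuming the small `EleutherAI/pythia-14m` checkpoint used in the tutorial hunks further down, and assuming `litgpt generate base` accepts the same `--checkpoint_dir` flag that `litgpt chat` takes elsewhere in this commit:

```bash
# Illustrative only: download a checkpoint, then reuse it for generation and chat.
# The repo id is borrowed from tutorials/convert_hf_checkpoint.md, not from the README.
litgpt download --repo_id EleutherAI/pythia-14m

litgpt generate base \
  --checkpoint_dir checkpoints/EleutherAI/pythia-14m \
  --prompt "Hello, my name is"

litgpt chat --checkpoint_dir checkpoints/EleutherAI/pythia-14m
```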

litgpt/utils.py

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ def check_valid_checkpoint_dir(checkpoint_dir: Path, lora: bool = False) -> None
     error_message = (
         f"--checkpoint_dir {str(checkpoint_dir.absolute())!r}{problem}."
         "\nFind download instructions at https://github.com/Lightning-AI/litgpt/blob/main/tutorials\n"
-        f"{extra}\nSee all download options by running:\n python litgpt/scripts/download.py"
+        f"{extra}\nSee all download options by running:\n litgpt download"
     )
     print(error_message, file=sys.stderr)
     raise SystemExit(1)
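
The effect of this change only shows up when a command is pointed at a missing or unconverted checkpoint directory. A rough way to see the new hint, assuming no checkpoint has been downloaded yet (the `{problem}` portion of the message is not shown in this hunk, so the exact wording will differ):

```bash
# Hypothetical reproduction: any checkpoint-consuming command goes through
# check_valid_checkpoint_dir and exits via SystemExit(1) with the updated hint.
litgpt chat --checkpoint_dir checkpoints/EleutherAI/pythia-14m
# ...
# See all download options by running:
#  litgpt download
```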

tests/test_utils.py

Lines changed: 3 additions & 3 deletions
@@ -47,7 +47,7 @@ def test_check_valid_checkpoint_dir(tmp_path):
 Find download instructions at https://github.com/Lightning-AI/litgpt/blob/main/tutorials
 
 See all download options by running:
- python litgpt/scripts/download.py
+ litgpt download
 """.strip()
     assert out == expected
 
@@ -61,7 +61,7 @@ def test_check_valid_checkpoint_dir(tmp_path):
 Find download instructions at https://github.com/Lightning-AI/litgpt/blob/main/tutorials
 
 See all download options by running:
- python litgpt/scripts/download.py
+ litgpt download
 """.strip()
     assert out == expected
 
@@ -79,7 +79,7 @@ def test_check_valid_checkpoint_dir(tmp_path):
 --checkpoint_dir '{str(checkpoint_dir.absolute())}'
 
 See all download options by running:
- python litgpt/scripts/download.py
+ litgpt download
 """.strip()
     assert out == expected
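
To check the updated expectations locally, the usual pytest invocation should suffice; a minimal sketch, assuming a development install of the repository with pytest available:

```bash
# Run only the test function touched by this commit.
pytest tests/test_utils.py -k test_check_valid_checkpoint_dir
```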

tutorials/convert_hf_checkpoint.md

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@
 By default, the `litgpt/scripts/download.py` script converts the downloaded HF checkpoint files into a LitGPT compatible format after downloading. For example,
 
 ```bash
-python litgpt/scripts/download.py --repo_id EleutherAI/pythia-14m
+litgpt download --repo_id EleutherAI/pythia-14m
 ```
 
 creates the following files:
@@ -28,7 +28,7 @@ To disable the automatic conversion, which is useful for development and debuggi
 ```bash
 rm -rf checkpoints/EleutherAI/pythia-14m
 
-python litgpt/scripts/download.py \
+litgpt download \
   --repo_id EleutherAI/pythia-14m \
   --convert_checkpoint false
 
@@ -49,7 +49,7 @@ ls checkpoints/EleutherAI/pythia-14m
 The required files `lit_config.json` and `lit_model.pth` files can then be manually generated via the `litgpt/scripts/convert_hf_checkpoint.py` script:
 
 ```bash
-python litgpt/scripts/convert_hf_checkpoint.py \
+litgpt convert to_litgpt \
   --checkpoint_dir checkpoints/EleutherAI/pythia-14m
 ```
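
Putting the two updated commands together, the manual conversion workflow now looks roughly like the sketch below (repo id and paths are the ones shown in the hunks above; treat this as illustrative rather than canonical):

```bash
# Download the raw HF files without the automatic conversion step,
# then generate lit_config.json and lit_model.pth manually.
litgpt download \
  --repo_id EleutherAI/pythia-14m \
  --convert_checkpoint false

litgpt convert to_litgpt \
  --checkpoint_dir checkpoints/EleutherAI/pythia-14m
```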

tutorials/convert_lit_models.md

Lines changed: 6 additions & 6 deletions
@@ -5,7 +5,7 @@ LitGPT weights need to be converted to a format that Hugging Face understands wi
 We provide a helpful script to convert models LitGPT models back to their equivalent Hugging Face Transformers format:
 
 ```sh
-python litgpt/scripts/convert_lit_checkpoint.py \
+litgpt convert from_litgpt \
   --checkpoint_dir checkpoint_dir \
   --output_dir converted_dir
 ```
@@ -47,7 +47,7 @@ model = AutoModel.from_pretrained("online_repo_id", state_dict=state_dict)
 Please note that if you want to convert a model that has been fine-tuned using an adapter like LoRA, these weights should be [merged](../litgpt/scripts/merge_lora.py) to the checkpoint prior to converting.
 
 ```sh
-python litgpt/scripts/merge_lora.py \
+litgpt merge_lora \
   --checkpoint_dir path/to/lora/checkpoint_dir
 ```
 
@@ -73,7 +73,7 @@ by running `litgpt/scripts/download.py` without any additional arguments.
 Then, we download the model we specified via `$repo_id` above:
 
 ```bash
-python litgpt/scripts/download.py --repo_id $repo_id
+litgpt download --repo_id $repo_id
 ```
 
 2. Finetune the model:
@@ -82,7 +82,7 @@ python litgpt/scripts/download.py --repo_id $repo_id
 ```bash
 export finetuned_dir=out/lit-finetuned-model
 
-python litgpt/finetune/lora.py \
+litgpt finetune lora \
   --checkpoint_dir checkpoints/$repo_id \
   --out_dir $finetuned_dir \
   --train.epochs 1 \
@@ -94,15 +94,15 @@ python litgpt/finetune/lora.py \
 Note that this step only applies if the model was finetuned with `lora.py` above and not when `full.py` was used for finetuning.
 
 ```bash
-python litgpt/scripts/merge_lora.py \
+litgpt merge_lora \
   --checkpoint_dir $finetuned_dir/final
 ```
 
 
 4. Convert the finetuning model back into a HF format:
 
 ```bash
-python litgpt/scripts/convert_lit_checkpoint.py \
+litgpt convert from_litgpt \
   --checkpoint_dir $finetuned_dir/final/ \
   --output_dir out/hf-tinyllama/converted \
 ```
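
Chained together, the updated steps of this tutorial form a single download, finetune, merge, and convert pipeline. A rough end-to-end sketch, assuming the TinyLlama repo id used in the download tutorial below (the tutorial defines `$repo_id` in an unchanged section not shown in this diff), with any flags beyond those visible in the hunks omitted rather than guessed:

```bash
# Hypothetical end-to-end run assembled from the commands above.
export repo_id=TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
export finetuned_dir=out/lit-finetuned-model

litgpt download --repo_id $repo_id

litgpt finetune lora \
  --checkpoint_dir checkpoints/$repo_id \
  --out_dir $finetuned_dir \
  --train.epochs 1

litgpt merge_lora --checkpoint_dir $finetuned_dir/final

litgpt convert from_litgpt \
  --checkpoint_dir $finetuned_dir/final/ \
  --output_dir out/hf-tinyllama/converted
```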

tutorials/download_model_weights.md

Lines changed: 13 additions & 13 deletions
@@ -11,7 +11,7 @@ LitGPT supports a variety of LLM architectures with publicly available weights.
 To see all supported models, run the following command without arguments:
 
 ```bash
-python litgpt/scripts/download.py
+litgpt download
 ```
 
 The output is shown below:
@@ -128,7 +128,7 @@ Trelis/Llama-2-7b-chat-hf-function-calling-v2
 To download the weights for a specific model, use the `--repo_id` argument. Replace `<repo_id>` with the model's repository ID. For example:
 
 ```bash
-python litgpt/scripts/download.py --repo_id <repo_id>
+litgpt download --repo_id <repo_id>
 ```
 This command downloads the model checkpoint into the `checkpoints/` directory.
 
@@ -139,7 +139,7 @@ This command downloads the model checkpoint into the `checkpoints/` directory.
 For more options, add the `--help` flag when running the script:
 
 ```bash
-python litgpt/scripts/download.py --help
+litgpt download --help
 ```
 
 &nbsp;
@@ -148,7 +148,7 @@ python litgpt/scripts/download.py --help
 After conversion, run the model with the `--checkpoint_dir` flag, adjusting `repo_id` accordingly:
 
 ```bash
-python litgpt/chat/base.py --checkpoint_dir checkpoints/<repo_id>
+litgpt chat --checkpoint_dir checkpoints/<repo_id>
 ```
 
 &nbsp;
@@ -159,7 +159,7 @@ This section shows a typical end-to-end example for downloading and using TinyLl
 1. List available TinyLlama checkpoints:
 
 ```bash
-python litgpt/scripts/download.py | grep Tiny
+litgpt download | grep Tiny
 ```
 
 ```
@@ -171,13 +171,13 @@ TinyLlama/TinyLlama-1.1B-Chat-v1.0
 
 ```bash
 export repo_id=TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
-python litgpt/scripts/download.py --repo_id $repo_id
+litgpt download --repo_id $repo_id
 ```
 
 3. Use the TinyLlama model:
 
 ```bash
-python litgpt/chat/base.py --checkpoint_dir checkpoints/$repo_id
+litgpt chat --checkpoint_dir checkpoints/$repo_id
 ```
 
 &nbsp;
@@ -190,7 +190,7 @@ For example, to get access to the Gemma 2B model, you can do so by following the
 Once you've been granted access and obtained the access token you need to pass the additional `--access_token`:
 
 ```bash
-python litgpt/scripts/download.py \
+litgpt download \
   --repo_id google/gemma-2b \
   --access_token your_hf_token
 ```
@@ -203,7 +203,7 @@ The `download.py` script will automatically convert the downloaded model checkpo
 
 
 ```bash
-python litgpt/scripts/download.py \
+litgpt download \
   --repo_id <repo_id>
   --dtype bf16-true
 ```
@@ -218,15 +218,15 @@ For development purposes, for example, when adding or experimenting with new mod
 You can do this by passing the `--convert_checkpoint false` option to the download script:
 
 ```bash
-python litgpt/scripts/download.py \
+litgpt download \
   --repo_id <repo_id> \
   --convert_checkpoint false
 ```
 
 and then calling the `convert_hf_checkpoint.py` script:
 
 ```bash
-python litgpt/scripts/convert_hf_checkpoint.py \
+litgpt convert to_litgpt \
   --checkpoint_dir checkpoint_dir/<repo_id>
 ```
 
@@ -236,15 +236,15 @@ python litgpt/scripts/convert_hf_checkpoint.py \
 In some cases we don't need the model weight, for example, when we are pretraining a model from scratch instead of finetuning it. For cases like this, you can use the `--tokenizer_only` flag to only download a model's tokenizer, which can then be used in the pretraining scripts:
 
 ```bash
-python litgpt/scripts/download.py \
+litgpt download \
   --repo_id TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T \
   --tokenizer_only true
 ```
 
 and
 
 ```bash
-python litgpt/pretrain.py \
+litgpt pretrain \
   --data ... \
   --model_name tiny-llama-1.1b \
   --tokenizer_dir checkpoints/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T/
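
The `--dtype` option above also combines naturally with a concrete repo id; a minimal sketch, assuming the small `EleutherAI/pythia-14m` checkpoint used in the conversion tutorial:

```bash
# Illustrative: download a specific checkpoint and store the converted weights
# in bf16 to reduce the on-disk footprint (flags as shown in this diff).
litgpt download \
  --repo_id EleutherAI/pythia-14m \
  --dtype bf16-true
```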

tutorials/finetune_adapter.md

Lines changed: 10 additions & 10 deletions
@@ -22,15 +22,15 @@ For more information about dataset preparation, also see the [prepare_dataset.md
 ## Running the finetuning
 
 ```bash
-python litgpt/finetune/adapter.py \
+litgpt finetune adapter \
   --data Alpaca \
   --checkpoint_dir checkpoints/stabilityai/stablelm-base-alpha-3b
 ```
 
 or for Adapter V2
 
 ```bash
-python litgpt/finetune/adapter_v2.py \
+litgpt finetune adapter_v2 \
   --data Alpaca \
   --checkpoint_dir checkpoints/stabilityai/stablelm-base-alpha-3b
 ```
@@ -49,15 +49,15 @@ For example, the following settings will let you finetune the model in under 1 h
 This script will save checkpoints periodically to the `out_dir` directory. If you are finetuning different models or on your own dataset, you can specify an output directory with your preferred name:
 
 ```bash
-python litgpt/finetune/adapter.py \
+litgpt finetune adapter \
   --data Alpaca \
   --out_dir out/adapter/my-model-finetuned
 ```
 
 or for Adapter V2
 
 ```bash
-python litgpt/finetune/adapter_v2.py \
+litgpt finetune adapter_v2 \
   --data Alpaca \
   --out_dir out/adapter_v2/my-model-finetuned
 ```
@@ -66,7 +66,7 @@ If your GPU does not support `bfloat16`, you can pass the `--precision 32-true`
 For instance, to fine-tune on MPS (the GPU on modern Macs), you can run
 
 ```bash
-python litgpt/finetune/adapter.py \
+litgpt finetune adapter \
   --data Alpaca \
   --out_dir out/adapter/my-model-finetuned \
   --precision 32-true
@@ -79,13 +79,13 @@ Note that `mps` as the accelerator will be picked up automatically by Fabric whe
 Optionally, finetuning using quantization can be enabled via the `--quantize` flag, for example using the 4-bit NormalFloat data type:
 
 ```bash
-python litgpt/finetune/adapter.py --quantize "bnb.nf4"
+litgpt finetune adapter --quantize "bnb.nf4"
 ```
 
 or using adapter_v2 with double-quantization:
 
 ```bash
-python litgpt/finetune/adapter_v2.py --quantize "bnb.nf4-dq"
+litgpt finetune adapter_v2 --quantize "bnb.nf4-dq"
 ```
 
 For additional benchmarks and resource requirements, please see the [Resource Tables](resource-tables.md).
@@ -95,15 +95,15 @@ For additional benchmarks and resource requirements, please see the [Resource Ta
 You can test the finetuned model with your own instructions by running:
 
 ```bash
-python litgpt/generate/adapter.py \
+litgpt generate adapter \
   --prompt "Recommend a movie to watch on the weekend." \
   --checkpoint_dir checkpoints/stabilityai/stablelm-base-alpha-3b
 ```
 
 or for Adapter V2
 
 ```bash
-python litgpt/generate/adapter_v2.py \
+litgpt generate adapter_v2 \
   --prompt "Recommend a movie to watch on the weekend." \
   --checkpoint_dir checkpoints/stabilityai/stablelm-base-alpha-3b
 ```
@@ -138,7 +138,7 @@ You can easily train on your own instruction dataset saved in JSON format.
 2. Run `litgpt/finetune/adapter.py` or `litgpt/finetune/adapter_v2.py` by passing in the location of your data (and optionally other parameters):
 
 ```bash
-python litgpt/finetune/adapter.py \
+litgpt finetune adapter \
   --data JSON \
   --data.json_path data/mydata.json \
   --checkpoint_dir checkpoints/tiiuae/falcon-7b \
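
For the custom-dataset hunk above, the shape of `data/mydata.json` is not shown in this diff; the sketch below assumes the Alpaca-style `instruction`/`input`/`output` fields referenced by the built-in `Alpaca` data module (see prepare_dataset.md for the authoritative schema):

```bash
# Hypothetical example dataset; the field names are an assumption, not taken from this commit.
cat > data/mydata.json <<'EOF'
[
    {
        "instruction": "Recommend a movie to watch on the weekend.",
        "input": "",
        "output": "One option is ..."
    }
]
EOF

# Then point the finetuning command at it, as in the hunk above
# (the original command continues with further flags that are elided here).
litgpt finetune adapter \
    --data JSON \
    --data.json_path data/mydata.json \
    --checkpoint_dir checkpoints/tiiuae/falcon-7b
```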
