tutorials/convert_hf_checkpoint.md (+3 −3)
@@ -3,7 +3,7 @@
 By default, the `litgpt/scripts/download.py` script converts the downloaded HF checkpoint files into a LitGPT-compatible format after downloading. For example,
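For context, the command this hunk migrates to can be sketched as follows (the `pythia-14m` repo id is taken from the snippet below; since conversion is enabled by default, no extra flag is needed):

```bash
litgpt download --repo_id EleutherAI/pythia-14m
```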
@@ -28,7 +28,7 @@ To disable the automatic conversion, which is useful for development and debugging
 ```bash
 rm -rf checkpoints/EleutherAI/pythia-14m
 
-python litgpt/scripts/download.py \
+litgpt download \
   --repo_id EleutherAI/pythia-14m \
   --convert_checkpoint false
 ```
@@ -49,7 +49,7 @@ ls checkpoints/EleutherAI/pythia-14m
 The required `lit_config.json` and `lit_model.pth` files can then be generated manually via the `litgpt/scripts/convert_hf_checkpoint.py` script:
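A minimal sketch of that manual step, assuming the checkpoint sits in the default download location and that the script exposes the same `--checkpoint_dir` argument used throughout these tutorials:

```bash
python litgpt/scripts/convert_hf_checkpoint.py \
  --checkpoint_dir checkpoints/EleutherAI/pythia-14m
```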
tutorials/convert_lit_models.md (+6 −6)
@@ -5,7 +5,7 @@ LitGPT weights need to be converted to a format that Hugging Face understands with
 We provide a helpful script to convert LitGPT models back to their equivalent Hugging Face Transformers format:
 
 ```sh
-python litgpt/scripts/convert_lit_checkpoint.py \
+litgpt convert from_litgpt \
   --checkpoint_dir checkpoint_dir \
   --output_dir converted_dir
 ```
@@ -47,7 +47,7 @@ model = AutoModel.from_pretrained("online_repo_id", state_dict=state_dict)
 Please note that if you want to convert a model that has been fine-tuned using an adapter like LoRA, these weights should be [merged](../litgpt/scripts/merge_lora.py) into the checkpoint prior to converting.
 
 ```sh
-python litgpt/scripts/merge_lora.py \
+litgpt merge_lora \
   --checkpoint_dir path/to/lora/checkpoint_dir
 ```
 
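Taken together, a sketch of the full sequence for a LoRA checkpoint (the directory names are placeholders, and the merged weights are assumed to be written back into the same checkpoint directory):

```sh
litgpt merge_lora \
  --checkpoint_dir path/to/lora/checkpoint_dir
litgpt convert from_litgpt \
  --checkpoint_dir path/to/lora/checkpoint_dir \
  --output_dir converted_dir
```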
@@ -73,7 +73,7 @@ by running `litgpt/scripts/download.py` without any additional arguments.
 Then, we download the model we specified via `$repo_id` above:

In some cases we don't need the model weights, for example, when we are pretraining a model from scratch instead of finetuning it. For cases like this, you can use the `--tokenizer_only` flag to download only a model's tokenizer, which can then be used in the pretraining scripts:
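For instance, a sketch using the new CLI form introduced in this PR (the repo id is just an example, and the flag takes a boolean value):

```bash
litgpt download --repo_id EleutherAI/pythia-14m --tokenizer_only true
```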
@@ -49,15 +49,15 @@ For example, the following settings will let you finetune the model in under 1 hour
 This script will save checkpoints periodically to the `out_dir` directory. If you are finetuning different models or on your own dataset, you can specify an output directory with your preferred name:
 
 ```bash
-python litgpt/finetune/adapter.py \
+litgpt finetune adapter \
   --data Alpaca \
   --out_dir out/adapter/my-model-finetuned
 ```
 
 or for Adapter V2
 
 ```bash
-python litgpt/finetune/adapter_v2.py \
+litgpt finetune adapter_v2 \
   --data Alpaca \
   --out_dir out/adapter_v2/my-model-finetuned
 ```
@@ -66,7 +66,7 @@ If your GPU does not support `bfloat16`, you can pass the `--precision 32-true`
 For instance, to fine-tune on MPS (the GPU on modern Macs), you can run
 
 ```bash
-python litgpt/finetune/adapter.py \
+litgpt finetune adapter \
   --data Alpaca \
   --out_dir out/adapter/my-model-finetuned \
   --precision 32-true
@@ -79,13 +79,13 @@ Note that `mps` as the accelerator will be picked up automatically by Fabric when
 Optionally, finetuning using quantization can be enabled via the `--quantize` flag, for example using the 4-bit NormalFloat data type:
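A sketch of what that can look like (`bnb.nf4` is LitGPT's identifier for the 4-bit NormalFloat type; pairing it with `--precision bf16-true` is an assumption based on common usage, since quantized finetuning expects a 16-bit precision):

```bash
litgpt finetune adapter \
  --data Alpaca \
  --quantize bnb.nf4 \
  --precision bf16-true \
  --out_dir out/adapter/my-model-finetuned
```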