Commit 6d04a87

Fix unreachable links in markdown files (#1219)

1 parent 81d7cf3 commit 6d04a87

File tree: 5 files changed, +4 -9 lines changed

extensions/xla/README.md

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ export PJRT_DEVICE=TPU
 > An extensive guide on setup and available options can be found [here](https://cloud.google.com/tpu/docs/v4-users-guide).
 
 Since a new machine was created, you may need to download pretrained weights.
-They can be copied to the machine using `gcloud compute tpus tpu-vm scp`, or you can follow the steps described in our [downloading guide](download_model_weights.md).
+They can be copied to the machine using `gcloud compute tpus tpu-vm scp`, or you can follow the steps described in our [downloading guide](../../tutorials/download_model_weights.md).
 
 It is also recommended to set up a persistent disk from which to load checkpoints.
 Follow [this guide](https://cloud.google.com/tpu/docs/setup-persistent-disk#setting_up_a_tpu_vm_and_a_persistent_disk) to do so.
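
For context on the `gcloud compute tpus tpu-vm scp` step mentioned in the hunk above, a minimal sketch is shown below; the TPU VM name, zone, and destination path are hypothetical placeholders, not values taken from this commit.

```bash
# Copy a locally downloaded checkpoint directory to the TPU VM.
# "my-tpu-vm", the zone, and the destination path are placeholders.
gcloud compute tpus tpu-vm scp --recurse \
  checkpoints/stabilityai/stablelm-base-alpha-3b \
  my-tpu-vm:~/checkpoints/stabilityai/stablelm-base-alpha-3b \
  --zone=us-central2-b \
  --worker=all
```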

tutorials/0_to_litgpt.md

Lines changed: 1 addition & 1 deletion
@@ -527,7 +527,7 @@ lm_eval --model hf \
 
 **More information and additional resources**
 
-- [tutorials/convert_lit_models](tutorials/convert_lit_models.md): Tutorial on converting LitGPT weights
+- [tutorials/convert_lit_models](./convert_lit_models.md): Tutorial on converting LitGPT weights
 
 
 
tutorials/inference.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Inference
 
-We demonstrate how to run inference (next token prediction) with the GPT base model in the [`generate.py`](generate.py) script:
+We demonstrate how to run inference (next token prediction) with the GPT base model in the [`generate.py`](../litgpt/generate/base.py) script:
 
 ```bash
 litgpt generate base --prompt "Hello, my name is" --checkpoint_dir checkpoints/stabilityai/stablelm-base-alpha-3b
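
The `--checkpoint_dir` in the snippet above assumes the StableLM weights have already been downloaded. A minimal sketch of the full flow follows; the download syntax with `--repo_id` reflects the LitGPT CLI of roughly this era and should be treated as an assumption, since other versions accept the model name positionally.

```bash
# Download the referenced checkpoint, then run next-token generation.
# Depending on the LitGPT version, the model may need to be passed
# positionally instead of via --repo_id.
litgpt download --repo_id stabilityai/stablelm-base-alpha-3b
litgpt generate base \
  --prompt "Hello, my name is" \
  --checkpoint_dir checkpoints/stabilityai/stablelm-base-alpha-3b
```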

tutorials/oom.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ However, your hardware may not support such large context lengths. Here's what y
 * For the finetuning scripts, you can trim the length of the samples in your dataset.
 All the finetuning scripts expose a `--data.max_seq_length=...` argument. This might also be useful in cases where
 sample lengths are highly unbalanced, as the presence of a single very long sample would incur a larger memory usage for all other
-shorter samples. For example, the median length of the samples in Alpaca is 110 tokens. Truncating the Alpaca dataset to 256 max tokens reduces the memory requirements of a Falcon 7B model from 23.52 GB to 15.73 GB. For more information about the dataset truncation, please see the *Truncating datasets* section in the [prepare_datasets.md](prepare_datasets.md) tutorial.
+shorter samples. For example, the median length of the samples in Alpaca is 110 tokens. Truncating the Alpaca dataset to 256 max tokens reduces the memory requirements of a Falcon 7B model from 23.52 GB to 15.73 GB. For more information about the dataset truncation, please see the *Truncating datasets* section in the [prepare_dataset.md](prepare_dataset.md) tutorial.
 
 Keep in mind that reducing the context length will affect the modelling performance on text sequences longer than the limit.
 
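
To make the truncation advice in the hunk above concrete, a minimal sketch of trimming Alpaca samples during LoRA finetuning is shown below; the checkpoint directory is a placeholder, and the flag spelling mirrors the `--data.max_seq_length` mentioned in the tutorial text, so verify it against your LitGPT version.

```bash
# Finetune with Alpaca samples truncated to 256 tokens to lower peak memory.
# The checkpoint path is a placeholder; the flag spelling follows the tutorial text.
litgpt finetune lora \
  --data Alpaca \
  --data.max_seq_length 256 \
  --checkpoint_dir checkpoints/tiiuae/falcon-7b
```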

tutorials/prepare_dataset.md

Lines changed: 0 additions & 5 deletions
@@ -79,7 +79,6 @@ For comparison, the Falcon 7B model requires 23.52 GB of memory for the original
 
 ### Alpaca-GPT4
 
-
 The Alpaca-GPT4 was built by using the prompts of the original Alpaca dataset and generate the responses via GPT 4. The
 dataset consists of 52,000 instructions and responses.
 
@@ -126,7 +125,6 @@ litgpt finetune lora \
   --train.max_seq_length 256
 ```
 
-
 
 
 ### Deita
@@ -162,7 +160,6 @@ litgpt finetune lora \
   --train.max_seq_length 512
 ```
 
-
 
 
 ### Dolly
@@ -281,7 +278,6 @@ litgpt finetune lora \
 
 However, you can also select individual subsets via comma-separated strings as follows:
 
-
 ```bash
 litgpt finetune lora \
   --data FLAN \
@@ -385,5 +381,4 @@ Note that you only need to modify a small fraction of the code file, namely the
 
 In addition to the finetuning dataset described above, LitGPT also supports several datasets for pretraining. The pretraining datasets are described in more detail in the following separate tutorial documents:
 
-- [Pretrain Llama 2 on OpenWebText](./pretrain_openwebtext.md)
 - [Pretrain TinyLlama on Slimpajama and Starcoder](./pretrain_tinyllama.md)
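
Regarding the FLAN hunk above, which notes that individual subsets can be selected via comma-separated strings, a hypothetical sketch is shown below. The `--data.subsets` flag and the subset identifiers are assumptions for illustration only; the exact names are listed in the FLAN section of tutorials/prepare_dataset.md.

```bash
# Hypothetical illustration of selecting FLAN subsets by comma-separated name.
# The flag name and subset identifiers are assumptions; check the tutorial
# for the values supported by your LitGPT version.
litgpt finetune lora \
  --data FLAN \
  --data.subsets "flan2021_submix_original,t0_submix_original" \
  --checkpoint_dir checkpoints/stabilityai/stablelm-base-alpha-3b
```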

0 commit comments