
Commit 916f309

Fix 3
1 parent a6c7fe6

File tree

1 file changed (+1 / -1 lines)

1 file changed

+1
-1
lines changed

docs/multi_gpu.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ To run fine-tuning on multi-GPUs, we will make use of two packages:

 1. [PEFT](https://huggingface.co/blog/peft) methods and in particular using the Hugging Face [PEFT](https://github.com/huggingface/peft)library.

-2. [FSDP](https://pytorch.org/tutorials/intermediate/FSDP_adavnced_tutorial.html) which helps us parallelize the training over multiple GPUs. [More details](LLM_finetuning.md/#2-full-partial-parameter-finetuning).
+2. [FSDP](https://pytorch.org/tutorials/intermediate/FSDP_adavnced_tutorial.html) which helps us parallelize the training over multiple GPUs. [More details](./LLM_finetuning.md).

 Given the combination of PEFT and FSDP, we would be able to fine tune a Meta Llama 8B model on multiple GPUs in one node.
 For big models like 405B we will need to fine-tune in a multi-node setup even if 4bit quantization is enabled.
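
For context, the combination the changed docs describe (PEFT adapters sharded with FSDP) can be sketched roughly as below. This is a minimal sketch, assuming the Hugging Face transformers/peft APIs and PyTorch's FSDP wrapper; the checkpoint name and LoRA hyperparameters are illustrative and not taken from the commit.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# One process per GPU, launched e.g. with: torchrun --nproc_per_node 8 train.py
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # illustrative checkpoint name
    torch_dtype=torch.bfloat16,
)

# PEFT/LoRA: freeze the base weights and train only small adapter matrices.
lora_config = LoraConfig(r=8, lora_alpha=32, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# FSDP: shard parameters, gradients, and optimizer state across the node's GPUs.
# use_orig_params=True lets FSDP handle the mix of frozen and trainable params.
model = FSDP(model, device_id=torch.cuda.current_device(), use_orig_params=True)
```

Each GPU then holds a shard of the frozen base weights plus the small trainable LoRA matrices, which is what makes an 8B model fit comfortably on a single node.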
