
Commit 6be53e1

[DOCS] fix prose in lora guide (#4217)
1 parent 3080fc1 commit 6be53e1

File tree

1 file changed: +1 −1 lines changed

docs/source/lora_without_regret.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -419,7 +419,7 @@ The blog post defines the ideal dataset size for LoRA to match full fine-tuning
 
 ### 3. *"FullFT and high-rank LoRAs have similar learning curves"*
 
-Counterintuitively, the blog post recommends using similar learning rates to full fine-tuning. In the TRL script, we could use `--learning_rate` to set the learning rate. The \\( \frac{1}{r} \\) scaling in LoRA makes the optimal learning rate approximately rank-independent.
+Counterintuitively, the blog post recommends using a higher learning rate than for full fine-tuning. In the table above, we used 1.0e-5 for LoRA and 1.0e-6 for full fine-tuning. In the TRL script, we could use `--learning_rate` to set the learning rate. The \\( \frac{1}{r} \\) scaling in LoRA makes the optimal learning rate approximately rank-independent.
 
 ![learning rate](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lora_without_regret/2.png)
 
```
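The corrected sentence rests on LoRA's \\( \frac{1}{r} \\) scaling. As a minimal sketch (not part of this commit), the snippet below uses PEFT's `LoraConfig` to show that the effective adapter scale `lora_alpha / r` shrinks as the rank grows, which is why the optimal learning rate is approximately rank-independent; holding `lora_alpha=32` fixed is an assumption for illustration.

```python
# Minimal sketch (assumes peft is installed; values are illustrative).
# PEFT scales the LoRA update BA by lora_alpha / r, so for a fixed alpha the
# update magnitude shrinks as rank r grows, keeping the optimal learning
# rate roughly rank-independent.
from peft import LoraConfig

for r in (8, 16, 32, 64):
    cfg = LoraConfig(r=r, lora_alpha=32)  # alpha held fixed (assumption)
    print(f"rank={r:>2}  scaling = alpha/r = {cfg.lora_alpha / cfg.r:.3f}")
```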