Commit 3080fc1

Fix LoRA params in Python in LoRA without regret (#4215)
1 parent 5d87095 commit 3080fc1

File tree

1 file changed (+3, -3 lines)

docs/source/lora_without_regret.md

Lines changed: 3 additions & 3 deletions
@@ -42,7 +42,7 @@ from trl import SFTTrainer, SFTConfig
 
 dataset = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
 
-peft_config = LoraConfig(lora_r=256, lora_alpha=16, lora_target_modules="all-linear")
+peft_config = LoraConfig(r=256, lora_alpha=16, target_modules="all-linear")
 
 training_args = SFTConfig(
     learning_rate=2e-4,
@@ -245,9 +245,9 @@ def strip_reasoning_accuracy_reward(completions, **kwargs):
     ...
 
 peft_config = LoraConfig(
-    lora_r=1,
+    r=1,
     lora_alpha=32,
-    lora_target_modules="all-linear"
+    target_modules="all-linear"
 )
 
 training_args = GRPOConfig(
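
For reference, here is a minimal runnable sketch of the corrected SFT setup from the patched page, using the peft argument names this commit fixes (r and target_modules, rather than the invalid lora_r / lora_target_modules). The model id and output_dir below are illustrative assumptions, not taken from the commit:

# A minimal sketch of the corrected SFT setup, assuming recent trl and peft
# releases; the model id and output_dir are illustrative, not from the commit.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("open-thoughts/OpenThoughts-114k", split="train")

# LoraConfig's fields are named r, lora_alpha, and target_modules; the old
# names this commit removes (lora_r, lora_target_modules) are not valid
# keyword arguments and would raise a TypeError.
peft_config = LoraConfig(r=256, lora_alpha=16, target_modules="all-linear")

training_args = SFTConfig(learning_rate=2e-4, output_dir="lora-sft")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",  # illustrative model id
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()

The second hunk makes the same rename in the GRPO example, where LoraConfig(r=1, lora_alpha=32, target_modules="all-linear") is passed alongside GRPOConfig.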
