
Commit 361d211

fix: support for Rank Stabilized LoRA (RSLoRA) (#619)
If an adapter is trained with RSLoRA (rank-stabilized LoRA), the alpha value read from the adapter config is multiplied by `sqrt(rank)`, so the effective scaling becomes `alpha / sqrt(rank)` rather than `alpha / rank`.
1 parent c1fed2e · commit 361d211

File tree

1 file changed: +3 −1 lines changed


exllamav2/lora.py

Lines changed: 3 additions & 1 deletion
@@ -67,7 +67,9 @@ def __init__(self,
             read_config = json.load(f)
 
         self.lora_r = read_config["r"]
-        self.lora_alpha = float(read_config["lora_alpha"])
+        self.lora_alpha = float(read_config["lora_alpha"] * math.sqrt(self.lora_r)
+                                ) if read_config.get("use_rslora", False
+                                ) else float(read_config["lora_alpha"])
         self.lora_scaling *= self.lora_alpha / self.lora_r
 
         if "fan_in_fan_out" in read_config and read_config["fan_in_fan_out"]:
