Bug Report
- Summary: The Unsloth-generated `Linear_peft_forward.py` references `VARIANT_KWARG_KEYS` when constructing `variant_kwargs`, but the constant is never imported or defined, so every LoRA `Linear` forward immediately crashes with a `NameError`.
- Environment: Python 3.10, torch ≥2.8.0, transformers 4.56.2, latest unsloth/unsloth_zoo from Git (ROCm workstation, but this is a pure-Python failure).
- GPU: AMD Ryzen AI MAX+ 395 w/ Radeon 8060S, ROCm 7.0+.
- Steps to Reproduce:
  - Generate any LoRA-wrapped model with Unsloth (e.g., run `examples/gpt_oss_(20B)_Reinforcement_Learning_2048_Game_BF16.py`).
  - When the decoder calls a LoRA projection, `unsloth_forward` in `Linear_peft_forward.py` executes.
  - The function evaluates `variant_kwargs = {k: kwargs.pop(k, None) for k in VARIANT_KWARG_KEYS}` and Python raises `NameError`, halting the run.
- Minimal Reproduction (standalone):
```python
import textwrap, types, torch

# Same shape as the generated Linear_peft_forward.py: the name
# VARIANT_KWARG_KEYS is used without ever being imported or defined.
LINEAR_SOURCE = textwrap.dedent(
    """
    import torch

    def unsloth_forward(self, x, *args, **kwargs):
        variant_kwargs = {k: kwargs.pop(k, None) for k in VARIANT_KWARG_KEYS}
        return self.base_layer(x, *args, **kwargs)
    """
)

mod = types.ModuleType("broken_linear")
exec(LINEAR_SOURCE, mod.__dict__)

class Dummy:
    # Minimal stand-in for a peft LoRA Linear layer.
    disable_adapters = False
    merged = False
    active_adapters = []
    lora_A = lora_B = lora_dropout = {}
    scaling = {}
    lora_variant = {}
    def base_layer(self, x, *args, **kwargs):
        return x

mod.unsloth_forward(Dummy(), torch.zeros(1, 1))  # raises NameError
```
Running this script reproduces the NameError.
- Expected Result: `variant_kwargs` is built successfully and the base-layer forward completes.
- Actual Result: `NameError` prevents any LoRA projection from running.
- Proposed Fix: Have the generator import `VARIANT_KWARG_KEYS` from `peft.tuners.lora.layer`, fall back to `peft.tuners.lora.bnb`, or default to `["alora_offsets"]` if neither import is available.
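The guarded import above could be sketched as follows (which peft modules actually export `VARIANT_KWARG_KEYS` depends on the installed peft version, so this is an assumption taken from the report; both import paths are tried before the default is used):

```python
# Sketch of the proposed fix for the generated Linear_peft_forward.py.
# The import locations and the ["alora_offsets"] default come from this
# report and may need adjusting for other peft versions.
try:
    from peft.tuners.lora.layer import VARIANT_KWARG_KEYS
except ImportError:
    try:
        from peft.tuners.lora.bnb import VARIANT_KWARG_KEYS
    except ImportError:
        # Last-resort default so the generated forward never hits NameError.
        VARIANT_KWARG_KEYS = ["alora_offsets"]
```

With this prelude emitted at the top of the generated module, the existing `variant_kwargs = {k: kwargs.pop(k, None) for k in VARIANT_KWARG_KEYS}` line works unchanged.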