Commit 5ba890c

Params4bit: don't try to quantize when moving to meta device
1 parent 7b5cf36 commit 5ba890c

File tree

1 file changed: +1 −1 lines changed


bitsandbytes/nn/modules.py

Lines changed: 1 addition & 1 deletion
@@ -332,7 +332,7 @@ def to(self: T, tensor: Tensor, non_blocking: bool = ...) -> T: ...
     def to(self, *args, **kwargs):
         device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs)
 
-        if device is not None and not self.bnb_quantized:
+        if device is not None and device.type != "meta" and not self.bnb_quantized:
             return self._quantize(device)
         else:
             if self.quant_state is not None:
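
The intent of the one-line change: a move to the `meta` device carries no real data, so attempting to quantize there would fail; quantization should only run on the first move to a concrete device. A minimal, self-contained sketch of that pattern (hypothetical `LazyQuantParam` class standing in for the real `Params4bit`; `_quantize` is a stub, not the actual bitsandbytes 4-bit kernel):

```python
import torch


class LazyQuantParam(torch.nn.Parameter):
    """Hypothetical stand-in for Params4bit: quantize lazily on the first
    move to a real device, but skip quantization for the meta device."""

    def __new__(cls, data):
        self = torch.Tensor._make_subclass(cls, data, require_grad=False)
        self.bnb_quantized = False
        return self

    def _quantize(self, device):
        # Stub for the real 4-bit quantization step.
        self.bnb_quantized = True
        return self

    def to(self, *args, **kwargs):
        device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs)
        # The commit's fix: only quantize when the target is a concrete
        # device, never when it is the data-less meta device.
        if device is not None and device.type != "meta" and not self.bnb_quantized:
            return self._quantize(device)
        return super().to(*args, **kwargs)


p = LazyQuantParam(torch.zeros(4))
p.to("meta")                 # meta move: quantization skipped
print(p.bnb_quantized)       # still False
p.to("cpu")                  # concrete device: quantization runs
print(p.bnb_quantized)       # True
```

Without the `device.type != "meta"` guard, frameworks that materialize models via the meta device (e.g. for lazy or sharded loading) would trigger `_quantize` on a tensor that has no storage.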

0 commit comments
