Commit a473d28

don't re-assign _pre_quantization_dtype.

1 parent af3ecea

File tree

1 file changed (+0 additions, -6 deletions)

src/diffusers/models/modeling_utils.py

Lines changed: 0 additions & 6 deletions
@@ -822,12 +822,6 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
             model=model, device_map=device_map, keep_in_fp32_modules=keep_in_fp32_modules
         )
 
-        # We store the original dtype for quantized models as we cannot easily retrieve it
-        # once the weights have been quantized
-        # Note that once you have loaded a quantized model, you can't change its dtype so this will
-        # remain a single source of truth
-        config["_pre_quantization_dtype"] = torch_dtype
-
         # if device_map is None, load the state dict and move the params from meta device to the cpu
         if device_map is None and not is_sharded:
             # `torch.cuda.current_device()` is fine here when `hf_quantizer` is not None.
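
The comments deleted above explain the original intent: the pre-quantization dtype is stored on the config because it cannot be recovered from the weights once they have been quantized, so it should be written once and then treated as the single source of truth. As the commit title notes, this line was a re-assignment: the value is already set elsewhere in the quantized loading path, and assigning it again here could overwrite it. Below is a minimal sketch of the write-once pattern, assuming a hypothetical load_quantized helper; only the _pre_quantization_dtype key mirrors the real config entry, and this is not diffusers' actual loading code.

import torch


def load_quantized(config: dict, torch_dtype: torch.dtype) -> dict:
    # Hypothetical helper, not diffusers' API. Record the pre-quantization
    # dtype exactly once: after the weights are quantized, the original
    # dtype can no longer be inferred from them.
    config.setdefault("_pre_quantization_dtype", torch_dtype)

    # ... weight quantization would happen here ...

    # An unconditional re-assignment at this point (analogous to the line
    # this commit removes) would overwrite the value recorded above.
    return config


config = load_quantized({}, torch.float16)
print(config["_pre_quantization_dtype"])  # torch.float16

Using setdefault here is illustrative: it makes the write-once intent explicit, whereas the removed line assigned unconditionally, which is exactly what this commit fixes.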
