Commit 867922a

Do not squeeze weights with leading dim 1
Signed-off-by: Chenjie Luo <[email protected]>
1 parent 1cf78b2 commit 867922a

File tree

1 file changed: +0 additions, −5 deletions


modelopt/torch/export/quant_utils.py
Lines changed: 0 additions & 5 deletions

@@ -869,11 +869,6 @@ def postprocess_state_dict(state_dict: dict, maxbound: float, quantization: str
                 post_state_dict[prefix + new_suffix] = value
                 break

-    # Squeeze tensors with a leading dimension of 1
-    for key, value in post_state_dict.items():
-        if isinstance(value, torch.Tensor) and value.dim() == 3 and value.shape[0] == 1:
-            post_state_dict[key] = value.squeeze(0)
-
     # remove real quant parameters from the state dict
     keys_to_delete = []
     for key, value in post_state_dict.items():
