Commit 689c887 (parent: 6b7b738)

fix: Fixed perCh usage in Quantizer

Signed-off-by: Brandon Groth <[email protected]>

1 file changed (+1, −1)


fms_mo/quant_refactor/base_quant.py

Lines changed: 1 addition & 1 deletion
@@ -205,7 +205,7 @@ def __init__(
         self.align_zero = align_zero
         self.clipSTE = clipSTE

-        temp_clipvals = torch.ones(self.perCh) if self.perCh else torch.Tensor([1.0])
+        temp_clipvals = torch.ones(self.qscheme.Nch) if self.perCh else torch.Tensor([1.0])
         self.register_parameter("clip_val", torch.nn.Parameter(temp_clipvals.clone()))
         # Keep clip_valn as positive 1.0 to allow simpler multiplication with
         # negative numbers (clip_valn.data *= clip_valn)
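The fix above changes the size argument of `torch.ones` from the `perCh` flag itself to the channel count stored on the quantization scheme. A minimal standalone sketch of the corrected logic follows; it assumes `perCh` is a boolean switch for per-channel quantization and uses a plain `nch` parameter as a stand-in for `self.qscheme.Nch` (both names here are hypothetical simplifications of the class attributes in `base_quant.py`):

```python
import torch

def make_clip_vals(perCh: bool, nch: int) -> torch.Tensor:
    """Build initial clip values for a quantizer.

    Per-channel mode needs one clip value per channel, so the tensor
    size must come from the channel count (nch), not from the boolean
    flag itself -- passing the flag to torch.ones() was the bug fixed
    in this commit.
    """
    # One clip value per channel when per-channel mode is on,
    # otherwise a single scalar clip value for the whole tensor.
    return torch.ones(nch) if perCh else torch.tensor([1.0])

per_channel = make_clip_vals(True, 8)    # shape (8,): one value per channel
per_tensor = make_clip_vals(False, 8)    # shape (1,): a single shared value
```

In the class itself, the resulting tensor is then wrapped with `register_parameter("clip_val", torch.nn.Parameter(...))` so the clip values become learnable.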
