
Commit 17480e6

fix qconv2d flipout layers

1 parent ccc52ee

File tree

1 file changed: +1 −1 lines changed


bayesian_torch/layers/flipout_layers/quantized_conv_flipout.py

Lines changed: 1 addition & 1 deletion
@@ -454,7 +454,7 @@ def forward(self, x, normal_scale=6/255, default_scale=0.1, default_zero_point=1
                 dilation=self.dilation, groups=self.groups, scale=self.quant_dict[7]['scale'], zero_point=self.quant_dict[7]['zero_point'])
             perturbed_outputs = torch.ops.quantized.mul(perturbed_outputs, sign_output, self.quant_dict[8]['scale'], self.quant_dict[8]['zero_point'])
             out = torch.ops.quantized.add(outputs, perturbed_outputs, self.quant_dict[9]['scale'], self.quant_dict[9]['zero_point'])
-            out = out.dequantize()
+            # out = out.dequantize()
 
         else:
             if x.dtype!=torch.quint8:
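
The one-line change comments out the trailing dequantize() call, so the output of this quantized branch stays in quint8 instead of being converted back to float. The sketch below illustrates that pattern with hypothetical tensors and quantization parameters standing in for the layer's quant_dict values; it is not the layer's actual forward pass.

# Minimal sketch of the changed pattern (assumed, illustrative values only).
import torch

scale, zero_point = 0.1, 128  # hypothetical per-tensor quantization parameters

# Stand-ins for the layer's deterministic and flipout-perturbed outputs.
outputs = torch.quantize_per_tensor(torch.randn(1, 8, 4, 4), scale, zero_point, torch.quint8)
perturbed_outputs = torch.quantize_per_tensor(torch.randn(1, 8, 4, 4), scale, zero_point, torch.quint8)

# Combine them with a quantized add, as the diff does with quant_dict[9].
out = torch.ops.quantized.add(outputs, perturbed_outputs, scale, zero_point)

# Before this commit the result was dequantized to float here; with the call
# commented out, the tensor remains quint8 and can feed downstream quantized ops.
# out = out.dequantize()
assert out.dtype == torch.quint8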
