
Commit 2b04dd0

Bug fix for shared initializers across multiple Pad nodes in an ONNX model
When multiple Pad nodes in a network use the same initializer, quantization of the subsequent Pad does not find that initializer because it was deleted while quantizing the first Pad. The quantizer then creates a dangling QuantizeLinear node for the Pad's input[2], which is a constant.
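The failure mode can be sketched without the real onnxruntime API. The names below (`ModelSketch`, `quantize_pad`) are hypothetical illustrations, not the actual quantizer code; the sketch only models why deleting an initializer that two Pad nodes share breaks quantization of the second one.

```python
# Hypothetical sketch of the bug: two Pad nodes share one "pads" initializer.
class ModelSketch:
    def __init__(self):
        # One initializer referenced by both Pad nodes via input[2].
        self.initializers = {"pads": [0, 1, 0, 1]}
        self.nodes = [
            {"op": "Pad", "input": ["x0", "v", "pads"]},
            {"op": "Pad", "input": ["x1", "v", "pads"]},
        ]

    def get_initializer(self, name):
        return self.initializers.get(name)


def quantize_pad(model, node, remove_original):
    pads = model.get_initializer(node["input"][2])
    if pads is None:
        # Initializer already deleted by an earlier Pad's quantization:
        # the quantizer falls back to emitting a QuantizeLinear node on a
        # constant input -- the dangling node described in the commit message.
        return "dangling QuantizeLinear"
    q_name = node["input"][2] + "_quantized"
    model.initializers[q_name] = pads  # add the quantized copy
    if remove_original:
        # Unsafe: the initializer may still be shared by other Pad nodes.
        del model.initializers[node["input"][2]]
    node["input"][2] = q_name
    return "ok"


# Buggy behavior (before this commit): the second Pad misses the initializer.
m = ModelSketch()
buggy = [quantize_pad(m, n, remove_original=True) for n in m.nodes]
print(buggy)   # ['ok', 'dangling QuantizeLinear']

# Fixed behavior (this commit): keep the original initializer in place.
m2 = ModelSketch()
fixed = [quantize_pad(m2, n, remove_original=False) for n in m2.nodes]
print(fixed)   # ['ok', 'ok']
```

The fix in the diff below is simply to stop calling `remove_initializer`, since the quantizer cannot assume the padding constant is used by only one node.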
1 parent 9492593 commit 2b04dd0

File tree

1 file changed: +0 −2 lines
  • onnxruntime/python/tools/quantization/operators


onnxruntime/python/tools/quantization/operators/pad.py

Lines changed: 0 additions & 2 deletions
@@ -56,8 +56,6 @@ def quantize(self):
                     quantized_padding_constant_array,
                     quantized_padding_constant_name,
                 )
-                # Suppose this padding constant initializer only used by the node
-                self.quantizer.model.remove_initializer(padding_constant_initializer)
                 self.quantizer.model.add_initializer(quantized_padding_constant_initializer)
                 node.input[2] = quantized_padding_constant_name
             else:
