Commit 066e3f7

nutsiepully authored and tensorflower-gardener committed
Add sigmoid as supported activation in QAT
Handling of sigmoid is similar to softmax: we place a FakeQuant (FQ) op before the activation but not after it, to prevent a large set of output values from collapsing to zero and potentially producing NaNs downstream.

PiperOrigin-RevId: 336213731
1 parent fa3f855 · commit 066e3f7
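
The reasoning in the commit message can be illustrated with a small, self-contained sketch. This is an illustration, not the library's implementation: the logits and quantization ranges below are made up, and only tf.sigmoid and tf.quantization.fake_quant_with_min_max_args from core TensorFlow are used.

import tensorflow as tf

# Illustrative logits; the ranges passed to the fake-quant op are arbitrary
# choices for this sketch.
logits = tf.constant([[-10.0, -2.0, 0.5, 3.0]])

# Pre-activation fake quantization, as this change enables for sigmoid:
# quantize the activation's inputs, leave its outputs in full precision.
fq_logits = tf.quantization.fake_quant_with_min_max_args(
    logits, min=-10.0, max=10.0, num_bits=8)
probs = tf.sigmoid(fq_logits)

# For contrast, fake-quantizing *after* the activation (the placement the
# commit avoids) snaps probabilities below roughly 1/512 to exactly 0.0,
# e.g. sigmoid(-10) ~= 4.5e-5 becomes 0.0, which can later turn into NaNs
# in log-based losses or divisions.
probs_post_fq = tf.quantization.fake_quant_with_min_max_args(
    tf.sigmoid(logits), min=0.0, max=1.0, num_bits=8)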

2 files changed: +2 additions, −2 deletions

tensorflow_model_optimization/python/core/quantization/keras/default_8bit/default_8bit_quantize_registry.py

Lines changed: 1 addition & 1 deletion
@@ -472,7 +472,7 @@ def get_output_quantizers(self, layer):
       # 'relu' should generally get fused into the previous layer.
       return [quantizers.MovingAverageQuantizer(
           num_bits=8, per_axis=False, symmetric=False, narrow_range=False)]
-    elif layer.activation.__name__ in ['linear', 'softmax']:
+    elif layer.activation.__name__ in ['linear', 'softmax', 'sigmoid']:
       return []
 
     raise ValueError('Activation {} not supported by '
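
Under the assumption that the registry in this file is used directly (the class name Default8BitQuantizeRegistry and the get_quantize_config call below are inferred from this module, not a documented public API), the effect of the hunk above can be sketched as follows: an Activation('sigmoid') layer now gets an empty list of output quantizers instead of hitting the ValueError branch.

import tensorflow as tf
from tensorflow_model_optimization.python.core.quantization.keras.default_8bit import (
    default_8bit_quantize_registry)

# Hypothetical check: a standalone sigmoid Activation layer. Class and method
# names here are assumptions based on this file, not guaranteed stable API.
layer = tf.keras.layers.Activation('sigmoid')
registry = default_8bit_quantize_registry.Default8BitQuantizeRegistry()
config = registry.get_quantize_config(layer)
print(config.get_output_quantizers(layer))  # expected: []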

tensorflow_model_optimization/python/core/quantization/keras/quantize_aware_activation.py

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ class QuantizeAwareActivation(object):
   # on inclusion. Verify in TFLite before enabling.
 
   # These activations should be quantized prior to the activation being applied.
-  _PRE_QUANT_ACTIVATIONS = frozenset({'softmax'})
+  _PRE_QUANT_ACTIVATIONS = frozenset({'softmax', 'sigmoid'})
 
   # These activations should be quantized after the activation has been applied.
   _POST_QUANT_ACTIVATIONS = frozenset({'linear', 'relu'})
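
Taken together, the two edits mean a Keras model that ends in a sigmoid can go through quantization-aware training via the public API. A minimal sketch, assuming the standard tfmot.quantization.keras.quantize_model entry point and a made-up toy model:

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy binary classifier ending in sigmoid. Before this commit a model like
# this hit the 'Activation ... not supported' paths touched in this diff;
# with it, the sigmoid input is fake-quantized and the output is left alone.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer='adam', loss='binary_crossentropy')
qat_model.summary()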
