2 files changed (+7 −4) under tensorflow_model_optimization/g3doc/guide/quantization

@@ -37,8 +37,10 @@ cases. The code currently supports a
 ### Experiment with quantization and associated hardware

 Users can configure the quantization parameters (e.g. number of bits) and to
-some degree, the underlying algorithms. With these changes from the API
-defaults, there is no supported path to deployment.
+some degree, the underlying algorithms. Note that with these changes from the
+API defaults, there is currently no supported path for deployment to a backend.
+For instance, TFLite conversion and kernel implementations only support 8-bit
+quantization.

 APIs specific to this configuration are experimental and not subject to backward
 compatibility.
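For context, the configuration that the edited sentences refer to is exposed through `tfmot.quantization.keras.QuantizeConfig` and the `Quantizer` classes listed in the notebook cell further down. Below is a minimal sketch, not part of this change, of overriding the number of bits; the 4-bit setting and the `Custom4BitDenseConfig` name are illustrative assumptions, and, as the new wording says, a model built this way currently has no supported deployment path (TFLite conversion and kernels handle only 8-bit quantization). The diff's second file, a notebook cell with matching wording, follows the sketch.

```python
# Sketch (not part of this change): overriding the default number of bits with
# the experimental QuantizeConfig / Quantizer APIs named in the guide.
# The 4-bit choice and the class name are illustrative assumptions.
import tensorflow_model_optimization as tfmot

LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer


class Custom4BitDenseConfig(tfmot.quantization.keras.QuantizeConfig):
  """Quantizes a Dense layer's kernel and activation with 4 bits instead of 8."""

  def get_weights_and_quantizers(self, layer):
    # Quantize the kernel with a last-value (min/max) quantizer at 4 bits.
    return [(layer.kernel, LastValueQuantizer(
        num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]

  def get_activations_and_quantizers(self, layer):
    # Quantize the activation with a moving-average quantizer at 4 bits.
    return [(layer.activation, MovingAverageQuantizer(
        num_bits=4, symmetric=False, narrow_range=False, per_axis=False))]

  def set_quantize_weights(self, layer, quantize_weights):
    # Same order as get_weights_and_quantizers.
    layer.kernel = quantize_weights[0]

  def set_quantize_activations(self, layer, quantize_activations):
    # Same order as get_activations_and_quantizers.
    layer.activation = quantize_activations[0]

  def get_output_quantizers(self, layer):
    return []  # Outputs are covered by the activation quantizer above.

  def get_config(self):
    return {}
```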
@@ -496,8 +496,9 @@
 },
 "source": [
  "**Your use case**: using the following APIs means that there is no\n",
- "supported path to deployment. The features are also experimental and not\n",
- "subject to backward compatibility.\n",
+ "supported path to deployment. For instance, TFLite conversion\n",
+ "and kernel implementations only support 8-bit quantization.\n",
+ "The features are also experimental and not subject to backward compatibility.\n",
  "* `tfmot.quantization.keras.QuantizeConfig`\n",
  "* `tfmot.quantization.keras.quantizers.Quantizer`\n",
  "* `tfmot.quantization.keras.quantizers.LastValueQuantizer`\n",