Commit 3bab89e

alanchiao authored and tensorflower-gardener committed
Correct QAT docs header sizes to make them visible in tensorflow.org nav bar.
PiperOrigin-RevId: 305324107
1 parent 6b8edcd commit 3bab89e

File tree

2 files changed: +10 -10 lines changed


tensorflow_model_optimization/g3doc/guide/quantization/training.md

Lines changed: 9 additions & 9 deletions
@@ -13,14 +13,14 @@ determine how it fits with your use case.
 * To quickly find the APIs you need for your use case, see the
   [quantization aware training comprehensive guide](training_comprehensive_guide.md).
 
-### Overview
+## Overview
 
 Quantization aware training emulates inference-time quantization, creating a
 model that downstream tools will use to produce actually quantized models.
 The quantized models use lower-precision (e.g. 8-bit instead of 32-bit float),
 leading to benefits during deployment.
 
-#### Deploy with quantization
+### Deploy with quantization
 
 Quantization brings improvements via model compression and latency reduction.
 With the API defaults, the model size shrinks by 4x, and we typically see
@@ -31,7 +31,7 @@ such as the [EdgeTPU](https://coral.ai/docs/edgetpu/benchmarks/) and NNAPI.
 The technique is used in production in speech, vision, text, and translate use
 cases. The code currently supports a subset of these models.
 
-#### Experiment with quantization and associated hardware
+### Experiment with quantization and associated hardware
 
 Users can configure the quantization parameters (e.g. number of bits) and to
 some degree, the underlying algorithms. With these changes from the API
@@ -40,7 +40,7 @@ defaults, there is no supported path to deployment.
 APIs specific to this configuration are experimental and not subject to backward
 compatibility.
 
-#### API compatibility
+### API compatibility
 
 Users can apply quantization with the following APIs:
 
@@ -56,7 +56,7 @@ It is on our roadmap to add support in the following areas:
 * Model building: clarify how Subclassed Models have limited to no support
 * Distributed training: `tf.distribute`
 
-#### General support matrix
+### General support matrix
 
 Support is available in the following areas:
 
@@ -85,9 +85,9 @@ to launch. -->
   require the training step.
 * Stabilize APIs.
 
-### Results
+## Results
 
-#### Image classification with tools
+### Image classification with tools
 
 <figure>
   <table>
@@ -116,7 +116,7 @@ to launch. -->
 
 The models were tested on Imagenet and evaluated in both TensorFlow and TFLite.
 
-#### Image classification for technique
+### Image classification for technique
 
 <figure>
   <table>
@@ -139,7 +139,7 @@ The models were tested on Imagenet and evaluated in both TensorFlow and TFLite.
 
 The models were tested on Imagenet and evaluated in both TensorFlow and TFLite.
 
-### Examples
+## Examples
 
 In addition to the
 [quantization aware training example](training_example.md),
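The training.md text in this diff states that with the API defaults "the model size shrinks by 4x" because weights are stored in 8 bits instead of 32-bit floats. As a back-of-envelope sketch of where that factor comes from (the parameter count below is illustrative, not taken from the commit):

```python
def model_size_bytes(num_params: int, bits_per_weight: int) -> int:
    """Approximate weight-storage size: parameter count times bits per weight.

    Ignores metadata, activations, and any non-weight tensors.
    """
    return num_params * bits_per_weight // 8

# Illustrative parameter count (hypothetical, roughly MobileNet-sized).
params = 3_500_000
float32_size = model_size_bytes(params, 32)  # full-precision weight storage
int8_size = model_size_bytes(params, 8)      # quantized weight storage

# Moving from 32-bit to 8-bit storage yields the 4x shrink the guide describes.
assert float32_size == 4 * int8_size
```

The 4x figure is purely the 32/8 bit-width ratio; real converted models deviate slightly because some tensors and metadata are not quantized.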

tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb

Lines changed: 1 addition & 1 deletion
@@ -178,7 +178,7 @@
   "id": "Ybigft1fTn4T"
 },
 "source": [
-  "#### Quantize whole model"
+  "### Quantize whole model"
 ]
 },
 {

0 commit comments