Commit 5a4c839

up
1 parent 174a6a6 commit 5a4c839

1 file changed: +4 -4 lines changed

docs/source/quantization-overview.md

Lines changed: 4 additions & 4 deletions
@@ -12,16 +12,16 @@ Talk briefly about options for evaluating quantized model accuracy (running in e
 
 # Quantizing ExecuTorch Models
 
-ExecuTorch uses [torchao](https://github.com/pytorch/ao) for quantization. In general, ExecuTorch quantization is backend specific, and we allow each backned to define exactly how models are quantization based on the capability of the underlying hardware.
+ExecuTorch uses [torchao](https://github.com/pytorch/ao) for quantization. In general, ExecuTorch quantization is backend-specific, and we allow each backend to define exactly how model quantization is done based on the capabilities of the underlying hardware.
+
 
 Each backend defines its own PT2E quantizers.
 
 PT2E quantization happens after model export, but before lowering to a backend.
 
 
-* XNNPACK quantization example
-* CoreML quantization example
-* Vulkan quantization example
+* [XNNPACK quantization example](backends-xnnpack.md#quantization)
+* [CoreML quantization example](backends-coreml.md#quantization)
 
 
 ```
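
For context on the PT2E flow the updated text describes (quantize after export, before lowering to a backend), a minimal sketch is shown below. It is not part of this commit, and it assumes the XNNPACK backend. The import paths for `XNNPACKQuantizer` and `get_symmetric_quantization_config` have moved between torchao/ExecuTorch releases, so treat them as assumptions; `prepare_pt2e`/`convert_pt2e` are the standard PT2E entry points.

```python
# Sketch of the backend-specific PT2E quantization flow referenced above:
# export the model, quantize with the backend's quantizer, then lower.
import torch
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (  # path varies by release
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

# Small stand-in model and example inputs for illustration.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# 1. Export to an ATen graph (torch.export shown here; older releases used
#    capture_pre_autograd_graph).
exported = torch.export.export_for_training(model, example_inputs).module()

# 2. Quantize with the backend-specific PT2E quantizer.
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)          # calibration pass(es)
quantized = convert_pt2e(prepared)

# 3. Lower the quantized graph to the backend as usual, e.g. via
#    to_edge_transform_and_lower(...) with the XNNPACK partitioner.
```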
