From fc2544ceb8fe29933595171bea8cfdedeaab257c Mon Sep 17 00:00:00 2001
From: Digant Desai
Date: Mon, 21 Apr 2025 11:25:15 -0700
Subject: [PATCH] Typo

Summary: Created from CodeHub with https://fburl.com/edit-in-codehub

Differential Revision: D73380647
---
 docs/source/quantization-overview.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/quantization-overview.md b/docs/source/quantization-overview.md
index 62411f0d8c4..cfc04e6dc59 100644
--- a/docs/source/quantization-overview.md
+++ b/docs/source/quantization-overview.md
@@ -13,7 +13,7 @@ Backend developers will need to implement their own ``Quantizer`` to express how
 
 Modeling users will use the ``Quantizer`` specific to their target backend to quantize their model, e.g. ``XNNPACKQuantizer``.
 
-For an example quantization flow with ``XNPACKQuantizer``, more documentation and tutorials, please see ``Performing Quantization`` section in [ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
+For an example quantization flow with ``XNNPACKQuantizer``, more documentation and tutorials, please see ``Performing Quantization`` section in [ExecuTorch tutorial](https://pytorch.org/executorch/main/tutorials/export-to-executorch-tutorial).
 
 ## Source Quantization: Int8DynActInt4WeightQuantizer