Commit 5a7bfc9

digantdesai authored and facebook-github-bot committed
Move XNNPACKQuantizer from PyTorch to ExecuTorch (pytorch#144940)
Summary:
X-link: pytorch/executorch#7804
Pull Request resolved: pytorch#144940

This migrates XNNPACKQuantizer from PyTorch to ExecuTorch.

Rationale: The main motivation is to avoid a PyTorch pin update in OSS after every change to XNNPACKQuantizer, which can be rather frequent.

Other impact and considerations:
- The PT2E flow (which lives in PyTorch) relies heavily on XNNPACKQuantizer as an "example" quantizer implementation and, more importantly, for tests. For now, we will keep torch.ao.quantization.xnnpack_quantizer as is, but mark it as not backward compatible and deprecated to discourage new dependencies on it.
- Other OSS repositories using XNNPACKQuantizer from PyTorch now have to take an additional dependency on ExecuTorch.

Test Plan: CI

Reviewed By: mcr229

Differential Revision: D68191752
1 parent b2c89bc commit 5a7bfc9
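As a sketch of what the move means for downstream code: the new import path below is taken from the commit's own deprecation notice, but the try/except fallback order is an illustration, not part of this commit.

```python
# Hedged sketch: prefer the new ExecuTorch location of XNNPACKQuantizer,
# falling back to the deprecated torch location, then to None if neither
# package is installed. Paths come from the commit message/docstring.
try:
    from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
        XNNPACKQuantizer,
    )
    source = "executorch"
except ImportError:
    try:
        # Deprecated location; instantiating it now emits a warning.
        from torch.ao.quantization.quantizer.xnnpack_quantizer import (
            XNNPACKQuantizer,
        )
        source = "torch (deprecated)"
    except ImportError:
        XNNPACKQuantizer = None
        source = "unavailable"

print(source)
```

This keeps older environments working during the transition while steering new code toward the ExecuTorch package.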

File tree

1 file changed: +11 -0 lines


torch/ao/quantization/quantizer/xnnpack_quantizer.py

Lines changed: 11 additions & 0 deletions
@@ -3,6 +3,7 @@

 import copy
 import functools
+import warnings
 from typing import Any, Callable, Optional, TYPE_CHECKING

 import torch
@@ -30,6 +31,7 @@
     propagate_annotation,
     QuantizationConfig,
 )
+from torch.fx._compatibility import compatibility


 if TYPE_CHECKING:
@@ -235,7 +237,15 @@ def not_module_type_or_name_filter(n: Node) -> bool:
     return not_module_type_or_name_filter


+@compatibility(is_backward_compatible=False)
 class XNNPACKQuantizer(Quantizer):
+    """
+    !!! DEPRECATED !!!
+    XNNPACKQuantizer is marked as deprecated. It will be removed in the future.
+    It has been moved to executorch.backends.xnnpack.quantizer.xnnpack_quantizer.XNNPACKQuantizer.
+    Please use the new quantizer instead.
+    """
+
     supported_config_and_operators = _get_supported_config_and_operators()
     STATIC_QAT_ONLY_OPS = [
         "conv_bn_relu",
@@ -268,6 +278,7 @@ class XNNPACKQuantizer(Quantizer):

     def __init__(self) -> None:
         super().__init__()
+        warnings.warn(f"{self.__class__.__name__} is deprecated!")
         self.global_config: Optional[QuantizationConfig] = None
         self.operator_type_config: dict[
             torch._ops.OpOverloadPacket, Optional[QuantizationConfig]
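The deprecation mechanism the diff adds (a `warnings.warn` in `__init__` naming the class) can be sketched in isolation with only the standard library; `LegacyQuantizer` below is a hypothetical stand-in, not the real class.

```python
import warnings


class LegacyQuantizer:
    """Hypothetical stand-in for the deprecation pattern in the diff."""

    def __init__(self) -> None:
        # Mirrors the committed change: warn, naming the concrete class,
        # every time the deprecated quantizer is constructed.
        warnings.warn(f"{self.__class__.__name__} is deprecated!")


# Callers can capture (or filter) the warning explicitly:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    LegacyQuantizer()

assert any("LegacyQuantizer is deprecated!" in str(w.message) for w in caught)
```

Using `self.__class__.__name__` means subclasses of the deprecated quantizer report their own name in the warning, which helps users locate the construction site.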
