Commit be71434

[pt2e] Make prepare and convert faster by caching (#2983)
**Summary:** This is the torchao version of pytorch/pytorch#162550 by @navsud. Including the PR description here again: D79674759 tried to fix the expensive prepare and convert steps, since `assert_and_get_unique_device` was called multiple times. This change fixes that issue by applying the `functools.cache` decorator, so the device lookup runs once per module instead of on every call.

**Test Plan:** Verified on LLM export to QNN. LLM quantization prepare time was reduced from ~20 min to ~3 min.
1 parent 66384a9 commit be71434

File tree: 1 file changed, +1 −0 lines changed


torchao/utils.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -49,6 +49,7 @@


 # Referenced from: https://github.com/pytorch/pytorch/blob/9105d54c6b37099575c0059ef274c86c4dc80c57/torch/ao/quantization/utils.py#L711
+@functools.cache
 def _assert_and_get_unique_device(module: torch.nn.Module) -> Any:
     """
     Returns the unique device for a module, or None if no device is found.
```
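To illustrate why this one-line change helps, here is a minimal sketch of what `functools.cache` does: the decorated function's body runs once per distinct argument, and subsequent calls return the memoized result. The `expensive_lookup` function below is a hypothetical stand-in for `_assert_and_get_unique_device`, which in the real code walks all parameters and buffers of a module on every call.

```python
import functools

call_count = 0  # counts how many times the body actually executes

@functools.cache
def expensive_lookup(key):
    # Stand-in for the real device walk over module parameters/buffers.
    global call_count
    call_count += 1
    return f"device-for-{key}"

# First call executes the body; the second is served from the cache.
expensive_lookup("model-a")
expensive_lookup("model-a")
assert call_count == 1
```

Note that `functools.cache` keys on the arguments themselves, so in the patched code the cache is keyed on the module object; repeated prepare/convert passes over the same module hit the cache rather than re-walking its parameters.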
