Add TagUnquantizedNodesPass #7427
Conversation
For models with operations that are not quantized, this pass keeps the unquantized operators on the CPU. For example, the deit-tiny-patch16-224 network has an unquantized scaled_dot_product_attention operation. When compiling with Vela, invalid-argument errors occur because the unquantized operations are offloaded to the NPU. This pass is designed to solve that problem.
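The pass itself is not reproduced in this thread. As a minimal sketch of the idea, assuming the torch.fx pass infrastructure and the quantized_decomposed q/dq ops (the op set and the `keep_on_cpu` meta key are illustrative, not the PR's actual code):

```python
import torch
import torch.ao.quantization.fx._decomposed  # registers the quantized_decomposed ops in a standalone script
from torch.fx import GraphModule
from torch.fx.passes.infra.pass_base import PassBase, PassResult

# Quantization "marker" ops (the exact set is an assumption for this sketch).
Q_DQ_OPS = {
    torch.ops.quantized_decomposed.quantize_per_tensor.default,
    torch.ops.quantized_decomposed.dequantize_per_tensor.default,
}


class TagUnquantizedNodesPass(PassBase):
    """Tag call_function nodes with no surrounding q/dq ops so a partitioner
    can leave them on the CPU instead of delegating them to the NPU."""

    def call(self, graph_module: GraphModule) -> PassResult:
        for node in graph_module.graph.nodes:
            if node.op != "call_function" or node.target in Q_DQ_OPS:
                continue
            has_dq_input = any(
                isinstance(arg, torch.fx.Node) and arg.target in Q_DQ_OPS
                for arg in node.args
            )
            has_q_output = any(user.target in Q_DQ_OPS for user in node.users)
            if not (has_dq_input and has_q_output):
                # Hypothetical meta key; the partitioner would check it and
                # refuse to include the node in the delegated partition.
                node.meta["keep_on_cpu"] = True
        return PassResult(graph_module, True)
```

A partitioner could then skip any node carrying that meta key when forming the partition handed to the NPU delegate.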
@pytorchbot label "topic: not user facing" |
|
@pytorchbot label "partner: arm" |
|
Thanks for your contribution @Aleksei-grovety! Just for clarification, do you prefer ScaledDotProductAttention (SDPA) to run on the CPU in FP32 for precision reasons, or would you prefer to run it on the NPU? If you prefer to run it on the NPU, a different approach would be to decompose SDPA before quantization. An SDPA decomposition exists in https://github.com/pytorch/executorch/blob/main/backends/transforms/decompose_sdpa.py. You can add pre-quantization decompositions to
Note: doing this might require some additional changes to https://github.com/pytorch/executorch/blob/main/backends/arm/_passes/decompose_softmaxes_pass.py
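As a rough sketch of that decompose-before-quantization flow, assuming the pass in backends/transforms/decompose_sdpa.py is named DecomposeScaledDotProductAttention and that the standard PT2E capture flow applies (both are assumptions, not details from this thread; the exact invocation may differ):

```python
import torch
from torch.export import export_for_training
from executorch.backends.transforms.decompose_sdpa import (
    DecomposeScaledDotProductAttention,
)


class TinyAttention(torch.nn.Module):
    def forward(self, q, k, v):
        return torch.nn.functional.scaled_dot_product_attention(q, k, v)


model = TinyAttention().eval()
example_inputs = (
    torch.randn(1, 4, 16, 8),
    torch.randn(1, 4, 16, 8),
    torch.randn(1, 4, 16, 8),
)

# Capture the model, then decompose SDPA so the quantizer later sees the
# individual matmul/softmax/mul ops and can annotate each with q/dq pairs.
captured = export_for_training(model, example_inputs).module()
decomposed = DecomposeScaledDotProductAttention()(captured).graph_module

# ... then continue with prepare_pt2e / calibration / convert_pt2e and
# lowering to the Arm backend as usual.
```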
It is preferable to run SDPA on the NPU. I will consider the approach of decomposing the operation before quantization. What do you think of the approach of keeping non-quantized operations on the CPU? Or should we prioritize fully offloading to the NPU? In addition to the SDPA operation, the deit-tiny-patch16-224 network also has the GELU activation, which is not implemented on the NPU. Is it correct that this activation can be realized using a lookup table (LUT)?
We prioritize full offloading to the NPU. In some cases it may be necessary or beneficial to run an operator on the CPU; however, that should be determined by TOSASupportedOperators:
The reasoning is that if an operator is not quantized, it should not be consumed by the delegate (there should be no need to explicitly tag nodes). I think the most common reason for these kinds of issues is that a pattern of dq -> op -> q gets decomposed into dq -> op_0 -> op_1 -> ... -> op_n -> q when converting the model to the edge dialect. The way we usually tackle these issues is to apply the decomposition before quantization so that each op ends up with a dq/q pair around it. Alternatively, you can add the aten op to executorch/backends/arm/arm_partitioner.py (line 116 at 85d274a).
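Schematically, the pattern breakage described above might look like this (hypothetical graph fragments, not actual dumps):

```python
# Before to_edge(), around a single quantized op:
#   x_dq = dequantize_per_tensor(x_q, ...)
#   y    = aten.gelu(x_dq)
#   y_q  = quantize_per_tensor(y, ...)
#
# After to_edge() decomposes the op (GELU shown as an example):
#   x_dq = dequantize_per_tensor(x_q, ...)
#   t0   = aten.mul(x_dq, 0.7071)      # x / sqrt(2)
#   t1   = aten.erf(t0)                # intermediate ops have no q/dq neighbours
#   t2   = aten.add(t1, 1.0)
#   t3   = aten.mul(x_dq, t2)
#   y    = aten.mul(t3, 0.5)
#   y_q  = quantize_per_tensor(y, ...)
#
# Decomposing *before* quantization instead lets the quantizer annotate each
# intermediate op, so every op ends up surrounded by its own dq/q pair.
```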
That's correct. It should be possible to lower a quantized GELU with a simple modification of this pass:
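The pass linked above is not shown in this thread. As a hedged illustration of the LUT idea raised earlier, an int8 GELU could in principle be turned into a 256-entry table like this (the helper name and quantization parameters are illustrative):

```python
import torch


def build_int8_gelu_lut(in_scale, in_zero_point, out_scale, out_zero_point):
    """Enumerate all 256 possible int8 inputs, dequantize, apply GELU, and
    requantize, producing a table the NPU could evaluate as a lookup."""
    qmin, qmax = -128, 127
    q_in = torch.arange(qmin, qmax + 1, dtype=torch.int32)
    x = (q_in - in_zero_point).to(torch.float32) * in_scale
    y = torch.nn.functional.gelu(x)
    q_out = torch.clamp(torch.round(y / out_scale) + out_zero_point, qmin, qmax)
    return q_out.to(torch.int8)  # indexed by (q_in - qmin)


# Example with made-up quantization parameters:
table = build_int8_gelu_lut(0.05, 0, 0.05, 0)
```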
Hi @Aleksei-grovety, just checking in on how things are going with this. Let us know if you need more guidance or if something is unclear.
Hi @oscarandersson8218! The task of offloading all operations to the NPU for the deit-tiny-patch16-224 network has been postponed. I will close this PR for now. When I return to the task, I will create a new PR.