
Conversation

@Aleksei-grovety
Contributor

For models with operations that are not quantized, this pass keeps the unquantized operators on the CPU. For example, the deit-tiny-patch16-224 network has an unquantized scaled_dot_product_attention operation; when compiling to Vela, invalid-argument errors occur because the unquantized operation is offloaded to the NPU. This pass is designed to solve that problem.

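The pass itself is not shown in this thread, but a minimal sketch of the idea might look like the following. It assumes the common PT2E convention that a quantized operator is surrounded by dequantize/quantize nodes; all helper and marker names are illustrative, not the actual pass added by this PR.

```python
import torch


def _is_dq(node: torch.fx.Node) -> bool:
    # Dequantize ops inserted by the PT2E quantization flow.
    return node.op == "call_function" and "dequantize_per_tensor" in str(node.target)


def _is_q(node: torch.fx.Node) -> bool:
    # Quantize ops inserted by the PT2E quantization flow.
    target = str(node.target)
    return (
        node.op == "call_function"
        and "quantize_per_tensor" in target
        and "dequantize" not in target
    )


def is_quantized(node: torch.fx.Node) -> bool:
    # An op is treated as quantized only if its tensor inputs come from
    # dequantize nodes and all of its users are quantize nodes.
    node_args = [a for a in node.args if isinstance(a, torch.fx.Node)]
    return (
        len(node_args) > 0
        and all(_is_dq(a) for a in node_args)
        and len(node.users) > 0
        and all(_is_q(u) for u in node.users)
    )


def tag_unquantized_nodes_for_cpu(graph_module: torch.fx.GraphModule) -> None:
    # Unquantized ops (e.g. a float scaled_dot_product_attention) are marked so
    # that the partitioner leaves them on the CPU instead of the NPU delegate.
    for node in graph_module.graph.nodes:
        if node.op == "call_function" and not is_quantized(node):
            node.meta["keep_on_cpu"] = True  # hypothetical marker name
```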
@pytorch-bot

pytorch-bot bot commented Dec 23, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7427

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit a5db1ed with merge base 82763a9:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the "CLA Signed" label Dec 23, 2024
@Aleksei-grovety
Contributor Author

@pytorchbot label "topic: not user facing"

@Aleksei-grovety
Contributor Author

@pytorchbot label "partner: arm"

@pytorch-bot pytorch-bot bot added the "partner: arm" label Dec 23, 2024
@oscarandersson8218
Collaborator

Thanks for your contribution @Aleksei-grovety! Just for clarification, do you prefer ScaledDotProductAttention (SDPA) to run on the CPU in FP32 for precision reasons, or would you prefer to run it on the NPU?

If you prefer to run it on the NPU, a different approach to this would be to decompose SDPA before quantization. An SDPA-decomposition exists in https://github.com/pytorch/executorch/blob/main/backends/transforms/decompose_sdpa.py. You can add pre-quantization decompositions to

def transform_for_annotation_pipeline(self, graph_module: torch.fx.GraphModule):

Note: doing this might require some additional changes to https://github.com/pytorch/executorch/blob/main/backends/arm/_passes/decompose_softmaxes_pass.py
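For reference, a minimal sketch of wiring that in could look like the following, assuming the pass in decompose_sdpa.py is named DecomposeScaledDotProductAttention and follows the usual ExportPass call convention of returning a PassResult (please verify against the actual file):

```python
import torch
from executorch.backends.transforms.decompose_sdpa import (
    DecomposeScaledDotProductAttention,  # assumed class name from the linked file
)


def transform_for_annotation_pipeline(
    self, graph_module: torch.fx.GraphModule
) -> torch.fx.GraphModule:
    # Decompose SDPA into matmul/softmax/etc. before annotation so that the
    # quantizer places a q/dq pair around every resulting op.
    result = DecomposeScaledDotProductAttention()(graph_module)
    return result.graph_module
```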

@Aleksei-grovety
Contributor Author

It is preferable to run SDPA on the NPU. I will look into decomposing the operation before quantization.

What do you think of the approach of keeping non-quantized operations on the CPU? Or should full offloading to the NPU be the priority?

Besides the SDPA operation, the deit-tiny-patch16-224 network also contains the GELU activation, which is not implemented on the NPU. Is it correct that this activation can be implemented with a lookup table (LUT)?

@oscarandersson8218
Collaborator

What do you think of the approach of keeping non-quantized operations on the CPU? Or should full offloading to the NPU be the priority?

We prioritize full offloading to the NPU. In some cases it may be necessary or beneficial to run an operator on the CPU; however, that should be determined by TOSASupportedOperators:

class TOSASupportedOperators(OperatorSupportBase):

The reasoning is that if an operator is not quantized, it should not be consumed by the delegate (there should be no need to explicitly tag nodes). I think the most common reason for these kinds of issues is that a pattern of dq -> op -> q gets decomposed to dq -> op_0 -> op_1 -> ... -> op_n -> q when converting the model to edge dialect. The way we usually tackle these issues is to apply the decomposition before quantization so that each op will have a dq-q pair around it. Alternatively you can add the aten op to
def ops_to_not_decompose(
and do the edge decomposition yourself. If you encounter these issues, feel free to raise an issue. Also, if you would like to have a way to specify which operators to not delegate from a top level, I suggest you create an issue for that.
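A rough sketch of that second alternative in the Arm partitioner, assuming the hook returns a list of aten ops to preserve plus an optional node filter (check the actual signature at the link above):

```python
from typing import Callable, List, Optional, Tuple

import torch
from torch.export import ExportedProgram


def ops_to_not_decompose(
    self, ep: ExportedProgram
) -> Tuple[List[torch._ops.OpOverload], Optional[Callable[[torch.fx.Node], bool]]]:
    # Keep SDPA intact through to_edge; the backend can then apply its own
    # edge-level decomposition (e.g. matmuls + softmax) after quantization.
    return ([torch.ops.aten.scaled_dot_product_attention.default], None)
```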

Besides the SDPA operation, the deit-tiny-patch16-224 network also contains the GELU activation, which is not implemented on the NPU. Is it correct that this activation can be implemented with a lookup table (LUT)?

That's correct. It should be possible to lower a quantized GELU with a simple modification of this pass:

class InsertTableOpsPass(ExportPass):
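For illustration, a quantized GELU could be handled by adding an entry along these lines to the pass's mapping of table ops, assuming the pass maps edge ops to reference float functions and builds a 256-entry int8 LUT from them (op and attribute names here are unverified):

```python
import torch
from executorch.exir.dialects._ops import ops as exir_ops

# Hypothetical extra entry for InsertTableOpsPass.table_ops: the float
# reference function is evaluated over all 256 possible int8 inputs to
# build the LUT consumed by the TOSA TABLE operator.
table_ops = {
    exir_ops.edge.aten.gelu.default: torch.nn.functional.gelu,
}
```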

@oscarandersson8218
Collaborator

Hi @Aleksei-grovety, just checking how things are going with this? Let us know if you need more guidance or if something is unclear.

@Aleksei-grovety
Contributor Author

Hi @oscarandersson8218! The task of offloading all operations to the NPU for the deit-tiny-patch16-224 network has been postponed. I will close this PR for now. When I return to the task, I will create a new PR.

