
Conversation

@cccclai (Contributor) commented Oct 7, 2025

Differential Revision: D84071939

This PR makes changes to the Pad op and to Conv2d with a padding option:

  1. For the Pad op, QNN can support it directly, so we no longer decompose it. `circular` mode is not supported by QNN, so it still has to be converted. `replicate` mode should ideally map to `OpPad.Scheme.EDGE`, but I have been getting wrong results with it, so in this PR the pass converts both `circular` and `replicate` modes. We can follow up on why `replicate` doesn't work with `OpPad.Scheme.EDGE` and then remove the condition.

  2. The slice_copy op needs to be fixed for the conv2d node, because step 1 converts pad into slice_copy.

  3. Conv2d with a padding option just works out of the box once the pad op is fixed.

Unit tests are added for pad and conv2d with the padding option, plus a test for the new pass.
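For context, a minimal sketch of the slice + concat equivalence the pass relies on for `circular` mode. This is an illustration in plain tensor ops, not the actual graph pass; the helper name is hypothetical, and the numeric check is against `torch.nn.functional.pad`'s own `circular` mode:

```python
# Hypothetical sketch of the slice + concat decomposition for `circular`
# padding along the last dimension; helper name is assumed, not from the pass.
import torch
import torch.nn.functional as F

def circular_pad_last_dim(x: torch.Tensor, left: int, right: int) -> torch.Tensor:
    # Wrap-around: the left pad copies the rightmost columns, the right pad
    # copies the leftmost columns, then everything is concatenated.
    left_slice = x[..., x.shape[-1] - left:]   # corresponds to a slice_copy from the right edge
    right_slice = x[..., :right]               # corresponds to a slice_copy from the left edge
    return torch.cat([left_slice, x, right_slice], dim=-1)

x = torch.arange(12.0).reshape(1, 1, 3, 4)
ours = circular_pad_last_dim(x, 1, 2)
ref = F.pad(x, (1, 2, 0, 0), mode="circular")  # pad only the last dim
assert torch.equal(ours, ref)
```

The real pass builds the same computation as `aten.slice.Tensor` and `aten.cat` nodes in the FX graph instead of eager tensor ops.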


pytorch-bot bot commented Oct 7, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14860

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures, 1 Cancelled Job, 1 Unrelated Failure

As of commit 3f83885 with merge base 2672dd3:


This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the `CLA Signed` label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Oct 7, 2025

meta-codesync bot commented Oct 7, 2025

@cccclai has exported this pull request. If you are a Meta employee, you can view the originating Diff in D84071939.


github-actions bot commented Oct 7, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@cccclai (Contributor, Author) commented Oct 7, 2025

@winskuo-quic it seems like you're fixing the failing tests. I have this PR fixing conv2d with some of the different padding modes, but it's still missing circular mode.

@cccclai cccclai marked this pull request as draft October 7, 2025 17:08
cccclai added a commit to cccclai/executorch-1 that referenced this pull request Oct 8, 2025
…t padding (pytorch#14860)

Summary:

This PR make changes for the Pad and Conv2d with pad option
1. For Pad op, first of all, QNN can support it so we don't decompose it. `circular` mode is not supported by QNN so it has to be converted anyway. However, for `replicate`, ideally it should work with `OpPad.Scheme.EDGE` but I have been getting wrong result. In this PR, I convert pad with `circular` mode and `replicate` mode for the pass. We can follow up on why replicate didn't work with `OpPad.Scheme.EDGE` and remove the condition

2. For `Conv2d` with pad option, it just work out of box once we fix the pad op

Add unit test for pad and conv2d with pad option, also add test for the new pass

Differential Revision: D84071939
@cccclai changed the title from "conv2d padding" to "Fix circulate/reflection/replication padding and conv2d with different padding" Oct 8, 2025
cccclai added a commit to cccclai/executorch-1 that referenced this pull request Oct 8, 2025
…t padding (pytorch#14860)

cccclai added a commit to cccclai/executorch-1 that referenced this pull request Oct 8, 2025
…t padding (pytorch#14860)

@cccclai cccclai marked this pull request as ready for review October 8, 2025 04:16
cccclai added a commit to cccclai/executorch-1 that referenced this pull request Oct 8, 2025
…t padding (pytorch#14860)

start = self._sym_sub(graph, w, left)
left_slice = graph.create_node("call_function", torch.ops.aten.slice.Tensor, (x, -1, start, w))
right_slice = graph.create_node("call_function", torch.ops.aten.slice.Tensor, (x, -1, 0, right))
self._copy_meta(x, left_slice)
Collaborator commented:
Is meta['val'].shape correct for slice and cat?
I feel like their shapes cannot be copied directly and need some transform function.
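To illustrate the concern: the slice outputs have different shapes from the pad input, so copying `meta["val"]` verbatim would record a wrong shape. A shape-aware `val_transform` hook (the optional parameter visible in `_copy_meta`'s signature in this diff) could re-derive it; the sketch below is hypothetical, not the actual transform:

```python
# Sketch of why meta["val"] can't be copied verbatim from the pad input to the
# new slice nodes: their shapes differ. A `val_transform` callback can re-run
# the same operation on the source meta val so shape/dtype stay consistent.
import torch

def slice_val_transform(src_val: torch.Tensor, dim: int, start: int, end: int) -> torch.Tensor:
    # Mirror aten.slice.Tensor(x, dim, start, end) on the meta tensor.
    return src_val.narrow(dim, start, end - start)

src_val = torch.empty(1, 1, 3, 4)                   # stand-in for x.meta["val"]
left_val = slice_val_transform(src_val, -1, 3, 4)   # slice(x, -1, w - left, w) with w=4, left=1
assert left_val.shape == (1, 1, 3, 1)               # not (1, 1, 3, 4): a direct copy would be wrong
```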

@winskuo-quic (Collaborator) left a comment

Thanks for supporting the conv2d padding mode!

I noticed replicate mode generates correct output when running the quantized flow, while FP has wrong output, which I am guessing is caused by the QNN SDK.

If this is the case, maybe we don't need to convert replicate mode when running the quant flow. We can use a flag like the following:

def __init__(self, quantization_capture=False) -> None:

to differentiate whether the pass is called during transform_for_annotation_pipeline or transform_for_export_pipeline.
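Fleshed out, the flag idea could look roughly like this. This is a minimal sketch with an assumed structure (`modes_to_convert` is a hypothetical helper, and the pass body is omitted), not the actual implementation:

```python
# Minimal sketch (assumed structure, not the actual pass) of gating which pad
# modes get converted based on which pipeline invokes the pass.
class ConvertPadToSliceConcat:
    def __init__(self, quantization_capture: bool = False) -> None:
        # In the quantized flow `replicate` reportedly produces correct output
        # via QNN directly, so only `circular` would need the slice + concat
        # conversion there; the FP flow converts both.
        self.quantization_capture = quantization_capture

    def modes_to_convert(self) -> set:
        if self.quantization_capture:
            return {"circular"}
        return {"circular", "replicate"}

assert ConvertPadToSliceConcat(quantization_capture=True).modes_to_convert() == {"circular"}
assert "replicate" in ConvertPadToSliceConcat().modes_to_convert()
```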

To reproduce the FP error, I have written a simple pad test:

def test_qnn_backend_pad_mode(self):
    sample_input = (torch.tensor([[[[1, 2, 3],
                                    [4, 5, 6],
                                    [7, 8, 9]]]], dtype=torch.float),)
    module = PadMode()  # noqa: F405
    self.lower_module_and_test_output(module, sample_input)

class PadMode(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return torch.nn.functional.pad(x, pad=(1, 1, 1, 1), mode='replicate')

# The quantized-flow variant, which produces correct output:
module = self.get_qdq_module(module, sample_input)
self.lower_module_and_test_output(module, sample_input)

def test_qnn_backend_conv1d_mode(self):
Collaborator commented:

I think the function name is duplicated.


# ---------- small helpers ----------

def _copy_meta(self, src, dst, val_transform=None):
Collaborator commented:

Could we possibly reuse the copy_meta under
from executorch.backends.qualcomm._passes.utils import copy_meta

scheme_map = {
    "constant": OpPad.Scheme.CONSTANT,
    "reflect": OpPad.Scheme.MIRROR_REFLECT,
    "replicate": OpPad.Scheme.EDGE,  # I think this is supposed to be correct, but the result is wrong
@winskuo-quic (Collaborator) commented Oct 9, 2025:

I'm actually getting the UT to pass even with the ConvertPadToSliceConcat pass commented out when running the quantized flow.

# pad_constant_value only for constant mode
if mode == "constant":
    pad_value = None
    if len(node.args) > 2 and not isinstance(node.args[2], str):
Collaborator commented:

Actually, since we are now supporting two targets,
aten.constant_pad_nd.default and aten.pad.default,
the schemas can be different.
I believe there's a chance the constant value is now stored in node.args[3].
Maybe we can add another condition like:

if len(node.args) >= 3 and not isinstance(node.args[2], str):
    pad_value = node.args[2]
elif len(node.args) >= 4 and not isinstance(node.args[3], str):
    pad_value = node.args[3]
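For reference, the two-schema handling could be factored into a small helper. This is a hedged sketch operating on plain argument tuples (the function name is hypothetical); `aten.constant_pad_nd.default` takes `(input, pad, value)`, while `aten.pad.default` takes `(input, pad, mode, value)` with the mode string at index 2:

```python
# Hypothetical helper for extracting the constant pad value across both
# supported targets; names and structure are illustrative, not from the PR.
def extract_pad_value(args):
    # constant_pad_nd: (input, pad, value) -> value at args[2]
    if len(args) > 2 and not isinstance(args[2], str):
        return args[2]
    # pad.default: (input, pad, mode, value) -> mode string at args[2], value at args[3]
    if len(args) > 3 and not isinstance(args[3], str):
        return args[3]
    return None

assert extract_pad_value(("x", [1, 1], 0.5)) == 0.5
assert extract_pad_value(("x", [1, 1], "constant", 0.5)) == 0.5
assert extract_pad_value(("x", [1, 1], "constant")) is None
```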

target = ["aten.constant_pad_nd.default"]
target = [
    "aten.constant_pad_nd.default",
    "aten.pad.default",
Collaborator commented:

Could we also add aten.pad.default to the LayoutTransform pass?

exir_ops.edge.aten.constant_pad_nd.default,

Labels: ciflow/nightly, CLA Signed, fb-exported, meta-exported

3 participants