NXP backend: Add support for conversion of Conv1D operator #13549
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13549

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures

As of commit a8ffed0 with merge base bd92f1a. NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "module: nxp" "release notes: nxp"
JakeStevens left a comment
Force-pushed from 0d530e4 to 5182658
Yes, this is a conflict. I added a fix for padding with zero-point to this PR. In our IR, the Conv1D operator doesn't exist; it is emulated by converting to Conv2D, and the result is then converted back to Conv1D format. So this fix will also change the behavior for Conv2D. However, the other PR is still needed, as it changes Average Pool.
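The emulation described above can be sketched as follows. This is a hypothetical illustration, not the NXP converter code: a Conv1D is run as a Conv2D with a dummy spatial dimension of size 1 that is never padded. (PyTorch uses the NCW/NCHW layout here, whereas the converter works in NWC/NHWC; the idea is the same.)

```python
import torch

# Hypothetical sketch: emulate Conv1D with Conv2D by inserting a dummy
# spatial dimension of size 1 and using a 1xK kernel.
conv1d = torch.nn.Conv1d(8, 16, kernel_size=3, padding=1)

# Equivalent Conv2d: 1x3 kernel, zero padding amount on the dummy H dim.
conv2d = torch.nn.Conv2d(8, 16, kernel_size=(1, 3), padding=(0, 1))
with torch.no_grad():
    conv2d.weight.copy_(conv1d.weight.unsqueeze(2))  # (O, I, K) -> (O, I, 1, K)
    conv2d.bias.copy_(conv1d.bias)

x = torch.randn(1, 8, 20)                 # NCW input
y_1d = conv1d(x)                          # (1, 16, 20)
y_2d = conv2d(x.unsqueeze(2)).squeeze(2)  # NCW -> NCHW, run 2D conv, drop H
assert torch.allclose(y_1d, y_2d, atol=1e-5)
```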
+ fix `input_shapes` type hint in `to_quantized_edge_program()`
+ add test cases for Conv1D operator
+ add fix for padding with zero-point
Force-pushed from 5182658 to a8ffed0
@JakeStevens, Roman updated the PR based on your findings. Can you please re-review?
```python
from collections.abc import MutableSequence

def extend_1d_padding_to_2d(tflite_1d_padding: MutableSequence):
    """Extend the PyTorch 'padding' operator attribute that represents padding for a 1D kernel to 2D, by adding '0's."""
    if tflite_1d_padding is not None:
        tflite_1d_padding.append(0)
```
This is my specific concern: we are padding with zeros, not the zero point, just like the 2D case previously.
This value expresses the amount of padding applied to the input, not the padding value. A zero here means the tensor will not be padded along the H dimension: since this is a conversion from a 1D NWC tensor to a 2D NHWC tensor, the padding also needs to be extended/converted. The amount of padding for the W dimension is kept.
The padding value is the zero-point, added on L352 and L390 in convolution_converter.py.
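To illustrate the distinction (helper names below are hypothetical, not the converter's API): the `padding` attribute stores per-dimension *amounts*, while the constant written into the padded region of a quantized tensor is its zero-point *value*.

```python
# Hypothetical sketch; function names are illustrative, not the converter's API.

def extend_1d_padding_to_2d(padding_1d):
    """Append a 0 *amount* for the extra dimension introduced by the
    Conv1D -> Conv2D emulation: the dummy dimension is never padded."""
    if padding_1d is not None:
        padding_1d.append(0)
    return padding_1d

def pad_quantized_row(row, pad_w, zero_point):
    """Pad a quantized 1-D row with the zero-point *value* on both sides."""
    return [zero_point] * pad_w + row + [zero_point] * pad_w

print(extend_1d_padding_to_2d([1]))         # [1, 0]  (amounts per dimension)
print(pad_quantized_row([10, 12], 1, 128))  # [128, 10, 12, 128]
```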
Got it, thanks!
Sorry, I think my comment was misunderstood. I was not concerned that this doesn't include the changes from the 2D padding PR -- both will get into mainline eventually; this is fine and makes perfect sense. Instead, I believe this PR has the same bug: in the Conv1D conversion case, we are extending the existing padding by adding a zero instead of the zero point. See inline comment.
Summary

This PR adds delegation of `aten.conv1d` to Neutron. Fixes the `input_shapes` type hint in `to_quantized_edge_program()`. Fixes the `operators_not_to_delegate` assignment in the partitioner.

Test plan

Unit tests provided in backends/nxp/tests/ir/converter/node_converter/test_conv_converter.py.

cc @digantdesai @JakeStevens @robert-kalmar