Fix aten.amax lowering issue #13381
Conversation
Helpful Links: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13381
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit c775e98 with merge base 45e7810.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D80187368
This PR fixes the lowering issue; however, I'm not quite sure what a good unit test for it would be. I tried the following code, but it looks like it passes even without this PR. Do you know what might trigger the error? Edit:
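(The snippet referred to in the comment above is not included on this page. Purely as an illustration, not the commenter's original code, a minimal module that exercises aten.amax on an input of the shape mentioned in the summary might look like the sketch below; the module and input names are hypothetical, and the Qualcomm backend lowering/test harness is omitted.)

```python
import torch


class AMaxModule(torch.nn.Module):
    """Tiny module whose forward call lowers to aten.amax."""

    def __init__(self, dim=-1, keepdim=True):
        super().__init__()
        self.dim = dim
        self.keepdim = keepdim

    def forward(self, x):
        return torch.amax(x, dim=self.dim, keepdim=self.keepdim)


# Hypothetical sample input using the shape from the summary.
sample_input = (torch.randn(1, 980, 49),)
module = AMaxModule()
print(module(*sample_input).shape)  # torch.Size([1, 980, 1])
```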
Force-pushed from f917e4b to 3b17364.
Can I get a review on this PR? @haowhsu-quic @shewu-quic @winskuo-quic @DannyYuyang-quic
Force-pushed from 3b17364 to 1392c1b.
Force-pushed from 1392c1b to c775e98.
Summary:
There was an error when lowering amax around the line `input_tensor = self.get_tensor(input_node, node)`. The issue is that we are trying to permute the tensor inside node_visitors: op_node.meta[QCOM_AXIS_ORDER] is (0, 1), but tensor.shape is (1, 980, 49).

Rollback Plan:

Differential Revision: D80187368
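To make the shape/axis-order mismatch described above concrete: permuting a rank-3 tensor with a two-element axis order fails, which is the disagreement the summary points at. The sketch below only illustrates that failure mode in plain PyTorch; it is not the backend's node_visitor code, and the variable names are placeholders.

```python
import torch

# Values taken from the summary: the tensor is rank 3, but the
# recorded axis order only has two entries.
tensor = torch.randn(1, 980, 49)
axis_order = (0, 1)  # analogous to op_node.meta[QCOM_AXIS_ORDER]

try:
    # permute requires len(dims) == tensor.dim(), so this raises.
    tensor.permute(axis_order)
except RuntimeError as err:
    print(f"permute failed: {err}")
```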