
Commit f917e4b

cccclai authored and facebook-github-bot committed

Fix aten.amax lowering issue
Summary: Lowering aten.amax failed around the line `input_tensor = self.get_tensor(input_node, node)`. The cause: we try to permute the tensor inside node_visitors, but op_node.meta[QCOM_AXIS_ORDER] is (0, 1) while tensor.shape is (1, 980, 49), so the axis order no longer matches the tensor's rank.

Rollback Plan:

Differential Revision: D80187368
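The mismatch described above can be illustrated with a minimal plain-Python sketch (hypothetical names, not the actual pass code): a permutation of length 2 cannot be applied to a rank-3 tensor.

```python
# Minimal illustration of the mismatch (hypothetical names, not the pass's code).
shape = (1, 980, 49)   # tensor.shape from the failing case
axis_order = (0, 1)    # op_node.meta[QCOM_AXIS_ORDER] from the failing case

# A permutation needs one entry per tensor dimension to be applicable.
can_permute = len(axis_order) == len(shape)
print(can_permute)  # False: a rank-2 axis order cannot permute a rank-3 tensor
```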
1 parent 75b77a6 · commit f917e4b

File tree

1 file changed: +1 −0 lines changed


backends/qualcomm/_passes/layout_transform.py

Lines changed: 1 addition & 0 deletions
@@ -175,6 +175,7 @@ def is_layout_agnostic(self, node: torch.fx.Node) -> bool:
             exir_ops.edge.aten.mean.dim,
             exir_ops.edge.aten.min.dim,
             exir_ops.edge.aten.sum.dim_IntList,
+            exir_ops.edge.aten.amax.default,
         }:
             # if dimension is not kept, we'll have no clue how to do layout transform
             if len(node.args) < 3 or not node.args[2]:
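The keepdim guard at the end of the hunk can be sketched as a standalone predicate. This is a sketch under an assumed argument layout: reduction ops such as aten.amax take `(input, dims, keepdim)`, so the op is only treated as layout agnostic when keepdim is truthy and the output keeps the input's rank.

```python
# Sketch of the keepdim check from is_layout_agnostic (assumed arg layout:
# reduction ops like aten.amax take (input, dims, keepdim)).
def dim_is_kept(args) -> bool:
    # The pass can only transform layout when keepdim is True, because the
    # output then has the same rank as the input and the axis order still applies.
    return not (len(args) < 3 or not args[2])

print(dim_is_kept(("x", [2], True)))   # True  -> layout agnostic
print(dim_is_kept(("x", [2])))         # False -> rank changes, skip
```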

0 commit comments