
Commit dd581f9

3l1 authored and facebook-github-bot committed
Update mul int16 test (#14646)
Summary: bypass-github-export-checks bypass-github-pytorch-ci-checks bypass-github-executorch-ci-checks

Reviewed By: digantdesai

Differential Revision: D83437473
1 parent 258bce3 commit dd581f9

File tree

1 file changed: +0 −6 lines changed


backends/arm/test/ops/test_mul.py

Lines changed: 0 additions & 6 deletions
@@ -342,9 +342,6 @@ def test_mul_tensor_16a8w_tosa_INT(test_data: input_t1):
 
 @common.parametrize("test_data", test_data_suite)
 @common.XfailIfNoCorstone300
-@pytest.mark.xfail(
-    reason="Vela compilation fails with 'Invalid arguments' for int16 mul operations. See: https://github.com/pytorch/executorch/issues/13947"
-)
 def test_mul_tensor_16a8w_u55_INT16(test_data: input_t1):
     """Test mul operation with 16A8W quantization on U55 (16-bit activations, 8-bit weights)"""
     per_channel_quantization = False
@@ -370,9 +367,6 @@ def test_mul_tensor_16a8w_u55_INT16(test_data: input_t1):
 
 @common.parametrize("test_data", test_data_suite)
 @common.XfailIfNoCorstone320
-@pytest.mark.xfail(
-    reason="Vela compilation fails with 'Invalid arguments' for int16 mul operations. See: https://github.com/pytorch/executorch/issues/13947"
-)
 def test_mul_tensor_16a8w_u85_INT16(test_data: input_t1):
     """Test mul operation with 16A8W quantization on U85 (16-bit activations, 8-bit weights)"""
     per_channel_quantization = False
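The commit deletes `@pytest.mark.xfail` decorators: that marker tells pytest a test is expected to fail for a known reason, so its failure is reported as "xfail" rather than breaking CI, and once the underlying issue is fixed (here, the Vela int16 mul compilation failure tracked in issue 13947) the decorator is simply removed so the test must pass again. As a rough sketch of the semantics, here is a toy re-implementation of the xfail idea in plain Python (this is illustrative only, not pytest's actual internals; the `xfail` helper and `test_known_broken` name are hypothetical):

```python
# Toy illustration of xfail semantics (NOT pytest's real implementation):
# an expected failure is reported as "xfail" instead of raising, and an
# unexpected pass is flagged as "xpass" so the stale marker gets noticed.
def xfail(reason):
    def decorator(test_fn):
        def wrapper():
            try:
                test_fn()
            except AssertionError:
                return f"xfail: {reason}"  # expected failure, not a hard error
            return "xpass"  # passed despite being marked as expected to fail
        return wrapper
    return decorator

@xfail(reason="known issue, see tracking ticket")
def test_known_broken():
    assert 1 + 1 == 3  # fails, but is reported as an expected failure

result = test_known_broken()
print(result)
```

An "xpass" result is the usual signal that a fix has landed and the marker should be deleted, which is exactly the cleanup this commit performs for the U55 and U85 int16 mul tests.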
