
Fix lint: remove duplicate numpy import in test (W0621/W0404)

f50743d
Draft

COPY OF 2345 #2354

Azure Pipelines / Olive CI failed Mar 18, 2026 in 38m 18s

Build #20260317.3 had test failures

Details

Tests

  • Failed: 12 (0.28%)
  • Passed: 3,858 (91.55%)
  • Other: 344 (8.16%)
  • Total: 4,214

Annotations

Check failure on line 18 in Build log (azure-pipelines / Olive CI)

There are one or more test failures detected in result files. Detailed summary of published test results can be viewed in the Tests tab.

Check failure on line 21 in Build log (azure-pipelines / Olive CI)

There are one or more test failures detected in result files. Detailed summary of published test results can be viewed in the Tests tab.

Check failure on line 6865 in Build log (azure-pipelines / Olive CI)

Bash exited with code '1'.

Check failure on line 20 in Build log (azure-pipelines / Olive CI)

There are one or more test failures detected in result files. Detailed summary of published test results can be viewed in the Tests tab.

Check failure in test_gemm_to_matmul_add (azure-pipelines / Olive CI)

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_gemm_to_matmul_add0/out/gemm.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11
Raw output
test/passes/onnx/test_graph_surgeries.py:2339: in test_gemm_to_matmul_add
    sess = InferenceSession(output_model.model_path, providers=["CPUExecutionProvider"])
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:485: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:573: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
E   onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_gemm_to_matmul_add0/out/gemm.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11

Check failure in test_reciprocal_mul_to_div (azure-pipelines / Olive CI)

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_reciprocal_mul_to_div0/out/recip_mul.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11
Raw output
test/passes/onnx/test_graph_surgeries.py:2433: in test_reciprocal_mul_to_div
    sess = InferenceSession(output_model.model_path, providers=["CPUExecutionProvider"])
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:485: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:573: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
E   onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_reciprocal_mul_to_div0/out/recip_mul.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11

Check failure in test_deduplicate_nodes (azure-pipelines / Olive CI)

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_deduplicate_nodes0/out/dup_nodes.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11
Raw output
test/passes/onnx/test_graph_surgeries.py:2553: in test_deduplicate_nodes
    sess = InferenceSession(output_model.model_path, providers=["CPUExecutionProvider"])
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:485: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:573: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
E   onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_deduplicate_nodes0/out/dup_nodes.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11

Check failure in test_numerical_correctness (azure-pipelines / Olive CI)

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_numerical_correctness0/cast_chain.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11
Raw output
test/passes/onnx/test_peephole_optimizer.py:275: in test_numerical_correctness
    orig_sess = ort.InferenceSession(str(cast_chain_model_path), providers=["CPUExecutionProvider"])
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:485: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
/opt/hostedtoolcache/Python/3.10.20/x64/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:573: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
E   onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /mnt/vss/_work/1/.pytest_basetemp/test_numerical_correctness0/cast_chain.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const onnxruntime::PathString&, const onnxruntime::IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 13, max supported IR version: 11