"""
.. _l-plot-failing-onnxruntime-evaluator:

Intermediate results with onnxruntime
=====================================

Example :ref:`l-plot-failing-reference-evaluator` demonstrated
how to run a python runtime on a model, but it may be very slow sometimes
and it could show discrepancies if the intended provider is not CPU.
Let's use :class:`OnnxruntimeEvaluator <onnx_diagnostic.reference.OnnxruntimeEvaluator>`.
It splits the model into nodes and runs them independently until it succeeds
or fails. This class converts every node into a model based on the types
discovered during the execution. It relies on :class:`InferenceSessionForTorch
<onnx_diagnostic.ort_session.InferenceSessionForTorch>` or
        oh.make_node("Cast", ["C"], ["X999"], to=999, name="failing"),
        oh.make_node("CastLike", ["X999", "Y"], ["Z"], name="n4"),
    ],
    "-nd-",
    [
        oh.make_tensor_value_info("X", TBFLOAT16, ["a", "b", "c"]),
        oh.make_tensor_value_info("Y", TBFLOAT16, ["a", "b", "c"]),
# %%
# We can see it run until it reaches `Cast` and stops.
# The error message is not always obvious to interpret,
# although it is improved from time to time.
# This runtime is useful when the model fails for a numerical reason.
# It is possible to insert prints in the python code to display
# more information or to debug if needed.
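The node-by-node strategy described above can be sketched in plain Python. This is a simplified illustration of the idea, not the actual :class:`OnnxruntimeEvaluator` implementation: the ``Node`` class and ``run_single_node`` are hypothetical stand-ins for ONNX nodes and a real inference session.

```python
# Sketch of a node-by-node evaluator: run each node in isolation,
# keep every intermediate result, and stop at the first failure.
# Node and run_single_node are hypothetical stand-ins, not part of
# onnx_diagnostic.
from dataclasses import dataclass, field


@dataclass
class Node:
    op_type: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    name: str = ""


def run_single_node(node, feeds):
    # Toy execution: a Cast node mimics the failing node of the example,
    # any other op just forwards its first input unchanged.
    if node.op_type == "Cast":
        raise RuntimeError(f"unsupported cast target in node {node.name!r}")
    return {out: feeds[node.inputs[0]] for out in node.outputs}


def evaluate(nodes, feeds):
    results = dict(feeds)
    for node in nodes:
        try:
            outputs = run_single_node(node, results)
        except Exception as exc:
            # Stop here: everything computed so far remains inspectable.
            print(f"stopped at node {node.name!r} ({node.op_type}): {exc}")
            return results
        results.update(outputs)
    return results


nodes = [
    Node("Add", ["X", "Y"], ["C"], name="n1"),
    Node("Cast", ["C"], ["X999"], name="failing"),
    Node("CastLike", ["X999", "Y"], ["Z"], name="n4"),
]
results = evaluate(nodes, {"X": 1.0, "Y": 2.0})
print(sorted(results))  # 'C' was computed before the failure, 'X999' was not
```

The point of the sketch is that execution fails on one node in isolation while all previously computed intermediate results stay available for inspection, which is what makes this approach convenient for debugging.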