
Commit 7cf327c

Update baseline.md
1 parent 48dd634 commit 7cf327c

File tree

  • content/learning-paths/servers-and-cloud-computing/onnx-on-azure

1 file changed: +4 -4 lines changed

content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md

Lines changed: 4 additions & 4 deletions
@@ -48,7 +48,7 @@ This indicates the model successfully executed a single forward pass through the
 
 #### Output summary:
 
-Single inference latency(0.00260 sec): This is the time required for the model to process one input image and produce an output.
-Cold-start performance: The first run includes graph loading, memory allocation, and model initialization overhead.
-Subsequent inferences are usually faster due to caching and optimized execution paths.model.
-- This demonstrates that the setup is fully working, and ONNX Runtime efficiently executes quantized models on Arm64.
+Single inference latency(0.00260 sec): This is the time required for the model to process one input image and produce an output. The first run includes graph loading, memory allocation, and model initialization overhead.
+Subsequent inferences are usually faster due to caching and optimized execution.
+
+This demonstrates that the setup is fully working, and ONNX Runtime efficiently executes quantized models on Arm64.
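
For context, the single-inference latency described in the changed text can be measured with a short timing script. The sketch below is not part of this commit; it assumes a quantized model file named `model_quant.onnx` and a 1x3x224x224 float32 input, so adjust the path and shape to match the model used in the learning path.

```python
# Minimal latency sketch (assumed model path and input shape; adjust as needed).
import time
import numpy as np
import onnxruntime as ort

# Create the session; graph loading and optimization happen here.
session = ort.InferenceSession("model_quant.onnx")
input_name = session.get_inputs()[0].name

# Dummy image batch: 1 x 3 x 224 x 224 float32 (a common image-classifier shape).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# First timed run: often slower (cold start) due to initial memory allocation and warm-up.
start = time.perf_counter()
session.run(None, {input_name: x})
print(f"Cold-start inference: {time.perf_counter() - start:.5f} sec")

# Subsequent runs are usually faster thanks to cached, optimized execution paths.
for i in range(3):
    start = time.perf_counter()
    session.run(None, {input_name: x})
    print(f"Warm inference {i + 1}: {time.perf_counter() - start:.5f} sec")
```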
