# Compile and run the rock-paper-scissors model on Corstone-320 FVP
This section shows how to compile your trained Rock, Paper, Scissors model and run it on the Corstone-320 Fixed Virtual Platform (FVP), a simulated Arm-based edge device. This completes the end-to-end workflow for deploying a TinyML model for on-device inference.
## Compile and build the executable
Use the Ahead-of-Time (AoT) Arm compiler to convert your PyTorch model to an ExecuTorch program optimized for the Arm architecture and the Ethos-U NPU. This process, known as delegation, offloads supported parts of the neural network graph to the NPU for efficient inference.
Set up environment variables:
```bash
export ET_HOME=$HOME/executorch
```

You should see:

```output
PTE file saved as rps_tiny_arm_delegate_ethos-u85-128.pte
```
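If you script these steps, a small helper can confirm that the compile step produced a usable file before you move on to the build. This is a sketch of my own, not part of the ExecuTorch tooling, and `check_pte` is a hypothetical name:

```shell
# check_pte: hypothetical helper that reports whether a compiled
# ExecuTorch program (.pte) file exists and is non-empty.
check_pte() {
  if [ -s "$1" ]; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}
```

For example, `check_pte rps_tiny_arm_delegate_ethos-u85-128.pte` should print `OK: ...` after a successful compile.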
Next, build the Ethos-U runner, a bare-metal executable that bundles the ExecuTorch runtime and your compiled model; this runner is what the FVP executes. Configure the build with CMake:

With the `arm_executor_runner` executable ready, you can now run it on the Corstone-320 FVP to see the model on a simulated Arm device.
```bash
FVP_Corstone_SSE-320 \
```
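Long simulator runs are easier to debug if you keep a copy of the output. A minimal wrapper using only standard shell tools can mirror the FVP's output to the terminal while saving it to a file; `run_with_log` is a made-up convenience, not an FVP feature:

```shell
# run_with_log: hypothetical wrapper that runs a command, mirrors its
# combined stdout/stderr to the terminal, and saves a copy to a log file.
run_with_log() {
  local log="$1"
  shift
  "$@" 2>&1 | tee "$log"
}
```

You could launch the simulator as `run_with_log fvp.log FVP_Corstone_SSE-320 ...` and inspect `fvp.log` afterwards.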
{{% notice Note %}}
The argument `mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI, which can reduce launch time.
{{% /notice %}}
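If you alternate between GUI and headless runs, you can factor this flag out into a small function. This is a convenience sketch: `HEADLESS` and `fvp_vis_args` are names invented here, while the `-C parameter=value` syntax is standard for Arm FVPs:

```shell
# fvp_vis_args: hypothetical helper that emits the FVP option to disable
# the GUI when HEADLESS=1 (the default here), and nothing otherwise.
fvp_vis_args() {
  if [ "${HEADLESS:-1}" = "1" ]; then
    printf '%s\n' "-C mps4_board.visualisation.disable-visualisation=1"
  fi
}
```

You would then launch with `FVP_Corstone_SSE-320 $(fvp_vis_args) ...` to pick up the setting.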
You should see logs indicating that the model file loads and inference begins:
```output
telnetterminal0: Listening for serial connection on port 5000
I [executorch:EthosUBackend.cpp:116 init()] data:0x70000070
```
{{% notice Note %}}
Inference might take longer with a model of this size on the FVP; this does not reflect real device performance.
{{% /notice %}}
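Because a run like this can take a while, it is handy to scan a saved log for the expected milestones afterwards. A sketch, assuming you captured the FVP output to a file (for example with `tee`); `check_fvp_log` is a hypothetical name:

```shell
# check_fvp_log: hypothetical helper that reports which of the expected
# FVP milestones appear in a saved log file.
check_fvp_log() {
  if grep -q "Listening for serial connection" "$1"; then
    echo "FVP booted"
  fi
  if grep -q "EthosUBackend" "$1"; then
    echo "Ethos-U delegate initialized"
  fi
}
```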
You have now built, optimized, and deployed a computer vision model on a simulated Arm-based system. In a future Learning Path, you can compare performance and latency before and after optimization and analyze CPU and memory usage during inference for deeper insight into ExecuTorch on edge devices.