
Commit f80dcd6

Updates
1 parent c7d8b94 commit f80dcd6

File tree (2 files changed: +14, −15 lines)
  • content/learning-paths/embedded-and-microcontrollers/training-inference-pytorch


content/learning-paths/embedded-and-microcontrollers/training-inference-pytorch/_index.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: "Edge AI on Arm with PyTorch and ExecuTorch: Tiny Rock-Paper-Scissors"
+title: "Edge AI on Arm with PyTorch and ExecuTorch: Tiny Rock, Paper, Scissors"
 
 minutes_to_complete: 60
 
content/learning-paths/embedded-and-microcontrollers/training-inference-pytorch/fvp-3.md

Lines changed: 13 additions & 14 deletions
@@ -6,13 +6,15 @@ weight: 4
 layout: learningpathall
 ---
 
-This section guides you through the process of compiling your trained Rock-Paper-Scissors model and running it on a simulated Arm-based edge device, the Corstone-320 Fixed Virtual Platform (FVP). This final step demonstrates the end-to-end workflow of deploying a TinyML model for on-device inference.
+# Compile and run the rock-paper-scissors model on Corstone-320 FVP
+
+This section shows how to compile your trained Rock, Paper, Scissors model and run it on the Corstone-320 Fixed Virtual Platform (FVP), a simulated Arm-based edge device. This completes the end-to-end workflow for deploying a TinyML model for on-device inference.
 
 ## Compile and build the executable
 
-First, you'll use the Ahead-of-Time (AOT) Arm compiler to convert your PyTorch model into a format optimized for the Arm architecture and the Ethos-U NPU. This process, known as delegation, offloads parts of the neural network graph that are compatible with the NPU, allowing for highly efficient inference.
+Use the Ahead-of-Time (AoT) Arm compiler to convert your PyTorch model to an ExecuTorch program optimized for Arm and the Ethos-U NPU. This process (delegation) offloads supported parts of the neural network to the NPU for efficient inference.
 
-Set up your environment variables by running the following commands in your terminal:
+Set up environment variables:
 
 ```bash
 export ET_HOME=$HOME/executorch
@@ -34,7 +36,7 @@ You should see:
 PTE file saved as rps_tiny_arm_delegate_ethos-u85-128.pte
 ```
 
-Next, you'll build the **Ethos-U runner**, which is a bare-metal executable that includes the ExecuTorch runtime and your compiled model. This runner is what the FVP will execute. Navigate to the runner's directory and use CMake to configure the build.
+Next, build the Ethos-U runner, a bare-metal executable that includes the ExecuTorch runtime and your compiled model. Configure the build with CMake:
 
 ```bash
 cd $HOME/executorch/examples/arm/executor_runner
@@ -52,7 +54,7 @@ cmake -DCMAKE_BUILD_TYPE=Release \
 -DSYSTEM_CONFIG=Ethos_U85_SYS_DRAM_Mid
 ```
 
-You should see output similar to this, indicating a successful configuration:
+You should see configuration output similar to:
 
 ```bash
 -- *******************************************************
@@ -67,13 +69,13 @@ You should see output similar to this, indicating a successful configuration:
 -- Build files have been written to: ~/executorch/examples/arm/executor_runner/cmake-out
 ```
 
-Now, build the executable with CMake:
+Build the executable:
 
 ```bash
 cmake --build "$ET_HOME/examples/arm/executor_runner/cmake-out" -j --target arm_executor_runner
 ```
 
-### Run the Model on the FVP
+### Run the model on the FVP
 With the `arm_executor_runner` executable ready, you can now run it on the Corstone-320 FVP to see the model on a simulated Arm device.
 
 ```bash
@@ -88,11 +90,10 @@ FVP_Corstone_SSE-320 \
 ```
 
 {{% notice Note %}}
-The argument `mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI. This can speed up launch time for the FVP.
+`mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI and can reduce launch time.
 {{% /notice %}}
 
-
-Observe the output from the FVP. You'll see messages indicating that the model file has been loaded and the inference is running. This confirms that your ExecuTorch program is successfully executing on the simulated Arm hardware.
+You should see logs indicating that the model file loads and inference begins:
 
 ```output
 telnetterminal0: Listening for serial connection on port 5000
@@ -109,9 +110,7 @@ I [executorch:EthosUBackend.cpp:116 init()] data:0x70000070
 ```
 
 {{% notice Note %}}
-The inference itself may take a longer to run with a model this size - note that this is not a reflection of actual execution time.
+Inference might take longer with a model of this size on the FVP; this does not reflect real device performance.
 {{% /notice %}}
 
-You've now successfully built, optimized, and deployed a computer vision model on a simulated Arm-based system. This hands-on exercise demonstrates the power and practicality of TinyML and ExecuTorch for resource-constrained devices.
-
-In a future learning path, you can explore comparing different model performances and inference times before and after optimization. You could also analyze CPU and memory usage during inference, providing a deeper understanding of how the ExecuTorch framework optimizes your model for edge deployment.
+You have now built, optimized, and deployed a computer vision model on a simulated Arm-based system. In a future Learning Path, you can compare performance and latency before and after optimization and analyze CPU and memory usage during inference for deeper insight into ExecuTorch on edge devices.
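Reviewer aside, not part of the commit: the AoT compiler log quoted in this diff reports `PTE file saved as rps_tiny_arm_delegate_ethos-u85-128.pte`, which implies a `<model>_arm_delegate_<target>.pte` naming convention. A minimal sketch of that convention (the `model` and `target` values are taken from the log; the pattern itself is an assumption inferred from that single filename):

```shell
# Illustrative sketch: derive the expected .pte artifact name using the
# "<model>_arm_delegate_<target>.pte" pattern implied by the build log.
model="rps_tiny"        # model name, from the log above
target="ethos-u85-128"  # Ethos-U compile target, from the log above
pte="${model}_arm_delegate_${target}.pte"
echo "PTE file saved as ${pte}"
```

If the pattern holds, swapping `target` for another Ethos-U configuration predicts the artifact name the runner build would need to pick up.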
