content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md
weight: 7 # 1 is first, 2 is second, etc.
layout: "learningpathall"
---
## Define a small neural network using Python
With the development environment ready, you can create a simple PyTorch model to test the setup.
This example defines a small feedforward neural network for a classification task. The model consists of two linear layers with a ReLU activation in between.
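As a sketch, a network of this shape can be defined as follows. The class name `SimpleNN` matches the snippet below; the hidden layer width used here is an illustrative assumption, not a value from this Learning Path.

```python
import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    """A small feedforward classifier: Linear -> ReLU -> Linear."""

    def __init__(self, input_size: int, hidden_size: int, output_size: int):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Two linear layers with a ReLU activation in between
        return self.fc2(self.relu(self.fc1(x)))

# Quick shape check: batch of 1, 10 input features, 2 output classes
net = SimpleNN(input_size=10, hidden_size=16, output_size=2)
out = net(torch.randn(1, 10))
print(out.shape)
```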
```python
output_size = 2  # number of output classes

model = SimpleNN(input_size, hidden_size, output_size)

# Example input tensor (batch size 1, input size 10)
x = (torch.randn(1, input_size),)

# Add arguments to be parsed by the Ahead-of-Time (AoT) Arm compiler
ModelUnderTest = model
ModelInputs = x

print("Model successfully exported to simple_nn.pte")
```
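Before handing the script to the AoT compiler, you can sanity-check it locally. This assumes you saved the script as `simple_nn.py` and have PyTorch installed:

```shell
python3 simple_nn.py
```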
## Running the model on the Corstone-320 FVP
The final step is to take the Python-defined model and run it on the Corstone-320 FVP. These steps already ran as part of the `run.sh` script in a previous section; to wrap up the Learning Path, you will now perform them individually to better understand what happens under the hood. Start by setting the environment variables used by ExecuTorch.
```bash
export ET_HOME=$HOME/executorch
export executorch_DIR=$ET_HOME/build
```
Then, generate a model file in the `.pte` format using the Arm examples. The Ahead-of-Time (AoT) Arm compiler enables optimizations for devices like the Grove Vision AI Module V2 and the Corstone-320 FVP. Run it from the ExecuTorch root directory.
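As a sketch, the AoT compilation step looks like the following. The exact flags and target names vary across ExecuTorch versions, so check `examples/arm` in your checkout for the current interface:

```shell
cd $ET_HOME
# Hypothetical invocation: --model_name points at the script that defines
# ModelUnderTest/ModelInputs; --target selects the NPU configuration.
python3 -m examples.arm.aot_arm_compiler \
  --model_name=simple_nn \
  --delegate \
  --target=ethos-u85-128
```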
The model is saved as a `.pte` file, which is the format used by ExecuTorch for deploying models to the edge.
From the Arm examples directory, you build an embedded Arm runner with the `.pte` model included. This gets the most performance out of your model and ensures compatibility with the CPU kernels on the FVP. Finally, generate the executable `arm_executor_runner` and launch it on the FVP. The invocation below is a sketch; the exact FVP binary name and `-C` options depend on your Corstone-320 FVP installation:

```bash
FVP_Corstone_SSE-320 \
  -C mps4_board.visualisation.disable-visualisation=1 \
  -a "$ET_HOME/examples/arm/executor_runner/cmake-out/arm_executor_runner"
```
{{% notice Note %}}
The `-C mps4_board.visualisation.disable-visualisation=1` argument disables the FVP GUI, which can speed up launch time for the FVP.
The FVP can be terminated with Ctrl+C.
{{% /notice %}}
Observe that the FVP loads the model file.
```output
telnetterminal0: Listening for serial connection on port 5000
telnetterminal1: Listening for serial connection on port 5001
telnetterminal2: Listening for serial connection on port 5002
telnetterminal5: Listening for serial connection on port 5003
I [executorch:arm_executor_runner.cpp:412] Model in 0x70000000 $
I [executorch:arm_executor_runner.cpp:414] Model PTE file loaded. Size: 3360 bytes.
```
You've now set up your environment for TinyML development on Arm and tested a small PyTorch and ExecuTorch neural network.