
Commit 5c8da85: Update env-setup-5.md
1 parent c457778

1 file changed: +98 -6 lines changed
  • content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm


content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md

Lines changed: 98 additions & 6 deletions
@@ -12,6 +12,7 @@ layout: "learningpathall"
 These instructions have been tested on:
 - A GCP Arm-based Tau T2A Virtual Machine instance Running Ubuntu 22.04 LTS.
 - Host machine with Ubuntu 24.04 on x86_64 architecture.
+- Windows Subsystem for Linux (WSL) on Windows x86_64.
 
 The host machine is where you will perform most of your development work, especially cross-compiling code for the target Arm devices.
 
@@ -23,15 +24,15 @@ If you want to use Arm Virtual Hardware the [Arm Virtual Hardware install guide]
 
 ## Setup on Host Machine
 1. Setup if you don't have access to the physical board: We would use the Corstone-300 FVP, it is pre-configured.
-2. Setup if you have access to the board: Skip to "Compilers" Section
+2. Setup if you have access to the board: skip to the **"Compilers"** section.
 
 
-### Corstone-300 FVP {#fvp} Setup for ExecuTorch
+### Corstone-300 FVP Setup for ExecuTorch
 For Arm Virtual Hardware users, the Corstone-300 FVP is pre-installed.
 
-To install and set up the Corstone-300 FVP and ExecuTorch on your machine, refer to [Building and Running ExecuTorch with ARM Ethos-U Backend](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html)). Follow this tutorial till "Install the TOSA reference model" Section. It should be the last thing you do from this tutorial.
+To install and set up the Corstone-300 FVP and ExecuTorch on your machine, refer to [Building and Running ExecuTorch with ARM Ethos-U Backend](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html). Follow this tutorial up to the **"Install the TOSA reference model"** section; it should be the last thing you do from that tutorial.
 
-Since you already have the compiler installed from the above tutorial, skip to ## Install PyTorch.
+Since you already have the compiler installed from the above tutorial, skip to **"Install PyTorch"**.
 
 
 ### Compilers
 
@@ -71,9 +72,9 @@ conda activate executorch
 ## Install Edge Impulse CLI
 1. Create an [Edge Impulse Account](https://studio.edgeimpulse.com/signup) if you do not have one
 
-2. Install the CLI tools
+2. Install the CLI tools in your terminal
 
-Ensure you have Nodejs install
+Ensure you have Node.js installed:
 
 ```console
 node -v
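As an aside on the Node.js check above: if you want a setup script to gate on the Node.js version rather than eyeball `node -v` output, the major version can be parsed with plain shell parameter expansion. This is a sketch against a hard-coded sample string (`sample` is a hypothetical placeholder so the snippet runs even where Node.js is not yet installed; in a real script use `sample="$(node -v)"`):

```shell
# Parse the major version out of "v18.19.0"-style output.
# "sample" is a hard-coded placeholder; in a real script use: sample="$(node -v)"
sample="v18.19.0"
major="${sample#v}"      # strip the leading "v"       -> 18.19.0
major="${major%%.*}"     # keep text before first dot  -> 18
echo "Node.js major version: $major"
```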
@@ -123,3 +124,94 @@ Follow the prompts to log in.
 
 If successful, you should see your Grove - Vision AI Module V2 under 'Devices' in Edge Impulse.
 
+
+## Build a Simple PyTorch Model
+With our environment ready, we will create a simple program to test the setup. This example defines a simple feedforward neural network for a classification task. The model consists of two linear layers with a ReLU activation in between. Create a file called simple_nn.py with the following code:
+
+```python
+import torch
+from torch.export import export
+from executorch.exir import to_edge
+
+# Define a simple feedforward neural network
+class SimpleNN(torch.nn.Module):
+    def __init__(self, input_size, hidden_size, output_size):
+        super(SimpleNN, self).__init__()
+        self.fc1 = torch.nn.Linear(input_size, hidden_size)
+        self.relu = torch.nn.ReLU()
+        self.fc2 = torch.nn.Linear(hidden_size, output_size)
+
+    def forward(self, x):
+        out = self.fc1(x)
+        out = self.relu(out)
+        out = self.fc2(out)
+        return out
+
+# Create the model instance
+input_size = 10   # example input feature size
+hidden_size = 5   # hidden layer size
+output_size = 2   # number of output classes
+
+model = SimpleNN(input_size, hidden_size, output_size)
+
+# Example input tensor (batch size 1, input size 10)
+x = torch.randn(1, input_size)
+
+# torch.export: capture the program with the ATen operator set for SimpleNN
+aten_dialect = export(model, (x,))
+
+# to_edge: apply optimizations for edge devices so the model runs efficiently on constrained hardware
+edge_program = to_edge(aten_dialect)
+
+# to_executorch: convert the graph to an ExecuTorch program
+executorch_program = edge_program.to_executorch()
+
+# Save the compiled .pte program
+with open("simple_nn.pte", "wb") as file:
+    file.write(executorch_program.buffer)
+
+print("Model successfully exported to simple_nn.pte")
+```
+
+Run it from your terminal:
+
+```console
+python3 simple_nn.py
+```
+
+If everything runs successfully, the output will be:
+```bash { output_lines = "1" }
+Model successfully exported to simple_nn.pte
+```
+The model is saved as a .pte file, the format ExecuTorch uses to deploy models to the edge.
+
+Now build the ExecuTorch runner. First, run:
+
+```console
+# Clean and configure the build system
+rm -rf cmake-out && cmake -B cmake-out
+
+# Build the executor_runner target
+cmake --build cmake-out --target executor_runner -j9
+```
+
+You should see output similar to:
+```bash { output_lines = "1" }
+[100%] Built target executor_runner
+```
+
+Now run the executor_runner with the model:
+```console
+./cmake-out/executor_runner --model_path simple_nn.pte
+```
+
+Expected output: since the model is a simple feedforward network, you can expect a tensor of shape [1, 2].
+
+```bash { output_lines = "1-3" }
+Input tensor shape: [1, 10]
+Output tensor shape: [1, 2]
+Inference output: tensor([[0.5432, -0.3145]])  # will vary due to random initialization
+```
+
+If the model execution completes successfully, you'll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.
+
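A side note on the expected output in the added section above: the [1, 2] result shape can be traced without torch or ExecuTorch installed. The sketch below re-implements the same two-linear-layer forward pass in plain Python with random placeholder weights; all names here are illustrative, not part of the tutorial's code, and the values are unrelated to the exported simple_nn.pte.

```python
import random

def linear(x, weight, bias):
    # x: [batch][in_features], weight: [out_features][in_features], bias: [out_features]
    return [[sum(xi * wi for xi, wi in zip(row, w)) + b
             for w, b in zip(weight, bias)]
            for row in x]

def relu(x):
    return [[max(0.0, v) for v in row] for row in x]

input_size, hidden_size, output_size = 10, 5, 2

# Random placeholder weights (the real model's weights live in simple_nn.pte)
w1 = [[random.gauss(0, 1) for _ in range(input_size)] for _ in range(hidden_size)]
b1 = [0.0] * hidden_size
w2 = [[random.gauss(0, 1) for _ in range(hidden_size)] for _ in range(output_size)]
b2 = [0.0] * output_size

x = [[random.gauss(0, 1) for _ in range(input_size)]]  # shape [1, 10]
hidden = relu(linear(x, w1, b1))                       # shape [1, 5]
out = linear(hidden, w2, b2)                           # shape [1, 2]

print(f"Output tensor shape: [{len(out)}, {len(out[0])}]")  # Output tensor shape: [1, 2]
```

Each `linear` call maps `in_features` columns to `out_features` columns while preserving the batch dimension, which is why [1, 10] becomes [1, 5] and then [1, 2].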

0 commit comments