content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md

These instructions have been tested on:

- An Arm-based GCP Tau T2A virtual machine instance running Ubuntu 22.04 LTS.
- A host machine running Ubuntu 24.04 on the x86_64 architecture.
- Windows Subsystem for Linux (WSL) on Windows x86_64.

The host machine is where you will perform most of your development work, especially cross-compiling code for the target Arm devices.
## Setup on Host Machine

1. If you don't have access to the physical board, use the Corstone-300 FVP, which comes pre-configured.
2. If you have access to the board, skip to the **"Compilers"** section.

### Corstone-300 FVP Setup for ExecuTorch
For Arm Virtual Hardware users, the Corstone-300 FVP is pre-installed.

To install and set up the Corstone-300 FVP and ExecuTorch on your machine, refer to [Building and Running ExecuTorch with ARM Ethos-U Backend](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html). Follow that tutorial up to the **"Install the TOSA reference model"** section; it should be the last step you complete from it.
Since you already have the compiler installed from the above tutorial, skip to **"Install PyTorch"**.
### Compilers
## Install Edge Impulse CLI

1. Create an [Edge Impulse Account](https://studio.edgeimpulse.com/signup) if you do not have one.
2. Install the CLI tools from your terminal.

Ensure you have Node.js installed:

```console
node -v
```
Follow the prompts to log in.

If successful, you should see your Grove - Vision AI Module V2 under 'Devices' in Edge Impulse.
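The install and connection steps above can be sketched as the following sequence, assuming Node.js and npm are already available. `edge-impulse-cli` is the npm package that provides the `edge-impulse-daemon` tool:

```shell
# Install the Edge Impulse CLI tools globally via npm
npm install -g edge-impulse-cli

# Run the daemon to connect your board; it prompts for your
# Edge Impulse credentials and then registers the device
edge-impulse-daemon
```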

## Build a Simple PyTorch Model

With the environment ready, create a simple program to test the setup. This example defines a simple feedforward neural network for a classification task: the model consists of two linear layers with a ReLU activation in between. Create a file called `simple_nn.py` with the following code:

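A minimal sketch consistent with that description is shown below; the hidden-layer width of 16 and the exact print formatting are assumptions, not part of the original listing:

```python
import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    """Two linear layers with a ReLU activation in between."""

    def __init__(self, input_dim=10, hidden_dim=16, num_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

model = SimpleNN()
model.eval()

# Run one inference on a random batch of a single 10-feature sample
x = torch.randn(1, 10)
print(f"Input tensor shape: {list(x.shape)}")
with torch.no_grad():
    out = model(x)
print(f"Output tensor shape: {list(out.shape)}")
print(f"Inference output: {out}")
```

Run it with `python simple_nn.py`.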
Expected output: since the model is a simple feedforward network, you can expect an output tensor of shape `[1, 2]`:

```bash { output_lines = "1-3" }
Input tensor shape: [1, 10]
Output tensor shape: [1, 2]
Inference output: tensor([[0.5432, -0.3145]]) # will vary due to random initialization
```

If the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.