`content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md`
Prerequisites:
- Basic knowledge of machine learning concepts.
- Understanding of IoT and embedded systems (helpful but not required).
- A Linux host machine or VM running Ubuntu 20.04 or higher, or an AWS account to use [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware)
- Target device, physical or the Corstone-300 FVP; Cortex-M boards are preferred, but you can use Cortex-A7 boards as well.
## Build a Simple PyTorch Model

With the environment ready, create a simple program to test the setup. This example defines a simple feedforward neural network for a classification task; the model consists of two linear layers with a ReLU activation in between. Create a file called `simple_nn.py` with the following code:
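The full listing is not included in this diff. As a minimal sketch consistent with the description and the expected output below (the hidden-layer width of 16 is an assumption, not from the original):

```python
# simple_nn.py - minimal sketch; layer sizes other than input 10 and
# output 2 are assumptions for illustration.
import torch
import torch.nn as nn


class SimpleNN(nn.Module):
    def __init__(self, input_dim: int = 10, hidden_dim: int = 16, output_dim: int = 2):
        super().__init__()
        # Two linear layers with a ReLU activation in between
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = SimpleNN()
x = torch.randn(1, 10)  # one sample with 10 features
y = model(x)
print("Input tensor shape:", list(x.shape))
print("Output tensor shape:", list(y.shape))
print("Inference output:", y)
```

Running `python simple_nn.py` should print the tensor shapes and an output tensor whose values vary with the random weight initialization.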
Expected output: since the model is a simple feedforward network, you can expect an output tensor of shape `[1, 2]`:
```bash { output_lines = "1-3" }
Input tensor shape: [1, 10]
Output tensor shape: [1, 2]
Inference output: tensor([[0.5432, -0.3145]])  # values will vary due to random initialization
```
If the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.
These instructions have been tested on:
The host machine is where you will perform most of your development work, especially cross-compiling code for the target Arm devices.
- The Ubuntu version should be `20.04 or higher`.
- If you do not have the board, the `x86_64` architecture must be used, because the Corstone-300 FVP is not currently available for the Arm architecture.
- ExecuTorch supports Windows through WSL, but with limited resources.
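You can quickly confirm the host meets these requirements. A minimal check, assuming a standard Linux host with an `/etc/os-release` file:

```shell
# Print the host architecture; the Corstone-300 FVP requires x86_64
uname -m
# Print the OS version; Ubuntu should report VERSION_ID "20.04" or higher
grep '^VERSION_ID' /etc/os-release 2>/dev/null || echo "VERSION_ID unavailable"
```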
### Corstone-300 FVP Setup for ExecuTorch
To install and set up the Corstone-300 FVP and ExecuTorch on your machine, refer to [Building and Running ExecuTorch with ARM Ethos-U Backend](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html). Follow this tutorial up to the **"Install the TOSA reference model"** section; that is the last part you need from it.
## Install ExecuTorch
1. Follow the [Setting Up ExecuTorch guide](https://pytorch.org/executorch/stable/getting-started-setup.html) to install it.
2. Activate the `executorch` virtual environment from the installation guide to ensure it is ready for use:
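As a quick sanity check after activation, you can confirm that the relevant Python packages resolve from the environment. This is a hedged sketch; the installed package name `executorch` is assumed from the guide:

```python
# Run inside the activated environment to confirm packages are visible.
# find_spec only locates a module on the search path; it does not import it.
import importlib.util

for mod in ("torch", "executorch"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'found' if found else 'NOT found'}")
```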
1. Connect the Grove - Vision AI Module V2 to your computer using the USB-C cable.

2. In the extracted Edge Impulse firmware, locate and run the installation scripts to flash your device.
```console
./flash_linux.sh
```
3. Configure Edge Impulse for the board. In your terminal, run:
```console
edge-impulse-daemon
```
Follow the prompts to log in.
4. Verify that your board is connected.
If successful, you should see your Grove - Vision AI Module V2 under 'Devices' in Edge Impulse.
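If the daemon does not detect the board, you can first check whether it enumerated as a USB serial device. Device names vary by system; `/dev/ttyACM*` and `/dev/ttyUSB*` are common on Linux:

```shell
# List candidate serial devices for the connected board (names vary by system)
ls /dev/ttyACM* /dev/ttyUSB* 2>/dev/null || echo "no serial device found"
```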
## Next Steps
1. If you don't have access to a physical board, go to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-6-FVP.md).
2. If you have access to the board, go to [Setup on Grove - Vision AI Module V2](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/setup-6-Grove.md).
To install and set up the Corstone-300 FVP on your machine, refer to [Building and Running ExecuTorch with ARM Ethos-U Backend](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html). Follow this tutorial up to the **"Install the TOSA reference model"** section; that is the last part you need from it.
## Next Steps
1. Go to [Build a Simple PyTorch Model](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md) to test your environment setup.