Commit 14fae61

Merge pull request #1553 from annietllnd/tinyml-review
Rebase TinyML LP WIP
2 parents d3be902 + f06142f

File tree

5 files changed: +36 -40 lines changed
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md

Lines changed: 31 additions & 5 deletions
@@ -8,7 +8,6 @@ weight: 7 # 1 is first, 2 is second, etc.
 layout: "learningpathall"
 ---
 
-TODO connect this part with the FVP/board?
 With our environment ready, you can create a simple program to test the setup.
 
 This example defines a small feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between.
@@ -62,7 +61,7 @@ print("Model successfully exported to simple_nn.pte")
 
 Run the model from the Linux command line:
 
-```console
+```bash
 python3 simple_nn.py
 ```
 
@@ -76,15 +75,15 @@ The model is saved as a .pte file, which is the format used by ExecuTorch for de
 
 Run the ExecuTorch version, first build the executable:
 
-```console
+```bash
 # Clean and configure the build system
 (rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
 
 # Build the executor_runner target
 cmake --build cmake-out --target executor_runner -j$(nproc)
 ```
 
-You see the build output and it ends with:
+You will see the build output and it ends with:
 
 ```output
 [100%] Linking CXX executable executor_runner
@@ -93,7 +92,7 @@ You see the build output and it ends with:
 
 When the build is complete, run the executor_runner with the model as an argument:
 
-```console
+```bash
 ./cmake-out/executor_runner --model_path simple_nn.pte
 ```
 
@@ -112,3 +111,30 @@ Output 0: tensor(sizes=[1, 2], [-0.105369, -0.178723])
 
 When the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.
 
+
+
+TODO: Debug issues when running the model on the FVP, kindly ignore anything below this
+## Running the model on the Corstone-300 FVP
+
+
+Run the model using:
+
+```bash
+FVP_Corstone_SSE-300_Ethos-U55 -a simple_nn.pte -C mps3_board.visualisation.disable-visualisation=1
+```
+
+{{% notice Note %}}
+
+-C mps3_board.visualisation.disable-visualisation=1 disables the FVP GUI. This can speed up launch time for the FVP.
+
+The FVP can be terminated with Ctrl+C.
+{{% /notice %}}
+
+
+
+```output
+
+```
+
+
+You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch Neural Network.
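For reference, the model this file describes — two linear layers with a ReLU in between — can be sketched as follows. This is a hypothetical illustration in plain NumPy, since the actual `simple_nn.py` source (which uses PyTorch and the ExecuTorch `.pte` export) is not shown in the hunks above; the layer sizes (4 inputs, 8 hidden units, 2 outputs) are assumptions, though the 2-class output matches the `tensor(sizes=[1, 2], ...)` seen in the runner output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 4 input features, 8 hidden units, 2 output classes.
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)

def forward(x: np.ndarray) -> np.ndarray:
    """Linear -> ReLU -> Linear, the structure described in the text."""
    h = np.maximum(x @ W1 + b1, 0.0)  # first linear layer + ReLU
    return h @ W2 + b2                # second linear layer (class scores)

logits = forward(rng.standard_normal((1, 4)))
print(logits.shape)  # -> (1, 2): one sample, two class scores
```

The real learning-path version defines the same shape of network as a `torch.nn.Module` and lowers it to a `.pte` file for the `executor_runner` shown above.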

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md

Lines changed: 1 addition & 1 deletion
@@ -61,4 +61,4 @@ pkill -f buck
 
 If you don't have the Grove AI vision board, use the Corstone-300 FVP proceed to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/)
 
-If you have the Grove board proceed o to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)
+If you have the Grove board proceed to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md

Lines changed: 1 addition & 3 deletions
@@ -26,6 +26,4 @@ Test that the setup was successful by running the `run.sh` script.
 ./run.sh
 ```
 
-TODO connect this part to simple_nn.py part?
-
-You will see a number of examples run on the FVP. This means you can proceed to the next section to test your environment setup.
+You will see a number of examples run on the FVP. This means you can proceed to the next section [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup.

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md

Lines changed: 3 additions & 10 deletions
@@ -35,23 +35,16 @@ Grove Vision V2 [Edge impulse Firmware](https://cdn.edgeimpulse.com/firmware/see
 
 ![Board connection](Connect.png)
 
+{{% notice Note %}}
+Ensure the board is properly connected and recognized by your computer.
+{{% /notice %}}
 
 3. In the extracted Edge Impulse firmware, locate and run the installation scripts to flash your device.
 
 ```console
 ./flash_linux.sh
 ```
 
-4. Configure Edge Impulse for the board
-in your terminal, run:
-
-```console
-edge-impulse-daemon
-```
-Follow the prompts to log in.
-
-5. If successful, you should see your Grove - Vision AI Module V2 under 'Devices' in Edge Impulse.
-
 
 ## Next Steps
 1. Go to [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/) to test your environment setup.

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/troubleshooting-6.md

Lines changed: 0 additions & 21 deletions
This file was deleted.
