Commit 033a357 (merge of 2 parents: 70d1c24 + 30cabba)

File tree: 66 files changed (+485 −299 lines)


.github/workflows/deploy.yml

Lines changed: 2 additions & 2 deletions
@@ -26,7 +26,7 @@ env:
 jobs:
   build_and_deploy:
     # The type of runner that the job will run on
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-24.04-arm
     permissions:
       id-token: write
       contents: read
@@ -59,7 +59,7 @@ jobs:
       run: |
         hugo --minify
         cp learn-image-sitemap.xml public/learn-image-sitemap.xml
-        bin/pagefind --site "public"
+        bin/pagefind.aarch64 --site "public"
       env:
         HUGO_LLM_API: ${{ secrets.HUGO_LLM_API }}
         HUGO_RAG_API: ${{ secrets.HUGO_RAG_API }}
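The two changes above belong together: moving the job to an Arm runner (`ubuntu-24.04-arm`) means the build host is aarch64, so the workflow switches to the matching `bin/pagefind.aarch64` binary. As a hedged sketch (not part of the commit), a build step could instead select the binary by inspecting `uname -m` rather than hard-coding it:

```shell
# Illustrative only: pick the Pagefind binary that matches the host
# architecture. The two paths come from the diff above; the branching
# logic is an assumption, not something this workflow actually does.
case "$(uname -m)" in
  aarch64|arm64) PAGEFIND="bin/pagefind.aarch64" ;;
  *)             PAGEFIND="bin/pagefind" ;;
esac
echo "Using $PAGEFIND"
```

Hard-coding the aarch64 name, as the commit does, is simpler and is safe while the runner label stays fixed; the branch above only matters if the same step must run on mixed architectures.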

.wordlist.txt

Lines changed: 14 additions & 1 deletion
@@ -3558,4 +3558,17 @@ threadCount
 threadNum
 useAPL
 vvenc
-workspaces
+workspaces
+ETDump
+ETRecord
+FAISS
+IVI
+PDFs
+Powertrain
+SpinTheCubeInGDI
+TaaS
+cloudsdk
+highcpu
+proj
+sln
+uploader

content/install-guides/cmake.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ This article provides quick instructions to install CMake for Arm Linux distribu
 
 ### How do I download and install CMake for Windows on Arm?
 
-Confirm you are using a Windows on Arm device such as Windows Dev Kit 2023 or a laptop such as Lenovo ThinkPad X13s or Surface Pro 9 with 5G.
+Confirm you are using a Windows on Arm device such as the Lenovo ThinkPad X13s or Surface Pro 9 with 5G.
 
 ### How do I download and install CMake for Arm Linux distributions?
 
content/install-guides/windows-perf-vs-extension.md

Lines changed: 3 additions & 3 deletions
@@ -41,13 +41,13 @@ The WindowsPerf GUI extension is composed of several key features, each designed
 - **Output Logging**: All commands executed through the GUI are logged, ensuring transparency and supporting performance analysis.
 - **Sampling UI**: Customize your sampling experience by selecting events, setting frequency and duration, choosing programs for sampling, and comprehensively analyzing results. See screenshot below.
 
-![Sampling preview #center](../_images/wperf-vs-extension-sampling-preview.png "Sampling settings UI Overview")
+![Sampling preview #center](/install_guides/_images/wperf-vs-extension-sampling-preview.png "Sampling settings UI Overview")
 
 
 - **Counting Settings UI**: Build a `wperf stat` command from scratch using the configuration interface, then view the output in the IDE or open it with Windows Performance Analyzer (WPA). See screenshot below.
 
 
-![Counting preview #center](../_images/wperf-vs-extension-counting-preview.png "Counting settings UI Overview")
+![Counting preview #center](/install_guides/_images/wperf-vs-extension-counting-preview.png "Counting settings UI Overview")
 
 ## Before you begin
 
@@ -69,7 +69,7 @@ To install the WindowsPerf Visual Studio Extension from Visual Studio:
 4. Click on the search bar (Ctrl+L) and type `WindowsPerf`.
 5. Click on the **Install** button and restart Visual Studio.
 
-![WindowsPerf install page #center](../_images/wperf-vs-extension-install-page.png)
+![WindowsPerf install page #center](/install_guides/_images/wperf-vs-extension-install-page.png)
 
 ### Installation from GitHub
 
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md

Lines changed: 4 additions & 4 deletions
@@ -6,12 +6,12 @@ weight: 2
 layout: learningpathall
 ---
 
-This Learning Path is about TinyML. It serves as a starting point for learning how cutting-edge AI technologies may be put on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to setup on your host machine and target device to facilitate compilation and ensure smooth integration across all devices.
+This Learning Path is about TinyML. It serves as a starting point for learning how cutting-edge AI technologies may be used on even the smallest devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine and target device to facilitate compilation and ensure smooth integration across devices.
 
 In this section, you get an overview of the domain with real-life use-cases and available devices.
 
 ## Overview
-TinyML represents a significant shift in machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and less processing capabilities. TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.
+TinyML represents a significant shift in machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and less processing capabilities. TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.
 
 ### Benefits and applications
 
@@ -42,7 +42,7 @@ TinyML is being deployed across multiple industries, enhancing everyday experien
 
 ### Examples of Arm-based devices
 
-There are many Arm-based off-the-shelf devices you can use for TinyML projects. Some of them are listed below, but the list is not exhaustive.
+There are many Arm-based devices you can use for TinyML projects. Some of them are listed below, but the list is not exhaustive.
 
 #### Raspberry Pi 4 and 5
 
@@ -64,6 +64,6 @@ The Arduino Nano, equipped with a suite of sensors, supports TinyML and is ideal
 
 In addition to hardware, there are software platforms that can help you build TinyML applications.
 
-Edge Impulse platform offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards.
+Edge Impulse offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards.
 
 Now that you have an overview of the subject, move on to the next section where you will set up an environment on your host machine.

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md

Lines changed: 2 additions & 2 deletions
@@ -14,13 +14,13 @@ learning_objectives:
 - Understand the benefits of deploying AI models on Arm-based edge devices.
 - Select Arm-based devices for TinyML.
 - Install and configure a TinyML development environment.
-- Perform best practices for ensuring optimal performance on constrained edge devices.
+- Apply best practices for ensuring optimal performance on constrained edge devices.
 
 
 prerequisites:
 - Basic knowledge of machine learning concepts.
 - A Linux host machine or VM running Ubuntu 22.04 or higher.
-- A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) **or** an Arm license to run the Corstone-300 Fixed Virtual Platform (FVP).
+- A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) or an Arm license to run the Corstone-300 Fixed Virtual Platform (FVP).
 
 
 author_primary: Dominica Abena O. Amanfo

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_next-steps.md

Lines changed: 8 additions & 0 deletions
@@ -9,6 +9,14 @@ further_reading:
     title: TinyML Brings AI to Smallest Arm Devices
     link: https://newsroom.arm.com/blog/tinyml
     type: blog
+  - resource:
+    title: Arm Compiler for Embedded
+    link: https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded
+    type: documentation
+  - resource:
+    title: Arm GNU Toolchain
+    link: https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain
+    type: documentation
 
 
 
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8.md

Lines changed: 33 additions & 7 deletions
@@ -1,15 +1,14 @@
 ---
 # User change
-title: "Build a Simple PyTorch Model"
+title: "Build a simple PyTorch model"
 
 weight: 7 # 1 is first, 2 is second, etc.
 
 # Do not modify these elements
 layout: "learningpathall"
 ---
 
-TODO connect this part with the FVP/board?
-With our environment ready, you can create a simple program to test the setup.
+With the development environment ready, you can create a simple PyTorch model to test the setup.
 
 This example defines a small feedforward neural network for a classification task. The model consists of 2 linear layers with ReLU activation in between.
 
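The `simple_nn.py` script itself is not shown in the visible diff, so here is a hedged sketch of what a network matching that description could look like in PyTorch. The class name, layer sizes, and input dimension are illustrative assumptions, not taken from the commit; only the two-output final layer is suggested by the `Output 0: tensor(sizes=[1, 2], ...)` line further down the page.

```python
import torch
import torch.nn as nn

# Hedged sketch: two linear layers with a ReLU in between, as the text
# describes. All sizes below are illustrative assumptions.
class SimpleNN(nn.Module):
    def __init__(self, in_features=10, hidden=16, num_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # fc1 -> ReLU -> fc2
        return self.fc2(self.relu(self.fc1(x)))

model = SimpleNN()
out = model(torch.randn(1, 10))
print(out.shape)  # torch.Size([1, 2])
```

Exporting such a model to the `.pte` format mentioned later would additionally go through `torch.export` and the ExecuTorch lowering APIs, which the visible diff does not show.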
@@ -62,7 +61,7 @@ print("Model successfully exported to simple_nn.pte")
 
 Run the model from the Linux command line:
 
-```console
+```bash
 python3 simple_nn.py
 ```
 
@@ -76,15 +75,15 @@ The model is saved as a .pte file, which is the format used by ExecuTorch for de
 
 Run the ExecuTorch version, first build the executable:
 
-```console
+```bash
 # Clean and configure the build system
 (rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
 
 # Build the executor_runner target
 cmake --build cmake-out --target executor_runner -j$(nproc)
 ```
 
-You see the build output and it ends with:
+You will see the build output and it ends with:
 
 ```output
 [100%] Linking CXX executable executor_runner
@@ -93,7 +92,7 @@ You see the build output and it ends with:
 
 When the build is complete, run the executor_runner with the model as an argument:
 
-```console
+```bash
 ./cmake-out/executor_runner --model_path simple_nn.pte
 ```
 
@@ -112,3 +111,30 @@ Output 0: tensor(sizes=[1, 2], [-0.105369, -0.178723])
 
 When the model execution completes successfully, you’ll see confirmation messages similar to those above, indicating successful loading, inference, and output tensor shapes.
 
+
+
+TODO: Debug issues when running the model on the FVP, kindly ignore anything below this
+## Running the model on the Corstone-300 FVP
+
+
+Run the model using:
+
+```bash
+FVP_Corstone_SSE-300_Ethos-U55 -a simple_nn.pte -C mps3_board.visualisation.disable-visualisation=1
+```
+
+{{% notice Note %}}
+
+-C mps3_board.visualisation.disable-visualisation=1 disables the FVP GUI. This can speed up launch time for the FVP.
+
+The FVP can be terminated with Ctrl+C.
+{{% /notice %}}
+
+
+
+```output
+
+```
+
+
+You've now set up your environment for TinyML development, and tested a PyTorch and ExecuTorch Neural Network.

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md

Lines changed: 8 additions & 6 deletions
@@ -8,7 +8,7 @@ weight: 3
 layout: "learningpathall"
 ---
 
-In this section, you will prepare a development environment to compile the model. These instructions have been tested on Ubuntu 22.04, 24.04 and on Windows Subsystem for Linux (WSL).
+In this section, you will prepare a development environment to compile a machine learning model. These instructions have been tested on Ubuntu 22.04, 24.04 and on Windows Subsystem for Linux (WSL).
 
 ## Install dependencies
 
@@ -27,7 +27,7 @@ Create a Python virtual environment using `python venv`.
 python3 -m venv $HOME/executorch-venv
 source $HOME/executorch-venv/bin/activate
 ```
-The prompt of your terminal now has (executorch) as a prefix to indicate the virtual environment is active.
+The prompt of your terminal now has `(executorch)` as a prefix to indicate the virtual environment is active.
 
 
 ## Install Executorch
@@ -40,11 +40,11 @@ git clone https://github.com/pytorch/executorch.git
 cd executorch
 ```
 
-Run a few commands to set up the ExecuTorch internal dependencies.
+Run the commands below to set up the ExecuTorch internal dependencies.
+
 ```bash
 git submodule sync
 git submodule update --init
-
 ./install_requirements.sh
 ```

@@ -59,6 +59,8 @@ pkill -f buck
 
 ## Next Steps
 
-If you don't have the Grove AI vision board, use the Corstone-300 FVP proceed to [Environment Setup Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/)
+Your next step depends on the hardware you have.
+
+If you have the Grove Vision AI Module, proceed to [Set up the Grove Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/).
 
-If you have the Grove board proceed o to [Setup on Grove - Vision AI Module V2](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/)
+If you don't have the Grove Vision AI Module, you can use the Corstone-300 FVP instead; proceed to [Set up the Corstone-300 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md

Lines changed: 8 additions & 4 deletions
@@ -10,22 +10,26 @@ layout: "learningpathall"
 
 ## Corstone-300 FVP Setup for ExecuTorch
 
-Navigate to the Arm examples directory in the ExecuTorch repository.
+Navigate to the Arm examples directory in the ExecuTorch repository and configure the Fixed Virtual Platform (FVP).
+
 ```bash
 cd $HOME/executorch/examples/arm
 ./setup.sh --i-agree-to-the-contained-eula
 ```
 
+Set the environment variables for the FVP.
+
 ```bash
 export FVP_PATH=$(pwd)/ethos-u-scratch/FVP-corstone300/models/Linux64_GCC-9.3
 export PATH=$FVP_PATH:$PATH
 ```
-Test that the setup was successful by running the `run.sh` script.
+
+Confirm the installation was successful by running the `run.sh` script.
 
 ```bash
 ./run.sh
 ```
 
-TODO connect this part to simple_nn.py part?
+You will see a number of examples run on the FVP.
 
-You will see a number of examples run on the FVP. This means you can proceed to the next section to test your environment setup.
+This confirms the installation, and you can proceed to the next section [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/).