Commit 6b84c89

Merge pull request #2366 from madeline-underwood/tiny_ml_updates
Tiny ml updates_JA to review
2 parents dc04c2c + db9c11b commit 6b84c89

File tree

5 files changed: +30 additions, -31 deletions

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/1-overview.md

Lines changed: 9 additions & 7 deletions

@@ -6,32 +6,34 @@ weight: 2
 layout: learningpathall
 ---
 
-## TinyML
+## Overview
 
 This Learning Path is about TinyML. It is a starting point for learning how innovative AI technologies can be used on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine to facilitate compilation and ensure smooth integration across devices.
 
 This section provides an overview of the domain with real-life use cases and available devices.
+## What is TinyML?
+
 
 TinyML represents a significant shift in Machine Learning deployment. Unlike traditional Machine Learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and fewer processing capabilities.
 
 TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.
 
-### Benefits and applications
+## Benefits and applications
 
 The benefits of TinyML align well with the Arm architecture, which is widely used in IoT, mobile devices, and edge AI deployments.
 
 Here are some of the key benefits of TinyML on Arm:
 
 
-- **Power Efficiency**: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.
+- Power efficiency: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.
 
-- **Low Latency**: AI processing happens on-device, so there is no need to send data to the cloud, which reduces latency and enables real-time decision-making.
+- Low latency: AI processing happens on-device, so there is no need to send data to the cloud, which reduces latency and enables real-time decision-making.
 
-- **Data Privacy**: With on-device computation, sensitive data remains local, providing enhanced privacy and security. This is a priority in healthcare and personal devices.
+- Data privacy: with on-device computation, sensitive data remains local, providing enhanced privacy and security. This is a priority in healthcare and personal devices.
 
-- **Cost-Effective**: Arm devices, which are cost-effective and scalable, can now handle sophisticated Machine Learning tasks, reducing the need for expensive hardware or cloud services.
+- Cost-effective: Arm devices, which are cost-effective and scalable, can now handle sophisticated machine learning tasks, reducing the need for expensive hardware or cloud services.
 
-- **Scalability**: With billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.
+- Scalability: with billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.
 
 TinyML is being deployed across multiple industries, enhancing everyday experiences and enabling groundbreaking solutions. The table below shows some examples of TinyML applications.
 

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/2-env-setup.md

Lines changed: 4 additions & 7 deletions

@@ -7,9 +7,6 @@ weight: 3
 # Do not modify these elements
 layout: "learningpathall"
 ---
-
-In this section, you will prepare a development environment to compile a machine learning model.
-
 ## Introduction to ExecuTorch
 
 ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm's efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm's compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.
@@ -18,7 +15,7 @@ ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch
 
 These instructions have been tested on Ubuntu 22.04, 24.04, and on Windows Subsystem for Linux (WSL).
 
-Python3 is required and comes installed with Ubuntu, but some additional packages are needed:
+Python 3 is required and comes installed with Ubuntu, but some additional packages are needed:
 
 ```bash
 sudo apt update
@@ -36,7 +33,7 @@ source $HOME/executorch-venv/bin/activate
 The prompt of your terminal now has `(executorch)` as a prefix to indicate the virtual environment is active.
 
 
-## Install Executorch
+## Install ExecuTorch
 
 From within the Python virtual environment, run the commands below to download the ExecuTorch repository and install the required packages:
 
@@ -74,6 +71,6 @@ pip list | grep executorch
 executorch 1.1.0a0+1883128
 ```
 
-## Next Steps
+## Next steps
 
-Proceed to the next section to learn about and set up the virtualized hardware.
+Proceed to the next section to set up the virtualized hardware.
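
The virtual-environment step in this file can also be sketched from Python's standard library. This is illustration only: the tutorial uses `python3 -m venv $HOME/executorch-venv` from the shell; here a temporary directory stands in for `$HOME` so the sketch is side-effect free, and `with_pip=False` keeps it fast (the tutorial's venv does include pip).

```python
# Sketch only: mirrors the `python3 -m venv` step using the stdlib venv module.
# A temporary directory stands in for the tutorial's $HOME/executorch-venv.
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "executorch-venv")
venv.create(env_dir, with_pip=False)  # the tutorial's venv also installs pip

# The tutorial's `source <env>/bin/activate` points at this script:
activate = os.path.join(env_dir, "bin", "activate")
print(os.path.exists(activate))
```

On POSIX systems this prints `True`, confirming the activate script the tutorial sources was created.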

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/3-env-setup-fvp.md

Lines changed: 7 additions & 5 deletions

@@ -8,22 +8,24 @@ weight: 5 # 1 is first, 2 is second, etc.
 layout: "learningpathall"
 ---
 
-In this section, you will run scripts to set up the Corstone-320 reference package.
+## Overview
 
-The Corstone-320 Fixed Virtual Platform (FVP) is a pre-silicon software development environment for Arm-based microcontrollers. It provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
+In this section, you run scripts to set up the Corstone-320 reference package.
+
+The Corstone-320 Fixed Virtual Platform (FVP) is a pre-silicon software development environment for Arm-based microcontrollers. It provides a virtual representation of hardware so you can test and optimize software before boards are available. Designed for AI and machine learning workloads, it includes support for Arm Ethos-U NPUs and Cortex-M processors, which makes it well-suited to embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
 
 The Corstone reference system is provided free of charge, although you will have to accept the license in the next step. For more information on Corstone-320, check out the [official documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
 
-## Corstone-320 FVP Setup for ExecuTorch
+## Set up Corstone-320 FVP for ExecuTorch
 
-Run the FVP setup script in the ExecuTorch repository.
+Run the FVP setup script in the ExecuTorch repository:
 
 ```bash
 cd $HOME/executorch
 ./examples/arm/setup.sh --i-agree-to-the-contained-eula
 ```
 
-After the script has finished running, it prints a command to run to finalize the installation. This step adds the FVP executables to your system path.
+When the script completes, it prints a command to finalize the installation by adding the FVP executables to your `PATH`:
 
 ```bash
 source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
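
After sourcing `setup_path.sh`, the FVP executable invoked later in this Learning Path should resolve on `PATH`. A small hedged check, where the binary name `FVP_Corstone_SSE-320` is taken from the run step later in the path and nothing else is assumed:

```python
# Sketch: confirm the Corstone-320 FVP binary is reachable after setup_path.sh.
# shutil.which mirrors the shell's PATH lookup.
import shutil

fvp = shutil.which("FVP_Corstone_SSE-320")
if fvp is None:
    print("FVP_Corstone_SSE-320 not on PATH - source setup_path.sh first")
else:
    print("FVP found at", fvp)
```

If the check fails, re-run the `source` command above in the same shell; `setup_path.sh` only modifies the current shell session.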

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/4-build-model.md

Lines changed: 6 additions & 8 deletions

@@ -36,7 +36,7 @@ class SimpleNN(torch.nn.Module):
 return out
 
 # Create the model instance
-input_size = 10 # example input features size
+input_size = 10 # example input feature size
 hidden_size = 5 # hidden layer size
 output_size = 2 # number of output classes
 
@@ -52,7 +52,7 @@ ModelInputs = x
 print("Model successfully exported to simple_nn.pte")
 ```
 
-## Running the model on the Corstone-320 FVP
+## Run the model on the Corstone-320 FVP
 
 The final step is to take the Python-defined model and run it on the Corstone-320 FVP. This was done upon running the `run.sh` script in a previous section. To wrap up the Learning Path, you will perform these steps separately to better understand what happened under the hood. Start by setting some environment variables that are used by ExecuTorch.
 
@@ -61,7 +61,7 @@ export ET_HOME=$HOME/executorch
 export executorch_DIR=$ET_HOME/build
 ```
 
-Then, generate a model file on the `.pte` format using the Arm examples. The Ahead-of-Time (AoT) Arm compiler will enable optimizations for devices like the Grove Vision AI Module V2 and the Corstone-320 FVP. Run it from the ExecuTorch root directory.
+Generate a model in ExecuTorch `.pte` format using the Arm examples. The AoT Arm compiler enables optimizations for devices such as the Grove Vision AI Module V2 and the Corstone-320 FVP. Run the compiler from the ExecuTorch root directory:
 
 ```bash
 cd $ET_HOME
@@ -90,7 +90,7 @@ cmake --build $ET_HOME/examples/arm/executor_runner/cmake-out --parallel -- arm_
 
 ```
 
-Now run the model on the Corstone-320 with the following command:
+Run the model on Corstone-320:
 
 ```bash
 FVP_Corstone_SSE-320 \
@@ -104,9 +104,7 @@ FVP_Corstone_SSE-320 \
 ```
 
 {{% notice Note %}}
-
-The argument `mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI. This can speed up launch time for the FVP.
-
+The argument `mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI and can speed up launch time
 {{% /notice %}}
 
 Observe that the FVP loads the model file.
@@ -119,4 +117,4 @@ I [executorch:arm_executor_runner.cpp:412] Model in 0x70000000 $
 I [executorch:arm_executor_runner.cpp:414] Model PTE file loaded. Size: 3360 bytes.
 ```
 
-You have now set up your environment for TinyML development on Arm, and tested a small PyTorch and ExecuTorch Neural Network. In the next Learning Path of this series, you will learn about optimizing neural networks to run on Arm.
+You have now set up your environment for TinyML development on Arm and tested a small PyTorch model with ExecuTorch on the Corstone-320 FVP. In the next Learning Path, you learn how to optimize neural networks to run efficiently on Arm.
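
The diff for 4-build-model.md shows only fragments of the `SimpleNN` model (the sizes, the `# Create the model instance` comment, and `return out`). A hedged reconstruction for context; the layer names, the `ReLU` activation, and the body of `forward` are assumptions not present in the diff:

```python
# Hedged reconstruction of SimpleNN; only the sizes and `return out` appear in
# the diff. Layer names and the ReLU activation are illustrative assumptions.
import torch

class SimpleNN(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = torch.nn.Linear(input_size, hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# Sizes taken from the diff context
input_size = 10   # example input feature size
hidden_size = 5   # hidden layer size
output_size = 2   # number of output classes

model = SimpleNN(input_size, hidden_size, output_size)
x = torch.randn(1, input_size)
print(model(x).shape)  # torch.Size([1, 2])
```

Exporting such a model to the `.pte` format the tutorial runs on the FVP is handled by ExecuTorch's export flow, which the Learning Path drives through the Arm example scripts.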

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md

Lines changed: 4 additions & 4 deletions

@@ -6,10 +6,10 @@ minutes_to_complete: 40
 who_is_this_for: This is an introductory topic for developers and data scientists new to Tiny Machine Learning (TinyML) who want to explore its potential using PyTorch and ExecuTorch.
 
 learning_objectives:
-- Describe what differentiates TinyML from other AI domains.
-- Describe the benefits of deploying AI models on Arm-based edge devices.
-- Identify suitable Arm-based devices for TinyML applications.
-- Set up and configure a TinyML development environment using ExecuTorch and Corstone-320 Fixed Virtual Platform (FVP).
+- Describe what differentiates TinyML from other AI domains
+- Describe the benefits of deploying AI models on Arm-based edge devices
+- Identify suitable Arm-based devices for TinyML applications
+- Set up and configure a TinyML development environment using ExecuTorch and Corstone-320 Fixed Virtual Platform (FVP)
 
 prerequisites:
 - Basic knowledge of Machine Learning concepts

0 commit comments