content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/1-overview.md (+9 −7)

@@ -6,32 +6,34 @@ weight: 2
 layout: learningpathall
 ---

-## TinyML
+## Overview

 This Learning Path is about TinyML. It is a starting point for learning how innovative AI technologies can be used on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine to facilitate compilation and ensure smooth integration across devices.

 This section provides an overview of the domain with real-life use cases and available devices.

+## What is TinyML?
+
 TinyML represents a significant shift in Machine Learning deployment. Unlike traditional Machine Learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and fewer processing capabilities.

 TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.

-###Benefits and applications
+## Benefits and applications

 The benefits of TinyML align well with the Arm architecture, which is widely used in IoT, mobile devices, and edge AI deployments.

 Here are some of the key benefits of TinyML on Arm:

--**Power Efficiency**: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.
+- Power efficiency: TinyML models are designed to be extremely power-efficient, making them ideal for battery-operated devices like sensors, wearables, and drones.

--**Low Latency**: AI processing happens on-device, so there is no need to send data to the cloud, which reduces latency and enables real-time decision-making.
+- Low latency: AI processing happens on-device, so there is no need to send data to the cloud, which reduces latency and enables real-time decision-making.

--**Data Privacy**: With on-device computation, sensitive data remains local, providing enhanced privacy and security. This is a priority in healthcare and personal devices.
+- Data privacy: with on-device computation, sensitive data remains local, providing enhanced privacy and security. This is a priority in healthcare and personal devices.

--**Cost-Effective**: Arm devices, which are cost-effective and scalable, can now handle sophisticated Machine Learning tasks, reducing the need for expensive hardware or cloud services.
+- Cost-effective: Arm devices, which are cost-effective and scalable, can now handle sophisticated machine learning tasks, reducing the need for expensive hardware or cloud services.

--**Scalability**: With billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.
+- Scalability: with billions of Arm devices in the market, TinyML is well-suited for scaling across industries, enabling widespread adoption of AI at the edge.

 TinyML is being deployed across multiple industries, enhancing everyday experiences and enabling groundbreaking solutions. The table below shows some examples of TinyML applications.
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/2-env-setup.md (+4 −7)

@@ -7,9 +7,6 @@ weight: 3
 # Do not modify these elements
 layout: "learningpathall"
 ---
-
-
 In this section, you will prepare a development environment to compile a machine learning model.
-
 ## Introduction to ExecuTorch

 ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm's efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm's compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.

@@ -18,7 +15,7 @@ ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch

 These instructions have been tested on Ubuntu 22.04, 24.04, and on Windows Subsystem for Linux (WSL).

-Python3 is required and comes installed with Ubuntu, but some additional packages are needed:
+Python 3 is required and comes installed with Ubuntu, but some additional packages are needed:
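The package list itself is truncated in this diff. As an illustration only, the preparation step for a setup like this commonly creates an isolated Python virtual environment for the ExecuTorch tooling; the venv path below is an assumption, not taken from the Learning Path:

```shell
# Hypothetical sketch -- the venv location is an assumption; the Learning
# Path gives the exact packages to install (python3-venv and pip are
# typical prerequisites on Ubuntu).
# Create an isolated environment so ExecuTorch's Python dependencies
# do not interfere with system packages:
python3 -m venv "$HOME/executorch-venv"
# Activate it for the current shell session:
source "$HOME/executorch-venv/bin/activate"
```

Activating the environment makes `python3` and `pip` resolve to the venv copies for the rest of the session.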
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/3-env-setup-fvp.md (+7 −5)

@@ -8,22 +8,24 @@ weight: 5 # 1 is first, 2 is second, etc.
 layout: "learningpathall"
 ---

-In this section, you will run scripts to set up the Corstone-320 reference package.
+## Overview

-The Corstone-320 Fixed Virtual Platform (FVP) is a pre-silicon software development environment for Arm-based microcontrollers. It provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
+In this section, you run scripts to set up the Corstone-320 reference package.
+
+The Corstone-320 Fixed Virtual Platform (FVP) is a pre-silicon software development environment for Arm-based microcontrollers. It provides a virtual representation of hardware so you can test and optimize software before boards are available. Designed for AI and machine learning workloads, it includes support for Arm Ethos-U NPUs and Cortex-M processors, which makes it well-suited to embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.

 The Corstone reference system is provided free of charge, although you will have to accept the license in the next step. For more information on Corstone-320, check out the [official documentation](https://developer.arm.com/documentation/109761/0000?lang=en).

-## Corstone-320 FVP Setup for ExecuTorch
+## Set up Corstone-320 FVP for ExecuTorch

-Run the FVP setup script in the ExecuTorch repository.
+Run the FVP setup script in the ExecuTorch repository:

-After the script has finished running, it prints a command to run to finalize the installation. This step adds the FVP executables to your system path.
+When the script completes, it prints a command to finalize the installation by adding the FVP executables to your `PATH`:
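The finalization command itself is not shown in this diff. As an illustration only, it has the general form below; the install directory is an assumption here, so use the path the setup script actually prints:

```shell
# Hypothetical example -- the setup script prints the real directory on
# your system; this path is an assumption for illustration.
FVP_DIR="$HOME/executorch/examples/arm/ethos-u-scratch/FVP-corstone320/models/Linux64_GCC-9.3"
# Appending the FVP binaries to PATH makes the simulator callable
# from any shell in this session:
export PATH="$PATH:$FVP_DIR"
```

Add the same line to your shell profile (for example `~/.bashrc`) if you want the FVP available in new sessions.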
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/4-build-model.md (+6 −8)

@@ -36,7 +36,7 @@ class SimpleNN(torch.nn.Module):
         return out

 # Create the model instance
-input_size = 10  # example input features size
+input_size = 10  # example input feature size
 hidden_size = 5  # hidden layer size
 output_size = 2  # number of output classes

@@ -52,7 +52,7 @@ ModelInputs = x
 print("Model successfully exported to simple_nn.pte")
 ```
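The diff only touches fragments of the example model. For context, here is a self-contained sketch of the kind of two-layer network the page builds (layer names here are assumptions, and producing a real `simple_nn.pte` additionally requires ExecuTorch's ahead-of-time export flow, which is omitted):

```python
import torch

class SimpleNN(torch.nn.Module):
    """Minimal feed-forward network matching the sizes in the diff:
    10 input features -> 5 hidden units -> 2 output classes.
    Layer names (fc1, fc2) are assumptions, not taken from the page."""
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = torch.nn.Linear(input_size, hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

input_size = 10   # example input feature size
hidden_size = 5   # hidden layer size
output_size = 2   # number of output classes

model = SimpleNN(input_size, hidden_size, output_size)
x = torch.randn(1, input_size)  # example input tensor

# The Learning Path then lowers the model to ExecuTorch's .pte format
# via its ahead-of-time export APIs (not shown); this sketch only
# exercises the eager-mode forward pass.
print(model(x).shape)
```

Running the sketch prints the output tensor shape, confirming the model maps a batch of 10-feature inputs to 2 class scores.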
-## Running the model on the Corstone-320 FVP
+## Run the model on the Corstone-320 FVP

 The final step is to take the Python-defined model and run it on the Corstone-320 FVP. This was done upon running the `run.sh` script in a previous section. To wrap up the Learning Path, you will perform these steps separately to better understand what happened under the hood. Start by setting some environment variables that are used by ExecuTorch.

@@ -61,7 +61,7 @@ export ET_HOME=$HOME/executorch
 export executorch_DIR=$ET_HOME/build
 ```

-Then, generate a model file on the `.pte` format using the Arm examples. The Ahead-of-Time (AoT) Arm compiler will enable optimizations for devices like the Grove Vision AI Module V2 and the Corstone-320 FVP. Run it from the ExecuTorch root directory.
+Generate a model in ExecuTorch `.pte` format using the Arm examples. The AoT Arm compiler enables optimizations for devices such as the Grove Vision AI Module V2 and the Corstone-320 FVP. Run the compiler from the ExecuTorch root directory:

-Now run the model on the Corstone-320 with the following command:
+Run the model on Corstone-320:

 ```bash
 FVP_Corstone_SSE-320 \

@@ -104,9 +104,7 @@ FVP_Corstone_SSE-320 \
 ```

 {{% notice Note %}}
-
-The argument `mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI. This can speed up launch time for the FVP.
-
+The argument `mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI and can speed up launch time.
 {{% /notice %}}

 Observe that the FVP loads the model file.

@@ -119,4 +117,4 @@ I [executorch:arm_executor_runner.cpp:412] Model in 0x70000000 $
 I [executorch:arm_executor_runner.cpp:414] Model PTE file loaded. Size: 3360 bytes.
 ```

-You have now set up your environment for TinyML development on Arm, and tested a small PyTorch and ExecuTorch Neural Network. In the next Learning Path of this series, you will learn about optimizing neural networks to run on Arm.
+You have now set up your environment for TinyML development on Arm and tested a small PyTorch model with ExecuTorch on the Corstone-320 FVP. In the next Learning Path, you learn how to optimize neural networks to run efficiently on Arm.
content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md (+4 −4)

@@ -6,10 +6,10 @@ minutes_to_complete: 40
 who_is_this_for: This is an introductory topic for developers and data scientists new to Tiny Machine Learning (TinyML) who want to explore its potential using PyTorch and ExecuTorch.

 learning_objectives:
-- Describe what differentiates TinyML from other AI domains.
-- Describe the benefits of deploying AI models on Arm-based edge devices.
-- Identify suitable Arm-based devices for TinyML applications.
-- Set up and configure a TinyML development environment using ExecuTorch and Corstone-320 Fixed Virtual Platform (FVP).
+- Describe what differentiates TinyML from other AI domains
+- Describe the benefits of deploying AI models on Arm-based edge devices
+- Identify suitable Arm-based devices for TinyML applications
+- Set up and configure a TinyML development environment using ExecuTorch and Corstone-320 Fixed Virtual Platform (FVP)