
Commit 5c6ca22

Merging 2 learning paths
1 parent ee5a2b6 commit 5c6ca22

File tree

10 files changed

+127
-142
lines changed


content/learning-paths/microcontrollers/env-setup-tinyml-on-arm/_index.md

Lines changed: 0 additions & 44 deletions
This file was deleted.

content/learning-paths/microcontrollers/env-setup-tinyml-on-arm/_next-steps.md

Lines changed: 0 additions & 27 deletions
This file was deleted.

content/learning-paths/microcontrollers/env-setup-tinyml-on-arm/_review.md

Lines changed: 0 additions & 45 deletions
This file was deleted.
Binary file not shown.

content/learning-paths/microcontrollers/env-setup-tinyml-on-arm/how-to-1.md

Lines changed: 0 additions & 10 deletions
This file was deleted.

content/learning-paths/microcontrollers/env-setup-tinyml-on-arm/how-to-2.md

Lines changed: 0 additions & 13 deletions
This file was deleted.

content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/Overview-1.md

Lines changed: 2 additions & 0 deletions
@@ -9,5 +9,7 @@ layout: learningpathall
  ## Module Overview
  This session delves into TinyML, which applies machine learning to devices with limited resources such as microcontrollers. This module serves as a starting point for learning how cutting-edge AI technologies can be deployed on even the smallest of devices, making Edge AI more accessible and efficient.
+ Additionally, we'll cover the necessary setup on your host machine and target device to facilitate cross-compilation and ensure smooth integration across all devices.
  ## Introduction to TinyML
  TinyML represents a significant shift in how we approach machine learning deployment. Unlike traditional machine learning, which typically depends on cloud-based servers or high-powered hardware, TinyML is tailored to run on devices with constrained memory, power, and processing capabilities. TinyML has quickly gained popularity because it enables AI applications to operate in real time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.

content/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/_index.md

Lines changed: 14 additions & 3 deletions
@@ -1,15 +1,18 @@
  ---
- title: Introduction to TinyML on Arm
+ title: Introduction to TinyML on Arm using PyTorch v2.0 and Executorch

- minutes_to_complete: 10
+ minutes_to_complete: 40

- who_is_this_for: This learning module is tailored for developers, engineers, and data scientists who are new to TinyML and interested in exploring its potential for edge AI. If you have an interest in deploying machine learning models on low-power, resource-constrained devices, this course will help you get started.
+ who_is_this_for: This learning module is tailored for developers, engineers, and data scientists who are new to TinyML and interested in exploring its potential for edge AI. If you have an interest in deploying machine learning models on low-power, resource-constrained devices, this course will help you get started using PyTorch v2.0 and Executorch on Arm-based platforms.

  learning_objectives:
  - Identify TinyML and how it differs from the AI you might be used to.
  - Understand the benefits of deploying AI models on Arm-based edge devices.
  - Select Arm-based devices for TinyML.
  - Identify real-world use cases demonstrating the impact of TinyML in various industries.
+ - Install and configure a TinyML development environment.
+ - Set up a cross-compilation environment on your host machine.
+ - Apply best practices for ensuring optimal performance on constrained edge devices.

  prerequisites:

@@ -29,6 +32,14 @@ operatingsystems:
  - Linux

  tools_software_languages:
+ - Corstone 300 FVP
+ - Grove - Vision AI Module V2
+ - Arm Virtual Hardware
+ - Python
+ - PyTorch v2.0
+ - Executorch
+ - Arm Compute Library
+ - GCC

  ### FIXED, DO NOT MODIFY
  # ================================================================================
Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@

---
# User change
title: "Environment Setup for TinyML Development on Arm"

weight: 6 # 1 is first, 2 is second, etc.

# Do not modify these elements
layout: "learningpathall"
---

## Before you begin

These instructions have been tested on:
- A GCP Arm-based Tau T2A virtual machine instance running Ubuntu 22.04 LTS.
- A host machine running Ubuntu 24.04 on the `x86_64` architecture.

The host machine is where you will perform most of your development work, especially cross-compiling code for the target Arm devices.

You can use your own Linux host machine or use [Arm Virtual Hardware (AVH)](https://www.arm.com/products/development-tools/simulation/virtual-hardware) for this Learning Path.

The Ubuntu version should be 20.04 or higher. The `x86_64` architecture is required because the Corstone-300 FVP is not currently available for the Arm architecture. You will need a Linux desktop to run the FVP because it opens graphical windows for input and output from the software applications. Although ExecuTorch supports Windows via WSL, resources there are limited.

If you want to use Arm Virtual Hardware, the [Arm Virtual Hardware install guide](/install-guides/avh#corstone) provides setup instructions.
### Compilers

The examples can be built with [Arm Compiler for Embedded](https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded) or the [Arm GNU Toolchain](https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain).

Both compilers are pre-installed in Arm Virtual Hardware.

Use the install guides to install the compilers on your computer:
- [Arm Compiler for Embedded](/install-guides/armclang/)
- [Arm GNU Toolchain](/install-guides/gcc/arm-gnu)

Alternatively, if you are using Arch Linux or one of its derivatives, you can install GCC with Pacman:

```console
pacman -S aarch64-linux-gnu-gcc
```
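A quick way to confirm the toolchain works is to cross-compile a small test program. This is a minimal sketch that assumes the GNU toolchain name `aarch64-linux-gnu-gcc` from the Pacman example above; adjust the prefix to match the toolchain you actually installed:

```console
# Create a one-line test program
cat > hello.c << 'EOF'
#include <stdio.h>
int main(void) { printf("Hello from Arm\n"); return 0; }
EOF

# Cross-compile it if the toolchain is on your PATH, then inspect the result
if command -v aarch64-linux-gnu-gcc >/dev/null 2>&1; then
    aarch64-linux-gnu-gcc -o hello hello.c
    file hello    # should report an AArch64 executable
else
    echo "aarch64-linux-gnu-gcc not found - check your toolchain installation"
fi
```

Note that the resulting binary targets the Arm device, so it will not run on the x86_64 host; copy it to the target to execute it.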
### Corstone-300 FVP {#fvp}

To install the Corstone-300 FVP on your computer, refer to the [install guide for Arm Ecosystem FVPs](/install-guides/fm_fvp).

The Corstone-300 FVP is pre-installed in Arm Virtual Hardware.
## Install ExecuTorch

1. Follow the [Setting Up ExecuTorch guide](https://pytorch.org/executorch/stable/getting-started-setup.html) to install it.

2. Activate the `executorch` virtual environment from the installation guide:

```console
conda activate executorch
```
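Before moving on, you can sanity-check the environment. This is only a quick importability probe (it assumes the setup guide's install steps put the `executorch` Python package into the active environment):

```console
# Report whether the executorch package is visible to the active Python
python3 -c "import importlib.util as u; print('ExecuTorch found' if u.find_spec('executorch') else 'ExecuTorch missing - re-run the setup guide')"
```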
## Setup on Grove - Vision AI Module V2

Due to its constrained environment, we'll focus on lightweight, optimized tools and models (these will be introduced in the next Learning Path).

1. Connect the Grove - Vision AI Module V2 to your computer using the USB-C cable.

![Board connection #center](connect.png)

2. Install the Edge Impulse CLI, which helps with data collection and model deployment:

```console
npm install -g edge-impulse-cli
```
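The Edge Impulse CLI runs on Node.js, which the step above assumes is already installed. A quick check before installing the CLI (the version hint is a suggestion, not a requirement stated by this Learning Path):

```console
# Confirm Node.js and npm are available before installing the CLI
command -v node >/dev/null 2>&1 && node --version || echo "Node.js not found - install it first"
command -v npm >/dev/null 2>&1 && npm --version || echo "npm not found - install it first"
```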
3. Configure Edge Impulse for the board. In your terminal, run:

```console
edge-impulse-daemon
```

Follow the prompts to log in.

4. Verify the setup by connecting to your device:

```console
edge-impulse-run-impulse --api-key YOUR_API_KEY
```

If successful, you should see data from your Grove - Vision AI Module V2.

5. Install ExecuTorch:

```console
pip install executorch
```
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@

---
title: Benefits of TinyML for Edge Computing on Arm Devices
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Troubleshooting
- If you encounter permission issues, try running the commands with `sudo`.
- Ensure your Grove - Vision AI Module V2 is properly connected and recognized by your computer.
- If the Edge Impulse CLI fails to detect your device, try unplugging and replugging the USB cable.
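If the board is still not detected, these standard Linux commands can help confirm whether it enumerates over USB at all (a sketch; the exact device name the board reports may differ):

```console
# List USB devices; replug the board and compare the output
command -v lsusb >/dev/null 2>&1 && lsusb || echo "lsusb not available - install usbutils"

# Check recent kernel messages for USB attach/detach events
dmesg 2>/dev/null | tail -n 20 || true
```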
## Best Practices
- Always cross-compile your code on the host machine to ensure compatibility with the target Arm device.
- Use model quantization techniques to optimize performance on constrained devices like the Grove - Vision AI Module V2.
- Regularly update your development environment and tools to benefit from the latest improvements in TinyML and edge AI technologies.

You've now set up your environment for TinyML development on the Grove - Vision AI Module V2. In the next modules, we'll explore data collection, model training, and deployment using PyTorch v2.0 and ExecuTorch.
