Commit da0b44f

Merge branch 'ArmDeveloperEcosystem:main' into main
2 parents 38552d7 + c7918ff commit da0b44f

20 files changed: +635, -117 lines
Lines changed: 4 additions & 5 deletions
@@ -8,12 +8,11 @@ layout: learningpathall

## TinyML

-
-This Learning Path is about TinyML. It is a starting point for learning how innovative AI technologies can be used on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine and target device to facilitate compilation and ensure smooth integration across devices.
+This Learning Path is about TinyML. It is a starting point for learning how innovative AI technologies can be used on even the smallest of devices, making Edge AI more accessible and efficient. You will learn how to set up your host machine to facilitate compilation and ensure smooth integration across devices.

This section provides an overview of the domain with real-life use cases and available devices.

-TinyML represents a significant shift in Machine Learning deployment. Unlike traditional Machine Learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and fewer processing capabilities.
+TinyML represents a significant shift in Machine Learning deployment. Unlike traditional Machine Learning, which typically depends on cloud-based servers or high-performance hardware, TinyML is tailored to function on devices with limited resources, constrained memory, low power, and fewer processing capabilities.

TinyML has gained popularity because it enables AI applications to operate in real-time, directly on the device, with minimal latency, enhanced privacy, and the ability to work offline. This shift opens up new possibilities for creating smarter and more efficient embedded systems.
@@ -36,7 +35,7 @@ Here are some of the key benefits of TinyML on Arm:
TinyML is being deployed across multiple industries, enhancing everyday experiences and enabling groundbreaking solutions. The table below shows some examples of TinyML applications.

-| Area | Device, Arm IP | Description |
+| Area | Example, Arm IP | Description |
| ------ | ------- | ------------ |
| Healthcare | Fitbit Charge 5, Cortex-M | To monitor vital signs such as heart rate, detect arrhythmias, and provide real-time feedback. |
| Agriculture | OpenAg, Cortex-M | To monitor soil moisture and optimize water usage. |
@@ -70,4 +69,4 @@ In addition to hardware, there are software platforms that can help you build Ti
Edge Impulse offers a suite of tools for developers to build and deploy TinyML applications on Arm-based devices. It supports devices like Raspberry Pi, Arduino, and STMicroelectronics boards.

-Now that you have an overview of the subject, you can move on to the next section where you will set up an environment on your host machine.
+Now that you have an overview of the subject, you can move on to the next section where you will set up a development environment.
Lines changed: 9 additions & 8 deletions
@@ -1,17 +1,23 @@
---
# User change
-title: "Environment Setup on Host Machine"
+title: "Install ExecuTorch"

weight: 3

# Do not modify these elements
layout: "learningpathall"
---

-In this section, you will prepare a development environment to compile a Machine Learning model. These instructions have been tested on Ubuntu 22.04, 24.04, and on Windows Subsystem for Linux (WSL).
+In this section, you will prepare a development environment to compile a machine learning model.
+
+## Introduction to ExecuTorch
+
+ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm’s efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm’s compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.

## Install dependencies

+These instructions have been tested on Ubuntu 22.04, 24.04, and on Windows Subsystem for Linux (WSL).
+
Python3 is required and comes installed with Ubuntu, but some additional packages are needed:

```bash
@@ -45,7 +51,6 @@ Run the commands below to set up the ExecuTorch internal dependencies:
```bash
git submodule sync
git submodule update --init
-./install_requirements.sh
./install_executorch.sh
```

@@ -70,8 +75,4 @@ executorch 0.6.0a0+3eea1f1
## Next Steps

-Your next steps depend on your hardware.
-
-If you have the Grove Vision AI Module, proceed to [Set up the Grove Vision AI Module V2 Learning Path](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/).
-
-If you do not have the Grove Vision AI Module, you can use the Corstone-320 FVP instead. See the Learning Path [Set up the Corstone-320 FVP](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).
+Proceed to the next section to learn about and set up the virtualized hardware.
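
For context on the renamed Install ExecuTorch page: once the ExecuTorch packages are installed, a PyTorch module is typically exported to a `.pte` file that the runtime loads. The following is a minimal illustrative sketch, assuming the standard `torch.export` and `executorch.exir` Python APIs; exact names can vary between ExecuTorch releases, and the Learning Path builds its own model later, in Build a Simple PyTorch Model.

```python
# Minimal ExecuTorch export sketch (assumes the torch.export and
# executorch.exir APIs; details may differ between ExecuTorch releases).
import torch
from torch.export import export
from executorch.exir import to_edge


class AddModule(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return x + y


example_inputs = (torch.randn(3), torch.randn(3))

# Capture the module, lower it to the Edge dialect, then to an ExecuTorch program.
program = to_edge(export(AddModule(), example_inputs)).to_executorch()

# Serialize the program to a .pte file that the ExecuTorch runtime can load.
with open("add.pte", "wb") as f:
    f.write(program.buffer)
```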
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
---
# User change
title: "Set up the Corstone-320 FVP"

weight: 5 # 1 is first, 2 is second, etc.

# Do not modify these elements
layout: "learningpathall"
---

In this section, you will run scripts to set up the Corstone-320 reference package.

The Corstone-320 Fixed Virtual Platform (FVP) is a pre-silicon software development environment for Arm-based microcontrollers. It provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm’s Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.

The Corstone reference system is provided free of charge, although you will have to accept the license in the next step. For more information on Corstone-320, check out the [official documentation](https://developer.arm.com/documentation/109761/0000?lang=en).

## Corstone-320 FVP Setup for ExecuTorch

Navigate to the Arm examples directory in the ExecuTorch repository and run the setup script:

```bash
cd $HOME/executorch/examples/arm
./setup.sh --i-agree-to-the-contained-eula
```

After the script has finished running, it prints a command to run to finalize the installation. This step adds the FVP executables to your system path.

```bash
source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
```

Test that the setup was successful by running the `run.sh` script for Ethos-U85, which is the target device for Corstone-320:

```bash
./examples/arm/run.sh --target=ethos-u85-256
```

You will see a number of examples run on the FVP.

This confirms the installation, so you can now proceed to the Learning Path [Build a Simple PyTorch Model](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/build-model-8/).
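
As an optional sanity check after sourcing `setup_path.sh`, you can confirm that the FVP executable is visible on your `PATH`. The sketch below is only illustrative: the binary name `FVP_Corstone_SSE-320` is taken from the run command used later in this Learning Path, and support for a `--version` flag is assumed rather than confirmed here.

```python
# Illustrative check that the Corstone-320 FVP is on PATH.
# The --version flag is assumed; the binary name matches the FVP
# invocation shown later in this Learning Path.
import shutil
import subprocess

fvp = shutil.which("FVP_Corstone_SSE-320")
if fvp is None:
    print("FVP not found on PATH - re-run: source examples/arm/ethos-u-scratch/setup_path.sh")
else:
    print(f"Found FVP at {fvp}")
    subprocess.run([fvp, "--version"], check=False)
```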
Lines changed: 1 addition & 3 deletions
@@ -106,8 +106,6 @@ FVP_Corstone_SSE-320 \
-a "$ET_HOME/examples/arm/executor_runner/cmake-out/arm_executor_runner"
```

-
-
{{% notice Note %}}

The argument `mps4_board.visualisation.disable-visualisation=1` disables the FVP GUI. This can speed up launch time for the FVP.
@@ -124,4 +122,4 @@ I [executorch:arm_executor_runner.cpp:412] Model in 0x70000000 $
I [executorch:arm_executor_runner.cpp:414] Model PTE file loaded. Size: 3360 bytes.
```

-You have now set up your environment for TinyML development on Arm, and tested a small PyTorch and ExecuTorch Neural Network.
+You have now set up your environment for TinyML development on Arm, and tested a small PyTorch and ExecuTorch Neural Network. In the next Learning Path of this series, you will learn about optimizing neural networks to run on Arm.

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/_index.md

Lines changed: 9 additions & 12 deletions
@@ -9,12 +9,11 @@ learning_objectives:
- Describe what differentiates TinyML from other AI domains.
- Describe the benefits of deploying AI models on Arm-based edge devices.
- Identify suitable Arm-based devices for TinyML applications.
-- Set up and configure a TinyML development environment using ExecuTorch and Corstone-320 FVP.
+- Set up and configure a TinyML development environment using ExecuTorch and Corstone-320 Fixed Virtual Platform (FVP).

prerequisites:
-- Basic knowledge of Machine Learning concepts.
-- A Linux host machine or VM running Ubuntu 22.04 or higher.
-- A [Grove Vision AI Module](https://wiki.seeedstudio.com/Grove-Vision-AI-Module/) or an Arm license to run the Corstone-320 Fixed Virtual Platform (FVP).
+- Basic knowledge of Machine Learning concepts
+- A Linux computer

author: Dominica Abena O. Amanfo
@@ -37,23 +36,21 @@ tools_software_languages:
- ExecuTorch
- Arm Compute Library
- GCC
-- Edge Impulse
-- Node.js

further_reading:
  - resource:
-      title: TinyML Brings AI to Smallest Arm Devices
+      title: TinyML Brings AI to Smallest Arm Devices
      link: https://newsroom.arm.com/blog/tinyml
      type: blog
  - resource:
-      title: Arm Compiler for Embedded
-      link: https://developer.arm.com/Tools%20and%20Software/Arm%20Compiler%20for%20Embedded
+      title: Arm Machine Learning Resources
+      link: https://www.arm.com/developer-hub/embedded-and-microcontrollers/ml-solutions/getting-started
      type: documentation
  - resource:
-      title: Arm GNU Toolchain
-      link: https://developer.arm.com/Tools%20and%20Software/GNU%20Toolchain
+      title: Arm Developers Guide for Cortex-M Processors and Ethos-U NPU
+      link: https://developer.arm.com/documentation/109267/0101
      type: documentation
-
+

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-FVP.md

Lines changed: 0 additions & 34 deletions
This file was deleted.

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-Grove.md

Lines changed: 0 additions & 50 deletions
This file was deleted.

content/learning-paths/mobile-graphics-and-gaming/ams/ga.md

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@ Graphics Analyzer is a tool to help `OpenGL ES` and `Vulkan` developers get the

The tool allows you to observe API call arguments and return values, and interact with a running target application to investigate the effect of individual API calls. It highlights attempted misuse of the API, and gives recommendations for improvements.

+**Note:** Graphics Analyzer is no longer in active development. You can still get Graphics Analyzer as part of [Arm Performance Studio 2024.2](https://artifacts.tools.arm.com/arm-performance-studio/2024.2/), but it is no longer available in later versions of the suite. For a more lightweight tool, try [Frame Advisor](https://developer.arm.com/Tools%20and%20Software/Frame%20Advisor), which enables you to capture and analyze rendering and geometry data for a single frame. For graphics debugging, we recommend RenderDoc for Arm GPUs. Both tools are available for free as part of [Arm Performance Studio](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio).
+
## Prerequisites

Build your application, and set up your Android device as described in [Setup tasks](/learning-paths/mobile-graphics-and-gaming/ams/setup_tasks/).
Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
---
title: How to run an AI Agent Application on CPU with llama.cpp and llama-cpp-agent using KleidiAI

minutes_to_complete: 45

who_is_this_for: This Learning Path is for software developers, ML engineers, and those looking to run an AI Agent application locally.

learning_objectives:
- Set up llama-cpp-python optimised for Arm servers.
- Learn how to optimise LLMs to run locally.
- Learn how to create custom tools for ML models.
- Learn how to use AI Agents for applications.

prerequisites:
- An AWS Graviton instance (m7g.xlarge)
- Basic understanding of Python and Prompt Engineering
- Understanding of LLM fundamentals.

author: Andrew Choi

### Tags
skilllevels: Introductory
subjects: ML
armips:
- Neoverse
tools_software_languages:
- Python
- AWS Graviton
operatingsystems:
- Linux


further_reading:
  - resource:
      title: llama.cpp
      link: https://github.com/ggml-org/llama.cpp
      type: documentation
  - resource:
      title: llama-cpp-agent
      link: https://llama-cpp-agent.readthedocs.io/en/latest/
      type: documentation


### FIXED, DO NOT MODIFY
# ================================================================================
weight: 1 # _index.md always has weight of 1 to order correctly
layout: "learningpathall" # All files under learning paths have this same wrapper
learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
---
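
To give a flavor of what this new Learning Path leads toward, here is a minimal llama-cpp-python sketch. It is only an illustration of the library named in the objectives; the model path, context size, and thread count are placeholders rather than values taken from the Learning Path itself.

```python
# Illustrative llama-cpp-python usage (model path and parameters are
# placeholders, not taken from the Learning Path content).
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",  # placeholder: any quantized GGUF model
    n_ctx=2048,               # context window size
    n_threads=4,              # match the instance's CPU core count
)

output = llm("Q: What is an AI agent? A:", max_tokens=64)
print(output["choices"][0]["text"])
```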
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
---
# ================================================================================
# FIXED, DO NOT MODIFY THIS FILE
# ================================================================================
weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation.
title: "Next Steps" # Always the same, html page title.
layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
---
