Merged

69 commits
312a954
first iteration of json-ld
JoeStech Jul 9, 2025
58c8ae1
Updates
madeline-underwood Jul 29, 2025
f24ca7e
Starting major refactor
madeline-underwood Jul 29, 2025
6740b9b
Merge branch 'ArmDeveloperEcosystem:main' into review-visualizing-eth…
madeline-underwood Jul 30, 2025
3aad411
Updated overview with expanded information
madeline-underwood Jul 30, 2025
ee8869c
Merge branch 'ArmDeveloperEcosystem:main' into review-visualizing-eth…
madeline-underwood Jul 30, 2025
dc4ed36
Refactoring
madeline-underwood Jul 30, 2025
a15dc4c
Refactored and rewritten.
madeline-underwood Jul 30, 2025
fe96d6f
Continuing to refactor and rewrite
madeline-underwood Jul 30, 2025
bcbb52d
Merge remote-tracking branch 'upstream/main' into json-ld
JoeStech Jul 30, 2025
7694067
fix quote wrapping
JoeStech Jul 30, 2025
3a91e8c
Merge branch 'ArmDeveloperEcosystem:main' into review-visualizing-eth…
madeline-underwood Jul 31, 2025
d69ba69
Update 1-overview.md
madeline-underwood Jul 31, 2025
b9ae15b
Rename 4-how-executorch-works.md to 4-run-model.md
madeline-underwood Jul 31, 2025
bcbea19
Rename 2-env-setup.md to 2-env-setup-execut.md
madeline-underwood Jul 31, 2025
11e63b1
Update and rename 4-run-model.md to 7-run-model.md
madeline-underwood Jul 31, 2025
76b04cb
Update 2-env-setup-execut.md
madeline-underwood Jul 31, 2025
3a6be30
Update 2-env-setup-execut.md
madeline-underwood Jul 31, 2025
d2c3ecf
Update 5-configure-fvp-gui.md
madeline-underwood Jul 31, 2025
c34910c
Update 6-evaluate-output.md
madeline-underwood Jul 31, 2025
4b4ed4f
Rename 2-env-setup-execut.md to 3-env-setup-execut.md
madeline-underwood Jul 31, 2025
f8efe04
Rename 1-overview.md to 2-overview.md
madeline-underwood Jul 31, 2025
f1b990f
Rename 2-executorch-workflow.md to 3-executorch-workflow.md
madeline-underwood Jul 31, 2025
abdf33f
Rename 3-env-setup-execut.md to 4-env-setup-execut.md
madeline-underwood Jul 31, 2025
3324d3a
Rename 3-env-setup-fvp.md to 5-env-setup-fvp.md
madeline-underwood Jul 31, 2025
2801e51
Rename 5-configure-fvp-gui.md to 6-configure-fvp-gui.md
madeline-underwood Jul 31, 2025
e780bbb
Rename 6-evaluate-output.md to 8-evaluate-output.md
madeline-underwood Jul 31, 2025
220c6b1
Update 2-overview.md
madeline-underwood Jul 31, 2025
6ed3e47
Update 3-executorch-workflow.md
madeline-underwood Jul 31, 2025
b85d83b
Update 4-env-setup-execut.md
madeline-underwood Jul 31, 2025
a2c1c6a
Update 5-env-setup-fvp.md
madeline-underwood Jul 31, 2025
3d6aee4
Update 6-configure-fvp-gui.md
madeline-underwood Jul 31, 2025
c062d49
Update 7-run-model.md
madeline-underwood Jul 31, 2025
c7d85d4
Update 8-evaluate-output.md
madeline-underwood Jul 31, 2025
b43def6
Update _index.md
madeline-underwood Jul 31, 2025
df3e97f
Update 2-overview.md
madeline-underwood Jul 31, 2025
8f0e0f7
Update 3-executorch-workflow.md
madeline-underwood Jul 31, 2025
371a812
Update 4-env-setup-execut.md
madeline-underwood Jul 31, 2025
8529464
Update 5-env-setup-fvp.md
madeline-underwood Jul 31, 2025
adc9755
Update 6-configure-fvp-gui.md
madeline-underwood Jul 31, 2025
720ccc8
Update 6-configure-fvp-gui.md
madeline-underwood Jul 31, 2025
2986615
Merge branch 'ArmDeveloperEcosystem:main' into review-visualizing-eth…
madeline-underwood Aug 1, 2025
36883f7
Reordered files. Continued editing.
madeline-underwood Aug 1, 2025
946cbfd
Final
madeline-underwood Aug 1, 2025
1aeec60
Replaced index file in order to fix hugo render issues.
madeline-underwood Aug 1, 2025
e0757e9
automatic update of stats files
Aug 4, 2025
32aeeb7
Merge pull request #2193 from JoeStech/json-ld
pareenaverma Aug 4, 2025
7ce9aba
Merge pull request #2204 from madeline-underwood/review-visualizing-e…
jasonrandrews Aug 4, 2025
eceb776
Merge branch 'ArmDeveloperEcosystem:main' into java_flame
madeline-underwood Aug 4, 2025
11d6589
Updates
madeline-underwood Aug 4, 2025
757be56
Merge branch 'java_flame' of https://github.com/madeline-underwood/ar…
madeline-underwood Aug 4, 2025
10bc3e9
Updates
madeline-underwood Aug 4, 2025
001003f
Updates
madeline-underwood Aug 4, 2025
2ce6f68
Final tweaks
madeline-underwood Aug 4, 2025
fcf1e83
Merge pull request #2206 from madeline-underwood/java_flame
jasonrandrews Aug 5, 2025
eec8379
Polished Learning Path metadata: split objectives, improved readabili…
madeline-underwood Aug 5, 2025
da1529d
Edited intro section for tone, structure, and SEO alignment around Ar…
madeline-underwood Aug 5, 2025
7fe069e
Update FVP link and contributors list.
odincodeshen Aug 5, 2025
decb0f2
Cleaned up Learning Path introduction: fixed formatting, clarified QE…
madeline-underwood Aug 5, 2025
08b01aa
Updates
madeline-underwood Aug 5, 2025
78f96f2
Updates
madeline-underwood Aug 5, 2025
5364f1b
Tightened language, removed trailing whitespace.
madeline-underwood Aug 6, 2025
1d615a7
Added question-framing to section headers for SEO.
madeline-underwood Aug 6, 2025
d191047
Merge pull request #2209 from odincodeshen/main
pareenaverma Aug 7, 2025
af1dbbf
Merge pull request #2210 from madeline-underwood/Azure
jasonrandrews Aug 7, 2025
cd87508
spelling and link fixes
jasonrandrews Aug 7, 2025
d1f5a4a
Merge pull request #2212 from jasonrandrews/review
jasonrandrews Aug 7, 2025
a86ec76
Update Java flamegraph Learning Path
jasonrandrews Aug 7, 2025
0232b76
Merge pull request #2213 from jasonrandrews/review
jasonrandrews Aug 7, 2025
5 changes: 4 additions & 1 deletion .wordlist.txt
@@ -4559,7 +4559,7 @@ qdisc
ras
rcu
regmap
-rgerganovs
+rgerganov's
rotocol
rpcgss
rpmh
@@ -4588,3 +4588,6 @@ vmscan
workqueue
xdp
xhci
JFR
conv
servlet
2 changes: 1 addition & 1 deletion assets/contributors.csv
@@ -92,6 +92,6 @@ Aude Vuilliomenet,Arm,,,,
Andrew Kilroy,Arm,,,,
Peter Harris,Arm,,,,
Chenying Kuo,Adlink,evshary,evshary,,
-William Liang,,wyliang,,,
+William Liang,,,wyliang,,
Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheedbrown/,,
Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
@@ -49,7 +49,7 @@ tools_software_languages_filter:
- Coding: 26
- Containerd: 1
- DetectNet: 1
-- Docker: 9
+- Docker: 10
- DSTREAM: 2
- Edge AI: 1
- Edge Impulse: 1

This file was deleted.

This file was deleted.

@@ -0,0 +1,61 @@
---
title: Overview
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Simulate and evaluate TinyML performance on Arm virtual hardware

In this section, you’ll learn how TinyML, ExecuTorch, and Arm Fixed Virtual Platforms work together to simulate embedded AI workloads before hardware is available.

Choosing the right hardware for your machine learning (ML) model starts with having the right tools. In many cases, you need to test and iterate before your target hardware is even available, especially when working with cutting-edge accelerators like the Ethos-U NPU.

Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance before any physical hardware is available.

By simulating hardware behavior at the system level, FVPs allow you to:

- Benchmark inference speed and measure operator-level performance
- Identify which operations are delegated to the NPU and which execute on the CPU
- Validate end-to-end integration between components like ExecuTorch and Arm NN
- Iterate faster by debugging and optimizing your workload without relying on hardware

This makes FVPs a crucial tool for embedded ML workflows where precision, portability, and early validation matter.

## What is TinyML?

TinyML is machine learning optimized to run on low-power, resource-constrained devices such as Arm Cortex-M microcontrollers and NPUs like the Ethos-U. These models must fit within tight memory and compute budgets, making them ideal for embedded systems.

This Learning Path focuses on using TinyML models with virtualized Arm hardware to simulate real-world AI workloads on microcontrollers and NPUs.

If you're looking to build and train your own TinyML models, follow the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).

## What is ExecuTorch?

ExecuTorch is a lightweight runtime for running PyTorch models on embedded and edge devices. It supports efficient model inference on Arm processors, from Cortex-M CPUs to Ethos-U NPUs, including hybrid CPU+accelerator execution.

ExecuTorch provides:

- Ahead-of-time (AOT) compilation for faster inference
- Delegation of selected operators to accelerators like Ethos-U
- Tight integration with Arm compute libraries

## Why use Arm Fixed Virtual Platforms?

Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.

These virtual platforms also include a built-in graphical user interface (GUI) that helps you:

- Confirm your model is running on the intended virtual hardware
- Visualize instruction counts
- Review total execution time
- Capture clear outputs for demos and prototypes

## What is Corstone-320?

The Corstone-320 FVP is a virtual model of an Arm-based microcontroller system optimized for AI and TinyML workloads. It supports Cortex-M CPUs and the Ethos-U NPU, making it ideal for early testing, performance tuning, and validation of embedded AI applications, all before physical hardware is available.

The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation. For more information, see the [Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).

## What's next?
In the next section, you'll explore how ExecuTorch compiles and deploys models to run efficiently on simulated hardware.

This file was deleted.

@@ -0,0 +1,51 @@
---
# User change
title: "Understand the ExecuTorch workflow"

weight: 3

# Do not modify these elements
layout: "learningpathall"
---
## Overview

Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware. ExecuTorch uses ahead-of-time (AOT) compilation to transform PyTorch models into optimized operator graphs that run efficiently on resource-constrained systems. The workflow supports hybrid execution across CPU and NPU cores, allowing you to profile, debug, and deploy TinyML workloads with low runtime overhead and high portability across Arm microcontrollers.

## ExecuTorch in three steps

ExecuTorch works in three main steps:

**Step 1: Export the model**

- Convert a trained PyTorch model into an operator graph
- Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, and quantize)

**Step 2: Compile with the AOT compiler**

- Translate the operator graph into an optimized, quantized format
- Use `--delegate` to move eligible operations to the Ethos-U accelerator
- Save the compiled output as a `.pte` file

**Step 3: Deploy and run**

- Execute the compiled model on an FVP or physical target
- The Ethos-U NPU runs delegated operators, while all others run on the Cortex-M CPU

For more detail, see the [ExecuTorch documentation](https://docs.pytorch.org/executorch/stable/intro-how-it-works.html).
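The delegation decision in Step 1 can be sketched as a toy partitioner: split the model's operator list into an NPU-delegated group and a CPU fallback group based on which operators the backend supports. This is a simplified illustration only; the `partition` helper and the supported-operator set below are assumptions for this sketch, not the real ExecuTorch partitioner API.

```python
# Toy sketch of operator delegation -- NOT the real ExecuTorch API.
# ExecuTorch's Arm backend makes this decision during AOT compilation;
# this only illustrates the idea of splitting an operator graph.

# Operators the (hypothetical) NPU backend claims to support
NPU_SUPPORTED = {"conv2d", "relu", "quantize", "dequantize"}

def partition(ops):
    """Split an ordered operator list into NPU-delegated and CPU fallback groups."""
    delegated = [op for op in ops if op in NPU_SUPPORTED]
    fallback = [op for op in ops if op not in NPU_SUPPORTED]
    return delegated, fallback

# A small model expressed as an operator sequence
model_ops = ["quantize", "conv2d", "relu", "softmax", "dequantize"]
npu_ops, cpu_ops = partition(model_ops)
print("NPU:", npu_ops)  # quantize, conv2d, relu, dequantize are delegated
print("CPU:", cpu_ops)  # softmax is unsupported, so it falls back to the CPU
```

In the real workflow, any operator the backend cannot claim stays in the graph and executes on the Cortex-M CPU, which is why profiling which operators fall back is a key optimization step.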


## A visual overview

The diagram below summarizes the ExecuTorch workflow from model export to deployment. It shows how a trained PyTorch model is transformed into an optimized, quantized format and deployed to a target system such as an Arm Fixed Virtual Platform (FVP).

- On the left, the model is exported into a graph of operators, with eligible layers flagged for NPU acceleration.
- In the center, the AOT compiler optimizes and delegates operations, producing a `.pte` file ready for deployment.
- On the right, the model is executed on embedded Arm hardware, where delegated operators run on the Ethos-U NPU, and the rest are handled by the Cortex-M CPU.

This three-step workflow ensures your TinyML models are performance-tuned and hardware-aware before deployment, even without access to physical silicon.

![Diagram showing the three-step ExecuTorch workflow from model export to deployment#center](./how-executorch-works-high-level.png "The three-step ExecuTorch workflow from model export to deployment")

## What's next?

Now that you understand how ExecuTorch works, you're ready to set up your environment and install the tools.
@@ -0,0 +1,84 @@
---
# User change
title: "Set up your ExecuTorch environment"

weight: 4

# Do not modify these elements
layout: "learningpathall"
---
## Set up overview

Before you can deploy and test models with ExecuTorch, you need to set up your local development environment. This section walks you through installing system dependencies, creating a virtual environment, and cloning the ExecuTorch repository on Ubuntu or WSL. Once complete, you'll be ready to run TinyML models on a virtual Arm platform.

## Install system dependencies

{{% notice Note %}}
Make sure Python 3 is installed. It comes pre-installed on most versions of Ubuntu.
{{% /notice %}}

These instructions have been tested on:

- Ubuntu 22.04 and 24.04
- Windows Subsystem for Linux (WSL)

Run the following commands to install the dependencies:

```bash
sudo apt update
sudo apt install python-is-python3 python3-dev python3-venv gcc g++ make -y
```
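As an optional sanity check (not part of the original setup), you can confirm the tools are available before continuing:

```shell
# Optional sanity check: confirm the required build tools are on your PATH.
# Any tool reported as missing means the apt install step needs re-running.
for tool in python3 gcc g++ make; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```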

## Create a virtual environment

Create and activate a Python virtual environment:

```console
python3 -m venv $HOME/executorch-venv
source $HOME/executorch-venv/bin/activate
```
Your shell prompt should now start with `(executorch-venv)` to indicate that the environment is active.
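If your prompt doesn't change, you can confirm activation directly: while a virtual environment is active, `$VIRTUAL_ENV` holds its path and `python` resolves inside it. A quick self-contained check, reusing the same path as the commands above:

```shell
# Create and activate the environment, then confirm it is active.
python3 -m venv "$HOME/executorch-venv"
source "$HOME/executorch-venv/bin/activate"
echo "$VIRTUAL_ENV"   # prints the venv path while active
command -v python     # resolves to $VIRTUAL_ENV/bin/python
```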

## Install ExecuTorch

Clone the ExecuTorch repository and install dependencies:

```bash
cd $HOME
git clone https://github.com/pytorch/executorch.git
cd executorch
```

Set up internal submodules:

```bash
git submodule sync
git submodule update --init --recursive
./install_executorch.sh
```

{{% notice Tip %}}
If you encounter a stale `buck` environment, reset it by finding and stopping any running `buck` processes:

```bash
ps aux | grep buck
pkill -f buck
```
{{% /notice %}}

## Verify the installation

Check that ExecuTorch is correctly installed:

```bash
pip list | grep executorch
```
The output is similar to the following; the exact version string may differ:

```output
executorch 0.8.0a0+92fb0cc
```

## What's next?

Now that ExecuTorch is installed, you're ready to simulate your TinyML model on an Arm Fixed Virtual Platform (FVP). In the next section, you'll configure and launch a Fixed Virtual Platform.