{{ $title }}
+{{ partial "head/jsonld.html" . }}
+
diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html b/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html
new file mode 100644
index 0000000000..d9ffaf085d
--- /dev/null
+++ b/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html
@@ -0,0 +1,67 @@
+{{/* layouts/partials/head/jsonld.html */}}
+
+{{/* ---------------------------------------------------------------
+ Render JSON‑LD only for Learning‑Path _index.md main pages
+---------------------------------------------------------------- */}}
+{{- if and .IsSection (eq .Params.learning_path_main_page "yes") -}}
+
+ {{/* -------- Helper : Build ISO‑8601 duration (PT30M, PT2H, …) */}}
+ {{- $duration := "" -}}
+ {{- with .Params.minutes_to_complete -}}
+ {{- $duration = printf "PT%dM" (int .) -}}
+ {{- end -}}
+
+ {{/* -------- Learning objectives & prerequisites */}}
+ {{- $objectives := slice -}}
+ {{- with .Params.learning_objectives -}}
+ {{- range . }}{{ $objectives = $objectives | append ( . | plainify ) }}{{ end -}}
+ {{- end -}}
+
+ {{- $prereqs := slice -}}
+ {{- with .Params.prerequisites -}}
+ {{- range . }}{{ $prereqs = $prereqs | append ( . | plainify ) }}{{ end -}}
+ {{- end -}}
+
+ {{/* -------- Collect tag‑style params into one keywords list */}}
+ {{- $keywords := slice -}}
+ {{- $tagParams := slice
+ "skilllevels"
+ "cloud_service_providers"
+ "armips"
+ "subjects"
+ "operatingsystems"
+ "tools_software_languages"
+ -}}
+ {{- range $tagParams -}}
+ {{- $v := index $.Params . -}}
+ {{- with $v -}}
+ {{- if reflect.IsSlice $v -}}
+ {{- range $v }}{{ $keywords = $keywords | append ( . | plainify ) }}{{ end -}}
+ {{- else -}}
+ {{- $keywords = $keywords | append ( $v | plainify ) -}}
+ {{- end -}}
+ {{- end -}}
+ {{- end -}}
+
+ {{/* -------- Assemble JSON‑LD dict */}}
+ {{- $j := dict
+ "@context" "https://schema.org"
+ "@type" "Course"
+ "name" .Title
+ -}}
+
+ {{- with .Params.who_is_this_for }}{{ $j = merge $j (dict "description" ( . | plainify )) }}{{ end -}}
+ {{- if $duration }}{{ $j = merge $j (dict "timeRequired" $duration) }}{{ end -}}
+ {{- with .Params.skilllevels }}{{ $j = merge $j (dict "educationalLevel" .) }}{{ end -}}
+ {{- with $objectives }}{{ if gt (len .) 0 }}{{ $j = merge $j (dict "teaches" .) }}{{ end }}{{ end -}}
+ {{- with $prereqs }}{{ if gt (len .) 0 }}{{ $j = merge $j (dict "competencyRequired" .) }}{{ end }}{{ end -}}
+ {{- with .Params.author }}{{ $j = merge $j (dict "author" (dict "@type" "Person" "name" .)) }}{{ end -}}
+ {{- if $keywords }}{{ $j = merge $j (dict "keywords" (delimit (uniq $keywords) ", ")) }}{{ end -}}
+ {{- with .Site.Title }}{{ $j = merge $j (dict "provider" (dict "@type" "Organization" "name" .)) }}{{ end -}}
+
+  {{/* -------- Emit into <head> */}}
+  <script type="application/ld+json">{{ $j | jsonify | safeJS }}</script>
+
+{{- end -}}
\ No newline at end of file
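
The template logic above is dense to trace. As a rough illustration only — a Python sketch of the same assembly logic, not Hugo code — the following assumes front-matter params arrive as plain strings and lists, and omits the description/objectives/prerequisites handling for brevity:

```python
import json

# Illustrative mirror of the partial: ISO-8601 duration, keyword
# flattening/dedup, and dict assembly. Not the real Hugo template.
TAG_PARAMS = ("skilllevels", "cloud_service_providers", "armips",
              "subjects", "operatingsystems", "tools_software_languages")

def build_jsonld(params, title, site_title):
    j = {"@context": "https://schema.org", "@type": "Course", "name": title}

    # Matches printf "PT%dM": 120 minutes -> "PT120M"
    minutes = params.get("minutes_to_complete")
    if minutes:
        j["timeRequired"] = "PT%dM" % int(minutes)

    # Collect tag-style params (scalars or slices) into one keywords string
    keywords = []
    for key in TAG_PARAMS:
        v = params.get(key)
        if v is None:
            continue
        keywords.extend(v if isinstance(v, list) else [v])
    if keywords:
        # dict.fromkeys deduplicates while preserving order, like uniq
        j["keywords"] = ", ".join(dict.fromkeys(keywords))

    if site_title:
        j["provider"] = {"@type": "Organization", "name": site_title}
    return j

jsonld = build_jsonld(
    {"minutes_to_complete": 120,
     "skilllevels": "Introductory",
     "tools_software_languages": ["ExecuTorch", "Python"]},
    "Visualize Ethos-U NPU Performance",
    "Arm Learning Paths")
print(json.dumps(jsonld, indent=2))
```

The sketch shows why the template wraps scalar params in the `reflect.IsSlice` branch: tag-style front matter may be either a single string or a list, and both must flatten into one deduplicated keywords string.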
From 58c8ae1dec6ef1f076a2e3b14a73de4614b3ab52 Mon Sep 17 00:00:00 2001
From: Maddy Underwood
Date: Tue, 29 Jul 2025 15:01:53 +0000
Subject: [PATCH 02/55] Updates
---
.../1-overview.md | 26 ++++++++------
.../2-env-setup.md | 35 +++++++++----------
.../3-env-setup-fvp.md | 29 ++++++++++-----
.../visualizing-ethos-u-performance/_index.md | 20 +++++------
4 files changed, 61 insertions(+), 49 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
index 7345c0c727..eba7f174d3 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
@@ -5,21 +5,25 @@ weight: 2
### FIXED, DO NOT MODIFY
layout: learningpathall
---
+## Visualize ML on Embedded Devices
-## Visualizing ML on Embedded Devices
+Choosing the right hardware for your machine learning (ML) model starts with the right tools. With Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs), you can explore and visualize ML performance early in the development process—before hardware is even available.
-Selecting the best hardware for machine learning (ML) models depends on effective tools. You can visualize ML performance early in the development cycle by using Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs).
+## What is TinyML?
-## TinyML
+This Learning Path focuses on TinyML: machine learning designed to run on resource-constrained devices with limited memory, compute, and power.
-This Learning Path uses TinyML. TinyML is machine learning tailored to function on devices with limited resources, constrained memory, low power, and fewer processing capabilities.
+If you are interested in building and deploying your own TinyML models, see [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
-For a learning path focused on creating and deploying your own TinyML models, please see [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/)
+## What is ExecuTorch?
-## Benefits and applications
+ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm's efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm's compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.
-New products, like Arm's [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU are available on FVPs earlier than on physical devices. FVPs also have a graphical user interface (GUI), which is useful for for ML performance visualization due to:
-- visual confirmation that your ML model is running on the desired device,
-- clearly indicated instruction counts,
-- confirmation of total execution time and
-- visually appealing output for prototypes and demos.
+## Why use virtual platforms?
+
+New Arm hardware, such as the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU, becomes available on FVPs before physical devices ship. These virtual platforms also include a built-in graphical user interface (GUI) that helps you:
+
+- Confirm your model is executing on the intended virtual hardware
+- Visualize instruction counts
+- Review total execution time
+- Capture clear outputs for demos and prototypes
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
index 2787107f19..fd1adbf1a5 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
@@ -1,6 +1,6 @@
---
# User change
-title: "Install ExecuTorch"
+title: "Set up your development environment"
weight: 3
@@ -8,17 +8,15 @@ weight: 3
layout: "learningpathall"
---
-In this section, you will prepare a development environment to compile a machine learning model.
-
-## Introduction to ExecuTorch
-
-ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm's efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm's compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.
## Install dependencies
-These instructions have been tested on Ubuntu 22.04, 24.04, and on Windows Subsystem for Linux (WSL).
+These instructions have been tested on:
+
+- Ubuntu 22.04 and 24.04
+- Windows Subsystem for Linux (WSL)
-Python3 is required and comes installed with Ubuntu, but some additional packages are needed:
+Make sure Python 3 is installed (it comes with Ubuntu by default). Then install the required system packages:
```bash
sudo apt update
@@ -27,18 +25,18 @@ sudo apt install python-is-python3 python3-dev python3-venv gcc g++ make -y
## Create a virtual environment
-Create a Python virtual environment using `python venv`:
+Create and activate a Python virtual environment:
```console
python3 -m venv $HOME/executorch-venv
source $HOME/executorch-venv/bin/activate
```
-The prompt of your terminal now has `(executorch)` as a prefix to indicate the virtual environment is active.
+After activation, your terminal prompt should show (executorch) to indicate that the environment is active.
-## Install Executorch
+## Install ExecuTorch
-From within the Python virtual environment, run the commands below to download the ExecuTorch repository and install the required packages:
+Clone the ExecuTorch repository and install its dependencies:
``` bash
cd $HOME
@@ -46,7 +44,7 @@ git clone https://github.com/pytorch/executorch.git
cd executorch
```
-Run the commands below to set up the ExecuTorch internal dependencies:
+Set up internal dependencies:
```bash
git submodule sync
@@ -54,8 +52,8 @@ git submodule update --init --recursive
./install_executorch.sh
```
-{{% notice Note %}}
-If you run into an issue of `buck` running in a stale environment, reset it by running the following instructions:
+{{% notice Tip %}}
+If you run into issues with `buck` running in a stale environment, reset it:
```bash
ps aux | grep buck
@@ -63,16 +61,17 @@ pkill -f buck
```
{{% /notice %}}
-After running the commands, `executorch` should be listed upon running `pip list`:
+Verify the installation:
```bash
pip list | grep executorch
```
+Example output:
```output
executorch 0.8.0a0+92fb0cc
```
-## Next Steps
+## Next steps
-Proceed to the next section to learn about and set up the virtualized hardware.
+Now that ExecuTorch is installed, you're ready to simulate your TinyML model on virtual Arm hardware. In the next section, you'll configure and launch a Fixed Virtual Platform.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
index bc80217465..509a434359 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
@@ -2,25 +2,32 @@
# User change
title: "Set up the Corstone-320 FVP on Linux"
-weight: 4 # 1 is first, 2 is second, etc.
+weight: 4
# Do not modify these elements
layout: "learningpathall"
---
+## What is Corstone-320?
-In this section, you will run scripts to set up the Corstone-320 reference package.
+To simulate embedded AI workloads on Arm hardware, you’ll use the Corstone-320 Fixed Virtual Platform (FVP). This pre-silicon software development environment for Arm-based microcontrollers provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
+
+The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation.
+For more information, see the [official Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
+
+## Set up the Corstone-320 FVP for ExecuTorch
+
+Before you begin, make sure you’ve completed the steps in the previous section to install ExecuTorch.
+
+{{< notice note >}}
+On macOS, you'll need to perform additional setup to support FVP execution.
+See the [FVPs-on-Mac GitHub repo](https://github.com/Arm-Examples/FVPs-on-Mac/) for instructions before continuing.
+{{< /notice >}}
-The Corstone-320 Fixed Virtual Platform (FVP) is a pre-silicon software development environment for Arm-based microcontrollers. It provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
-The Corstone reference system is provided free of charge, although you will have to accept the license in the next step. For more information on Corstone-320, check out the [official documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
-## Corstone-320 FVP Setup for ExecuTorch
-{{% notice macOS %}}
-Setting up FVPs on MacOS requires some extra steps, outlined in GitHub repo [FVPs-on-Mac](https://github.com/Arm-Examples/FVPs-on-Mac/). macOS users must do this first, before setting up the Corstone-320 FVP.
-{{% /notice %}}
Navigate to the Arm examples directory in the ExecuTorch repository. Run the following command.
@@ -50,3 +57,9 @@ Test that the setup was successful by running the `run.sh` script for Ethos-U85,
You will see a number of examples run on the FVP.
This confirms the installation, so you can now proceed to the next section.
+
+
+
+
+
+
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
index 613a355ee5..3fb4c492ae 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
@@ -1,23 +1,19 @@
---
-title: Visualizing Ethos-U Performance on Arm FVPs
-
-draft: true
-cascade:
- draft: true
+title: Visualize Ethos-U NPU Performance with ExecuTorch on Arm FVPs
minutes_to_complete: 120
-who_is_this_for: This is an introductory topic for developers and data scientists new to Tiny Machine Learning (TinyML), who want to understand and visualize ExecuTorch performance on a virtual device.
+who_is_this_for: This is an introductory topic for developers and data scientists who are new to TinyML and want to visualize ExecuTorch model performance on virtual Arm hardware.
learning_objectives:
- - Identify suitable Arm-based devices for TinyML applications.
- - Install Fixed Virtual Platforms (FVPs).
- - Deploy a TinyML ExecuTorch model to a Corstone-320 FVP.
- - Observe model execution on the FVP's graphical user interface (GUI).
+ - Identify Arm-based targets suitable for TinyML workloads
+ - Install and configure Fixed Virtual Platforms (FVPs)
+ - Deploy a TinyML model using ExecuTorch on a Corstone-320 FVP
+ - Visualize model execution using the FVP graphical interface
prerequisites:
- - Basic knowledge of Machine Learning concepts.
- - A computer running Linux or macOS.
+ - Familiarity with basic machine learning concepts
+ - A Linux or macOS computer with Python 3 installed
author: Waheed Brown
From f24ca7e4b0ffe77a706cb4da7032a1b9691217fb Mon Sep 17 00:00:00 2001
From: Maddy Underwood
Date: Tue, 29 Jul 2025 22:29:54 +0000
Subject: [PATCH 03/55] Starting major refactor
---
.../1-overview.md | 18 ++++++++++---
.../4-how-executorch-works.md | 27 +++++++++++--------
2 files changed, 31 insertions(+), 14 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
index eba7f174d3..6a63fe4b0c 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
@@ -7,18 +7,30 @@ layout: learningpathall
---
## Visualize ML on Embedded Devices
-Choosing the right hardware for your machine learning (ML) model starts with the right tools. With Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs), you can explore and visualize ML performance early in the development process—before hardware is even available.
+Choosing the right hardware for your machine learning (ML) model starts with the right tools.
+
+Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance early—before any physical hardware is available.
## What is TinyML?
-This Learning Path focuses on TinyML: machine learning designed to run on resource-constrained devices with limited memory, compute, and power.
+TinyML is machine learning optimized to run on low-power, resource-constrained devices. These models must fit within tight memory and compute budgets, making them ideal for embedded systems.
+
+This Learning Path focuses on using TinyML models with virtualized Arm hardware to simulate real-world AI workloads on microcontrollers and NPUs.
-If you are interested in building and deploying your own TinyML models, see [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
+If you're looking to build and train your own TinyML models, check out the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
## What is ExecuTorch?
ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm's efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm's compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.
+ExecuTorch provides:
+
+- Ahead-of-time (AOT) compilation for faster inference
+
+- Delegation of selected operators to accelerators like Ethos-U
+
+- Tight integration with Arm compute libraries
+
## Why use virtual platforms?
New Arm hardware, such as the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU, becomes available on FVPs before physical devices ship. These virtual platforms also include a built-in graphical user interface (GUI) that helps you:
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
index e2061aa1e2..7dd078a35d 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
@@ -1,6 +1,6 @@
---
# User change
-title: "How ExecuTorch Works"
+title: "Understand the ExecuTorch workflow"
weight: 5 # 1 is first, 2 is second, etc.
@@ -8,22 +8,27 @@ weight: 5 # 1 is first, 2 is second, etc.
layout: "learningpathall"
---
+Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware.
+
To get a better understanding of [How ExecuTorch Works](https://docs.pytorch.org/executorch/stable/intro-how-it-works.html) refer to the official PyTorch Documentation. A summary is provided here for your reference:
+## ExecuTorch pipeline overview
+
+ExecuTorch works in three main stages:
+
1. **Export the model:**
- * Generate a Graph
- * A graph is series of operators (ReLU, quantize, etc.) eligible for delegation to an accelerator
- * Your goal is to identify operators for acceleration on the Ethos-U NPU
-2. **Compile to ExecuTorch:**
- * This is the ahead-of-time compiler
- * This is why ExecuTorch inference is faster than PyTorch inference
- * Delegate operators to an accelerator, like the Ethos-U NPU
-3. **Run on targeted device:**
- * Deploy the ML model to the Fixed Virtual Platform (FVP) or physical device
+ * Convert a trained PyTorch model into an operator graph.
+ * Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, quantize).
+2. **Compile with the AOT compiler:**
+ * Translate the operator graph into an optimized, quantized format.
+ * Use `--delegate` to move eligible operations to the Ethos-U accelerator.
+ * Save the compiled output as a `.pte` file.
+3. **Deploy and run:**
+ * Deploy the ML model to a Fixed Virtual Platform (FVP) or physical device
* Execute operators on the CPU and delegated operators on the Ethos-U NPU
**Diagram of How ExecuTorch Works**
-
+
## Deploy a TinyML Model
From 3aad411717cdbec3b916da1ffb10235d2c6c9120 Mon Sep 17 00:00:00 2001
From: Maddy Underwood
Date: Wed, 30 Jul 2025 12:38:21 +0000
Subject: [PATCH 04/55] Updated overview with expanded information
---
.../1-overview.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
index 6a63fe4b0c..2bde349cdb 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
@@ -7,33 +7,33 @@ layout: learningpathall
---
## Visualize ML on Embedded Devices
-Choosing the right hardware for your machine learning (ML) model starts with the right tools.
+Choosing the right hardware for your machine learning (ML) model starts with the right tools. In many cases, you need to test and iterate on software before the target hardware is even available, especially when working with cutting-edge accelerators like the Ethos-U NPU.
-Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance early—before any physical hardware is available.
+Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance before any physical hardware is available.
## What is TinyML?
-TinyML is machine learning optimized to run on low-power, resource-constrained devices. These models must fit within tight memory and compute budgets, making them ideal for embedded systems.
+TinyML is machine learning optimized to run on low-power, resource-constrained devices such as Arm Cortex-M microcontrollers and NPUs like the Ethos-U. These models must fit within tight memory and compute budgets, making them ideal for embedded systems.
This Learning Path focuses on using TinyML models with virtualized Arm hardware to simulate real-world AI workloads on microcontrollers and NPUs.
If you're looking to build and train your own TinyML models, check out the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
-## What is ExecuTorch?
+What is ExecuTorch?
-ExecuTorch is a lightweight runtime designed for efficient execution of PyTorch models on resource-constrained devices. It enables machine learning inference on embedded and edge platforms, making it well-suited for Arm-based hardware. Since Arm processors are widely used in mobile, IoT, and embedded applications, ExecuTorch leverages Arm's efficient CPU architectures to deliver optimized performance while maintaining low power consumption. By integrating with Arm's compute libraries, it ensures smooth execution of AI workloads on Arm-powered devices, from Cortex-M microcontrollers to Cortex-A application processors.
+ExecuTorch is a lightweight runtime for executing PyTorch models on embedded and edge devices. It supports efficient model inference across Arm processors, from Cortex-M CPUs to Ethos-U NPUs, including hybrid CPU+accelerator execution.
ExecuTorch provides:
- Ahead-of-time (AOT) compilation for faster inference
-
- Delegation of selected operators to accelerators like Ethos-U
-
- Tight integration with Arm compute libraries
## Why use virtual platforms?
-New Arm hardware, such as the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU, becomes available on FVPs before physical devices ship. These virtual platforms also include a built-in graphical user interface (GUI) that helps you:
+Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.
+
+These virtual platforms also include a built-in graphical user interface (GUI) that helps you:
- Confirm your model is executing on the intended virtual hardware
- Visualize instruction counts
From dc4ed3679551d444f56f960a3506a6df02ab0d1b Mon Sep 17 00:00:00 2001
From: Maddy Underwood
Date: Wed, 30 Jul 2025 17:06:35 +0000
Subject: [PATCH 05/55] Refactoring
---
.../1-overview.md | 6 +--
.../2-env-setup.md | 18 +++++----
.../2-executorch-workflow.md | 39 +++++++++++++++++++
.../4-how-executorch-works.md | 23 -----------
4 files changed, 53 insertions(+), 33 deletions(-)
create mode 100644 content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
index 2bde349cdb..d145ca41d6 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
@@ -7,7 +7,7 @@ layout: learningpathall
---
## Visualize ML on Embedded Devices
-Choosing the right hardware for your machine learning (ML) model starts with the right tools. In many cases, you need to test and iterate on software before the target hardware is even available,especially when working with cutting-edge accelerators like the Ethos-U NPU.
+Choosing the right hardware for your machine learning (ML) model starts with having the right tools. In many cases, you need to test and iterate on software before the target hardware is even available, especially when working with cutting-edge accelerators like the Ethos-U NPU.
Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance before any physical hardware is available.
@@ -19,7 +19,7 @@ This Learning Path focuses on using TinyML models with virtualized Arm hardware
If you're looking to build and train your own TinyML models, check out the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
-What is ExecuTorch?
+## What is ExecuTorch?
ExecuTorch is a lightweight runtime for executing PyTorch models on embedded and edge devices. It supports efficient model inference on a range of Arm processors, ranging from Cortex-M CPUs to Ethos-U NPUs, with support for hybrid CPU+accelerator execution.
@@ -31,7 +31,7 @@ ExecuTorch provides:
## Why use virtual platforms?
-Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.
+Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly-released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.
These virtual platforms also include a built-in graphical user interface (GUI) that helps you:
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
index fd1adbf1a5..7d308089cb 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
@@ -1,22 +1,27 @@
---
# User change
-title: "Set up your development environment"
+title: "Set up your ExecuTorch environment"
-weight: 3
+weight: 4
# Do not modify these elements
layout: "learningpathall"
---
+Get your development environment ready to deploy and run models with ExecuTorch.
-## Install dependencies
+## Install system dependencies
+
+{{< notice Note >}}
+Make sure Python 3 is installed. It comes pre-installed on most versions of Ubuntu.
+{{< /notice >}}
These instructions have been tested on:
- Ubuntu 22.04 and 24.04
- Windows Subsystem for Linux (WSL)
-Make sure Python 3 is installed (it comes with Ubuntu by default). Then install the required system packages:
+Install the required system packages:
```bash
sudo apt update
@@ -31,12 +36,11 @@ Create and activate a Python virtual environment:
python3 -m venv $HOME/executorch-venv
source $HOME/executorch-venv/bin/activate
```
-After activation, your terminal prompt should show (executorch) to indicate that the environment is active.
-
+Your shell prompt should now start with `(executorch)` to indicate the environment is active.
## Install ExecuTorch
-Clone the ExecuTorch repository and install its dependencies:
+Clone the ExecuTorch repository and install dependencies:
``` bash
cd $HOME
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md
new file mode 100644
index 0000000000..3803456bb8
--- /dev/null
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md
@@ -0,0 +1,39 @@
+---
+# User change
+title: "Understand the ExecuTorch workflow"
+
+weight: 3
+
+# Do not modify these elements
+layout: "learningpathall"
+---
+
+Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware.
+
+## ExecuTorch pipeline overview
+
+ExecuTorch works in three main stages:
+
+**Export the model**
+
+ - Convert a trained PyTorch model into an operator graph.
+ - Identify operators that can be offloaded to the Ethos-U NPU (e.g., ReLU, conv, quantize).
+
+**Compile with the AOT compiler**
+
+ - Translate the operator graph into an optimized, quantized format.
+ - Use `--delegate` to move eligible operations to the Ethos-U accelerator.
+ - Save the compiled output as a `.pte` file.
+
+**Deploy and run**
+
+ - Execute the compiled model on an FVP or physical target.
+ - The Ethos-U NPU runs delegated operators; all others run on the Cortex-M CPU.
+
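The delegation decision in stage two can be pictured as a simple partition of the operator graph. The sketch below is a toy illustration of the concept, not the real ExecuTorch partitioner, and the supported-operator set is an assumption for demonstration only:

```python
# Toy model of operator delegation: operators the Ethos-U backend supports
# form a delegated subgraph; everything else falls back to the Cortex-M CPU.
ETHOS_U_SUPPORTED = {"conv2d", "relu", "quantize", "dequantize"}  # illustrative only

def partition(ops):
    delegated = [op for op in ops if op in ETHOS_U_SUPPORTED]
    cpu_fallback = [op for op in ops if op not in ETHOS_U_SUPPORTED]
    return delegated, cpu_fallback

delegated, cpu_fallback = partition(
    ["quantize", "conv2d", "relu", "softmax", "dequantize"]
)
print(delegated)      # ['quantize', 'conv2d', 'relu', 'dequantize']
print(cpu_fallback)   # ['softmax']
```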
+## Visual overview
+
+
+
+## What's next?
+
+Now that you understand how ExecuTorch works, you're ready to set up your environment and install the tools.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
index 7dd078a35d..1689cd58eb 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
@@ -7,29 +7,6 @@ weight: 5 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
---
-
-Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware.
-
-To get a better understanding of [How ExecuTorch Works](https://docs.pytorch.org/executorch/stable/intro-how-it-works.html) refer to the official PyTorch Documentation. A summary is provided here for your reference:
-
-## ExecuTorch pipeline overview
-
-ExecuTorch works in three main stages:
-
-1. **Export the model:**
- * Convert a trained PyTorch model into an operator graph.
- * Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, quantize).
-2. **Compile with the AOT compiler:**
- * Translate the operator graph into an optimized, quantized format.
- * Use `--delegate` to move eligible operations to the Ethos-U accelerator.
- * Save the compiled output as a `.pte` file.
-3. **Deploy and run:**
- * Deploy the ML model to a Fixed Virtual Platform (FVP) or physical device
- * Execute operators on the CPU and delegated operators on the Ethos-U NPU
-
-**Diagram of How ExecuTorch Works**
-
-
## Deploy a TinyML Model
With your development environment set up, you can deploy a simple PyTorch model.
From a15dc4cdd4f5394a97213d28179ce42dbeca509d Mon Sep 17 00:00:00 2001
From: Maddy Underwood
Date: Wed, 30 Jul 2025 18:05:00 +0000
Subject: [PATCH 06/55] Refactored and rewritten.
---
.../visualizing-ethos-u-performance/2-env-setup.md | 10 ++++++----
.../2-executorch-workflow.md | 2 +-
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
index 7d308089cb..c62f966a52 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
@@ -48,7 +48,7 @@ git clone https://github.com/pytorch/executorch.git
cd executorch
```
-Set up internal dependencies:
+Set up internal submodules:
```bash
git submodule sync
@@ -57,7 +57,7 @@ git submodule update --init --recursive
```
{{% notice Tip %}}
-If you run into issues with `buck` running in a stale environment, reset it:
+If you encounter a stale `buck` environment, reset it using:
```bash
ps aux | grep buck
@@ -65,12 +65,14 @@ pkill -f buck
```
{{% /notice %}}
-Verify the installation:
+## Verify the installation
+
+Check that ExecuTorch is correctly installed:
```bash
pip list | grep executorch
```
-Example output:
+Expected output:
```output
executorch 0.8.0a0+92fb0cc
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md
index 3803456bb8..0a00c43dd5 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md
@@ -17,7 +17,7 @@ ExecuTorch works in three main stages:
**Export the model**
- Convert a trained PyTorch model into an operator graph.
- - Identify operators that can be offloaded to the Ethos-U NPU (e.g., ReLU, conv, quantize).
+ - Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, quantize).
**Compile with the AOT compiler**
From fe96d6f3368128e06301381d4b63453cb4631481 Mon Sep 17 00:00:00 2001
From: Maddy Underwood
Date: Wed, 30 Jul 2025 21:16:51 +0000
Subject: [PATCH 07/55] Continuing to refactor and rewrite
---
.../1-overview.md | 7 +++
.../3-env-setup-fvp.md | 46 ++++++++++---------
.../4-how-executorch-works.md | 32 +++++++++++--
.../5-configure-fvp-gui.md | 4 +-
4 files changed, 62 insertions(+), 27 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
index d145ca41d6..11a7f532c8 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
@@ -39,3 +39,10 @@ These virtual platforms also include a built-in graphical user interface (GUI) t
- Visualize instruction counts
- Review total execution time
- Capture clear outputs for demos and prototypes
+
+## What is Corstone-320?
+
+To simulate embedded AI workloads on Arm hardware, you’ll use the Corstone-320 Fixed Virtual Platform (FVP). This pre-silicon software development environment for Arm-based microcontrollers provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
+
+The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation.
+For more information, see the [official Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
index 509a434359..01fd9b52ba 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
@@ -2,19 +2,15 @@
# User change
title: "Set up the Corstone-320 FVP on Linux"
-weight: 4
+weight: 5
# Do not modify these elements
layout: "learningpathall"
---
-## What is Corstone-320?
-To simulate embedded AI workloads on Arm hardware, you’ll use the Corstone-320 Fixed Virtual Platform (FVP). This pre-silicon software development environment for Arm-based microcontrollers provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
+Use the Corstone-320 Fixed Virtual Platform (FVP) to simulate an Arm-based system and run your ExecuTorch-compiled model.
-The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation.
-For more information, see the [official Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
-
-## Set up the Corstone-320 FVP for ExecuTorch
+## Install the Corstone-320 FVP
Before you begin, make sure you’ve completed the steps in the previous section to install ExecuTorch.
@@ -23,40 +19,48 @@ On macOS, you'll need to perform additional setup to support FVP execution.
See the [FVPs-on-Mac GitHub repo](https://github.com/Arm-Examples/FVPs-on-Mac/) for instructions before continuing.
{{< /notice >}}
+Run the setup script provided in the ExecuTorch examples directory:
+```bash
+cd $HOME/executorch/examples/arm
+./setup.sh --i-agree-to-the-contained-eula
+```
+This installs the FVP and extracts all necessary components. It also prints a command to configure your shell environment.
+## Add the FVP to your system path
-
-
-Navigate to the Arm examples directory in the ExecuTorch repository. Run the following command.
+Run the following command to update your environment:
```bash
-cd $HOME/executorch/examples/arm
-./setup.sh --i-agree-to-the-contained-eula
+source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
```
+This ensures the FVP binaries are available in your terminal session.
-After the script has finished running, it prints a command to run to finalize the installation. This step adds the FVP executables to your system path.
+## Verify your setup
+
+Run a quick test to check that the FVP is working:
```bash
-source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
+cd $HOME/executorch
+./examples/arm/run.sh --target=ethos-u85-256
```
-Test that the setup was successful by running the `run.sh` script for Ethos-U85, which is the target device for Corstone-320:
+This executes a built-in example on the Ethos-U85 configuration of the Corstone-320 platform.
{{% notice macOS %}}
-**Start Docker:** on macOS, FVPs run inside a Docker container.
+On macOS, make sure Docker is running: FVPs execute inside a Docker container.
{{% /notice %}}
-```bash
- ./examples/arm/run.sh --target=ethos-u85-256
-```
+If you see example output from the platform, the setup is complete.
+
+## Next steps
+You're now ready to deploy and run your own model using ExecuTorch and the Corstone-320 FVP.
+
+
-You will see a number of examples run on the FVP.
-This confirms the installation, so you can now proceed to the next section.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
index 1689cd58eb..e3a70e1252 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
@@ -1,19 +1,21 @@
---
# User change
-title: "Understand the ExecuTorch workflow"
+title: "Run your first model with ExecuTorch"
-weight: 5 # 1 is first, 2 is second, etc.
+weight: 6 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
---
## Deploy a TinyML Model
-With your development environment set up, you can deploy a simple PyTorch model.
+Now that your environment and virtual hardware are set up, you're ready to run your first model using ExecuTorch on the Corstone-320 FVP.
+
+## Deploy MobileNet V2 with ExecuTorch
This example deploys the [MobileNet V2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/) computer vision model. The model is a convolutional neural network (CNN) that extracts visual features from an image. It is used for image classification and object detection.
-The actual Python code for the MobileNet V2 model is in your local `executorch` repo: [executorch/examples/models/mobilenet_v2/model.py](https://github.com/pytorch/executorch/blob/main/examples/models/mobilenet_v2/model.py). You can deploy it using [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh), just like you did in the previous step, with some extra parameters:
+The Python code for the MobileNet V2 model is in your local `executorch` repo: [executorch/examples/models/mobilenet_v2/model.py](https://github.com/pytorch/executorch/blob/main/examples/models/mobilenet_v2/model.py). You can deploy it using [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh), just like you did in the previous step, with some extra parameters:
{{% notice macOS %}}
@@ -38,3 +40,25 @@ The actual Python code for the MobileNet V2 model is in your local `executorch`
|--intermediates mv2_u85/|Directory where intermediate files (e.g., TOSA, YAMLs, debug graphs) will be saved. Useful output files for **manual debugging**|
|--debug|Verbose debugging logging|
|--evaluate|Validates model output, provides timing estimates|
+
+## What to expect
+
+ExecuTorch will:
+
+- Compile the PyTorch model to `.pte` format
+- Generate intermediate files (YAMLs, graphs, etc.)
+- Run the compiled model on the FVP
+- Output execution timing, operator delegation, and performance stats
+
+You should see output like:
+
+```output
+Batch Inference time 4.94 ms, 202.34 inferences/s
+Total delegated subgraphs: 1
+Number of delegated nodes: 419
+```
+
+This confirms that the model was successfully compiled, deployed, and run with NPU acceleration.
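If you script multiple runs, you can parse these summary lines for quick comparisons. This is a minimal sketch that assumes the output format shown above:

```python
import re

# Example summary line as printed by the FVP run above.
line = "Batch Inference time 4.94 ms, 202.34 inferences/s"

match = re.search(r"time\s+([\d.]+)\s*ms,\s*([\d.]+)\s*inferences/s", line)
latency_ms = float(match.group(1))
throughput = float(match.group(2))

# Sanity check: throughput should be consistent with latency (1000 ms / latency).
assert abs(throughput - 1000.0 / latency_ms) < 1.0
print(f"{latency_ms} ms -> {throughput} inferences/s")
```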
+
+## Next steps
+If you’d like to visualize instruction counts and performance using the GUI, continue to the next (optional) section.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
index e3902cafd4..8424e65299 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
@@ -1,8 +1,8 @@
---
# User change
-title: "Configure the FVP GUI (optional)"
+title: "Set up the Corstone-320 FVP"
-weight: 6 # 1 is first, 2 is second, etc.
+weight: 7 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
From 7694067d7be7511d184ed1f701b1314104e9abd9 Mon Sep 17 00:00:00 2001
From: Joe <4088382+JoeStech@users.noreply.github.com>
Date: Wed, 30 Jul 2025 16:35:19 -0600
Subject: [PATCH 08/55] fix quote wrapping
---
.../layouts/partials/head/jsonld.html | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html b/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html
index d9ffaf085d..49f858bb4a 100644
--- a/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html
+++ b/themes/arm-design-system-hugo-theme/layouts/partials/head/jsonld.html
@@ -1,27 +1,22 @@
{{/* layouts/partials/head/jsonld.html */}}
-
{{/* ---------------------------------------------------------------
Render JSON‑LD only for Learning‑Path _index.md main pages
---------------------------------------------------------------- */}}
{{- if and .IsSection (eq .Params.learning_path_main_page "yes") -}}
-
{{/* -------- Helper : Build ISO‑8601 duration (PT30M, PT2H, …) */}}
{{- $duration := "" -}}
{{- with .Params.minutes_to_complete -}}
{{- $duration = printf "PT%dM" (int .) -}}
{{- end -}}
-
{{/* -------- Learning objectives & prerequisites */}}
{{- $objectives := slice -}}
{{- with .Params.learning_objectives -}}
{{- range . }}{{ $objectives = $objectives | append ( . | plainify ) }}{{ end -}}
{{- end -}}
-
{{- $prereqs := slice -}}
{{- with .Params.prerequisites -}}
{{- range . }}{{ $prereqs = $prereqs | append ( . | plainify ) }}{{ end -}}
{{- end -}}
-
{{/* -------- Collect tag‑style params into one keywords list */}}
{{- $keywords := slice -}}
{{- $tagParams := slice
@@ -42,14 +37,12 @@
{{- end -}}
{{- end -}}
{{- end -}}
-
{{/* -------- Assemble JSON‑LD dict */}}
{{- $j := dict
"@context" "https://schema.org"
"@type" "Course"
"name" .Title
-}}
-
{{- with .Params.who_is_this_for }}{{ $j = merge $j (dict "description" ( . | plainify )) }}{{ end -}}
{{- if $duration }}{{ $j = merge $j (dict "timeRequired" $duration) }}{{ end -}}
{{- with .Params.skilllevels }}{{ $j = merge $j (dict "educationalLevel" .) }}{{ end -}}
@@ -58,10 +51,9 @@
{{- with .Params.author }}{{ $j = merge $j (dict "author" (dict "@type" "Person" "name" .)) }}{{ end -}}
{{- if $keywords }}{{ $j = merge $j (dict "keywords" (delimit (uniq $keywords) ", ")) }}{{ end -}}
{{- with .Site.Title }}{{ $j = merge $j (dict "provider" (dict "@type" "Organization" "name" .)) }}{{ end -}}
-
{{/* -------- Emit into */}}
-
-
+
{{- end -}}
\ No newline at end of file
From d69ba6972b23b8ec2b79c2f0635f077a0337152a Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:06:32 +0100
Subject: [PATCH 09/55] Update 1-overview.md
---
.../visualizing-ethos-u-performance/1-overview.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
index 11a7f532c8..11970b31d5 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
@@ -29,7 +29,7 @@ ExecuTorch provides:
- Delegation of selected operators to accelerators like Ethos-U
- Tight integration with Arm compute libraries
-## Why use virtual platforms?
+## Why use Arm Fixed Virtual Platforms?
Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly-released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.
From b9ae15b20a131e109b2bd823ed1b5e86b5d7751f Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:12:06 +0100
Subject: [PATCH 10/55] Rename 4-how-executorch-works.md to 4-run-model.md
---
.../{4-how-executorch-works.md => 4-run-model.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{4-how-executorch-works.md => 4-run-model.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-run-model.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-how-executorch-works.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-run-model.md
From bcbea199b11c8a9b1644c017534d0822b85d2837 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:12:51 +0100
Subject: [PATCH 11/55] Rename 2-env-setup.md to 2-env-setup-execut.md
---
.../{2-env-setup.md => 2-env-setup-execut.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{2-env-setup.md => 2-env-setup-execut.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
From 11e63b138e208dab412c62371df0384b475787c6 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:16:27 +0100
Subject: [PATCH 12/55] Update and rename 4-run-model.md to 7-run-model.md
---
.../{4-run-model.md => 7-run-model.md} | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{4-run-model.md => 7-run-model.md} (98%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-run-model.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
similarity index 98%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-run-model.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
index e3a70e1252..46af4540e5 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-run-model.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
@@ -2,7 +2,7 @@
# User change
title: "Run your first model with ExecuTorch"
-weight: 6 # 1 is first, 2 is second, etc.
+weight: 7 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
From 76b04cbdf04149a2bca841852a5ea68f4bdc3ff7 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:17:09 +0100
Subject: [PATCH 13/55] Update 2-env-setup-execut.md
---
.../visualizing-ethos-u-performance/2-env-setup-execut.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
index c62f966a52..f54c724770 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
@@ -2,7 +2,7 @@
# User change
title: "Set up your ExecuTorch environment"
-weight: 4
+weight: 3
# Do not modify these elements
layout: "learningpathall"
From 3a6be3008249b6a252acd243597add162126a8a7 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:17:48 +0100
Subject: [PATCH 14/55] Update 2-env-setup-execut.md
---
.../visualizing-ethos-u-performance/2-env-setup-execut.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
index f54c724770..c62f966a52 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
@@ -2,7 +2,7 @@
# User change
title: "Set up your ExecuTorch environment"
-weight: 3
+weight: 4
# Do not modify these elements
layout: "learningpathall"
From d2c3ecf4934a6948258b8e1b9c78579a59d2382a Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:18:50 +0100
Subject: [PATCH 15/55] Update 5-configure-fvp-gui.md
---
.../visualizing-ethos-u-performance/5-configure-fvp-gui.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
index 8424e65299..f9aa0150b4 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
@@ -2,7 +2,7 @@
# User change
title: "Set up the Corstone-320 FVP"
-weight: 7 # 1 is first, 2 is second, etc.
+weight: 6 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
From c34910c06ae6ecd95fa2653b85f10c0b876a8383 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:19:15 +0100
Subject: [PATCH 16/55] Update 6-evaluate-output.md
---
.../visualizing-ethos-u-performance/6-evaluate-output.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md
index 2ab22dbdf2..96f4f6602d 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md
@@ -2,7 +2,7 @@
# User change
title: "Evaluate Ethos-U Performance"
-weight: 7 # 1 is first, 2 is second, etc.
+weight: 8 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
From 4b4ed4fe82715bbf952f983cdfa62b3bc49cc2da Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:20:12 +0100
Subject: [PATCH 17/55] Rename 2-env-setup-execut.md to 3-env-setup-execut.md
---
.../{2-env-setup-execut.md => 3-env-setup-execut.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{2-env-setup-execut.md => 3-env-setup-execut.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-execut.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-env-setup-execut.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-execut.md
From f8efe04369c46f1972909c22de270c3f5f7cf160 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:20:46 +0100
Subject: [PATCH 18/55] Rename 1-overview.md to 2-overview.md
---
.../{1-overview.md => 2-overview.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{1-overview.md => 2-overview.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/1-overview.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
From f1b990f16eb3a53d08f7067faea6e5d741797833 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:21:11 +0100
Subject: [PATCH 19/55] Rename 2-executorch-workflow.md to
3-executorch-workflow.md
---
.../{2-executorch-workflow.md => 3-executorch-workflow.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{2-executorch-workflow.md => 3-executorch-workflow.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-executorch-workflow.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
From abdf33f7477651c859f27d9c275d15474cc61535 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:21:43 +0100
Subject: [PATCH 20/55] Rename 3-env-setup-execut.md to 4-env-setup-execut.md
---
.../{3-env-setup-execut.md => 4-env-setup-execut.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{3-env-setup-execut.md => 4-env-setup-execut.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-execut.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
From 3324d3a84c6d9dadd5d58186202966e6214728ff Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:22:05 +0100
Subject: [PATCH 21/55] Rename 3-env-setup-fvp.md to 5-env-setup-fvp.md
---
.../{3-env-setup-fvp.md => 5-env-setup-fvp.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{3-env-setup-fvp.md => 5-env-setup-fvp.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-env-setup-fvp.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
From 2801e51416b7b20e00d54dd4c73a194d9bf06818 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:22:36 +0100
Subject: [PATCH 22/55] Rename 5-configure-fvp-gui.md to 6-configure-fvp-gui.md
---
.../{5-configure-fvp-gui.md => 6-configure-fvp-gui.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{5-configure-fvp-gui.md => 6-configure-fvp-gui.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-configure-fvp-gui.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
From e780bbbb2ea7db6ec236a0919daae3cfd1ccff82 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 11:22:59 +0100
Subject: [PATCH 23/55] Rename 6-evaluate-output.md to 8-evaluate-output.md
---
.../{6-evaluate-output.md => 8-evaluate-output.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{6-evaluate-output.md => 8-evaluate-output.md} (100%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
similarity index 100%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-evaluate-output.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
From 220c6b1b1108ebec8c86fd37b1c0aaa53f294744 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 12:17:15 +0100
Subject: [PATCH 24/55] Update 2-overview.md
Tightening language
---
.../2-overview.md | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
index 11970b31d5..5d329bbfde 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
@@ -5,9 +5,11 @@ weight: 2
### FIXED, DO NOT MODIFY
layout: learningpathall
---
-## Visualize ML on Embedded Devices
+## Visualize ML on embedded devices
-Choosing the right hardware for your machine learning (ML) model starts with having the right tools. In many cases, you need to test and iterate on software before the target hardware is even available,especially when working with cutting-edge accelerators like the Ethos-U NPU.
+In this section, you’ll learn how TinyML, ExecuTorch, and Arm Fixed Virtual Platforms work together to simulate embedded AI workloads before hardware is available.
+
+Choosing the right hardware for your machine learning (ML) model starts with having the right tools. In many cases, you need to test and iterate before your target hardware is even available, especially when working with cutting-edge accelerators like the Ethos-U NPU.
Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance before any physical hardware is available.
@@ -17,11 +19,11 @@ TinyML is machine learning optimized to run on low-power, resource-constrained d
This Learning Path focuses on using TinyML models with virtualized Arm hardware to simulate real-world AI workloads on microcontrollers and NPUs.
-If you're looking to build and train your own TinyML models, check out the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
+If you're looking to build and train your own TinyML models, follow the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
## What is ExecuTorch?
-ExecuTorch is a lightweight runtime for executing PyTorch models on embedded and edge devices. It supports efficient model inference on a range of Arm processors, ranging from Cortex-M CPUs to Ethos-U NPUs, with support for hybrid CPU+accelerator execution.
+ExecuTorch is a lightweight runtime for running PyTorch models on embedded and edge devices. It supports efficient model inference on a range of Arm processors, ranging from Cortex-M CPUs to Ethos-U NPUs, with support for hybrid CPU+accelerator execution.
ExecuTorch provides:
@@ -29,20 +31,20 @@ ExecuTorch provides:
- Delegation of selected operators to accelerators like Ethos-U
- Tight integration with Arm compute libraries
-## Why use Arm Fixed Virtual Platforms?
+## Why should I use Arm Fixed Virtual Platforms?
Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly-released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.
These virtual platforms also include a built-in graphical user interface (GUI) that helps you:
-- Confirm your model is executing on the intended virtual hardware
+- Confirm your model is running on the intended virtual hardware
- Visualize instruction counts
- Review total execution time
- Capture clear outputs for demos and prototypes
## What is Corstone-320?
-To simulate embedded AI workloads on Arm hardware, you’ll use the Corstone-320 Fixed Virtual Platform (FVP). This pre-silicon software development environment for Arm-based microcontrollers provides a virtual representation of hardware, allowing developers to test and optimize software before actual hardware is available. Designed for AI and machine learning workloads, it includes support for Arm's Ethos-U NPU and Cortex-M processors, making it ideal for embedded AI applications. The FVP accelerates development by enabling early software validation and performance tuning in a flexible, simulation-based environment.
+The Corstone-320 FVP is a virtual model of an Arm-based microcontroller system optimized for AI and TinyML workloads. It supports Cortex-M CPUs and the Ethos-U NPU, making it ideal for early testing, performance tuning, and validation of embedded AI applications, all before physical hardware is available.
The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation.
For more information, see the [official Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
From 6ed3e476b9732724b2503c16d1061c6e07be772d Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 12:24:08 +0100
Subject: [PATCH 25/55] Update 3-executorch-workflow.md
- Reworded section intro for smoother transition from previous content
- Reformatted pipeline steps into numbered headings for better readability
- Added descriptive alt text for diagram image
---
.../3-executorch-workflow.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
index 0a00c43dd5..eb91d9f8f6 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
@@ -10,29 +10,29 @@ layout: "learningpathall"
Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware.
-## ExecuTorch pipeline overview
+## How the ExecuTorch workflow operates
ExecuTorch works in three main stages:
-**Export the model**
+**Step 1: Export the model**
- Convert a trained PyTorch model into an operator graph.
- Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, quantize).
-**Compile with the AOT compiler**
+**Step 2: Compile with the AOT compiler**
- Translate the operator graph into an optimized, quantized format.
- Use `--delegate` to move eligible operations to the Ethos-U accelerator.
- Save the compiled output as a `.pte` file.
-**Deploy and run**
+**Step 3: Deploy and run**
- Execute the compiled model on an FVP or physical target.
- The Ethos-U NPU runs delegated operators; all others run on the Cortex-M CPU.
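The delegation in Step 2 can be pictured with a short sketch. This is purely illustrative: the operator names and the supported-operator set below are hypothetical, and the real partitioning is performed by ExecuTorch's Arm backend when you pass `--delegate`:

```python
# Illustrative sketch of operator delegation: split an operator graph
# into NPU-delegated and CPU-resident operators. The operator names
# and NPU_SUPPORTED set are hypothetical examples, not the real
# Ethos-U support list.

NPU_SUPPORTED = {"conv2d", "relu", "quantize", "dequantize"}

def partition(ops):
    """Return (delegated, cpu) operator lists for a linear graph."""
    delegated = [op for op in ops if op in NPU_SUPPORTED]
    cpu = [op for op in ops if op not in NPU_SUPPORTED]
    return delegated, cpu

graph = ["quantize", "conv2d", "relu", "softmax", "dequantize"]
delegated, cpu = partition(graph)
print(delegated)  # operators the Ethos-U NPU would run
print(cpu)        # operators left on the Cortex-M CPU
```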
## Visual overview
-
+
## What's next?
From b85d83b99917a6bffece4e2132c453b75bfa4b86 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 12:32:16 +0100
Subject: [PATCH 26/55] Update 4-env-setup-execut.md
Tightened language and made terminology more consistent.
---
.../visualizing-ethos-u-performance/4-env-setup-execut.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
index c62f966a52..1b1ef0f838 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
@@ -21,7 +21,7 @@ These instructions have been tested on:
- Ubuntu 22.04 and 24.04
- Windows Subsystem for Linux (WSL)
-Install the required system packages:
+## Install the required system packages
```bash
sudo apt update
@@ -80,4 +80,4 @@ executorch 0.8.0a0+92fb0cc
## Next steps
-Now that ExecuTorch is installed, you're ready to simulate your TinyML model on virtual Arm hardware. In the next section, you'll configure and launch a Fixed Virtual Platform.
+Now that ExecuTorch is installed, you're ready to simulate your TinyML model on an Arm Fixed Virtual Platform (FVP). In the next section, you'll configure and launch the FVP.
From a2c1c6a4bf850867176a27256c45198b6c9b7483 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 12:53:45 +0100
Subject: [PATCH 27/55] Update 5-env-setup-fvp.md
Tidying up
---
.../5-env-setup-fvp.md | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
index 01fd9b52ba..480a60faa8 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
@@ -1,6 +1,6 @@
---
# User change
-title: "Set up the Corstone-320 FVP on Linux"
+title: "Set up the Corstone-320 Fixed Virtual Platform"
weight: 5
@@ -8,7 +8,7 @@ weight: 5
layout: "learningpathall"
---
-Use the Corstone-320 Fixed Virtual Platform (FVP) to simulate an Arm-based system and run your ExecuTorch-compiled model.
+In this section, you’ll install and configure the Corstone-320 FVP to simulate an Arm-based embedded system. This lets you run ExecuTorch-compiled models in a virtual environment, with no physical hardware required.
## Install the Corstone-320 FVP
@@ -26,23 +26,27 @@ cd $HOME/executorch/examples/arm
./setup.sh --i-agree-to-the-contained-eula
```
+The `--i-agree-to-the-contained-eula` flag is required to run the script. It indicates your acceptance of Arm’s licensing terms for using the FVP.
+
This installs the FVP and extracts all necessary components. It also prints a command to configure your shell environment.
-## Add the FVP to your system path
+## Add the FVP to your system PATH
Run the following command to update your environment:
```bash
source $HOME/executorch/examples/arm/ethos-u-scratch/setup_path.sh
```
+
This ensures the FVP binaries are available in your terminal session.
## Verify your setup
Run a quick test to check that the FVP is working:
+
```bash
- ./examples/arm/run.sh --target=ethos-u85-256
+./examples/arm/run.sh --target=ethos-u85-256
```
This executes a built-in example on the Ethos-U85 configuration of the Corstone-320 platform.
@@ -56,7 +60,7 @@ On macOS, make sure Docker is running. FVPs execute inside a Docker container on
If you see example output from the platform, the setup is complete.
## Next steps
-You're now ready to deploy and run your own model using ExecuTorch and the Corstone-320 FVP.
+You’re now ready to deploy and run your own TinyML model using ExecuTorch on the Corstone-320 FVP.
From 3d6aee424819ae103da1df6f3abae7fadc8a0bac Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 12:57:45 +0100
Subject: [PATCH 28/55] Update 6-configure-fvp-gui.md
Title change and enhancements.
---
.../visualizing-ethos-u-performance/6-configure-fvp-gui.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
index f9aa0150b4..f62dd7269d 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
@@ -1,6 +1,6 @@
---
# User change
-title: "Set up the Corstone-320 FVP"
+title: "Enable GUI and deploy a model on Corstone-320 FVP"
weight: 6 # 1 is first, 2 is second, etc.
@@ -8,6 +8,8 @@ weight: 6 # 1 is first, 2 is second, etc.
layout: "learningpathall"
---
+In this section, you'll enable GUI output for the Corstone-320 FVP and deploy a real TinyML model to observe instruction counts and output in the visual interface.
+
## Find your IP address
Note down your computer's IP address:
From c062d497ba58793826622585daa7e9c1a3223c9c Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 13:52:19 +0100
Subject: [PATCH 29/55] Update 7-run-model.md
Streamlined.
---
.../visualizing-ethos-u-performance/7-run-model.md | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
index 46af4540e5..78444f0247 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
@@ -1,6 +1,6 @@
---
# User change
-title: "Run your first model with ExecuTorch"
+title: "Deploy and run MobileNet V2 on the Corstone-320 FVP"
weight: 7 # 1 is first, 2 is second, etc.
@@ -9,7 +9,7 @@ layout: "learningpathall"
---
## Deploy a TinyML Model
-Now that your environment and virtual hardware are ready, you are ready to run your first model using ExecuTorch on the Corstone-320 FVP.
+With your environment and FVP now set up, you're ready to deploy and run a real TinyML model using ExecuTorch.
## Deploy Mobilenet V2 with ExecuTorch
@@ -17,9 +17,9 @@ This example deploys the [MobileNet V2](https://pytorch.org/hub/pytorch_vision_m
The Python code for the MobileNet V2 model is in your local `executorch` repo: [executorch/examples/models/mobilenet_v2/model.py](https://github.com/pytorch/executorch/blob/main/examples/models/mobilenet_v2/model.py). You can deploy it using [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh), just like you did in the previous step, with some extra parameters:
-{{% notice macOS %}}
+{{% notice Tip %}}
-**Start Docker:** on macOS, FVPs run inside a Docker container.
+On macOS, make sure Docker is running. FVPs execute inside a Docker container.
{{% /notice %}}
@@ -31,6 +31,8 @@ The Python code for the MobileNet V2 model is in your local `executorch` repo: [
--model_name=mv2
```
+The `--model_name=mv2` flag tells `run.sh` to use the MobileNet V2 model defined in `examples/models/mobilenet_v2/model.py`.
+
**Explanation of `run.sh` parameters**
|run.sh Parameter|Meaning / Context|
|--------------|-----------------|
@@ -58,7 +60,7 @@ Total delegated subgraphs: 1
Number of delegated nodes: 419
```
-This confirms that the model was successfully compiled, deployed, and run with NPU acceleration.
+A high number of delegated nodes means that most of the model's execution was offloaded to the Ethos-U NPU. This confirms that the model was compiled, deployed, and run with NPU acceleration.
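If you capture the `run.sh` output to a file, you can extract these counts programmatically as a quick sanity check. A minimal sketch (the two log line formats are taken from the output above; the helper name is an illustrative assumption):

```python
import re

def delegation_stats(log: str) -> dict:
    """Extract delegation counts from captured run.sh output.

    Looks for the "Total delegated subgraphs" and "Number of
    delegated nodes" lines shown in the example log.
    """
    stats = {}
    m = re.search(r"Total delegated subgraphs:\s*(\d+)", log)
    if m:
        stats["subgraphs"] = int(m.group(1))
    m = re.search(r"Number of delegated nodes:\s*(\d+)", log)
    if m:
        stats["nodes"] = int(m.group(1))
    return stats

log = """Total delegated subgraphs: 1
Number of delegated nodes: 419"""
print(delegation_stats(log))  # {'subgraphs': 1, 'nodes': 419}
```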
## Next steps
If you’d like to visualize instruction counts and performance using the GUI, continue to the next (optional) section.
From c7d85d4ac38ebf7f7e71f5795a979855789699b4 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 14:03:44 +0100
Subject: [PATCH 30/55] Update 8-evaluate-output.md
Tweaks
---
.../visualizing-ethos-u-performance/8-evaluate-output.md | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
index 96f4f6602d..9b05e9ae7c 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
@@ -8,9 +8,11 @@ weight: 8 # 1 is first, 2 is second, etc.
layout: "learningpathall"
---
+Now that you've successfully run the MobileNet V2 model on the Corstone-320 FVP, this section shows how to read and interpret performance data output by ExecuTorch.
+
## Observe Ahead-of-Time Compilation
-- The below output snippet from [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh) is how you can confirm ahead-of-time compilation
-- Specifically you want to see that the original PyTorch model was converted to an ExecuTorch `.pte` file
+- The following output from [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh) confirms that Ahead-of-Time (AOT) compilation was successful.
+- Specifically, you want to confirm that the original PyTorch model was compiled into an ExecuTorch `.pte` file
- For the MobileNet V2 example, the compiled ExecuTorch file will be output as `mv2_arm_delegate_ethos-u85-128.pte`
{{% notice Note %}}
@@ -162,4 +164,4 @@ I [executorch:arm_perf_monitor.cpp:184] ethosu_pmu_cntr4 : 130
|ethosu_pmu_cntr3|External DRAM write beats(ETHOSU_PMU_EXT_WR_DATA_BEAT_WRITTEN)|Number of write data beats to external memory.|Helps detect offloading or insufficient SRAM.|
|ethosu_pmu_cntr4|Idle cycles(ETHOSU_PMU_NPU_IDLE)|Number of cycles where the NPU had no work scheduled (i.e., idle).|High idle count = possible pipeline stalls or bad scheduling.|
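A derived metric worth computing from these counters is NPU utilization. The sketch below uses the idle-cycle count from `ethosu_pmu_cntr4`; the total cycle count shown is a hypothetical value for illustration, since the total-cycles counter line is not reproduced here:

```python
def npu_utilization(total_cycles: int, idle_cycles: int) -> float:
    """Fraction of cycles the NPU spent doing useful work.

    A low value (high idle fraction) points at pipeline stalls or
    poor scheduling, matching the counter table above.
    """
    if total_cycles == 0:
        return 0.0
    return (total_cycles - idle_cycles) / total_cycles

# Idle count from the example log (130 idle cycles) with a
# hypothetical total cycle count:
print(round(npu_utilization(100_000, 130), 4))  # 0.9987
```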
-In this learning path you have successfully learned how to deploy a MobileNet V2 Model using ExecuTorch on Arm's Corstone-320 FVP.
+In this Learning Path, you have successfully learned how to deploy a MobileNet V2 model using ExecuTorch on Arm's Corstone-320 FVP. You're now ready to apply what you've learned to other models and configurations using ExecuTorch.
From b43def6480a193098c143cb4dbfa9933fd1c979f Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 15:20:26 +0100
Subject: [PATCH 31/55] Update _index.md
Used sentence case for title and added (FVP) to tags.
---
.../visualizing-ethos-u-performance/_index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
index 3fb4c492ae..02cd1fe232 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
@@ -1,5 +1,5 @@
---
-title: Visualize Ethos-U NPU Performance with ExecuTorch on Arm FVPs
+title: Visualize Ethos-U NPU performance with ExecuTorch on Arm FVPs
minutes_to_complete: 120
@@ -32,7 +32,7 @@ operatingsystems:
tools_software_languages:
- Arm Virtual Hardware
- - Fixed Virtual Platform
+ - Fixed Virtual Platform (FVP)
- Python
- PyTorch
- ExecuTorch
From df3e97f10caeed65d70dc0c37e47544d1dc933e1 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 15:22:28 +0100
Subject: [PATCH 32/55] Update 2-overview.md
Removed "official".
---
.../visualizing-ethos-u-performance/2-overview.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
index 5d329bbfde..8d9ff39605 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
@@ -47,4 +47,4 @@ These virtual platforms also include a built-in graphical user interface (GUI) t
The Corstone-320 FVP is a virtual model of an Arm-based microcontroller system optimized for AI and TinyML workloads. It supports Cortex-M CPUs and the Ethos-U NPU, making it ideal for early testing, performance tuning, and validation of embedded AI applications, all before physical hardware is available.
The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation.
-For more information, see the [official Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
+For more information, see the [Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
From 8f0e0f7a626f83d6e4a535976b43a21abe18c5d2 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 15:27:16 +0100
Subject: [PATCH 33/55] Update 3-executorch-workflow.md
Fixes
---
.../3-executorch-workflow.md | 21 +++++++++----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
index eb91d9f8f6..2a54ad9732 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
@@ -7,32 +7,31 @@ weight: 3
# Do not modify these elements
layout: "learningpathall"
---
+## How the ExecuTorch workflow operates
Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware.
-## How the ExecuTorch workflow operates
-
-ExecuTorch works in three main stages:
+ExecuTorch works in three main steps:
**Step 1: Export the model**
- - Convert a trained PyTorch model into an operator graph.
- - Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, quantize).
+ - Convert a trained PyTorch model into an operator graph
+ - Identify operators that can be offloaded to the Ethos-U NPU (for example, ReLU, conv, and quantize)
**Step 2: Compile with the AOT compiler**
- - Translate the operator graph into an optimized, quantized format.
- - Use `--delegate` to move eligible operations to the Ethos-U accelerator.
- - Save the compiled output as a `.pte` file.
+ - Translate the operator graph into an optimized, quantized format
+ - Use `--delegate` to move eligible operations to the Ethos-U accelerator
+ - Save the compiled output as a `.pte` file
**Step 3: Deploy and run**
- - Execute the compiled model on an FVP or physical target.
- - The Ethos-U NPU runs delegated operators; all others run on the Cortex-M CPU.
+ - Execute the compiled model on an FVP or physical target
+ - The Ethos-U NPU runs delegated operators; all others run on the Cortex-M CPU
## Visual overview
-
+
## What's next?
From 371a8122d1a2bae170dc68707afbeabfa342e40f Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 15:31:16 +0100
Subject: [PATCH 34/55] Update 4-env-setup-execut.md
Final
---
.../visualizing-ethos-u-performance/4-env-setup-execut.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
index 1b1ef0f838..7cfff66566 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
@@ -7,8 +7,9 @@ weight: 4
# Do not modify these elements
layout: "learningpathall"
---
+## Set up overview
-Get your development environment ready to deploy and run models with ExecuTorch.
+Before you can deploy and test models with ExecuTorch, you need to set up your local development environment. This section walks you through installing system dependencies, creating a virtual environment, and cloning the ExecuTorch repository on Ubuntu or WSL. Once complete, you'll be ready to run TinyML models on a virtual Arm platform.
## Install system dependencies
From 8529464d9215f280c0532f32533e4c73a2029ba8 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 15:35:13 +0100
Subject: [PATCH 35/55] Update 5-env-setup-fvp.md
---
.../visualizing-ethos-u-performance/5-env-setup-fvp.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
index 480a60faa8..a555620c06 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
@@ -8,6 +8,8 @@ weight: 5
layout: "learningpathall"
---
+## Get started with the Corstone-320 FVP
+
In this section, you’ll install and configure the Corstone-320 FVP to simulate an Arm-based embedded system. This lets you run ExecuTorch-compiled models in a virtual environment without any hardware required.
## Install the Corstone-320 FVP
From adc975519350f4122a47b6c33e7350b22806abb4 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 15:49:20 +0100
Subject: [PATCH 36/55] Update 6-configure-fvp-gui.md
---
.../6-configure-fvp-gui.md | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
index f62dd7269d..19234c655d 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
@@ -8,7 +8,9 @@ weight: 6 # 1 is first, 2 is second, etc.
layout: "learningpathall"
---
-In this section, you'll enable GUI output for the Corstone-320 FVP and deploy a real TinyML model to observe instruction counts and output in the visual interface.
+## Visualize model execution using the FVP GUI
+
+You’ll now enable the graphical interface for the Corstone-320 FVP and run a real TinyML model to observe instruction counts and performance output in a windowed display.
## Find your IP address
@@ -16,11 +18,11 @@ Note down your computer's IP address:
```bash
ip addr show
```
-Note down the IP address of your active network interface (inet) which you will use later to pass as an argument to the FVP.
+You'll later pass the IP address of your active network interface (inet) as an argument to the FVP.
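If you script this step, a short helper can pull the first non-loopback IPv4 address out of the `ip addr show` output. This is an illustrative sketch that assumes typical Linux `ip` output formatting:

```python
import re

def first_inet(ip_addr_output):
    """Return the first non-loopback IPv4 address found in
    `ip addr show` output, or None if there isn't one."""
    for match in re.finditer(r"inet (\d+\.\d+\.\d+\.\d+)", ip_addr_output):
        addr = match.group(1)
        if not addr.startswith("127."):
            return addr
    return None

sample = """1: lo: inet 127.0.0.1/8 scope host lo
2: eth0: inet 192.168.1.42/24 brd 192.168.1.255 scope global eth0"""
print(first_inet(sample))  # 192.168.1.42
```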
-{{% notice macOS %}}
+{{% notice Note %}}
-Note down your `en0` IP address (or whichever network adapter is active):
+On macOS, note down your `en0` IP address (or whichever network adapter is active):
```bash
ipconfig getifaddr en0 # Returns your Mac's WiFi IP address
From 720ccc87bd2c0ac68798f9004bdef4f63040bec4 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Thu, 31 Jul 2025 16:32:13 +0100
Subject: [PATCH 37/55] Update 6-configure-fvp-gui.md
---
.../6-configure-fvp-gui.md | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
index 19234c655d..27ec6a347c 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
@@ -61,12 +61,9 @@ Edit the following parameters in your locally checked out [executorch/backends/a
{{% notice macOS %}}
-- **Start Docker:** on macOS, FVPs run inside a Docker container.
-
- **Do not use Colima Docker!**
-
- - Make sure to use an [official version of Docker](https://www.docker.com/products/docker-desktop/) and not a free version like the [Colima](https://github.com/abiosoft/colima?tab=readme-ov-file) Docker container runtime
- - `run.sh` assumes Docker Desktop style networking (`host.docker.internal`) which breaks with Colima
+- **Start Docker:** on macOS, FVPs run inside a Docker container.
+- Make sure to use an [official version of Docker](https://www.docker.com/products/docker-desktop/) and not a free version like the [Colima](https://github.com/abiosoft/colima?tab=readme-ov-file) Docker container runtime
+ - `run.sh` assumes Docker Desktop style networking (`host.docker.internal`) which breaks with Colima
- Colima then breaks the FVP GUI
- **Start XQuartz:** on macOS, the FVP GUI runs using XQuartz.
From 36883f7067752826730e35b32ea30c52d8d42bc4 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 1 Aug 2025 17:37:38 +0000
Subject: [PATCH 38/55] Reordered files. Continued editing.
---
.../2-overview.md | 19 ++++++++--
.../3-executorch-workflow.md | 19 ++++++++--
.../4-env-setup-execut.md | 4 +-
.../5-env-setup-fvp.md | 7 ++--
.../{7-run-model.md => 6-run-model.md} | 2 +-
...gure-fvp-gui.md => 7-configure-fvp-gui.md} | 38 ++++++++++---------
.../8-evaluate-output.md | 4 +-
7 files changed, 61 insertions(+), 32 deletions(-)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{7-run-model.md => 6-run-model.md} (98%)
rename content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/{6-configure-fvp-gui.md => 7-configure-fvp-gui.md} (84%)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
index 8d9ff39605..23997c19c6 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
@@ -5,7 +5,7 @@ weight: 2
### FIXED, DO NOT MODIFY
layout: learningpathall
---
-## Visualize ML on embedded devices
+## Simulate and evaluate TinyML performance on Arm virtual hardware
In this section, you’ll learn how TinyML, ExecuTorch, and Arm Fixed Virtual Platforms work together to simulate embedded AI workloads before hardware is available.
@@ -13,6 +13,15 @@ Choosing the right hardware for your machine learning (ML) model starts with hav
Arm [Fixed Virtual Platforms](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms) (FVPs) let you visualize and test model performance before any physical hardware is available.
+By simulating hardware behavior at the system level, FVPs allow you to:
+
+- Benchmark inference speed and measure operator-level performance
+- Identify which operations are delegated to the NPU and which execute on the CPU
+- Validate end-to-end integration between components like ExecuTorch and Arm NN
+- Iterate faster by debugging and optimizing your workload without relying on hardware
+
+This makes FVPs a crucial tool for embedded ML workflows where precision, portability, and early validation matter.
+
## What is TinyML?
TinyML is machine learning optimized to run on low-power, resource-constrained devices such as Arm Cortex-M microcontrollers and NPUs like the Ethos-U. These models must fit within tight memory and compute budgets, making them ideal for embedded systems.
@@ -31,7 +40,7 @@ ExecuTorch provides:
- Delegation of selected operators to accelerators like Ethos-U
- Tight integration with Arm compute libraries
-## Why should I use Arm Fixed Virtual Platforms?
+## Why use Arm Fixed Virtual Platforms?
Arm Fixed Virtual Platforms (FVPs) are virtual hardware models used to simulate Arm-based systems like the Corstone-320. They allow developers to validate and tune software before silicon is available, which is especially important when targeting newly-released accelerators like the [Ethos-U85](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u85) NPU.
@@ -46,5 +55,7 @@ These virtual platforms also include a built-in graphical user interface (GUI) t
The Corstone-320 FVP is a virtual model of an Arm-based microcontroller system optimized for AI and TinyML workloads. It supports Cortex-M CPUs and the Ethos-U NPU, making it ideal for early testing, performance tuning, and validation of embedded AI applications, all before physical hardware is available.
-The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation.
-For more information, see the [Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
+The Corstone-320 reference system is free to use, but you'll need to accept the license agreement during installation. For more information, see the [Corstone-320 documentation](https://developer.arm.com/documentation/109761/0000?lang=en).
+
+## What's next?
+In the next section, you'll explore how ExecuTorch compiles and deploys models to run efficiently on simulated hardware.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
index 2a54ad9732..6ae810e4b0 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/3-executorch-workflow.md
@@ -7,9 +7,11 @@ weight: 3
# Do not modify these elements
layout: "learningpathall"
---
-## How the ExecuTorch workflow operates
+## Overview
-Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware.
+Before setting up your environment, it helps to understand how ExecuTorch processes a model and runs it on Arm-based hardware. ExecuTorch uses ahead-of-time (AOT) compilation to transform PyTorch models into optimized operator graphs that run efficiently on resource-constrained systems. The workflow supports hybrid execution across CPU and NPU cores, allowing you to profile, debug, and deploy TinyML workloads with low runtime overhead and high portability across Arm microcontrollers.
+
+## ExecuTorch in three steps
ExecuTorch works in three main steps:
@@ -29,7 +31,18 @@ ExecuTorch works in three main steps:
- Execute the compiled model on an FVP or physical target
- The Ethos-U NPU runs delegated operators - all others run on the Cortex-M CPU
-## Visual overview
+For more detail, see the [ExecuTorch documentation](https://docs.pytorch.org/executorch/stable/intro-how-it-works.html).
+
+
+## A visual overview
+
+The diagram below summarizes the ExecuTorch workflow from model export to deployment. It shows how a trained PyTorch model is transformed into an optimized, quantized format and deployed to a target system such as an Arm Fixed Virtual Platform (FVP).
+
+- On the left, the model is exported into a graph of operators, with eligible layers flagged for NPU acceleration.
+- In the center, the AOT compiler optimizes and delegates operations, producing a `.pte` file ready for deployment.
+- On the right, the model is executed on embedded Arm hardware, where delegated operators run on the Ethos-U NPU, and the rest are handled by the Cortex-M CPU.
+
+This three-step workflow ensures your TinyML models are performance-tuned and hardware-aware before deployment—even without access to physical silicon.

diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
index 7cfff66566..fa02f06a35 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/4-env-setup-execut.md
@@ -22,7 +22,7 @@ These instructions have been tested on:
- Ubuntu 22.04 and 24.04
- Windows Subsystem for Linux (WSL)
-## Install the required system packages:
+Run the following commands to install the dependencies:
```bash
sudo apt update
@@ -79,6 +79,6 @@ Expected output:
executorch 0.8.0a0+92fb0cc
```
-## Next steps
+## What's next?
Now that ExecuTorch is installed, you're ready to simulate your TinyML model on an Arm Fixed Virtual Platform (FVP). In the next section, you'll configure and launch a Fixed Virtual Platform.
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
index a555620c06..1208f92761 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/5-env-setup-fvp.md
@@ -16,9 +16,10 @@ In this section, you’ll install and configure the Corstone-320 FVP to simulate
Before you begin, make sure you’ve completed the steps in the previous section to install ExecuTorch.
-{{< notice note >}}
-On macOS, you'll need to perform additional setup to support FVP execution.
-See the [FVPs-on-Mac GitHub repo](https://github.com/Arm-Examples/FVPs-on-Mac/) for instructions before continuing.
+{{< notice Note >}}
+If you're using macOS, you need to perform additional setup to support FVP execution.
+
+See the FVPs-on-Mac GitHub repo for instructions before continuing.
{{< /notice >}}
Run the setup script provided in the ExecuTorch examples directory:
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
similarity index 98%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
index 78444f0247..6026e6218e 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-run-model.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
@@ -2,7 +2,7 @@
# User change
title: "Deploy and run Mobilenet V2 on the Corstone-320 FVP"
-weight: 7 # 1 is first, 2 is second, etc.
+weight: 6 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-configure-fvp-gui.md
similarity index 84%
rename from content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
rename to content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-configure-fvp-gui.md
index 27ec6a347c..6e6ac5cf91 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-configure-fvp-gui.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/7-configure-fvp-gui.md
@@ -2,7 +2,7 @@
# User change
title: "Enable GUI and deploy a model on Corstone-320 FVP"
-weight: 6 # 1 is first, 2 is second, etc.
+weight: 7 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
@@ -10,7 +10,7 @@ layout: "learningpathall"
## Visualize model execution using the FVP GUI
-You’ll now enable the graphical interface for the Corstone-320 FVP and run a real TinyML model to observe instruction counts and performance output in a windowed display.
+You’ve successfully deployed a model on the Corstone-320 FVP from the command line. In this step, you’ll enable the platform’s built-in graphical output and re-run the model to observe instruction-level execution metrics in a windowed display.
## Find your IP address
@@ -30,7 +30,7 @@ ipconfig getifaddr en0 # Returns your Mac's WiFi IP address
{{% /notice %}}
-## Enable the FVP's GUI
+## Configure the FVP for GUI output
Edit the following parameters in your locally checked out [executorch/backends/arm/scripts/run_fvp.sh](https://github.com/pytorch/executorch/blob/d5fe5faadb8a46375d925b18827493cd65ec84ce/backends/arm/scripts/run_fvp.sh#L97-L102) file, to enable the Mobilenet V2 output on the FVP's GUI:
@@ -59,9 +59,24 @@ Edit the following parameters in your locally checked out [executorch/backends/a
## Deploy the model
-{{% notice macOS %}}
+Now run the Mobilenet V2 computer vision model using [executorch/examples/arm/run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh):
+```bash
+./examples/arm/run.sh \
+--aot_arm_compiler_flags="--delegate --quantize --intermediates mv2_u85/ --debug --evaluate" \
+--output=mv2_u85 \
+--target=ethos-u85-128 \
+--model_name=mv2
+```
+
+Observe that the FVP loads the model file, compiles the PyTorch model to ExecuTorch `.pte` format, and then shows an instruction count in the top right of the GUI:
+
+
+
+{{% notice Note %}}
-- **Start Docker:** on macOS, FVPs run inside a Docker container.
+For macOS users, follow these instructions:
+
+- Start Docker. FVPs run inside a Docker container.
- Make sure to use an [official version of Docker](https://www.docker.com/products/docker-desktop/) and not a free version like the [Colima](https://github.com/abiosoft/colima?tab=readme-ov-file) Docker container runtime
- `run.sh` assumes Docker Desktop style networking (`host.docker.internal`) which breaks with Colima
- Colima then breaks the FVP GUI
@@ -74,16 +89,3 @@ Edit the following parameters in your locally checked out [executorch/backends/a
xhost + 127.0.0.1 # The Docker container seems to proxy through localhost
```
{{% /notice %}}
-
-Now run the Mobilenet V2 computer vision model, using [executorch/examples/arm/run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh):
-```bash
-./examples/arm/run.sh \
---aot_arm_compiler_flags="--delegate --quantize --intermediates mv2_u85/ --debug --evaluate" \
---output=mv2_u85 \
---target=ethos-u85-128 \
---model_name=mv2
-```
-
-Observe that the FVP loads the model file, compiles the PyTorch model to ExecuTorch `.pte` format and then shows an instruction count in the top right of the GUI:
-
-
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
index 9b05e9ae7c..10bbc4d2d7 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
@@ -164,4 +164,6 @@ I [executorch:arm_perf_monitor.cpp:184] ethosu_pmu_cntr4 : 130
|ethosu_pmu_cntr3|External DRAM write beats(ETHOSU_PMU_EXT_WR_DATA_BEAT_WRITTEN)|Number of write data beats to external memory.|Helps detect offloading or insufficient SRAM.|
|ethosu_pmu_cntr4|Idle cycles(ETHOSU_PMU_NPU_IDLE)|Number of cycles where the NPU had no work scheduled (i.e., idle).|High idle count = possible pipeline stalls or bad scheduling.|
-In this Learning Path, you have successfully learned how to deploy a MobileNet V2 model using ExecuTorch on Arm's Corstone-320 FVP. You're now ready to apply what you've learned to other models and configurations using ExecuTorch.
+## Summary
+
+In this Learning Path, you have learned how to deploy a MobileNet V2 model using ExecuTorch on Arm's Corstone-320 FVP. You're now ready to apply what you've learned to other models and configurations using ExecuTorch.
From 946cbfddbc359be1dee1b5a3d20838aa3831ba64 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 1 Aug 2025 17:53:26 +0000
Subject: [PATCH 39/55] Final
---
.../visualizing-ethos-u-performance/6-run-model.md | 4 +---
.../visualizing-ethos-u-performance/8-evaluate-output.md | 8 +++++---
.../visualizing-ethos-u-performance/_index.md | 1 +
3 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
index 6026e6218e..cc6b3d8e17 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/6-run-model.md
@@ -7,12 +7,10 @@ weight: 6 # 1 is first, 2 is second, etc.
# Do not modify these elements
layout: "learningpathall"
---
-## Deploy a TinyML Model
+## Deploy Mobilenet V2 with ExecuTorch
With your environment and FVP now set up, you're ready to deploy and run a real TinyML model using ExecuTorch.
-## Deploy Mobilenet V2 with ExecuTorch
-
This example deploys the [MobileNet V2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/) computer vision model. The model is a convolutional neural network (CNN) that extracts visual features from an image. It is used for image classification and object detection.
The Python code for the MobileNet V2 model is in your local `executorch` repo: [executorch/examples/models/mobilenet_v2/model.py](https://github.com/pytorch/executorch/blob/main/examples/models/mobilenet_v2/model.py). You can deploy it using [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh), just like you did in the previous step, with some extra parameters:
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
index 10bbc4d2d7..1034282f82 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/8-evaluate-output.md
@@ -8,7 +8,9 @@ weight: 8 # 1 is first, 2 is second, etc.
layout: "learningpathall"
---
-Now that you've successfully run the MobileNet V2 model on the Corstone-320 FVP, this section shows how to read and interpret performance data output by ExecuTorch.
+## Interpreting the results
+
+Now that you've successfully deployed and executed the MobileNet V2 model on the Corstone-320 FVP, this section walks you through how to interpret the resulting performance data. This includes inference time, operator delegation, and hardware-level metrics from the Ethos-U NPU.
## Observe Ahead-of-Time Compilation
- The following output from [run.sh](https://github.com/pytorch/executorch/blob/main/examples/arm/run.sh) confirms that Ahead-of-Time (AOT) compilation was successful.
@@ -17,7 +19,7 @@ Now that you've successfully run the MobileNet V2 model on the Corstone-320 FVP,
{{% notice Note %}}
-In the below sample outputs, the `executorch` directory path is indicated as `/path/to/executorch`. Your actual path will depend on where you cloned your local copy of the [executorch repo](https://github.com/pytorch/executorch/tree/main).
+In the examples below, `/path/to/executorch` represents the directory where you cloned your local copy of the [ExecuTorch repo](https://github.com/pytorch/executorch/tree/main). Replace it with your actual path when running commands or reviewing output.
{{% /notice %}}
@@ -164,6 +166,6 @@ I [executorch:arm_perf_monitor.cpp:184] ethosu_pmu_cntr4 : 130
|ethosu_pmu_cntr3|External DRAM write beats(ETHOSU_PMU_EXT_WR_DATA_BEAT_WRITTEN)|Number of write data beats to external memory.|Helps detect offloading or insufficient SRAM.|
|ethosu_pmu_cntr4|Idle cycles(ETHOSU_PMU_NPU_IDLE)|Number of cycles where the NPU had no work scheduled (i.e., idle).|High idle count = possible pipeline stalls or bad scheduling.|
-## Summary
+## Review
In this Learning Path, you have learned how to deploy a MobileNet V2 model using ExecuTorch on Arm's Corstone-320 FVP. You're now ready to apply what you've learned to other models and configurations using ExecuTorch.
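One way to act on the PMU table above is to combine active and idle cycle counters into a utilization figure. The helper below is a hypothetical sketch (the function name and the sample counter values are made up, not output from the FVP):

```shell
# Hypothetical helper: compute NPU utilization from active and idle cycle
# counters, such as those reported by the Ethos-U performance monitor.
npu_utilization() {
  awk -v a="$1" -v i="$2" 'BEGIN {
    t = a + i
    if (t == 0) print "0.00"; else printf "%.2f\n", a / t
  }'
}

# Example with made-up counter readings; a high idle share suggests
# pipeline stalls or poor scheduling, as noted in the table above.
npu_utilization 870 130   # prints: 0.87
```

Tracking this ratio across runs makes it easy to see whether a quantization or delegation change actually keeps the NPU busier.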
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
index 02cd1fe232..c8bc257324 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
@@ -38,6 +38,7 @@ tools_software_languages:
- ExecuTorch
- Arm Compute Library
- GCC
+ - Docker
further_reading:
- resource:
From 1aeec60e13e4ba36a5ade9329b97b5ba01960851 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 1 Aug 2025 20:57:07 +0000
Subject: [PATCH 40/55] Replaced index file in order to fix hugo render issues.
---
.../java-perf-flamegraph/_index.md | 70 ++++++++-----------
1 file changed, 31 insertions(+), 39 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
index 06a3c9281c..58b32435f9 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
@@ -1,57 +1,49 @@
---
title: Analyze Java Performance on Arm servers using FlameGraphs
-
-draft: true
-cascade:
- draft: true
-
minutes_to_complete: 30
-who_is_this_for: This is an introductory topic for software developers looking to analyze the performance of their Java applications on the Arm Neoverse based servers using flame graphs.
+who_is_this_for: "This is an introductory topic for software developers looking to analyze the performance of their Java applications on the Arm Neoverse based servers using flame graphs."
learning_objectives:
- - How to set up tomcat benchmark environment
- - How to generate flame graphs for Java applications using async-profiler
- - How to generate flame graphs for Java applications using Java agent
+ - Set up a Tomcat benchmarking environment
+ - Generate flame graphs using async-profiler
+ - Generate flame graphs using a Java agent
prerequisites:
- - An Arm-based and x86 computer running Ubuntu. You can use a server instance from a cloud service provider of your choice.
- - Basic familiarity with Java applications and flame graphs
+ - "An Arm-based and x86 computer running Ubuntu. You can use a server instance from a cloud service provider of your choice."
+ - Basic familiarity with Java applications and flame graphs
-author: Ying Yu, Martin Ma
+author:
+ - Ying Yu
+ - Martin Ma
-### Tags
+# Tags
skilllevels: Introductory
subjects: Performance and Architecture
armips:
- - Neoverse
-
+ - Neoverse
+
tools_software_languages:
- - OpenJDK-21
- - Tomcat
- - Async-profiler
- - FlameGraph
- - wrk2
-operatingsystems:
- - Linux
+ - OpenJDK-21
+ - Tomcat
+ - Async-profiler
+ - FlameGraph
+ - wrk2
+operatingsystems:
+ - Linux
further_reading:
- - resource:
- title: OpenJDK Wiki
- link: https://wiki.openjdk.org/
- type: documentation
- - resource:
- title: Java FlameGraphs
- link: https://www.brendangregg.com/flamegraphs.html
- type: website
-
-
-
-
-### FIXED, DO NOT MODIFY
-# ================================================================================
-weight: 1 # _index.md always has weight of 1 to order correctly
-layout: "learningpathall" # All files under learning paths have this same wrapper
-learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+ - resource:
+ title: OpenJDK Wiki
+ link: https://wiki.openjdk.org/
+ type: documentation
+ - resource:
+ title: Java FlameGraphs
+ link: https://www.brendangregg.com/flamegraphs.html
+ type: website
+
+weight: 1
+layout: "learningpathall"
+learning_path_main_page: "yes"
---
From e0757e949f08530a643b11eea954563c466bda86 Mon Sep 17 00:00:00 2001
From: GitHub Actions Stats Bot <>
Date: Mon, 4 Aug 2025 01:30:25 +0000
Subject: [PATCH 41/55] automatic update of stats files
---
data/stats_current_test_info.yml | 3 +-
data/stats_weekly_data.yml | 121 ++++++++++++++++++++++++++++++-
2 files changed, 121 insertions(+), 3 deletions(-)
diff --git a/data/stats_current_test_info.yml b/data/stats_current_test_info.yml
index 5e020828c9..95124a0643 100644
--- a/data/stats_current_test_info.yml
+++ b/data/stats_current_test_info.yml
@@ -1,5 +1,5 @@
summary:
- content_total: 391
+ content_total: 393
content_with_all_tests_passing: 0
content_with_tests_enabled: 61
sw_categories:
@@ -196,4 +196,3 @@ sw_categories:
zlib:
readable_title: Learn how to build and use Cloudflare zlib on Arm servers
tests_and_status: []
-
diff --git a/data/stats_weekly_data.yml b/data/stats_weekly_data.yml
index 12463a4ab0..f396902423 100644
--- a/data/stats_weekly_data.yml
+++ b/data/stats_weekly_data.yml
@@ -7011,4 +7011,123 @@
issues:
avg_close_time_hrs: 0
num_issues: 21
- percent_closed_vs_total: 0.0
\ No newline at end of file
+ percent_closed_vs_total: 0.0
+- a_date: '2025-08-04'
+ content:
+ automotive: 3
+ cross-platform: 34
+ embedded-and-microcontrollers: 43
+ install-guides: 105
+ iot: 6
+ laptops-and-desktops: 38
+ mobile-graphics-and-gaming: 35
+ servers-and-cloud-computing: 129
+ total: 393
+ contributions:
+ external: 98
+ internal: 519
+ github_engagement:
+ num_forks: 30
+ num_prs: 18
+ individual_authors:
+ adnan-alsinan: 2
+ alaaeddine-chakroun: 2
+ albin-bernhardsson: 1
+ albin-bernhardsson,-julie-gaskin: 1
+ alex-su: 1
+ alexandros-lamprineas: 1
+ andrew-choi: 2
+ andrew-kilroy: 1
+ annie-tallund: 4
+ arm: 3
+ arnaud-de-grandmaison: 5
+ aude-vuilliomenet: 1
+ avin-zarlez: 1
+ barbara-corriero: 1
+ basma-el-gaabouri: 1
+ ben-clark: 1
+ bolt-liu: 2
+ brenda-strech: 1
+ bright-edudzi-gershon-kordorwu: 1
+ chaodong-gong: 1
+ chen-zhang: 1
+ chenying-kuo: 1
+ christophe-favergeon: 1
+ christopher-seidl: 7
+ cyril-rohr: 1
+ daniel-gubay: 1
+ daniel-nguyen: 2
+ david-spickett: 2
+ dawid-borycki: 33
+ diego-russo: 2
+ dominica-abena-o.-amanfo: 1
+ elham-harirpoush: 2
+ florent-lebeau: 5
+ "fr\xE9d\xE9ric--lefred--descamps": 2
+ gabriel-peterson: 5
+ gayathri-narayana-yegna-narayanan: 2
+ georgios-mermigkis: 1
+ geremy-cohen: 3
+ gian-marco-iodice: 1
+ graham-woodward: 1
+ han-yin: 1
+ iago-calvo-lista: 1
+ james-whitaker: 1
+ jason-andrews: 105
+ jeff-young: 1
+ joana-cruz: 1
+ joe-stech: 6
+ johanna-skinnider: 2
+ jonathan-davies: 2
+ jose-emilio-munoz-lopez: 1
+ julie-gaskin: 5
+ julien-jayat: 1
+ julien-simon: 1
+ julio-suarez: 6
+ jun-he: 1
+ kasper-mecklenburg: 1
+ kieran-hejmadi: 12
+ koki-mitsunami: 2
+ konstantinos-margaritis: 8
+ kristof-beyls: 1
+ leandro-nunes: 1
+ liliya-wu: 1
+ mark-thurman: 1
+ masoud-koleini: 1
+ mathias-brossard: 1
+ michael-hall: 5
+ na-li: 1
+ nader-zouaoui: 2
+ nikhil-gupta: 1
+ nina-drozd: 1
+ nobel-chowdary-mandepudi: 6
+ odin-shen: 9
+ owen-wu: 2
+ pareena-verma: 46
+ paul-howard: 3
+ peter-harris: 1
+ pranay-bakre: 5
+ preema-merlin-dsouza: 1
+ przemyslaw-wirkus: 2
+ qixiang-xu: 1
+ rani-chowdary-mandepudi: 1
+ rin-dobrescu: 1
+ roberto-lopez-mendez: 2
+ ronan-synnott: 45
+ shuheng-deng: 1
+ thirdai: 1
+ tianyu-li: 2
+ tom-pilar: 1
+ uma-ramalingam: 1
+ varun-chari: 2
+ visualsilicon: 1
+ willen-yang: 1
+ william-liang: 1
+ ying-yu: 2
+ yiyang-fan: 1
+ zach-lasiuk: 2
+ zhengjun-xing: 2
+ issues:
+ avg_close_time_hrs: 0
+ num_issues: 26
+ percent_closed_vs_total: 0.0
From 11d65897ffe7d40eac97735aaa14c2574e55a5c3 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 4 Aug 2025 16:06:31 +0000
Subject: [PATCH 42/55] Updates
---
.../java-perf-flamegraph/1_setup.md | 44 ++++++++++++-------
.../java-perf-flamegraph/_index.md | 14 +++---
2 files changed, 35 insertions(+), 23 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
index 6fbe8aeb81..59bad6c2bc 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
@@ -1,5 +1,5 @@
---
-title: Setup Tomcat Benchmark Environment
+title: Set up Tomcat benchmark environment
weight: 2
### FIXED, DO NOT MODIFY
@@ -8,43 +8,51 @@ layout: learningpathall
## Overview
-There are numerous performance analysis methods and tools for Java applications, among which the call stack flame graph method is regarded as a conventional entry-level approach. Therefore, generating flame graphs is considered a basic operation.
-Various methods and tools are available for generating Java flame graphs, including `async-profiler`, `Java Agent`, `jstack`, `JFR` (Java Flight Recorder), etc.
-This Learning Path focuses on introducing two simple and easy-to-use methods: `async-profiler` and `Java Agent`.
+Flame graphs are a widely used entry point for analyzing Java application performance. Various methods and tools are available for generating Java flame graphs, including `async-profiler`, `Java Agent`, `jstack`, and `JFR` (Java Flight Recorder). This Learning Path focuses on two practical approaches: using `async-profiler` and a Java agent.
-## Setup Benchmark Server - Tomcat
-- [Apache Tomcat](https://tomcat.apache.org/) is an open-source Java Servlet container that enables running Java web applications, handling HTTP requests and serving dynamic content.
-- As a core component in Java web development, Apache Tomcat supports Servlet, JSP, and WebSocket technologies, providing a lightweight runtime environment for web apps.
+In this section, you'll set up a benchmark environment using Apache Tomcat and `wrk2` to simulate HTTP load and evaluate performance on an Arm-based server.
+
+## Set up the Tomcat benchmark server
+[Apache Tomcat](https://tomcat.apache.org/) is an open-source Java Servlet container that runs Java web applications, handles HTTP requests, and serves dynamic content. As a core component in Java web development, Apache Tomcat supports Servlet, JSP, and WebSocket technologies, providing a lightweight runtime environment for web apps.
+
+## Install the Java Development Kit (JDK)
+
+On your **Arm-based Ubuntu server**, install OpenJDK 21:
-1. Start by installing Java Development Kit (JDK) on your Arm-based server running Ubuntu:
```bash
sudo apt update
sudo apt install -y openjdk-21-jdk
```
-2. Next, you can install Tomcat by either [building it from source](https://github.com/apache/tomcat) or downloading the pre-built package simply from [the official website](https://tomcat.apache.org/whichversion.html)
+## Install Tomcat
+
+You can either build Tomcat [from source](https://github.com/apache/tomcat) or download the pre-built package from [the Apache Tomcat website](https://tomcat.apache.org/whichversion.html):
+
```bash
wget -c https://dlcdn.apache.org/tomcat/tomcat-11/v11.0.9/bin/apache-tomcat-11.0.9.tar.gz
tar xzf apache-tomcat-11.0.9.tar.gz
```
-3. If you intend to access the built-in examples of Tomcat via an intranet IP or even an external IP, you need to modify a configuration file as shown:
+## Enable access to Tomcat examples
+
+To access the built-in examples from your local network or an external IP, modify the `context.xml` file:
+
```bash
vi apache-tomcat-11.0.9/webapps/examples/META-INF/context.xml
```
-Then change the allow value as shown and save the changes:
+Update the `RemoteAddrValve` configuration to allow all IPs:
```output
# change
#   <Valve className="org.apache.catalina.valves.RemoteAddrValve"
#          allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1"/>
# to
#   <Valve className="org.apache.catalina.valves.RemoteAddrValve"
#          allow=".*"/>
```
-Now you can start Tomcat Server:
+## Start the Tomcat server
```bash
./apache-tomcat-11.0.9/bin/startup.sh
```
-The output from starting the server should look like:
+You should see output like:
```output
Using CATALINA_BASE: /home/ubuntu/apache-tomcat-11.0.9
@@ -56,11 +64,15 @@ Using CATALINA_OPTS:
Tomcat started.
```
-4. If you can access the page at "http://${tomcat_ip}:8080/examples" via a browser, you can proceed to the next benchmarking step.
+## Confirm server access
+
+In your browser, open `http://${tomcat_ip}:8080/examples`.
+
+You should see the Tomcat welcome page and examples, as shown below:
-
+
-
+
Make sure port 8080 is open in the security group of the IP address for your Arm-based Linux machine.
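The port-8080 check above can be made scriptable by probing the examples URL with `curl` and interpreting the HTTP status code. The helper below is a sketch (the function name and messages are assumptions); `000` is what `curl -w '%{http_code}'` reports when no connection could be made:

```shell
# Interpret the HTTP status code returned by a probe of the Tomcat examples page.
interpret_status() {
  case "$1" in
    200) echo "reachable" ;;      # Tomcat examples served correctly
    000) echo "unreachable" ;;    # no response: Tomcat down or port 8080 closed
    *)   echo "http-$1" ;;        # served, but with an unexpected status
  esac
}

# In practice (tomcat_ip is a placeholder for your server's address):
#   code=$(curl -s -o /dev/null -w '%{http_code}' "http://${tomcat_ip}:8080/examples/" || echo 000)
#   interpret_status "$code"
interpret_status 200   # prints: reachable
```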
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
index 58b32435f9..3b563dcca6 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
@@ -2,16 +2,16 @@
title: Analyze Java Performance on Arm servers using FlameGraphs
minutes_to_complete: 30
-who_is_this_for: "This is an introductory topic for software developers looking to analyze the performance of their Java applications on the Arm Neoverse based servers using flame graphs."
+who_is_this_for: This is an introductory topic for developers who want to analyze the performance of Java applications on Arm Neoverse-based servers using FlameGraphs.
learning_objectives:
- - Set up a Tomcat benchmarking environment
- - Generate flame graphs using async-profiler
- - Generate flame graphs using a Java agent
+ - Set up a benchmarking environment using Tomcat and wrk2
+ - Generate FlameGraphs using async-profiler
+ - Generate FlameGraphs using a Java agent
prerequisites:
- - "An Arm-based and x86 computer running Ubuntu. You can use a server instance from a cloud service provider of your choice."
- - Basic familiarity with Java applications and flame graphs
+ - Access to both Arm-based and x86-based computers running Ubuntu (you can use cloud-based server instances)
+ - Basic familiarity with Java applications and performance profiling using FlameGraphs
author:
- Ying Yu
@@ -26,7 +26,7 @@ armips:
tools_software_languages:
- OpenJDK-21
- Tomcat
- - Async-profiler
+ - async-profiler
- FlameGraph
- wrk2
From 10bc3e92bf8818bd9e29c10752c2c05b20ab48ca Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 4 Aug 2025 19:48:11 +0000
Subject: [PATCH 43/55] Updates
---
.../java-perf-flamegraph/1_setup.md | 63 ++++++++++++-------
.../java-perf-flamegraph/2_async-profiler.md | 44 +++++++++----
.../java-perf-flamegraph/3_agent.md | 16 +++--
3 files changed, 85 insertions(+), 38 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
index 59bad6c2bc..f1bd37caf6 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
@@ -9,16 +9,16 @@ layout: learningpathall
## Overview
-Flame graphs are a widely used entry point for analyzing Java application performance. Various methods and tools are available for generating Java flame graphs, including `async-profiler`, `Java Agent`, `jstack`, and `JFR` (Java Flight Recorder). This Learning Path focuses on two practical approaches: using `async-profiler` and a Java agent.
+Flame graphs are a widely used entry point for analyzing Java application performance. Tools for generating flame graphs include`async-profiler`, Java agents, `jstack`, and Java Flight Recorder (JFR)). This Learning Path focuses on two practical approaches: using `async-profiler` and a Java agent.
In this section, you'll set up a benchmark environment using Apache Tomcat and `wrk2` to simulate HTTP load and evaluate performance on an Arm-based server.
## Set up the Tomcat benchmark server
-[Apache Tomcat](https://tomcat.apache.org/) is an open-source Java Servlet container that runs Java web applications, handles HTTP requests, and serves dynamic content. As a core component in Java web development, Apache Tomcat supports Servlet, JSP, and WebSocket technologies, providing a lightweight runtime environment for web apps.
+[Apache Tomcat](https://tomcat.apache.org/) is an open-source Java Servlet container that runs Java web applications, handles HTTP requests, and serves dynamic content. It supports technologies such as Servlet, JSP, and WebSocket.
## Install the Java Development Kit (JDK)
-On your **Arm-based Ubuntu server**, install OpenJDK 21:
+Install OpenJDK 21 on your Arm-based Ubuntu server:
```bash
sudo apt update
@@ -27,27 +27,33 @@ sudo apt install -y openjdk-21-jdk
## Install Tomcat
-You can either build Tomcat [from source](https://github.com/apache/tomcat) or download the pre-built package from [the Tomcat Apache website](https://tomcat.apache.org/whichversion.html):
+Download and extract Tomcat:
```bash
wget -c https://dlcdn.apache.org/tomcat/tomcat-11/v11.0.9/bin/apache-tomcat-11.0.9.tar.gz
tar xzf apache-tomcat-11.0.9.tar.gz
```
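Optionally, verify the tarball against the `.sha512` checksum that Apache publishes alongside each release. The mechanics of `sha512sum -c` are illustrated below with a local stand-in file (illustrative only; for the real check you would download `apache-tomcat-11.0.9.tar.gz.sha512` and verify the actual tarball):

```bash
# Illustrative: create a file, record its checksum, then verify it.
# For Tomcat, run "sha512sum -c" against the downloaded .sha512 file instead.
echo "sample data" > sample.txt
sha512sum sample.txt > sample.txt.sha512
sha512sum -c sample.txt.sha512
# prints: sample.txt: OK
```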
+Alternatively, you can build Tomcat [from source](https://github.com/apache/tomcat).
## Enable access to Tomcat examples
-To access the built-in examples from your local network or external IP, modify the context.xml file:
+To access the built-in examples from your local network or external IP, modify the `context.xml` file:
```bash
vi apache-tomcat-11.0.9/webapps/examples/META-INF/context.xml
```
Update the `RemoteAddrValve` configuration to allow all IPs:
-```output
-# change
-# to
+
+
+
+
+
-```
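As an illustrative sketch (based on the default `context.xml` shipped with recent Tomcat releases; verify against your own copy), the `RemoteAddrValve` allows only loopback addresses by default, and setting `allow` to `.*` opens access to all client IPs. This is appropriate only for temporary benchmarking, not for production:

```xml
<!-- Default: only loopback addresses are allowed -->
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />

<!-- For benchmarking: allow all client IPs (do not use in production) -->
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow=".*" />
```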
-## Start the Tomcat Server:
+
+## Start the Tomcat server:
+
+Start the server:
+
```bash
./apache-tomcat-11.0.9/bin/startup.sh
```
@@ -66,44 +72,57 @@ Tomcat started.
## Confirm server access
-In your browser, open `http://${tomcat_ip}:8080/examples`
+In your browser, open:
+
+`http://${tomcat_ip}:8080/examples`
You should see the Tomcat welcome page and examples, as shown below:
-
+
+
+
-
+{{% notice Note %}}Make sure port 8080 is open in the security group of the IP address for your Arm-based Linux machine.{{% /notice%}}
-Make sure port 8080 is open in the security group of the IP address for your Arm-based Linux machine.
+## Set up the benchmarking client using wrk2
+[wrk2](https://github.com/giltene/wrk2) is a high-performance HTTP benchmarking tool that generates constant-throughput load and measures latency percentiles for web services. An enhanced version of `wrk`, it provides accurate latency statistics under controlled request rates, making it well suited to performance testing HTTP servers.
-## Setup Benchmark Client - [wrk2](https://github.com/giltene/wrk2)
-`wrk2` is a high-performance HTTP benchmarking tool specialized in generating constant throughput loads and measuring latency percentiles for web services. `wrk2` is an enhanced version of `wrk` that provides accurate latency statistics under controlled request rates, ideal for performance testing of HTTP servers.
+{{% notice Note %}}
+Currently `wrk2` is only supported on x86 machines. Run the benchmark client steps below on an `x86_64` server running Ubuntu.
+{{%/notice%}}
-Currently `wrk2` is only supported on x86 machines. You will run the Benchmark Client steps shown below on an x86_64 server running Ubuntu.
+## Install dependencies
+Install the required packages:
-1. To use `wrk2`, you will need to install some essential tools before you can build it:
```bash
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev git zlib1g-dev
```
-2. Now you can clone and build it from source:
+## Clone and build wrk2
+
+Clone the repository and compile the tool:
+
```bash
sudo git clone https://github.com/giltene/wrk2.git
cd wrk2
sudo make
```
-Move the executable to somewhere in your PATH:
+
+Move the binary to a directory in your system’s PATH:
```bash
sudo cp wrk /usr/local/bin
```
-3. Finally, you can run the benchmark of Tomcat through wrk2.
+## Run the benchmark
+
+Use the following command to benchmark the HelloWorld servlet running on Tomcat:
+
```bash
wrk -c32 -t16 -R50000 -d60 http://${tomcat_ip}:8080/examples/servlets/servlet/HelloWorldExample
```
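The flags in this command can be read as follows (an illustrative sketch that just echoes the equivalent invocation; `TOMCAT_IP` is a placeholder address):

```bash
CONNECTIONS=32          # -c32: TCP connections kept open during the test
THREADS=16              # -t16: benchmarking threads
RATE=50000              # -R50000: target constant throughput, in requests per second
DURATION=60             # -d60: test length in seconds
TOMCAT_IP="192.0.2.10"  # placeholder for your server address
echo "wrk -c${CONNECTIONS} -t${THREADS} -R${RATE} -d${DURATION} http://${TOMCAT_IP}:8080/examples/servlets/servlet/HelloWorldExample"
```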
-Shown below is the output of wrk2:
+You should see output similar to:
```console
Running 1m test @ http://172.26.203.139:8080/examples/servlets/servlet/HelloWorldExample
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md
index 5346d45fac..cd1f236620 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/2_async-profiler.md
@@ -1,32 +1,54 @@
---
-title: Java FlameGraph - Async-profiler
+title: Generate Java flame graphs using async-profiler
weight: 3
### FIXED, DO NOT MODIFY
layout: learningpathall
---
-## Java Flame Graph Generation using [async-profiler](https://github.com/async-profiler/async-profiler)
-`async-profiler` is a low-overhead sampling profiler for JVM applications, capable of capturing CPU, allocation, and lock events to generate actionable performance insights.
-A lightweight tool for Java performance analysis, `async-profiler` produces flame graphs and detailed stack traces with minimal runtime impact, suitable for production environments. In this section, you will learn how to install and use it to profile your Tomcat instance being benchmarked.
+## Overview
+
+[Async-profiler](https://github.com/async-profiler/async-profiler) is a low-overhead sampling profiler for JVM applications. It can capture CPU usage, memory allocations, and lock events to generate flame graphs and detailed stack traces.
+
+
+This tool is well-suited for production environments due to its minimal runtime impact. In this section, you'll install and run `async-profiler` to analyze performance on your Tomcat instance under benchmark load.
+
+{{%notice Note%}}
+Install and run `async-profiler` on the same Arm-based Linux machine where Tomcat is running to ensure accurate profiling.
+{{%/notice%}}
+
+## Install async-profiler
+
+Download and extract the latest release:
-You should deploy `async-profiler` on the same Arm Linux machine where Tomcat is running to ensure accurate performance profiling.
-1. Download async-profiler-4.0 and uncompress
```bash
wget -c https://github.com/async-profiler/async-profiler/releases/download/v4.0/async-profiler-4.0-linux-arm64.tar.gz
tar xzf async-profiler-4.0-linux-arm64.tar.gz
```
-2. Run async-profiler to profile the Tomcat instance under benchmarking
+## Run the profiler
+
+Navigate to the profiler binary directory:
+
```bash
cd async-profiler-4.0-linux-arm64/bin
-./asprof -d 10 -f profile.html $(jps | awk /Bootstrap/'{print $1}')
```
-You can also run:
+Run async-profiler against the Tomcat process:
+
+```bash
+./asprof -d 10 -f profile.html $(jps | awk /Bootstrap/'{print $1}')
```
+Alternatively, if you already know the process ID (PID):
+
+```bash
./asprof -d 10 -f profile.html ${tomcat_process_id}
```
+* `-d 10` sets the profiling duration to 10 seconds
+
+* `-f profile.html` specifies the output file
+
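The `jps | awk` pipeline above extracts the PID of Tomcat's `Bootstrap` process. The quoting style shown works in bash, but the more conventional form keeps the pattern and action inside one quoted awk program; a quick simulation with canned `jps` output:

```bash
# Simulated jps output (two Java processes); only Bootstrap's PID is wanted.
jps_output="12345 Bootstrap
67890 Jps"
echo "$jps_output" | awk '/Bootstrap/{print $1}'
# prints: 12345
```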
+## View the flame graph
-3. Now launch `profile.html` in a browser to analyse your profiling result
+Open the generated `profile.html` file in a browser to view your Java flame graph:
-
+
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
index 96ff1ea117..c3e85843d9 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
@@ -1,5 +1,5 @@
---
-title: Java FlameGraph - Java Agent
+title: Generate Java flame graphs using a Java agent
weight: 4
@@ -7,10 +7,16 @@ weight: 4
layout: learningpathall
---
-## Java Flame Graph Generation using Java agent and perf
-To profile a Java application with perf and ensure proper symbol resolution, you must include `libperf-jvmti.so` when launching the Java application.
-- `libperf-jvmti.so` is a JVM TI agent library enabling perf to resolve Java symbols, facilitating accurate profiling of Java applications.
-- A specialized shared library, `libperf-jvmti.so` bridges perf and the JVM, enabling proper translation of memory addresses to Java method names during profiling.
+## Overview
+
+
+You can profile a Java application using `perf` by including a Java agent that enables symbol resolution. This allows `perf` to capture meaningful method names instead of raw memory addresses.
+
+The required library is `libperf-jvmti.so`, a JVM Tool Interface (JVMTI) agent that bridges `perf` and the JVM. It ensures that stack traces collected during profiling can be accurately resolved to Java methods.
+
+In this section, you'll configure Tomcat to use this Java agent and generate a flame graph using the FlameGraph toolkit.
+
+## Locate the Java agent
1. Find where `libperf-jvmti.so` is installed on your Arm-based Linux server:
```bash
From 001003f1b53106aece93df4c44a4fc617e924c1d Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 4 Aug 2025 21:18:17 +0000
Subject: [PATCH 44/55] Updates
---
.../java-perf-flamegraph/1_setup.md | 9 +++--
.../java-perf-flamegraph/3_agent.md | 36 ++++++++++++++-----
.../java-perf-flamegraph/_index.md | 10 +++---
3 files changed, 36 insertions(+), 19 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
index f1bd37caf6..9c3db78611 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
@@ -9,7 +9,7 @@ layout: learningpathall
## Overview
-Flame graphs are a widely used entry point for analyzing Java application performance. Tools for generating flame graphs include`async-profiler`, Java agents, `jstack`, and Java Flight Recorder (JFR)). This Learning Path focuses on two practical approaches: using `async-profiler` and a Java agent.
+Flame graphs are a widely used entry point for analyzing Java application performance. Tools for generating flame graphs include `async-profiler`, Java agents, `jstack`, and Java Flight Recorder (JFR). This Learning Path focuses on two practical approaches: using `async-profiler` and a Java agent.
In this section, you'll set up a benchmark environment using Apache Tomcat and `wrk2` to simulate HTTP load and evaluate performance on an Arm-based server.
@@ -50,7 +50,7 @@ Update the `RemoteAddrValve` configuration to allow all IPs:
-## Start the Tomcat server:
+## Start the Tomcat server
Start the server:
@@ -72,9 +72,7 @@ Tomcat started.
## Confirm server access
-In your browser, open:
-
-`http://${tomcat_ip}:8080/examples`
+In your browser, open: `http://${tomcat_ip}:8080/examples`.
You should see the Tomcat welcome page and examples, as shown below:
@@ -111,6 +109,7 @@ sudo make
```
Move the binary to a directory in your system’s PATH:
+
```bash
sudo cp wrk /usr/local/bin
```
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
index c3e85843d9..c2de6d84a4 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/3_agent.md
@@ -9,7 +9,6 @@ layout: learningpathall
## Overview
-
You can profile a Java application using `perf` by including a Java agent that enables symbol resolution. This allows `perf` to capture meaningful method names instead of raw memory addresses.
The required library is `libperf-jvmti.so`, a JVM Tool Interface (JVMTI) agent that bridges `perf` and the JVM. It ensures that stack traces collected during profiling can be accurately resolved to Java methods.
@@ -18,37 +17,56 @@ In this section, you'll configure Tomcat to use this Java agent and generate a f
## Locate the Java agent
-1. Find where `libperf-jvmti.so` is installed on your Arm-based Linux server:
+Locate the `libperf-jvmti.so` library:
+
```bash
pushd /usr/lib
find . -name libperf-jvmti.so
```
-The output will show the path of the library that you will then include in your Tomcat setup file:
+The output will show the path to the shared object file. Note it for the next step.
+
+## Modify Tomcat configuration
+
+Open the Tomcat launch script:
+
```bash
vi apache-tomcat-11.0.9/bin/catalina.sh
```
-Add JAVA_OPTS="$JAVA_OPTS -agentpath:/usr/lib/linux-tools-6.8.0-63/libperf-jvmti.so -XX:+PreserveFramePointer" to `catalina.sh`. Make sure the path matches the location on your machine from the previous step.
+Add the following line (replace the path if different on your system):
+```bash
+JAVA_OPTS="$JAVA_OPTS -agentpath:/usr/lib/linux-tools-6.8.0-63/libperf-jvmti.so -XX:+PreserveFramePointer"
+```
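Before restarting, it's worth confirming that the agent library actually exists at the path you added. The `linux-tools` directory name varies with kernel version, so the path below is an example and may differ on your system:

```bash
# Hypothetical path; substitute the result of the "find" step above.
AGENT=/usr/lib/linux-tools-6.8.0-63/libperf-jvmti.so
if [ -f "$AGENT" ]; then
    echo "agent found: $AGENT"
else
    echo "agent missing; update the -agentpath value in catalina.sh"
fi
```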
Now shut down and restart Tomcat:
+
```bash
cd apache-tomcat-11.0.9/bin
./shutdown.sh
./startup.sh
```
-2. Use perf to profile Tomcat, and restart wrk that running on your x86 instance if necessary:
+## Run perf to record profiling data
+
+Run the following command to record a 10-second profile of the Tomcat process:
+
```bash
sudo perf record -g -k1 -p $(jps | awk /Bootstrap/'{print $1}') -- sleep 10
```
-This command will record the collected data in a file named `perf.data`
+This generates a file named `perf.data`. In this command, `-g` records call graphs, `-k1` selects clockid 1 (CLOCK_MONOTONIC) so that `perf inject` can later correlate JIT timestamps, `-p` attaches to the Tomcat process, and `sleep 10` bounds the recording to 10 seconds.
+
+If needed, restart `wrk` on your x86 client to generate load during profiling.
+
+## Generate a flame graph
+
+Clone the FlameGraph repository and add it to your PATH:
-3. Convert the collected `perf.data` into a Java flame graph using FlameGraph
```bash
git clone https://github.com/brendangregg/FlameGraph.git
export PATH=$PATH:`pwd`/FlameGraph
# Inject JIT symbol information first, then fold the stacks into a flame graph.
sudo perf inject -j -i perf.data -o perf.data.jitted
sudo perf script -i perf.data.jitted | stackcollapse-perf.pl | flamegraph.pl > profile.svg
```
+## View the result
-4. You can now successfully launch `profile.svg` in a browser to analyse the profiling result
+You can now launch `profile.svg` in a browser to analyze the profiling result:
-
+
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
index 3b563dcca6..b675188fa3 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/_index.md
@@ -1,17 +1,17 @@
---
-title: Analyze Java Performance on Arm servers using FlameGraphs
+title: Analyze Java performance on Arm servers using flame graphs
minutes_to_complete: 30
-who_is_this_for: This is an introductory topic for developers who want to analyze the performance of Java applications on the Arm Neoverse-based servers using FlameGraphs.
+who_is_this_for: This is an introductory topic for developers who want to analyze the performance of Java applications on Arm Neoverse-based servers using flame graphs.
learning_objectives:
- Set up a benchmarking environment using Tomcat and wrk2
- - Generate FlameGraphs using async-profiler
- - Generate FlameGraphs using a Java agent
+ - Generate flame graphs using async-profiler
+ - Generate flame graphs using a Java agent
prerequisites:
- Access to both Arm-based and x86-based computers running Ubuntu (you can use cloud-based server instances)
- - Basic familiarity with Java applications and performance profiling using FlameGraphs
+ - Basic familiarity with Java applications and performance profiling using flame graphs
author:
- Ying Yu
From 2ce6f68e7c83f87eaf908d64ae8565b5a40b6f78 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 4 Aug 2025 21:30:38 +0000
Subject: [PATCH 45/55] Final tweaks
---
.../java-perf-flamegraph/1_setup.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
index 9c3db78611..e18fb38d9b 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
@@ -76,9 +76,9 @@ In your browser, open: `http://${tomcat_ip}:8080/examples`.
You should see the Tomcat welcome page and examples, as shown below:
-
+
-
+
{{% notice Note %}}Make sure port 8080 is open in the security group of the IP address for your Arm-based Linux machine.{{% /notice%}}
From eec8379e154667a5e968e978fbcaad0fff61b0b5 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 5 Aug 2025 11:11:52 +0000
Subject: [PATCH 46/55] Polished Learning Path metadata: split objectives,
improved readability, aligned tags and titles
---
.../azure-vm/_index.md | 27 +++++++++----------
1 file changed, 13 insertions(+), 14 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
index 357a8bdcd5..58aa33065f 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
@@ -1,23 +1,22 @@
---
title: Create an Azure Linux 3.0 virtual machine with Cobalt 100 processors
-draft: true
-cascade:
- draft: true
-
minutes_to_complete: 120
-who_is_this_for: This Learning Path explains how to create a virtual machine on Azure running Azure Linux 3.0 on Cobalt 100 processors.
+who_is_this_for: This is an advanced topic for developers who want to run Azure Linux 3.0 on Arm-based Cobalt 100 processors in a custom virtual machine.
learning_objectives:
- - Use QEMU to create a raw disk image, boot a VM using an Aarch64 ISO, install the OS, and convert the raw disk image to VHD format.
- - Upload the VHD file to Azure and use the Azure Shared Image Gallery (SIG) to create a custom image.
- - Use the Azure CLI to create an Azure Linux 3.0 VM for Arm, using the custom image from the Azure SIG.
+ - Use QEMU to create a raw disk image
+ - Boot a virtual machine using an AArch64 ISO and install Azure Linux 3.0
+ - Convert the raw disk image to VHD format
+ - Upload the VHD file to Azure
+ - Use Azure Shared Image Gallery (SIG) to create a custom image
+ - Create an Azure Linux 3.0 virtual machine on Arm using the Azure CLI and the custom image
prerequisites:
- - A [Microsoft Azure](https://azure.microsoft.com/) account with permission to create resources, including instances using Cobalt 100 processors.
- - A Linux machine with [QEMU](https://www.qemu.org/download/) and the [Azure CLI](/install-guides/azure-cli/) installed and authenticated.
+ - A [Microsoft Azure](https://azure.microsoft.com/) account with permission to create resources, including instances using Cobalt 100 processors
+ - A Linux machine with [QEMU](https://www.qemu.org/download/) and the [Azure CLI](/install-guides/azure-cli/) installed and authenticated
author: Jason Andrews
@@ -38,19 +37,19 @@ operatingsystems:
further_reading:
- resource:
- title: Azure Virtual Machines documentation
+ title: Virtual machines in Azure
link: https://learn.microsoft.com/en-us/azure/virtual-machines/
type: documentation
- resource:
- title: Azure Shared Image Gallery documentation
+ title: Store and share images in an Azure Compute Gallery
link: https://learn.microsoft.com/en-us/azure/virtual-machines/shared-image-galleries
type: documentation
- resource:
- title: QEMU User Documentation
+ title: QEMU Documentation
link: https://wiki.qemu.org/Documentation
type: documentation
- resource:
- title: Upload a VHD to Azure and create an image
+ title: Upload a VHD to Azure or copy a managed disk to another region - Azure CLI
link: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/upload-vhd
type: documentation
From da1529da59c442a54a769a9f3e276f1312fcf883 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 5 Aug 2025 11:24:34 +0000
Subject: [PATCH 47/55] Edited intro section for tone, structure, and SEO
alignment around Arm VM support.
---
.../azure-vm/background.md | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
index fa9b4854f7..7ddf7de034 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
@@ -6,24 +6,34 @@ weight: 2
layout: "learningpathall"
---
-## What is Azure Linux 3.0?
+## What is Azure Linux 3.0 and how can I use it?
-Azure Linux 3.0 is a Linux distribution developed and maintained by Microsoft, specifically designed for use on the Azure cloud platform. It is optimized for running cloud-native workloads, such as containers, microservices, and Kubernetes clusters, and emphasizes performance, security, and reliability. Azure Linux 3.0 provides native support for the Arm (AArch64) architecture, enabling efficient, scalable, and cost-effective deployments on Arm-based infrastructure within Azure.
+Azure Linux 3.0 is a Microsoft-developed Linux distribution designed specifically for cloud-native workloads on the Azure platform. It is optimized for running cloud-native workloads, such as containers, microservices, and Kubernetes clusters, and emphasizes performance, security, and reliability.
+
+Azure Linux 3.0 includes native support for the Arm (AArch64) architecture, enabling efficient, scalable, and cost-effective deployments on Arm-based Azure infrastructure.
+
+## Can I run Azure Linux 3.0 on Arm-based Azure virtual machines?
Currently, Azure Linux 3.0 is not available as a ready-made virtual machine image for Arm-based VMs in the Azure Marketplace. Only x86_64 images, published by Ntegral Inc., are offered. This means you cannot directly create an Azure Linux 3.0 VM for Arm from the Azure portal or CLI.
+## How can I create a custom Azure Linux image for Arm?
+
However, you can still run Azure Linux 3.0 on Arm-based Azure VMs by creating your own disk image. Using QEMU, an open-source machine emulator and virtualizer, you can build a custom Azure Linux 3.0 Arm image locally. After building the image, you can upload it to your Azure account as a managed disk or custom image. This process allows you to deploy and manage Azure Linux 3.0 VMs on Arm infrastructure, even before official images are available.
This Learning Path guides you through the steps to build an Azure Linux 3.0 disk image with QEMU, upload it to Azure, and prepare it for use in creating virtual machines.
Following this process, you'll be able to create and run Azure Linux 3.0 VMs on Arm-based Azure infrastructure.
+## What tools do I need to build an Azure Linux image locally?
+
To get started install the dependencies on your local Linux machine. The instructions work for both Arm or x86 running Ubuntu.
```bash
sudo apt update && sudo apt install qemu-system-arm qemu-system-aarch64 qemu-efi-aarch64 qemu-utils ovmf -y
```
+## What tools do I need to build an Azure Linux image locally?
+
You also need to install the Azure CLI. Refer to [How to install the Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). You can also use the [Azure CLI install guide](/install-guides/azure-cli/) for Arm Linux systems.
Make sure the CLI is working by running the version command and confirm the version is printed.
@@ -43,4 +53,6 @@ You should see an output similar to:
}
```
+## What’s the next step after setting up my environment?
+
Continue to learn how to prepare the Azure Linux disk image.
\ No newline at end of file
From 7fe069e1a59d8bc6c48c6a550a718d3f6d630aa4 Mon Sep 17 00:00:00 2001
From: Odin Shen Coder
Date: Tue, 5 Aug 2025 15:20:01 +0100
Subject: [PATCH 48/55] Update FVP link and contributors list.
---
assets/contributors.csv | 2 +-
.../refinfra-quick-start/test-with-fvp-3.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/assets/contributors.csv b/assets/contributors.csv
index 1317f49b31..ca9d8d4a27 100644
--- a/assets/contributors.csv
+++ b/assets/contributors.csv
@@ -92,6 +92,6 @@ Aude Vuilliomenet,Arm,,,,
Andrew Kilroy,Arm,,,,
Peter Harris,Arm,,,,
Chenying Kuo,Adlink,evshary,evshary,,
-William Liang,,wyliang,,,
+William Liang,,,wyliang,,
Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheedbrown/,,
Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
index d8467f5081..f077d98dac 100644
--- a/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
+++ b/content/learning-paths/servers-and-cloud-computing/refinfra-quick-start/test-with-fvp-3.md
@@ -22,7 +22,7 @@ wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastru
Unpack the tarball and run the install script:
```bash
-tar -xf FVP_RD_N2_11.24_12_Linux64.tgz
+tar -xf FVP_RD_N2_11.25_23_Linux64.tgz
./FVP_RD_N2.sh --i-agree-to-the-contained-eula --no-interactive
```
From decb0f25ec029f42b3e08e8c38ef6fd083a07f5b Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 5 Aug 2025 14:49:24 +0000
Subject: [PATCH 49/55] Cleaned up Learning Path introduction: fixed
formatting, clarified QEMU flow, and aligned with Learning Paths style.
---
.../azure-vm/background.md | 32 ++++++++++++-------
1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
index 7ddf7de034..1170c0f9d4 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
@@ -1,5 +1,5 @@
---
-title: "About Azure Linux"
+title: "Build and run Azure Linux 3.0 on Arm-based virtual machines"
weight: 2
@@ -8,41 +8,49 @@ layout: "learningpathall"
## What is Azure Linux 3.0 and how can I use it?
-Azure Linux 3.0 is a Microsoft-developed Linux distribution designed specifically for cloud-native workloads on the Azure platform. It is optimized for running cloud-native workloads, such as containers, microservices, and Kubernetes clusters, and emphasizes performance, security, and reliability.
+Azure Linux 3.0 is a Microsoft-developed Linux distribution designed for cloud-native workloads on the Azure platform. It is optimized for running containers, microservices, and Kubernetes clusters, with a focus on performance, security, and reliability.
Azure Linux 3.0 includes native support for the Arm (AArch64) architecture, enabling efficient, scalable, and cost-effective deployments on Arm-based Azure infrastructure.
## Can I run Azure Linux 3.0 on Arm-based Azure virtual machines?
-Currently, Azure Linux 3.0 is not available as a ready-made virtual machine image for Arm-based VMs in the Azure Marketplace. Only x86_64 images, published by Ntegral Inc., are offered. This means you cannot directly create an Azure Linux 3.0 VM for Arm from the Azure portal or CLI.
+Currently, Azure Linux 3.0 isn't available as a ready-made virtual machine image for Arm-based VMs in the Azure Marketplace. Only x86_64 images, published by Ntegral Inc., are available. This means you can't directly create an Azure Linux 3.0 VM for Arm from the Azure portal or CLI.
## How can I create a custom Azure Linux image for Arm?
-However, you can still run Azure Linux 3.0 on Arm-based Azure VMs by creating your own disk image. Using QEMU, an open-source machine emulator and virtualizer, you can build a custom Azure Linux 3.0 Arm image locally. After building the image, you can upload it to your Azure account as a managed disk or custom image. This process allows you to deploy and manage Azure Linux 3.0 VMs on Arm infrastructure, even before official images are available.
+You can still run Azure Linux 3.0 on Arm-based Azure VMs by creating your own disk image. Using [QEMU](https://www.qemu.org/), an open-source machine emulator and virtualizer, you can build a custom Azure Linux 3.0 Arm image locally. After building the image, upload it to your Azure account as a managed disk or custom image. This process allows you to deploy and manage Azure Linux 3.0 VMs on Arm infrastructure, even before official images are available.
-This Learning Path guides you through the steps to build an Azure Linux 3.0 disk image with QEMU, upload it to Azure, and prepare it for use in creating virtual machines.
+This Learning Path guides you through the steps to:
-Following this process, you'll be able to create and run Azure Linux 3.0 VMs on Arm-based Azure infrastructure.
+- Build an Azure Linux 3.0 disk image with QEMU
+- Upload the image to Azure
+- Create a virtual machine from the custom image
+
+By the end of this process, you'll be able to run Azure Linux 3.0 VMs on Arm-based Azure infrastructure.
## What tools do I need to build an Azure Linux image locally?
-To get started install the dependencies on your local Linux machine. The instructions work for both Arm or x86 running Ubuntu.
+To get started, install the dependencies on your local Linux machine. The instructions work for both Arm and x86 machines running Ubuntu.
+
+Install QEMU and related tools:
```bash
sudo apt update && sudo apt install qemu-system-arm qemu-system-aarch64 qemu-efi-aarch64 qemu-utils ovmf -y
```
-## What tools do I need to build an Azure Linux image locally?
+You'll also need the Azure CLI. To install it, follow the [Azure CLI install guide](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).
+
+If you're using an Arm-based system, you can also see the [Azure CLI install guide](/install-guides/azure-cli/) for Arm Linux systems.
-You also need to install the Azure CLI. Refer to [How to install the Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). You can also use the [Azure CLI install guide](/install-guides/azure-cli/) for Arm Linux systems.
+## How do I verify the Azure CLI installation?
-Make sure the CLI is working by running the version command and confirm the version is printed.
+After installing the CLI, verify it's working by running the following command:
```bash
az version
```
-You should see an output similar to:
+You should see an output similar to the following:
```output
{
@@ -55,4 +63,4 @@ You should see an output similar to:
## What’s the next step after setting up my environment?
-Continue to learn how to prepare the Azure Linux disk image.
\ No newline at end of file
+Next, you'll learn how to build the Azure Linux 3.0 disk image using QEMU.
\ No newline at end of file
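A side note on the `az version` check in the patch above: the command emits JSON, so you can pull out just the core CLI version with a small filter. The version number in the sample below is illustrative only, not a required minimum; on a machine with the CLI installed, pipe real `az version` output through the same `sed` expression.

```shell
# Extract the "azure-cli" field from `az version`-style JSON output.
# The here-string is sample data standing in for the real command.
sample='{
  "azure-cli": "2.61.0",
  "azure-cli-core": "2.61.0",
  "azure-cli-telemetry": "1.1.0",
  "extensions": {}
}'

# Match only the exact `"azure-cli": "..."` key, not -core or -telemetry.
VERSION=$(printf '%s\n' "$sample" | sed -n 's/.*"azure-cli": "\([^"]*\)".*/\1/p')
echo "azure-cli version: $VERSION"
# → azure-cli version: 2.61.0
```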
From 08b01aaa6daf213973eb9ab6c435ec8d99690056 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 5 Aug 2025 15:31:16 +0000
Subject: [PATCH 50/55] Updates
---
.../azure-vm/azure-vm.md | 34 +++++++++----------
1 file changed, 16 insertions(+), 18 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
index 0159ccae7d..a448e47b2a 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
@@ -6,29 +6,30 @@ weight: 3
layout: learningpathall
---
-You can view the Azure Linux 3.0 project on [GitHub](https://github.com/microsoft/azurelinux). There are links to the ISO downloads in the project README.
+You can view the Azure Linux 3.0 project on [GitHub](https://github.com/microsoft/azurelinux). There project README includes links to ISO downloads.
-Using QEMU, you can create a raw disk image and boot a virtual machine with the ISO to install the OS on the disk.
-
-Once the installation is complete, you can convert the raw disk to a fixed-size VHD, upload it to Azure Blob Storage, and then use the Azure CLI to create a custom Arm image.
+Using [QEMU](https://www.qemu.org/), you can create a raw disk image, boot a virtual machine with the ISO, and install the operating system. After installation is complete, you'll convert the image to a fixed-size VHD, upload it to Azure Blob Storage, and use the Azure CLI to create a custom Arm image.
## Download and create a virtual disk file
-Use `wget` to download the Azure Linux ISO image file.
+Use `wget` to download the Azure Linux ISO image file:
```bash
wget https://aka.ms/azurelinux-3.0-aarch64.iso
```
-Use `qemu-img` to create a 32 GB empty raw disk image to install the OS.
-
-You can increase the disk size by modifying the value passed to `qemu-img`.
+Create a 32 GB empty raw disk image to install the OS:
```bash
qemu-img create -f raw azurelinux-arm64.raw 34359738368
```
-## Boot and install the OS
+{{% notice Note %}}
+You can change the disk size by adjusting the value passed to `qemu-img`.
+{{% /notice %}}
+
+
+## Boot the VM and install Azure Linux
Use QEMU to boot the operating system in an emulated Arm VM.
@@ -46,14 +47,11 @@ qemu-system-aarch64 \
-device virtio-net-device,netdev=net0
```
-Navigate through the installer by entering the hostname, username, and password for the custom image.
-You should use the username of `azureuser` if you want match the instructions on the following pages.
-
-Be patient, it takes some time to complete the full installation.
+Navigate through the installer by entering the hostname, username, and password for the custom image. Use `azureuser` as the username to match the configuration used in later steps.
-At the end of installation you are prompted for confirmation to reboot the system.
+{{% notice Note %}}The installation process takes several minutes.{{% /notice %}}
-Once the newly installed OS boots successfully, install the Azure Linux Agent for VM provisioning, and power off the VM.
+At the end of installation, confirm the reboot prompt. After rebooting into the newly-installed OS, install and enable the Azure Linux Agent:
```bash
sudo dnf install WALinuxAgent -y
@@ -62,7 +60,7 @@ sudo systemctl start waagent
sudo poweroff
```
-Be patient, it takes some time to install the packages and power off.
+{{% notice Note %}} It can take a few minutes to install the agent and power off the VM.{{% /notice %}}
## Convert the raw disk to VHD Format
@@ -73,7 +71,7 @@ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc azurelinux-arm64.ra
```
{{% notice Note %}}
-VHD files have 512 bytes of footer attached at the end. The `force_size` flag ensures that the exact virtual size specified is used for the final VHD file. Without this, QEMU may round the size or adjust for footer overhead (especially when converting from raw to VHD). The `force_size` flag forces the final image to match the original size. This flag makes the final VHD size a whole number in MB or GB, which is required for Azure.
+VHD files include a 512-byte footer at the end. The `force_size` flag ensures the final image size exactly matches the requested virtual size. Without this, QEMU may round the size or adjust for footer overhead (especially when converting from raw to VHD). The `force_size` flag forces the final image to match the original size. This is required for Azure compatibility, as it avoids rounding errors and ensures the VHD ends at a whole MB or GB boundary.
{{% /notice %}}
-Next, you can save the image in your Azure account.
+Next, you'll upload the image to your Azure account.
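On the `force_size` notice in the patch above: Azure expects the VHD's virtual size to be a whole number of MiB, so a quick sanity check before converting is the raw image's byte size modulo 1 MiB. This sketch uses a sparse placeholder file at the tutorial's 32 GB size; point `IMG` at your real `azurelinux-arm64.raw` and drop the `truncate` line. (`stat -c %s` is GNU coreutils; macOS uses `stat -f %z`.)

```shell
set -eu

IMG="demo-arm64.raw"
# Sparse placeholder matching the tutorial's qemu-img size (32 GiB).
truncate -s 34359738368 "$IMG"

SIZE=$(stat -c %s "$IMG")
MIB=$((1024 * 1024))

if [ $((SIZE % MIB)) -eq 0 ]; then
  echo "OK: ${SIZE} bytes = $((SIZE / MIB)) MiB, aligned for Azure"
else
  echo "resize needed: ${SIZE} bytes is not MiB-aligned"
fi
# → OK: 34359738368 bytes = 32768 MiB, aligned for Azure
```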
From 78f96f2205a8feac66e35a8752ea2bf0466f27cd Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 5 Aug 2025 15:46:14 +0000
Subject: [PATCH 51/55] Updates
---
.../servers-and-cloud-computing/azure-vm/azure-vm.md | 2 +-
.../servers-and-cloud-computing/azure-vm/save-image.md | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
index a448e47b2a..24da6ccc36 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
@@ -6,7 +6,7 @@ weight: 3
layout: learningpathall
---
-You can view the Azure Linux 3.0 project on [GitHub](https://github.com/microsoft/azurelinux). There project README includes links to ISO downloads.
+You can view the Azure Linux 3.0 project on [GitHub](https://github.com/microsoft/azurelinux). The project README includes links to ISO downloads.
Using [QEMU](https://www.qemu.org/), you can create a raw disk image, boot a virtual machine with the ISO, and install the operating system. After installation is complete, you'll convert the image to a fixed-size VHD, upload it to Azure Blob Storage, and use the Azure CLI to create a custom Arm image.
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
index ab66336077..32b00772f4 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
@@ -10,7 +10,7 @@ You can now use the Azure CLI to create a disk image in Azure and copy the local
## Prepare Azure resources for the image
-Before uploading the VHD file to Azure storage, set the environment variables for the Azure CLI.
+Before uploading the VHD file to Azure storage, set the environment variables for the Azure CLI:
```bash
RESOURCE_GROUP="MyCustomARM64Group"
@@ -37,7 +37,7 @@ VM_SIZE="Standard_D4ps_v6"
You can modify the environment variables such as RESOURCE_GROUP, VM_NAME, and LOCATION based on your naming preferences, region, and resource requirements.
{{% /notice %}}
-Make sure to login to Azure using the CLI.
+Logi n to Azure using the CLI:
```bash
az login
From 5364f1bdfde5980892b80de8d2980033933423cd Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Wed, 6 Aug 2025 13:54:44 +0000
Subject: [PATCH 52/55] Tightened language, removed trailing whitespace.
---
.../azure-vm/_index.md | 2 +-
.../azure-vm/azure-vm.md | 38 +++---
.../azure-vm/background.md | 20 ++-
.../azure-vm/save-image.md | 125 ++++++++++--------
.../azure-vm/start-vm.md | 32 ++---
5 files changed, 118 insertions(+), 99 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
index 58aa33065f..3ee259c09d 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/_index.md
@@ -1,7 +1,7 @@
---
title: Create an Azure Linux 3.0 virtual machine with Cobalt 100 processors
-minutes_to_complete: 120
+minutes_to_complete: 120
who_is_this_for: This is an advanced topic for developers who want to run Azure Linux 3.0 on Arm-based Cobalt 100 processors in a custom virtual machine.
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
index 24da6ccc36..c9745ebf32 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
@@ -25,7 +25,7 @@ qemu-img create -f raw azurelinux-arm64.raw 34359738368
```
{{% notice Note %}}
-You can change the disk size by adjusting the value passed to `qemu-img`.
+You can change the disk size by adjusting the value passed to `qemu-img`. Ensure it meets the minimum disk size requirements for Azure (typically at least 30 GB).
{{% /notice %}}
@@ -34,44 +34,44 @@ You can change the disk size by adjusting the value passed to `qemu-img`.
Use QEMU to boot the operating system in an emulated Arm VM.
```bash
-qemu-system-aarch64 \
- -machine virt \
- -cpu cortex-a72 \
- -m 4096 \
- -nographic \
- -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
- -drive if=none,file=azurelinux-arm64.raw,format=raw,id=hd0 \
- -device virtio-blk-device,drive=hd0 \
- -cdrom azurelinux-3.0-aarch64.iso \
- -netdev user,id=net0 \
+qemu-system-aarch64 \
+ -machine virt \
+ -cpu cortex-a72 \
+ -m 4096 \
+ -nographic \
+ -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
+ -drive if=none,file=azurelinux-arm64.raw,format=raw,id=hd0 \
+ -device virtio-blk-device,drive=hd0 \
+ -cdrom azurelinux-3.0-aarch64.iso \
+ -netdev user,id=net0 \
-device virtio-net-device,netdev=net0
```
-Navigate through the installer by entering the hostname, username, and password for the custom image. Use `azureuser` as the username to match the configuration used in later steps.
+Follow the installer prompts to enter the hostname, username, and password. Use `azureuser` as the username to ensure compatibility with later steps.
{{% notice Note %}}The installation process takes several minutes.{{% /notice %}}
At the end of installation, confirm the reboot prompt. After rebooting into the newly-installed OS, install and enable the Azure Linux Agent:
```bash
-sudo dnf install WALinuxAgent -y
-sudo systemctl enable waagent
-sudo systemctl start waagent
+sudo dnf install WALinuxAgent -y
+sudo systemctl enable waagent
+sudo systemctl start waagent
sudo poweroff
```
{{% notice Note %}} It can take a few minutes to install the agent and power off the VM.{{% /notice %}}
-## Convert the raw disk to VHD Format
+## Convert the raw disk to VHD format
-Now that the raw disk image is ready to be used, convert the image to fixed-size VHD, making it compatible with Azure.
+Now that the raw disk image is ready for you to use, convert it to a fixed-size VHD, which makes it compatible with Azure.
```bash
qemu-img convert -f raw -o subformat=fixed,force_size -O vpc azurelinux-arm64.raw azurelinux-arm64.vhd
```
{{% notice Note %}}
-VHD files include a 512-byte footer at the end. The `force_size` flag ensures the final image size exactly matches the requested virtual size. Without this, QEMU may round the size or adjust for footer overhead (especially when converting from raw to VHD). The `force_size` flag forces the final image to match the original size. This is required for Azure compatibility, as it avoids rounding errors and ensures the VHD ends at a whole MB or GB boundary.
+VHD files include a 512-byte footer at the end. The `force_size` flag ensures the final image size matches the requested virtual size; without it, QEMU might round the size or adjust for footer overhead (especially when converting from raw to VHD). This is required for Azure compatibility, as it avoids rounding errors and ensures the VHD ends at a whole MB or GB boundary.
{{% /notice %}}
-Next, you'll upload the image to your Azure account.
+In the next step, you'll upload the VHD image to Azure and register it as a custom image for use with Arm-based virtual machines.
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
index 1170c0f9d4..65b0b4c00e 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/background.md
@@ -1,5 +1,5 @@
---
-title: "Build and run Azure Linux 3.0 on Arm-based virtual machines"
+title: "Build and run Azure Linux 3.0 on an Arm-based Azure virtual machine"
weight: 2
@@ -10,15 +10,15 @@ layout: "learningpathall"
Azure Linux 3.0 is a Microsoft-developed Linux distribution designed for cloud-native workloads on the Azure platform. It is optimized for running containers, microservices, and Kubernetes clusters, with a focus on performance, security, and reliability.
-Azure Linux 3.0 includes native support for the Arm (AArch64) architecture, enabling efficient, scalable, and cost-effective deployments on Arm-based Azure infrastructure.
+Azure Linux 3.0 includes native support for the Arm architecture (AArch64), enabling efficient, scalable, and cost-effective deployments on Arm-based Azure infrastructure.
## Can I run Azure Linux 3.0 on Arm-based Azure virtual machines?
-Currently, Azure Linux 3.0 isn't available as a ready-made virtual machine image for Arm-based VMs in the Azure Marketplace. Only x86_64 images, published by Ntegral Inc., are available. This means you can't directly create an Azure Linux 3.0 VM for Arm from the Azure portal or CLI.
+At the time of writing, Azure Linux 3.0 isn't available as a prebuilt virtual machine image for Arm-based VMs in the Azure Marketplace. Only x86_64 images (published by Ntegral Inc.) are available. This means you can't directly create an Azure Linux 3.0 VM for Arm from the Azure portal or CLI.
-## How can I create a custom Azure Linux image for Arm?
+## How can I create and use a custom Azure Linux image for Arm?
-You can still run Azure Linux 3.0 on Arm-based Azure VMs by creating your own disk image. Using [QEMU](https://www.qemu.org/), an open-source machine emulator and virtualizer, you can build a custom Azure Linux 3.0 Arm image locally. After building the image, upload it to your Azure account as a managed disk or custom image. This process allows you to deploy and manage Azure Linux 3.0 VMs on Arm infrastructure, even before official images are available.
+To run Azure Linux 3.0 on an Arm-based VM, you'll need to build a custom image manually. Using [QEMU](https://www.qemu.org/), an open-source machine emulator and virtualizer, you can build the image locally. After the build completes, upload the resulting image to your Azure account as either a managed disk or a custom image resource. This process lets you deploy and manage Azure Linux 3.0 VMs on Arm-based Azure infrastructure, even before official images are published in the Marketplace. This gives you full control over image configuration and early access to Arm-native workloads.
This Learning Path guides you through the steps to:
@@ -28,9 +28,9 @@ This Learning Path guides you through the steps to:
By the end of this process, you'll be able to run Azure Linux 3.0 VMs on Arm-based Azure infrastructure.
-## What tools do I need to build an Azure Linux image locally?
+## What tools do I need to build the Azure Linux image locally?
-To get started, install the dependencies on your local Linux machine. The instructions work for both Arm and x86 machines running Ubuntu.
+You can build the image on either an Arm or x86 Ubuntu system. First, install the required tools:
Install QEMU and related tools:
@@ -40,7 +40,7 @@ sudo apt update && sudo apt install qemu-system-arm qemu-system-aarch64 qemu-efi
You'll also need the Azure CLI. To install it, follow the [Azure CLI install guide](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).
-If you're using an Arm-based system, you can also see the [Azure CLI install guide](/install-guides/azure-cli/) for Arm Linux systems.
+If you're using an Arm Linux machine, see the [Azure CLI install guide](/install-guides/azure-cli/).
## How do I verify the Azure CLI installation?
@@ -61,6 +61,4 @@ You should see an output similar to the following:
}
```
-## What’s the next step after setting up my environment?
-
-Next, you'll learn how to build the Azure Linux 3.0 disk image using QEMU.
\ No newline at end of file
+In the next section, you'll learn how to build the Azure Linux 3.0 disk image using QEMU.
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
index 32b00772f4..71a3e34010 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
@@ -6,94 +6,109 @@ weight: 4
layout: learningpathall
---
-You can now use the Azure CLI to create a disk image in Azure and copy the local image to Azure.
+## Section overview
-## Prepare Azure resources for the image
+You're now ready to use the Azure CLI to create and upload a custom disk image to Azure. In this section, you'll configure environment variables, provision the necessary Azure resources, and upload a `.vhd` file. Then, you'll use the Shared Image Gallery to register the image for use with custom virtual machines.
-Before uploading the VHD file to Azure storage, set the environment variables for the Azure CLI:
+## How do I set up environment variables for the Azure CLI?
+
+Before uploading your VHD file, set the environment variables for the Azure CLI:
```bash
-RESOURCE_GROUP="MyCustomARM64Group"
-LOCATION="centralindia"
-STORAGE_ACCOUNT="mycustomarm64storage"
-CONTAINER_NAME="mycustomarm64container"
-VHD_NAME="azurelinux-arm64.vhd"
-GALLERY_NAME="MyCustomARM64Gallery"
-IMAGE_DEF_NAME="MyAzureLinuxARM64Def"
-IMAGE_VERSION="1.0.0"
-PUBLISHER="custom"
-OFFER="custom-offer"
-SKU="custom-sku"
-OS_TYPE="Linux"
-ARCHITECTURE="Arm64"
-HYPERV_GEN="V2"
-STORAGE_ACCOUNT_TYPE="Standard_LRS"
-VM_NAME="MyAzureLinuxARMVM"
-ADMIN_USER="azureuser"
+RESOURCE_GROUP="MyCustomARM64Group"
+LOCATION="centralindia"
+STORAGE_ACCOUNT="mycustomarm64storage"
+CONTAINER_NAME="mycustomarm64container"
+VHD_NAME="azurelinux-arm64.vhd"
+GALLERY_NAME="MyCustomARM64Gallery"
+IMAGE_DEF_NAME="MyAzureLinuxARM64Def"
+IMAGE_VERSION="1.0.0"
+PUBLISHER="custom"
+OFFER="custom-offer"
+SKU="custom-sku"
+OS_TYPE="Linux"
+ARCHITECTURE="Arm64"
+HYPERV_GEN="V2"
+STORAGE_ACCOUNT_TYPE="Standard_LRS"
+VM_NAME="MyAzureLinuxARMVM"
+ADMIN_USER="azureuser"
VM_SIZE="Standard_D4ps_v6"
```
{{% notice Note %}}
-You can modify the environment variables such as RESOURCE_GROUP, VM_NAME, and LOCATION based on your naming preferences, region, and resource requirements.
+Modify the environment variables such as RESOURCE_GROUP, VM_NAME, and LOCATION to suit your naming preferences, region, and resource requirements.
{{% /notice %}}
-Logi n to Azure using the CLI:
+## How do I log in and create Azure resources?
+
+First, log in to Azure using the CLI:
```bash
az login
```
-If a link is printed, open it in a browser and enter the provided code to authenticate.
+If prompted, open the browser link and enter the verification code to authenticate.
-Create a new resource group. If you are using an existing resource group for the RESOURCE_GROUP environment variable you can skip this step.
+Then, create a new resource group. If you are using an existing resource group for the RESOURCE_GROUP environment variable, you can skip this step:
```bash
az group create --name "$RESOURCE_GROUP" --location "$LOCATION"
```
-Create Azure blob storage.
+Create a new storage account to store your image:
```bash
-az storage account create \
- --name "$STORAGE_ACCOUNT" \
- --resource-group "$RESOURCE_GROUP" \
- --location "$LOCATION" \
- --sku Standard_LRS \
+az storage account create \
+ --name "$STORAGE_ACCOUNT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --location "$LOCATION" \
+ --sku Standard_LRS \
--kind StorageV2
```
-Create a blob container in the blob storage account.
+Next, create a blob container in the storage account:
```bash
-az storage container create \
- --name "$CONTAINER_NAME" \
+az storage container create \
+ --name "$CONTAINER_NAME" \
--account-name "$STORAGE_ACCOUNT"
```
-## Upload and save the image in Azure
+## How do I upload a VHD image to Azure Blob Storage?
+
+First, retrieve the storage account key:
+
+```bash
+STORAGE_KEY=$(az storage account keys list \
+ --resource-group "$RESOURCE_GROUP" \
+ --account-name "$STORAGE_ACCOUNT" \
+ --query '[0].value' --output tsv)
+```
-Upload the VHD file to Azure.
+Then upload your VHD file to Azure Blob Storage:
```bash
-az storage blob upload \
- --account-name "$STORAGE_ACCOUNT" \
- --container-name "$CONTAINER_NAME" \
- --name "$VHD_NAME" \
+az storage blob upload \
+ --account-name "$STORAGE_ACCOUNT" \
+ --container-name "$CONTAINER_NAME" \
+ --name "$VHD_NAME" \
--file ./azurelinux-arm64.vhd
```
-You can now use the Azure console to see the image in your Azure account.
+You can now use the Azure console to view the image in your Azure account.
-Next, create a custom VM image from this VHD, using Azure Shared Image Gallery (SIG).
+## How do I register a custom image in the Azure Shared Image Gallery?
+
+Create a custom VM image from the VHD, using the Azure Shared Image Gallery (SIG):
```bash
-az sig create \
- --resource-group "$RESOURCE_GROUP" \
- --gallery-name "$GALLERY_NAME" \
+az sig create \
+ --resource-group "$RESOURCE_GROUP" \
+ --gallery-name "$GALLERY_NAME" \
--location "$LOCATION"
```
-Create the image definition.
+Create the image definition:
```bash
az sig image-definition create \
@@ -108,7 +123,7 @@ az sig image-definition create \
--hyper-v-generation "$HYPERV_GEN"
```
-Create the image version to register the VHD as a version of the custom image.
+Create the image version from the uploaded VHD:
```bash
az sig image-version create \
@@ -119,18 +134,22 @@ az sig image-version create \
--location "$LOCATION" \
--os-vhd-uri "https://${STORAGE_ACCOUNT}.blob.core.windows.net/${CONTAINER_NAME}/${VHD_NAME}" \
--os-vhd-storage-account "$STORAGE_ACCOUNT" \
- --storage-account-type "$STORAGE_ACCOUNT_TYPE"
+ --storage-account-type "$STORAGE_ACCOUNT_TYPE"
```
-Once the image has been versioned, you can retrieve the unique image ID for use in VM creation.
+## How do I retrieve the image ID for VM creation?
+
+Once the image has been versioned, you can retrieve the unique image ID for use in VM creation:
```bash
-IMAGE_ID=$(az sig image-version show \
- --resource-group "$RESOURCE_GROUP" \
- --gallery-name "$GALLERY_NAME" \
- --gallery-image-definition "$IMAGE_DEF_NAME" \
+IMAGE_ID=$(az sig image-version show \
+ --resource-group "$RESOURCE_GROUP" \
+ --gallery-name "$GALLERY_NAME" \
+ --gallery-image-definition "$IMAGE_DEF_NAME" \
--gallery-image-version "$IMAGE_VERSION" \
--query "id" -o tsv)
```
-Next, you can create a virtual machine with the new image using the image ID.
\ No newline at end of file
+You'll use this ID to deploy a new virtual machine based on your custom image.
+
+You've successfully uploaded and registered a custom Arm64 VM image in Azure. In the next section, you'll learn how to create a virtual machine using this image.
\ No newline at end of file
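One detail worth double-checking in the `--os-vhd-uri` argument above is that the URI assembled from the environment variables names exactly the blob you uploaded. A minimal sketch using the same example values as this section; the name checks are local sanity checks, not Azure API calls:

```shell
set -eu

STORAGE_ACCOUNT="mycustomarm64storage"
CONTAINER_NAME="mycustomarm64container"
VHD_NAME="azurelinux-arm64.vhd"

VHD_URI="https://${STORAGE_ACCOUNT}.blob.core.windows.net/${CONTAINER_NAME}/${VHD_NAME}"

# Storage account names are 3-24 lowercase letters and digits, and the
# blob should keep its .vhd extension for image registration.
case "$STORAGE_ACCOUNT" in (*[!a-z0-9]*) echo "bad account name" >&2; exit 1;; esac
[ "${#STORAGE_ACCOUNT}" -ge 3 ] && [ "${#STORAGE_ACCOUNT}" -le 24 ]
[ "${VHD_NAME%.vhd}" != "$VHD_NAME" ]

echo "$VHD_URI"
# → https://mycustomarm64storage.blob.core.windows.net/mycustomarm64container/azurelinux-arm64.vhd
```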
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
index c8592c1f96..930dda7d01 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
@@ -6,29 +6,31 @@ weight: 5
layout: learningpathall
---
-## Create a virtual machine using the new image
+Now that your image is registered, you can launch a new VM using the Azure CLI and the custom image ID. This example creates a Linux VM on Cobalt 100 Arm-based processors using the custom image you created earlier.
-You can now use the newly created Azure Linux image to create a virtual machine in Azure with Cobalt 100 processors. Confirm the VM is created by looking in your Azure account in the “Virtual Machines” section.
+## How do I create a virtual machine in Azure using a custom image?
+
+Use the following command to create a virtual machine using your custom image:
```bash
-az vm create \
- --resource-group "$RESOURCE_GROUP" \
- --name "$VM_NAME" \
- --image "$IMAGE_ID" \
- --size "$VM_SIZE" \
- --admin-username "$ADMIN_USER" \
- --generate-ssh-keys \
+az vm create \
+ --resource-group "$RESOURCE_GROUP" \
+ --name "$VM_NAME" \
+ --image "$IMAGE_ID" \
+ --size "$VM_SIZE" \
+ --admin-username "$ADMIN_USER" \
+ --generate-ssh-keys \
--public-ip-sku Standard
```
After the VM is successfully created, retrieve the public IP address.
```bash
-az vm show \
- --resource-group "$RESOURCE_GROUP" \
- --name "$VM_NAME" \
- --show-details \
- --query "publicIps" \
+az vm show \
+ --resource-group "$RESOURCE_GROUP" \
+ --name "$VM_NAME" \
+ --show-details \
+ --query "publicIps" \
-o tsv
```
@@ -38,7 +38,7 @@ Use the public IP address to SSH to the VM. Replace `<public-ip>` with the public IP address of your VM.
ssh azureuser@<public-ip>
```
-After you login, print the machine information.
+After connecting, print the machine information:
```bash
uname -a
From 1d615a77956a2cbd4b8fa487c77f6a5113e43ca0 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Wed, 6 Aug 2025 14:08:17 +0000
Subject: [PATCH 53/55] Added question-framing to section headers for SEO.
---
.../servers-and-cloud-computing/azure-vm/azure-vm.md | 8 +++++---
.../servers-and-cloud-computing/azure-vm/save-image.md | 2 +-
.../servers-and-cloud-computing/azure-vm/start-vm.md | 2 ++
3 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
index c9745ebf32..58b6f28d74 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/azure-vm.md
@@ -6,11 +6,13 @@ weight: 3
layout: learningpathall
---
+## How do I create an Azure Linux image for Arm?
+
You can view the Azure Linux 3.0 project on [GitHub](https://github.com/microsoft/azurelinux). The project README includes links to ISO downloads.
Using [QEMU](https://www.qemu.org/), you can create a raw disk image, boot a virtual machine with the ISO, and install the operating system. After installation is complete, you'll convert the image to a fixed-size VHD, upload it to Azure Blob Storage, and use the Azure CLI to create a custom Arm image.
-## Download and create a virtual disk file
+## How do I download the Azure Linux ISO and create a raw disk image?
Use `wget` to download the Azure Linux ISO image file:
@@ -29,7 +31,7 @@ You can change the disk size by adjusting the value passed to `qemu-img`. Ensure
{{% /notice %}}
-## Boot the VM and install Azure Linux
+## How do I install Azure Linux on a raw disk image using QEMU?
Use QEMU to boot the operating system in an emulated Arm VM.
@@ -62,7 +64,7 @@ sudo poweroff
{{% notice Note %}} It can take a few minutes to install the agent and power off the VM.{{% /notice %}}
-## Convert the raw disk to VHD format
+## How do I convert a raw disk image to a fixed-size VHD for Azure?
Now that the raw disk image is ready for you to use, convert it to a fixed-size VHD, which makes it compatible with Azure.
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
index 71a3e34010..8bae3a4507 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/save-image.md
@@ -6,7 +6,7 @@ weight: 4
layout: learningpathall
---
-## Section overview
+## How do I upload and register a VHD image in Azure?
You're now ready to use the Azure CLI to create and upload a custom disk image to Azure. In this section, you'll configure environment variables, provision the necessary Azure resources, and upload a `.vhd` file. Then, you'll use the Shared Image Gallery to register the image for use with custom virtual machines.
diff --git a/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md b/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
index 930dda7d01..67d19f2655 100644
--- a/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
+++ b/content/learning-paths/servers-and-cloud-computing/azure-vm/start-vm.md
@@ -6,6 +6,8 @@ weight: 5
layout: learningpathall
---
+## How do I launch a virtual machine using my custom Azure image?
+
Now that your image is registered, you can launch a new VM using the Azure CLI and the custom image ID. This example creates a Linux VM on Cobalt 100 Arm-based processors using the custom image you created earlier.
## How do I create a virtual machine in Azure using a custom image?
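For context on the `--image "$IMAGE_ID"` argument used in this page's `az vm create` call: the ID returned by `az sig image-version show` is a full Azure resource path. The sketch below assembles the same shape from this Learning Path's example variable values, with a placeholder subscription ID, so you can recognize a well-formed value before passing it on:

```shell
set -eu

# Placeholder subscription ID; your real one comes from `az account show`.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="MyCustomARM64Group"
GALLERY_NAME="MyCustomARM64Gallery"
IMAGE_DEF_NAME="MyAzureLinuxARM64Def"
IMAGE_VERSION="1.0.0"

# Shape of a Shared Image Gallery image-version resource ID.
IMAGE_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/${IMAGE_DEF_NAME}/versions/${IMAGE_VERSION}"

echo "$IMAGE_ID"
```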
From cd875085b63c92fe53d78e83859a08068a9188a4 Mon Sep 17 00:00:00 2001
From: Jason Andrews
Date: Thu, 7 Aug 2025 15:44:29 +0100
Subject: [PATCH 54/55] spelling and link fixes
---
.wordlist.txt | 5 ++++-
.../learning-paths/embedded-and-microcontrollers/_index.md | 2 +-
.../visualizing-ethos-u-performance/2-overview.md | 2 +-
.../visualizing-ethos-u-performance/_index.md | 2 +-
content/learning-paths/servers-and-cloud-computing/_index.md | 2 +-
.../distributed-inference-with-llama-cpp/how-to-1.md | 2 +-
6 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/.wordlist.txt b/.wordlist.txt
index 5afe320b18..a8093169d0 100644
--- a/.wordlist.txt
+++ b/.wordlist.txt
@@ -4559,7 +4559,7 @@ qdisc
ras
rcu
regmap
-rgerganov’s
+rgerganov's
rotocol
rpcgss
rpmh
@@ -4588,3 +4588,6 @@ vmscan
workqueue
xdp
xhci
+JFR
+conv
+servlet
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/_index.md b/content/learning-paths/embedded-and-microcontrollers/_index.md
index 8ee2672ec5..dc4f325370 100644
--- a/content/learning-paths/embedded-and-microcontrollers/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/_index.md
@@ -49,7 +49,7 @@ tools_software_languages_filter:
- Coding: 26
- Containerd: 1
- DetectNet: 1
-- Docker: 9
+- Docker: 10
- DSTREAM: 2
- Edge AI: 1
- Edge Impulse: 1
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
index 23997c19c6..b087e70934 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md
@@ -28,7 +28,7 @@ TinyML is machine learning optimized to run on low-power, resource-constrained d
This Learning Path focuses on using TinyML models with virtualized Arm hardware to simulate real-world AI workloads on microcontrollers and NPUs.
-If you're looking to build and train your own TinyML models, follow the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
+If you're looking to build and train your own TinyML models, follow the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
## What is ExecuTorch?
diff --git a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
index c8bc257324..0127cde363 100644
--- a/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
+++ b/content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md
@@ -32,7 +32,7 @@ operatingsystems:
tools_software_languages:
- Arm Virtual Hardware
- - Fixed Virtual Platform (FVP)
+ - Fixed Virtual Platform
- Python
- PyTorch
- ExecuTorch
diff --git a/content/learning-paths/servers-and-cloud-computing/_index.md b/content/learning-paths/servers-and-cloud-computing/_index.md
index 878d7bd782..792fa14883 100644
--- a/content/learning-paths/servers-and-cloud-computing/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/_index.md
@@ -47,7 +47,7 @@ tools_software_languages_filter:
- ASP.NET Core: 2
- Assembly: 4
- assembly: 1
-- Async-profiler: 1
+- async-profiler: 1
- AWS: 1
- AWS CDK: 2
- AWS CodeBuild: 1
diff --git a/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md b/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md
index 6838a42e06..51791d684e 100644
--- a/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md
+++ b/content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md
@@ -10,7 +10,7 @@ layout: learningpathall
The instructions in this Learning Path are for any Arm server running Ubuntu 24.04.2 LTS. You will need at least three Arm server instances, each with at least 64 cores and 128GB of RAM, to run this example. The instructions have been tested on an AWS Graviton4 c8g.16xlarge instance.
## Overview
-llama.cpp is a C++ library that enables efficient inference of LLaMA and similar large language models on CPUs, optimized for local and embedded environments. Just over a year ago from its publication date, rgerganov’s RPC code was merged into llama.cpp, enabling distributed inference of large LLMs across multiple CPU-based machines—even when the models don’t fit into the memory of a single machine. In this learning path, we’ll explore how to run a 405B parameter model on Arm-based CPUs.
+llama.cpp is a C++ library that enables efficient inference of LLaMA and similar large language models on CPUs, optimized for local and embedded environments. Just over a year after llama.cpp was published, rgerganov's RPC code was merged into it, enabling distributed inference of large LLMs across multiple CPU-based machines—even when the models don't fit into the memory of a single machine. In this Learning Path, you'll explore how to run a 405B parameter model on Arm-based CPUs.
For the purposes of this demonstration, the following experimental setup will be used:
- Total number of instances: 3
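A minimal sketch of the RPC setup looks like the following. The binary paths, model file, ports, and the private IP addresses `10.0.0.2`/`10.0.0.3` are assumptions for illustration; the general shape—an `rpc-server` on each worker and a `--rpc` list on the main instance—matches llama.cpp's RPC backend.

```shell
# On each of the two worker instances, start an RPC server
# listening on a port reachable from the main instance:
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the main instance, run inference and offload work to the
# workers via the comma-separated --rpc list:
./build/bin/llama-cli -m model.gguf -p "Hello" \
  --rpc 10.0.0.2:50052,10.0.0.3:50052
```

Binding `rpc-server` to `0.0.0.0` exposes it on all interfaces, so restrict access to a private network or security group—the RPC protocol is unauthenticated.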
From a86ec76805e8c2e306a41f737d13cff32dc1aceb Mon Sep 17 00:00:00 2001
From: Jason Andrews
Date: Thu, 7 Aug 2025 22:52:58 +0100
Subject: [PATCH 55/55] Update Java flamegraph Learning Path
---
.../java-perf-flamegraph/1_setup.md | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
index e18fb38d9b..5bdd5fa0ca 100644
--- a/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
+++ b/content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md
@@ -37,12 +37,13 @@ Alternatively, you can build Tomcat [from source](https://github.com/apache/tomc
## Enable access to Tomcat examples
-To access the built-in examples from your local network or external IP, modify the `context.xml` file:
+To access the built-in examples from your local network or external IP, use a text editor to modify the `context.xml` file by updating the `RemoteAddrValve` configuration to allow all IP addresses.
+
+The file is at:
```bash
-vi apache-tomcat-11.0.9/webapps/examples/META-INF/context.xml
+apache-tomcat-11.0.9/webapps/examples/META-INF/context.xml
```
-Update the `RemoteAddrValve` configuration to allow all IPs:
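For reference, the edit described above amounts to relaxing the `allow` pattern on the `RemoteAddrValve`. The stock `context.xml` restricts access to localhost; a permissive variant (suitable for testing only, not production) looks like this—exact surrounding attributes may differ in your Tomcat version:

```xml
<Context>
  <!-- Allow connections from any address; restrict this in production. -->
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow=".*" />
</Context>
```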