Commit 6865b3c

Update outdated LPs

- Don't use apt docker.io for installation
- Don't use vLLM abbreviation
- Fix PaddlePaddle repo paths
- Update ExecuTorch installation method
- Add Mac installation for Ansible
- Add Arm ASR Fab store link

1 parent 9863ead · commit 6865b3c

File tree: 10 files changed (+68, −51 lines)

Note: several hunks in this commit only trim trailing whitespace, so their removed and added lines can look identical below.

content/install-guides/ansible.md (21 additions, 4 deletions)

````diff
@@ -19,29 +19,36 @@ weight: 1
 
 Ansible is an open source, command-line automation used to configure systems and deploy software.
 
-Ansible command-line tools can be installed on a variety of Linux distributions.
+Ansible command-line tools can be installed on a variety of Linux and Unix distributions.
 
 [General installation information](https://docs.ansible.com/ansible/latest/installation_guide/installation_distros.html) is available which covers all supported operating systems, but it doesn't talk about Arm-based hosts.
 
 ## What should I do before I start installing the Ansible command line tools?
 
-This article provides a quick solution to install the Ansible command line tools, such as `ansible-playbook` for Ubuntu on Arm.
+This article provides a quick solution to install the Ansible command line tools, such as `ansible-playbook` for macOS and Ubuntu running Arm.
 
 Confirm you are using an Arm machine by running:
 
 ```bash
 uname -m
 ```
 
-The output should be:
+The output on Ubuntu should be:
 
 ```output
 aarch64
 ```
 
+And for macOS:
+
+```output
+arm64
+```
+
 If you see a different result, you are not using an Arm-based machine running 64-bit Linux.
 
-## How do I download and install Ansible for Ubuntu on Arm?
+## Install Ansible for Ubuntu on Arm
 
 The easiest way to install the latest version of Ansible for Ubuntu on Arm is to use the PPA (Personal Package Archive).
 
@@ -54,6 +61,16 @@ sudo add-apt-repository --yes --update ppa:ansible/ansible
 sudo apt install ansible -y
 ```
 
+## Install Ansible for macOS
+
+You can use the `brew` package manager to install on `arm64`:
+
+```console
+brew install ansible
+```
+
+## Confirm installation
+
 Confirm the Ansible command line tools are installed by running:
 
 ```bash
````
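The change above leaves the reader checking two different strings depending on OS (`aarch64` on Linux, `arm64` on macOS). A small helper can accept either value; this is an illustrative sketch, not part of the install guide (the function name is invented):

```shell
# Accept either 64-bit Arm machine string: aarch64 (Linux) or arm64 (macOS).
is_arm64() {
  case "$1" in
    aarch64|arm64) return 0 ;;
    *) return 1 ;;
  esac
}

# Check the current host:
if is_arm64 "$(uname -m)"; then
  echo "Arm 64-bit host"
else
  echo "Not an Arm 64-bit host"
fi
```

The same check works unchanged in a CI script or provisioning playbook where the target OS is not known in advance.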

content/learning-paths/embedded-and-microcontrollers/avh_ppocr/end-to-end_workflow.md (2 additions, 2 deletions)

````diff
@@ -38,10 +38,10 @@ sudo bash scripts/config_tvm.sh
 Now you can navigate to the text recognition example directory.
 
 ```bash
-cd ./ocr/text_recognition/
+cd ./OCR-example/Text-recognition-example-m85/
 ```
 
-In this directory, there is a script named [run_demo.sh](https://github.com/ArmDeveloperEcosystem/Paddle-examples-for-AVH/blob/main/OCR-example/run_demo.sh) that automates the entire process described in the End-to-end workflow diagram.
+In this directory, there is a script named [run_demo.sh](https://github.com/ArmDeveloperEcosystem/Paddle-examples-for-AVH/blob/main/OCR-example/Text-recognition-example-m85/run_demo.sh) that automates the entire process described in the End-to-end workflow diagram.
 
 Update the FVP executable name in the `run_demo.sh` script. The `VHT_Platform` should match what is installed in the system. The executable starts with either `VHT_Corstone_SSE` or `FVP_Corstone_SSE`. Check which one is available in the `$PATH` by typing it out and using the Tab key to autocomplete. Then, using a code editor of your choice or `vim`, you can assign the correct executable:
 
````
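The updated text still asks the reader to discover the installed FVP executable interactively via Tab completion. Bash can do the same lookup non-interactively with the `compgen -c` builtin, which lists commands on `$PATH` by prefix; a hedged sketch, not part of the Learning Path:

```shell
# List any Corstone FVP executables on PATH, trying both prefixes the
# Learning Path mentions, and pick the first match if one exists.
fvp="$( { compgen -c VHT_Corstone_SSE; compgen -c FVP_Corstone_SSE; } | head -n 1 )"
if [ -n "$fvp" ]; then
  echo "Using FVP executable: $fvp"
else
  echo "No Corstone FVP found on PATH"
fi
```

The value of `$fvp` could then be substituted for `VHT_Platform` in `run_demo.sh` instead of editing the script by hand.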

content/learning-paths/embedded-and-microcontrollers/jetson_object_detection/_index.md (2 additions, 2 deletions)

```diff
@@ -5,12 +5,12 @@ minutes_to_complete: 60
 
 who_is_this_for: This is an introductory topic for developers interested in integrating object detection into their applications.
 
-learning_objectives:
+learning_objectives:
 - Set up a Jetson Orin Nano with a MIPI CSI-2 camera for object detection
 - Detect objects from both live video and image files
 
 prerequisites:
-- A Jetson Orin Nano (https://developer.nvidia.com/embedded/learn/jetson-orin-nano-devkit-user-guide/index.html)
+- A [Jetson Orin Nano](https://developer.nvidia.com/embedded/learn/jetson-orin-nano-devkit-user-guide/index.html)
 - A microSD card (64GB UHS-1 or larger is recommended)
 - A MIPI CSI-2 camera, with a 22 pin connector on at least one end
 
```

content/learning-paths/embedded-and-microcontrollers/rpi-llama3/_index.md (6 additions, 2 deletions)

```diff
@@ -6,7 +6,7 @@ minutes_to_complete: 60
 who_is_this_for: This is an introductory topic for anyone interested in running the Llama 3 model on a Raspberry Pi 5, and learning about techniques for running large language models (LLMs) in an embedded environment.
 
 learning_objectives:
-- Use Docker to run Raspberry Pi OS on an Arm Linux server.
+- Use Docker to run Raspberry Pi OS on an Arm Linux server.
 - Compile a Large Language Model (LLM) using ExecuTorch.
 - Deploy the Llama 3 model on an edge device.
 - Describe how to run Llama 3 on a Raspberry Pi 5 using ExecuTorch.
@@ -31,7 +31,7 @@ tools_software_languages:
 - LLM
 - GenAI
 - Raspberry Pi
-
+
 
 
 further_reading:
@@ -47,6 +47,10 @@ further_reading:
     title: ExecuTorch Examples
     link: https://github.com/pytorch/executorch/blob/main/examples/README.md
     type: website
+- resource:
+    title: Run Llama3 8B on a Raspberry Pi 5 with ExecuTorch
+    link: https://dev-discuss.pytorch.org/t/run-llama3-8b-on-a-raspberry-pi-5-with-executorch/2048
+    type: website
 
 
 
```

content/learning-paths/embedded-and-microcontrollers/rpi-llama3/executorch.md (3 additions, 8 deletions)

````diff
@@ -35,7 +35,7 @@ conda activate executorch-venv
 
 ## Install Clang
 
-Install Clang, which is required to build ExecuTorch:
+Install Clang, which is required to build ExecuTorch:
 
 ```bash
 sudo apt install clang -y
@@ -52,7 +52,7 @@ sudo update-alternatives --set c++ /usr/bin/clang++
 
 ## Clone ExecuTorch and install the required dependencies
 
-Continue in your Python virtual environment, and run the commands below to download the ExecuTorch repository and install the required packages.
+Continue in your Python virtual environment, and run the commands below to download the ExecuTorch repository and install the required packages.
 
 After cloning the repository, the project's submodules are updated, and two scripts install additional dependencies.
 
@@ -61,13 +61,8 @@ git clone https://github.com/pytorch/executorch.git
 cd executorch
 git submodule sync
 git submodule update --init
-./install_requirements.sh --pybind xnnpack
+./install_executorch.sh
 ./examples/models/llama2/install_requirements.sh
 ```
 
-{{% notice Note %}}
-You can safely ignore the following error on failing to import lm_eval running the install_requirements.sh scripts:
-`Failed to import examples.models due to lm_eval conflict`
-{{% /notice %}}
-
 When these scripts finish successfully, ExecuTorch is all set up. That means it's time to dive into the world of Llama models!
````

content/learning-paths/mobile-graphics-and-gaming/Build-Llama3-Chat-Android-App-Using-Executorch-And-XNNPACK/2-executorch-setup.md (1 addition, 2 deletions)

````diff
@@ -41,8 +41,7 @@ git clone https://github.com/pytorch/executorch.git
 cd executorch
 git submodule sync
 git submodule update --init
-./install_requirements.sh
-./install_requirements.sh --pybind xnnpack
+./install_executorch.sh
 ./examples/models/llama/install_requirements.sh
 ```
 
````

content/learning-paths/mobile-graphics-and-gaming/get-started-with-arm-asr/01-what-is-arm-asr.md (6 additions, 4 deletions)

```diff
@@ -8,11 +8,11 @@ layout: learningpathall
 
 ## Introduction
 
-[Arm® Accuracy Super Resolution™ (Arm ASR)](https://www.arm.com/developer-hub/mobile-graphics-and-gaming/arm-accuracy-super-resolution) is a mobile-optimized temporal upscaling technique derived from [AMD's Fidelity Super Resolution 2 v2.2.2](https://github.com/GPUOpen-LibrariesAndSDKs/FidelityFX-SDK/blob/main/docs/techniques/super-resolution-temporal.md).
+[Arm® Accuracy Super Resolution™ (Arm ASR)](https://www.arm.com/developer-hub/mobile-graphics-and-gaming/arm-accuracy-super-resolution) is a mobile-optimized temporal upscaling technique derived from [AMD's Fidelity Super Resolution 2 v2.2.2](https://github.com/GPUOpen-LibrariesAndSDKs/FidelityFX-SDK/blob/main/docs/techniques/super-resolution-temporal.md).
 
 Arm ASR extends this technology with optimizations suited to the resource-constrained environment of mobile gaming.
 
-Arm ASR is currently available as an easy-to-integrate plug-in for Unreal Engine versions 5.3, 5.4, and 5.5, with a Unity plugin coming soon. It is also available as a generic library that you can integrate into other engines.
+Arm ASR is currently available as an easy-to-integrate plug-in for Unreal Engine versions 5.3, 5.4, and 5.5, with a Unity plugin coming soon. It is also available as a generic library that you can integrate into other engines.
 
 Using ASR, you can improve frames per second (FPS), enhance visual quality, and prevent thermal throttling for smoother and longer gameplay.
 
@@ -35,7 +35,7 @@ You have control over a range of different settings, including:
 
 ## Overview of Arm ASR
 
-The [Arm ASR Experience Kit](https://github.com/arm/accuracy-super-resolution) provides resources to help you evaluate and effectively utilize this technology.
+The [Arm ASR Experience Kit](https://github.com/arm/accuracy-super-resolution) provides resources to help you evaluate and effectively utilize this technology.
 
 It includes:
 
@@ -47,7 +47,9 @@ It includes:
 
 The Arm ASR plugin for Unreal Engine 5 integrates into your project within minutes. Once installed, simply enable temporal upscaling, and the plugin automatically handles frame upscaling.
 
-See [Using Arm ASR in Unreal Engine](../02-ue).
+The plugin for Unreal Engine is available in the [Fab store](https://www.fab.com/listings/9a75a41d-6fad-44c3-995f-646f62cd2512).
+
+To set it up from source, proceed to [Using Arm ASR in Unreal Engine](../02-ue).
 
 ## Custom Engine Usage
 
```
content/learning-paths/servers-and-cloud-computing/mariadb/docker_deployment.md (13 additions, 13 deletions)

````diff
@@ -8,22 +8,22 @@ weight: 6 # 1 is first, 2 is second, etc.
 layout: "learningpathall"
 ---
 
-## Install MariaDB in a Docker container
+## Install MariaDB in a Docker container
 
-You can deploy [MariaDB](https://mariadb.org/) in a Docker container using Ansible.
+You can deploy [MariaDB](https://mariadb.org/) in a Docker container using Ansible.
 
 ## Before you begin
 
 For this section you will need a computer which has [Ansible](/install-guides/ansible/) installed. You can use the same SSH key pair. You also need a cloud instance or VM, or a physical machine with Ubuntu installed, running and ready to deploy MariaDB.
-
-
+
+
 ## Deploy a MariaDB container using Ansible
 
 Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. If you are new to Docker, consider reviewing [Learn how to use Docker](/learning-paths/cross-platform/docker/).
 
 To run Ansible, you can use an Ansible playbook. The playbook uses the `community.docker` collection to deploy MariaDB in a container.
 
-The playbook maps the container port to the host port, which is `3306`.
+The playbook maps the container port to the host port, which is `3306`.
 
 1. Use a text editor to add the contents below to a new file named `playbook.yml`.
 
@@ -37,8 +37,8 @@ The playbook maps the container port to the host port, which is `3306`.
       shell: |
         apt-get update -y
         apt-get -y install mariadb-client
-        apt-get install docker.io -y
-        usermod -aG docker ubuntu
+        curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh
+        usermod -aG docker ubuntu ; newgrp docker
         apt-get -y install python3-pip
         pip3 install PyMySQL
         pip3 install docker
@@ -70,9 +70,9 @@ The playbook maps the container port to the host port, which is `3306`.
 
 
 ```
 
-2. Edit `playbook.yml` to use your values
+2. Edit `playbook.yml` to use your values
 
-Replace **{{your_mariadb_password}}** with your own password.
+Replace **{{your_mariadb_password}}** with your own password.
 
 Also, replace **{{dockerhub_uname}}** and **{{dockerhub_pass}}** with your [Docker Hub](https://hub.docker.com/) credentials.
 
@@ -81,7 +81,7 @@ Also, replace **{{dockerhub_uname}}** and **{{dockerhub_pass}}** with your [Dock
 [all]
 ansible-target1 ansible_connection=ssh ansible_host={{public_ip of VM where MariaDB to be deployed}} ansible_user={{user_of VM where MariaDB to be deployed}}
 ```
-4. Edit `inventory.txt` to use your values
+4. Edit `inventory.txt` to use your values
 
 Replace **{{public_ip of VM where MariaDB to be deployed}}** and **{{user_of VM where MariaDB to be deployed}}** with your own values.
 
@@ -95,9 +95,9 @@ Replace **{{public_ip of VM where MariaDB to be deployed}}** and **{{user_of VM
 ansible-playbook playbook.yaml -i {your_inventory_file_location}
 ```
 
-2. Answer `yes` when prompted for the SSH connection.
+2. Answer `yes` when prompted for the SSH connection.
 
-Deployment may take a few minutes.
+Deployment may take a few minutes.
 
 The output should be similar to:
 
@@ -128,5 +128,5 @@ ansible-target1 : ok=3 changed=4 unreachable=0 failed=0 s
 
 ## Connect to Database using your local machine
 
-You can use the instructions from the previous topic to [connect to the database](/learning-paths/servers-and-cloud-computing/mariadb/ec2_deployment#connect-to-database-from-local-machine) and confirm the Docker container deployment is working.
+You can use the instructions from the previous topic to [connect to the database](/learning-paths/servers-and-cloud-computing/mariadb/ec2_deployment#connect-to-database-from-local-machine) and confirm the Docker container deployment is working.
 
````
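With host port `3306` mapped as the playbook describes, the container can be reached from a local machine with the standard client. A sketch that composes the client command (the function name and the IP shown are illustrative placeholders, not values from the Learning Path):

```shell
# Compose the MySQL/MariaDB client command for a container whose port 3306
# is mapped to the same host port, as in the playbook above.
mariadb_client_cmd() {
  local host="$1" port="${2:-3306}"
  echo "mysql -h ${host} -P ${port} -u root -p"
}

mariadb_client_cmd 203.0.113.10
# prints: mysql -h 203.0.113.10 -P 3306 -u root -p
```

Substituting the VM's public IP for the placeholder reproduces the connection command referenced in the "Connect to Database" section.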

content/learning-paths/servers-and-cloud-computing/vLLM/_index.md (6 additions, 6 deletions)

```diff
@@ -1,16 +1,16 @@
 ---
-title: Build and Run a Virtual Large Language Model (vLLM) on Arm Servers
+title: Build and Run vLLM on Arm Servers
 
 minutes_to_complete: 45
 
-who_is_this_for: This is an introductory topic for software developers and AI engineers interested in learning how to use a vLLM (Virtual Large Language Model) on Arm servers.
+who_is_this_for: This is an introductory topic for software developers and AI engineers interested in learning how to use the vLLM library on Arm servers.
 
 learning_objectives:
-- Build a vLLM from source on an Arm server.
+- Build vLLM from source on an Arm server.
 - Download a Qwen LLM from Hugging Face.
-- Run local batch inference using a vLLM.
-- Create and interact with an OpenAI-compatible server provided by a vLLM on your Arm server.
-
+- Run local batch inference using vLLM.
+- Create and interact with an OpenAI-compatible server provided by vLLM on your Arm server.
+
 prerequisites:
 - An [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp/) from a cloud service provider, or a local Arm Linux computer with at least 8 CPUs and 16 GB RAM.
 
```

content/learning-paths/servers-and-cloud-computing/vLLM/vllm-setup.md (8 additions, 8 deletions)

````diff
@@ -12,9 +12,9 @@ To follow the instructions for this Learning Path, you will need an Arm server r
 
 ## What is vLLM?
 
-[vLLM](https://github.com/vllm-project/vllm) stands for Virtual Large Language Model, and is a fast and easy-to-use library for inference and model serving.
+[vLLM](https://github.com/vllm-project/vllm) stands for Virtual Large Language Model, and is a fast and easy-to-use library for inference and model serving.
 
-You can use vLLM in batch mode, or by running an OpenAI-compatible server.
+You can use vLLM in batch mode, or by running an OpenAI-compatible server.
 
 In this Learning Path, you will learn how to build vLLM from source and run inference on an Arm-based server, highlighting its effectiveness.
 
@@ -23,8 +23,8 @@ In this Learning Path, you will learn how to build vLLM from source and run infe
 First, ensure your system is up-to-date and install the required tools and libraries:
 
 ```bash
-sudo apt-get update -y
-sudo apt-get install -y curl ccache git wget vim numactl gcc-12 g++-12 python3 python3-pip python3-venv python-is-python3 libtcmalloc-minimal4 libnuma-dev ffmpeg libsm6 libxext6 libgl1 libssl-dev pkg-config
+sudo apt-get update -y
+sudo apt-get install -y curl ccache git wget vim numactl gcc-12 g++-12 python3 python3-pip python3-venv python-is-python3 libtcmalloc-minimal4 libnuma-dev ffmpeg libsm6 libxext6 libgl1 libssl-dev pkg-config
 ```
 
 Set the default GCC to version 12:
@@ -58,7 +58,7 @@ python -m venv env
 source env/bin/activate
 ```
 
-Your command-line prompt is prefixed by `(env)`, which indicates that you are in the Python virtual environment.
+Your command-line prompt is prefixed by `(env)`, which indicates that you are in the Python virtual environment.
 
 Now update Pip and install Python packages:
 
@@ -67,7 +67,7 @@ pip install --upgrade pip
 pip install py-cpuinfo
 ```
 
-### How do I download vLLM and build it?
+### How do I download vLLM and build it?
 
 First, clone the vLLM repository from GitHub:
 
@@ -78,9 +78,9 @@ git checkout 72ff3a968682e6a3f7620ab59f2baf5e8eb2777b
 ```
 
 {{% notice Note %}}
-The Git checkout specifies a specific hash known to work for this example.
+The Git checkout specifies a specific hash known to work for this example.
 
-Omit this command to use the latest code on the main branch.
+Omit this command to use the latest code on the main branch.
 {{% /notice %}}
 
 Install the Python packages for vLLM:
````
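One hunk keeps the reminder that an active virtual environment prefixes the prompt with `(env)`. The underlying signal is the `VIRTUAL_ENV` variable exported by the activation script, which scripts can check directly; an illustrative sketch (the helper name is invented, not part of the Learning Path):

```shell
# Report whether a Python virtual environment is active. Activation scripts
# export VIRTUAL_ENV; the "(env)" prompt prefix is derived from it.
in_venv() {
  [ -n "${VIRTUAL_ENV:-}" ]
}

if in_venv; then
  echo "venv active: $VIRTUAL_ENV"
else
  echo "no venv active"
fi
```

A build script for vLLM could use such a guard to refuse to run `pip install` outside the environment.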

0 commit comments