Commit f522f49

Merge pull request #2254 from ArmDeveloperEcosystem/main

Prod update

2 parents 98b6406 + e5c3ba3 commit f522f49

78 files changed: +4407 additions, −836 deletions

.wordlist.txt

Lines changed: 78 additions & 1 deletion
```diff
@@ -4590,4 +4590,81 @@ xdp
 xhci
 JFR
 conv
-servlet
+servlet
+tv
+gpiozero
+lgpio
+TinyLlama
+Superscalar
+automations
+gemma
+tinyllama
+pinout
+Makatia
+Omusilibwa
+EdgeAI
+Raspi
+abyz
+fidel
+javascript
+makatia
+uk
+STDDEV
+Stdev
+BytesToBytesMap
+HashMap
+LongToUnsafeRowMap
+Stdev
+UnsafeRow
+UnsafeRowhash
+agg
+arrayEqual
+codegen
+hashmap
+hugeMethodLimit
+ints
+kurtosis
+stddev
+wholestage
+RDD
+TEEs
+paravirtualization
+WholeStageCodegen
+Wholestage
+zshrc
+hadoop
+CBL
+DataFrame
+exitCode
+Gerganov's
+Radoslav
+rgerganov
+NSS
+spatio
+upsampling
+UE
+VGF
+NNE
+RDG
+Configurator
+RHI
+RHIs
+NNERuntimeRDGMLExtensionsForVulkan
+Unreal's
+ORT
+MLEmulationLayerForVulkan
+RenderDoc's
+vgf
+dataflow
+Sandboxed
+sandboxed
+Termina
+LXC
+Crostini
+ChromeOS's
+crosh
+Sommelier
+chromeos
+linuxcontainers
+
```

assets/contributors.csv

Lines changed: 2 additions & 1 deletion
```diff
@@ -94,4 +94,5 @@ Peter Harris,Arm,,,,
 Chenying Kuo,Adlink,evshary,evshary,,
 William Liang,,,wyliang,,
 Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheedbrown/,,
-Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
+Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
+Fidel Makatia Omusilibwa,,,,,
```

content/install-guides/fm_fvp/fvp.md

Lines changed: 9 additions & 9 deletions
```diff
@@ -11,15 +11,20 @@ multi_install: false           # Set to true if first page of multi-page article
 multitool_install_part: true   # Set to true if a sub-page of a multi-page article, else false
 layout: installtoolsall        # DO NOT MODIFY. Always true for tool install articles
 ---
-Arm Fixed Virtual Platforms (FVPs) are provided as a library of ready to use platforms.
 
-{{% notice Arm Development Tools%}}
-An appropriate subset of the FVP library is installed with [Arm Development Studio](/install-guides/armds) and [Keil MDK](/install-guides/mdk) Professional Edition.
+{{% notice Note %}}
+Arm Fixed Virtual Platforms (FVPs) were available as a library of ready-to-use platforms (and as a component of Arm Development Studio) up until version 11.28.
+
+From 11.29 onwards, the FVPs are provided solely as part of Arm Development Studio.
+
+This install guide is only applicable to the legacy FVP library.
+
+See the [Arm Development Studio Install Guide](/install-guides/armds) and the [Introduction to FVPs](https://developer.arm.com/documentation/110379/1129/Introduction-to-FVPs) documentation.
 {{% /notice %}}
 
 ## Download installer packages
 
-You can download the FVP library installer from the [Product Download Hub](https://developer.arm.com/downloads/view/FM000A).
+You can download the FVP library installer from the [Product Download Hub](https://developer.arm.com/downloads/view/FMFVP).
 
 Linux (AArch64 and x86) and Windows (x86 only) hosts are supported.

@@ -47,11 +52,6 @@ For full list of available options, use:
 
 FVPs are license managed. License setup instructions are available in the [Arm Licensing install guide](/install-guides/license).
 
-{{% notice Arm Development Tools%}}
-The FVPs provided with Arm Development Studio and/or Keil MDK Professional Edition use the license of that product, not that of the FVP library.
-{{% /notice %}}
-
-
 ## Verify installation
 
 To verify everything is working OK, navigate to the install directory, and launch any of the supplied FVP executables. No additional command options are needed.
```

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/2-env-setup.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -50,7 +50,7 @@ Run the commands below to set up the ExecuTorch internal dependencies:
 
 ```bash
 git submodule sync
-git submodule update --init
+git submodule update --init --recursive
 ./install_executorch.sh
 ```
````

Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@

---
title: Run LLMs locally on Raspberry Pi 5 for Edge AI

weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Overview

This Learning Path walks you through deploying an efficient large language model (LLM) locally on the Raspberry Pi 5, powered by an Arm Cortex-A76 CPU. This setup lets you control your smart home using natural language, without relying on cloud services. With rapid advances in generative AI and the power of Arm Cortex-A processors, you can now run capable language models directly in your home on the Raspberry Pi 5.

You will create a fully local, privacy-first smart home system that leverages the strengths of the Arm Cortex-A architecture. Using optimized models such as TinyLlama and Qwen, the system can achieve inference speeds of 15+ tokens per second while maintaining the energy efficiency that makes Arm processors well suited to always-on applications.

## Why Arm Cortex-A76 makes Raspberry Pi 5 ideal for Edge AI

The Raspberry Pi 5's Arm Cortex-A76 processor can handle high-performance computing tasks like AI inference. Key architectural features include:

- **Superscalar architecture**: executes multiple instructions in parallel, improving throughput for compute-heavy tasks
- **128-bit NEON SIMD support**: accelerates the matrix and vector operations common in the inner loops of language model inference
- **Multi-level cache hierarchy**: reduces memory latency and improves data access efficiency at runtime
- **Thermal efficiency**: sustains performance without active cooling, making it ideal for compact or always-on smart home setups

These characteristics make the Raspberry Pi 5 well suited to workloads like smart home assistants, where responsiveness, efficiency, and local processing matter. Running LLMs locally on Arm-based devices brings several practical benefits:

- Privacy is preserved, since conversations and routines never leave the device.
- With optimized inference, per-token latency can stay under 100 ms, even on resource-constrained hardware.
- The system remains fully functional offline, continuing to operate when internet access is unavailable.
- Developers gain the flexibility to customize models and automations.
- Software updates and an active ecosystem continue to improve performance over time.

## Leverage the Arm ecosystem for Raspberry Pi Edge AI

For the stack in this setup, the Raspberry Pi 5 benefits from the extensive Arm developer ecosystem:

- Optimized compilers, including GCC and Clang with Arm-specific enhancements
- Native libraries such as gpiozero and lgpio, optimized for Raspberry Pi
- Community support from open-source projects where developers contribute Arm-optimized code
- Backward compatibility in the Arm architecture, which reduces friction when updating kernels or deploying across platforms
- The same architecture powering smartphones, embedded controllers, edge devices, and cloud infrastructure, enabling consistent development practices across domains

## Performance benchmarks on Raspberry Pi 5

The table below shows inference performance for several quantized models running on a Raspberry Pi 5. Measurements reflect single-threaded CPU inference with typical prompt lengths and temperature settings suitable for command-based interaction.

| Model               | Tokens/sec | Avg latency (ms) |
| ------------------- | ---------- | ---------------- |
| qwen:0.5b           | 17.0       | 8,217            |
| tinyllama:1.1b      | 12.3       | 9,429            |
| deepseek-coder:1.3b | 7.3        | 22,503           |
| gemma2:2b           | 4.1        | 23,758           |
| deepseek-r1:7b      | 1.6        | 64,797           |
49+
## LLM benchmark insights on Raspberry Pi 5
50+
51+
- Qwen 0.5B and TinyLlama 1.1B deliver fast token generation and low average latency, making them suitable for real-time interactions such as voice-controlled smart home commands
52+
- DeepSeek-Coder 1.3B and Gemma 2B trade some speed for improved language understanding, which can be useful for complex tasks or context-aware prompts
53+
- DeepSeek-R1 7B offers advanced reasoning capabilities with acceptable latency, which may be viable for offline summarization, planning, or low-frequency tasks
54+
55+
## Supported Arm-powered devices
56+
57+
This Learning Path focuses on the Raspberry Pi 5, but you can adapt the concepts and code to other Arm-powered devices.
58+
59+
## Recommended platforms
60+
61+
| Platform | CPU | RAM | GPIO support | Model size suitability |
62+
| ------------------- | -------------------------------- | -------------- | ------------------------------ | --------------------------- |
63+
| **Raspberry Pi 5** | Arm Cortex-A76 quad-core @ 2.4GHz | Up to 16GB | Native `lgpio` (high-performance) | Large models (8–16GB) |
64+
| **Raspberry Pi 4** | Arm Cortex-A72 quad-core @ 1.8GHz | Up to 8GB | Compatible with `gpiozero` | Small to mid-size models |
65+
| **Other Arm devices** | Arm Cortex-A | 4GB min (8GB+ recommended) | Requires physical GPIO pins | Varies by RAM |
66+
67+
Additionally, the platform must meet the following requirements:
68+
69+
- GPIO pins available for hardware control
70+
- Python 3.8 or newer
71+
- Ability to run [Ollama](https://ollama.com/)
72+
73+
In the next section, you’ll set up the software dependencies needed to start building your privacy-first smart home system on Raspberry Pi 5.
Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@

---
title: Set up software dependencies on Raspberry Pi 5 for Ollama and LLMs
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Overview

In this section, you'll prepare your Raspberry Pi 5 by installing Python, the required libraries, and Ollama, so you can run large language models (LLMs) locally.

{{% notice Note %}}
This Learning Path assumes you have set up your Raspberry Pi with Raspberry Pi OS and network connectivity. For Raspberry Pi 5 setup support, see [Raspberry Pi Getting Started](https://www.raspberrypi.com/documentation/).
{{% /notice %}}

## Connect to your Raspberry Pi 5

### Option 1: Use a display

The easiest way to work on your Raspberry Pi is to connect it to an external display through one of the micro-HDMI ports. This setup also requires a keyboard and mouse.

### Option 2: Use SSH

You can also use SSH to access the terminal. To use this approach, you need the IP address of your device. Ensure your Raspberry Pi 5 is on the same network as your host computer, then access it remotely using a terminal or any SSH client.

Replace `<user>` with your Pi's username (typically `pi`) and `<pi-ip>` with your Raspberry Pi 5's IP address:

```bash
ssh <user>@<pi-ip>
```

## Install Python and system dependencies

Create a directory called `smart-home` in your home directory and navigate into it:

```bash
mkdir -p "$HOME/smart-home"
cd "$HOME/smart-home"
```

The Raspberry Pi 5 includes Python 3 preinstalled, but you need some additional packages:

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y python3 python3-pip python3-venv git curl build-essential gcc python3-lgpio
```

## Configure a virtual environment

Create and activate a Python virtual environment to isolate project dependencies:

```bash
python3 -m venv venv
source venv/bin/activate
```

Install the required libraries:

```bash
pip install ollama gpiozero lgpio psutil httpx orjson numpy fastapi uvicorn uvloop
```
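Optionally, you can confirm that the key packages import cleanly before moving on. This is just a convenience check (run it inside the activated virtual environment), not part of the official setup:

```python
import importlib

# Packages installed by the pip command above; anything reported as
# missing should be reinstalled inside the activated virtual environment.
for pkg in ["ollama", "gpiozero", "lgpio", "psutil", "httpx", "fastapi"]:
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: missing")
```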
## Install Ollama

Install Ollama using the official installation script for Linux:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Verify the installation:

```bash
ollama --version
```

If the installation was successful, the output should be similar to:

```output
ollama version is 0.11.4
```

## Run a test LLM with Ollama on Raspberry Pi 5

Ollama supports various models. This guide uses `deepseek-r1:7b` as an example, but you can also use `tinyllama:1.1b`, `qwen:0.5b`, `gemma2:2b`, or `deepseek-coder:1.3b`.

The `run` command sets up the model automatically. You will see download progress in the terminal, followed by an interactive prompt when the model is ready:

```bash
ollama run deepseek-r1:7b
```

{{% notice Troubleshooting %}}
If you run into issues with the model download, try the following:

- Confirm internet access and sufficient storage space on your microSD card.
- Try smaller models like `qwen:0.5b` or `tinyllama:1.1b` if you encounter memory issues. 16 GB of RAM is sufficient for small to medium models; very large models may require more memory or run more slowly.
- Clear storage or connect to a more stable network if errors occur.
{{% /notice %}}
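Beyond the interactive prompt, you can also call a model from Python using the `ollama` client package installed earlier. The sketch below is a minimal illustration, not part of the official setup; the `build_chat_messages` and `ask` helper names are this guide's own, and it assumes the Ollama server is running locally with the model already pulled.

```python
def build_chat_messages(prompt, system=None):
    """Build the messages list that Ollama's chat endpoint expects."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return messages


def ask(model, prompt):
    """Send one prompt to a locally running Ollama server and return the reply text."""
    import ollama  # installed with pip earlier in this section

    response = ollama.chat(model=model, messages=build_chat_messages(prompt))
    return response["message"]["content"]
```

For example, `print(ask("deepseek-r1:7b", "Say hello in five words"))` should print a short greeting once the model from the step above is available.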
With the model set up through Ollama, move on to the next section to start configuring the hardware.
Lines changed: 75 additions & 0 deletions
@@ -0,0 +1,75 @@

---
title: Test Raspberry Pi 5 GPIO pins for smart home devices
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Overview

The next step is to test GPIO functionality. In this section, you configure an LED to simulate a smart home device.

## Verify GPIO setup on Raspberry Pi 5

Gather your electronic components. Connect the anode (long leg) of an LED in series with a 220Ω resistor to GPIO 17 (physical pin 11). Connect the cathode (short leg) to a ground (GND) pin.

See the image below for the full setup:

![Raspberry Pi connected to a breadboard with a green LED and jumper wires alt-text#center](pin_layout.jpg "Raspberry Pi connected to a breadboard with a green LED and jumper wires")

Create a Python script named `testgpio.py`:

```bash
cd "$HOME/smart-home"
vim testgpio.py
```

Add the following code to the file:

```python
#!/usr/bin/env python3
import time

from gpiozero import Device, LED
from gpiozero.pins.lgpio import LGPIOFactory

# Use the lgpio backend, required on Raspberry Pi 5
Device.pin_factory = LGPIOFactory()

# Set up GPIO pin 17
pin1 = LED(17)

try:
    while True:
        pin1.toggle()          # Switch pin 17 state
        time.sleep(2)          # Wait 2 seconds
except KeyboardInterrupt:      # Ctrl+C pressed
    pin1.close()               # Clean up pin 17
```

Run the script:

```bash
python testgpio.py
```

The LED should blink every two seconds. If you observe this behavior, your GPIO setup is working correctly.

{{% notice Troubleshooting %}}
If you run into issues with the hardware setup, check the following:

- Fix missing dependencies with:
  ```bash
  sudo apt-get install -f
  ```
- If you encounter GPIO permission issues, run Python scripts with `sudo` or add your user to the `gpio` group. Log out and back in for the change to take effect:
  ```bash
  sudo usermod -a -G gpio $USER
  ```
- Double-check wiring and pin numbers against the Raspberry Pi 5 pinout diagram
- Ensure proper LED and resistor connections
- Verify GPIO enablement in `raspi-config` if needed
- Use a high-quality power supply
{{% /notice %}}

With the GPIO pins working, you can now move on to the next section to interact with devices using language models and the user interface.
