Commit aa6cc28

Merge pull request #2242 from madeline-underwood/privacy-first
Privacy first_PV to review
2 parents 8c0f55b + 4ee431f commit aa6cc28

5 files changed: +138 −127 lines changed
Lines changed: 30 additions & 30 deletions

@@ -1,5 +1,6 @@
 ---
-title: Overview
+title: Run LLMs locally on Raspberry Pi 5 for Edge AI
+
 weight: 2
 
 ### FIXED, DO NOT MODIFY
@@ -8,66 +9,65 @@ layout: learningpathall
 
 ## Overview
 
-This Learning Path walks you through deploying an efficient large language model (LLM) locally on the Raspberry Pi 5, powered by an Arm Cortex-A76 CPU. This will allow you to control your smart home using natural language, without relying on cloud services. With rapid advances in Generative AI and the power of Arm Cortex-A processors, you can now run advanced language models directly in your home on the Raspberry Pi 5.
+This Learning Path walks you through deploying an efficient large language model (LLM) locally on the Raspberry Pi 5, powered by an Arm Cortex-A76 CPU. This setup enables you to control your smart home using natural language without relying on cloud services. With rapid advances in generative AI and the power of Arm Cortex-A processors, you can now run advanced language models directly in your home on the Raspberry Pi 5.
 
-You will create a fully local, privacy-first smart home system that leverages the strengths of Arm Cortex-A architecture. The system can achieve 15+ tokens per second inference speeds using optimized models like TinyLlama and Qwen, while maintaining the energy efficiency that makes Arm processors a good fit for always-on applications.
+You will create a fully local, privacy-first smart home system that leverages the strengths of Arm Cortex-A architecture. The system can achieve 15+ tokens per second inference speeds using optimized models like TinyLlama and Qwen, while maintaining the energy efficiency that makes Arm processors well suited for always-on applications.
 
-## Why Arm Cortex-A for Edge AI?
+## Why Arm Cortex-A76 makes Raspberry Pi 5 ideal for Edge AI
 
 The Raspberry Pi 5's Arm Cortex-A76 processor can manage high-performance computing tasks like AI inference. Key architectural features include:
 
-- The **superscalar architecture** allows the processor to execute multiple instructions in parallel, improving throughput for compute-heavy tasks.
-- **128-bit NEON SIMD support** accelerates matrix and vector operations, which are common in the inner loops of language model inference.
-- The **multi-level cache hierarchy** helps reduce memory latency and improves data access efficiency during runtime.
-- The **thermal efficiency** enables sustained performance without active cooling, making it ideal for compact or always-on smart home setups.
+- **Superscalar architecture**: Executes multiple instructions in parallel, improving throughput for compute-heavy tasks
+- **128-bit NEON SIMD support**: Accelerates matrix and vector operations, common in the inner loops of language model inference
+- **Multi-level cache hierarchy**: Reduces memory latency and improves data access efficiency during runtime
+- **Thermal efficiency**: Enables sustained performance without active cooling, making it ideal for compact or always-on smart home setups
 
-These characteristics make the Raspberry Pi 5 well-suited for workloads like smart home assistants, where responsiveness, efficiency, and local processing are important. Running LLMs locally on Arm-based devices brings several practical benefits. Privacy is preserved, since conversations and routines never leave the device. With optimized inference, the system can offer responsiveness under 100 ms, even on resource-constrained hardware. It remains fully functional in offline scenarios, continuing to operate when internet access is unavailable. Developers also gain flexibility to customize models and automations. Additionally, software updates and an active ecosystem continue to improve performance over time.
+These characteristics make the Raspberry Pi 5 well suited for workloads like smart home assistants, where responsiveness, efficiency, and local processing are important. Running LLMs locally on Arm-based devices brings several practical benefits. Privacy is preserved, since conversations and routines never leave the device. With optimized inference, the system can offer responsiveness under 100 ms, even on resource-constrained hardware. It remains fully functional in offline scenarios, continuing to operate when internet access is unavailable. Developers also gain flexibility to customize models and automations. Additionally, software updates and an active ecosystem continue to improve performance over time.
 
-## Arm Ecosystem Advantages
+## Leverage the Arm ecosystem for Raspberry Pi Edge AI
 
 For the stack in this setup, Raspberry Pi 5 benefits from the extensive developer ecosystem:
 
 - Optimized compilers including GCC and Clang with Arm-specific enhancements
 - Native libraries such as gpiozero and lgpio are optimized for Raspberry Pi
-- Community support from open-source projects where developers are contributing Arm-optimized code
-- Arm maintains a strong focus on backward compatibility, which reduces friction when updating kernels or deploying across multiple Arm platforms
+- Community support from open-source projects where developers contribute Arm-optimized code
+- Backward compatibility in Arm architecture reduces friction when updating kernels or deploying across platforms
 - The same architecture powers smartphones, embedded controllers, edge devices, and cloud infrastructure—enabling consistent development practices across domains
 
-## Performance Benchmarks on Raspberry Pi 5
+## Performance benchmarks on Raspberry Pi 5
 
 The table below shows inference performance for several quantized models running on a Raspberry Pi 5. Measurements reflect single-threaded CPU inference with typical prompt lengths and temperature settings suitable for command-based interaction.
 
-| Model | Tokens/Sec | Avg Latency (ms) |
+| Model | Tokens/sec | Avg latency (ms) |
 | ------------------- | ---------- | ---------------- |
 | qwen:0.5b | 17.0 | 8,217 |
 | tinyllama:1.1b | 12.3 | 9,429 |
 | deepseek-coder:1.3b | 7.3 | 22,503 |
 | gemma2:2b | 4.1 | 23,758 |
 | deepseek-r1:7b | 1.6 | 64,797 |
 
+## LLM benchmark insights on Raspberry Pi 5
 
-What does this table tell us? Here are some performance insights:
-
-- Qwen 0.5B and TinyLlama 1.1B deliver fast token generation and low average latency, making them suitable for real-time interactions like voice-controlled smart home commands.
-- DeepSeek-Coder 1.3B and Gemma 2B trade off some speed for improved language understanding, which can be useful for more complex task execution or context-aware prompts.
-- DeepSeek-R1 7B offers advanced reasoning capabilities with acceptable latency, which may be viable for offline summarization, planning, or low-frequency tasks.
+- Qwen 0.5B and TinyLlama 1.1B deliver fast token generation and low average latency, making them suitable for real-time interactions such as voice-controlled smart home commands
+- DeepSeek-Coder 1.3B and Gemma 2B trade some speed for improved language understanding, which can be useful for complex tasks or context-aware prompts
+- DeepSeek-R1 7B offers advanced reasoning capabilities with acceptable latency, which may be viable for offline summarization, planning, or low-frequency tasks
 
-## Supported Arm-Powered Devices
+## Supported Arm-powered devices
 
-This Learning Path focuses on the Raspberry Pi 5, but you can adapt the concepts and code to other Arm-powered devices:
+This Learning Path focuses on the Raspberry Pi 5, but you can adapt the concepts and code to other Arm-powered devices.
 
-### Recommended Platforms
+## Recommended platforms
 
-| Platform | CPU | RAM | GPIO Support | Model Size Suitability |
-|------------------|----------------------------------|----------------|-------------------------------|-----------------------------|
-| **Raspberry Pi 5** | Arm Cortex-A76 quad-core @ 2.4GHz | Up to 16GB | Native `lgpio` (high-performance) | Large models (8–16GB) |
-| **Raspberry Pi 4** | Arm Cortex-A72 quad-core @ 1.8GHz | Up to 8GB | Compatible with `gpiozero` | Small to mid-size models |
-| **Other Arm Devices** | Arm Cortex-A | 4GB min (8GB+ recommended) | Requires physical GPIO pins | Varies by RAM |
+| Platform | CPU | RAM | GPIO support | Model size suitability |
+| ------------------- | -------------------------------- | -------------- | ------------------------------ | --------------------------- |
+| **Raspberry Pi 5** | Arm Cortex-A76 quad-core @ 2.4GHz | Up to 16GB | Native `lgpio` (high-performance) | Large models (8–16GB) |
+| **Raspberry Pi 4** | Arm Cortex-A72 quad-core @ 1.8GHz | Up to 8GB | Compatible with `gpiozero` | Small to mid-size models |
+| **Other Arm devices** | Arm Cortex-A | 4GB min (8GB+ recommended) | Requires physical GPIO pins | Varies by RAM |
 
-Additionally, the platform must:
+Additionally, the platform must meet the following requirements:
 
 - GPIO pins available for hardware control
-- Use Python 3.8 or newer
+- Python 3.8 or newer
 - Ability to run [Ollama](https://ollama.com/)
 
-Continue to the next section to start building a smart home system that highlights how Arm-based processors can enable efficient, responsive, and private AI applications at the edge.
+In the next section, you’ll set up the software dependencies needed to start building your privacy-first smart home system on Raspberry Pi 5.
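The tokens/sec figures in the benchmark table above can be derived from the counters Ollama returns with each response (`eval_count` tokens and `eval_duration` in nanoseconds). Here is a minimal sketch; the commented-out `ollama.generate` usage assumes the Ollama Python client is installed and a model has been pulled, and the sanity-check numbers are illustrative, not measured:

```python
def tokens_per_sec(eval_count: int, eval_duration_ns: int) -> float:
    """Tokens generated divided by generation time in seconds.

    Ollama reports eval_count (tokens generated) and eval_duration
    (nanoseconds spent generating) with each non-streamed response.
    """
    return eval_count / (eval_duration_ns / 1_000_000_000)


# Hypothetical usage on the Pi (requires `pip install ollama` and a pulled model):
# import ollama
# resp = ollama.generate(model="qwen:0.5b", prompt="Turn on the living room light.")
# print(tokens_per_sec(resp["eval_count"], resp["eval_duration"]))

# Illustrative check: 170 tokens generated in 10 s gives 17.0 tok/s,
# matching the qwen:0.5b row in the table above.
print(round(tokens_per_sec(170, 10_000_000_000), 1))
```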
Lines changed: 33 additions & 27 deletions

@@ -1,63 +1,67 @@
 ---
-title: Set up software dependencies
+title: Set up software dependencies on Raspberry Pi 5 for Ollama and LLMs
 weight: 3
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
+## Overview
+
+In this section, you’ll prepare your Raspberry Pi 5 by installing Python, required libraries, and Ollama, so you can run large language models (LLMs) locally.
+
 {{% notice Note %}}
-This guide assumes you have set up your Raspberry Pi with Raspberry Pi OS and network connectivity. For Raspberry Pi 5 setup help, see: [Raspberry Pi Getting Started](https://www.raspberrypi.com/documentation/)
+This Learning Path assumes you have set up your Raspberry Pi with Raspberry Pi OS and network connectivity. For Raspberry Pi 5 setup support, see [Raspberry Pi Getting Started](https://www.raspberrypi.com/documentation/).
 {{% /notice %}}
 
-## Connect to Your Raspberry Pi 5
+## Connect to your Raspberry Pi 5
 
-### Option 1: Using a display
+### Option 1: Use a display
 
-The easiest way to work on your Raspberry Pi is connecting it to an external display through one of the micro HDMI ports. This setup also requires a keyboard and mouse to navigate.
+The easiest way to work on your Raspberry Pi is by connecting it to an external display through one of the microHDMI ports. This setup also requires a keyboard and mouse.
 
-### Option 2: Using SSH
+### Option 2: Use SSH
 
-You can also use SSH to access the terminal. To use this approach you need to know the IP address of your device. Ensure your Raspberry Pi 5 connects to the same network as your host computer. Access your device remotely via SSH using the terminal or any SSH client.
+You can also use SSH to access the terminal. To use this approach, you need to know the IP address of your device. Ensure your Raspberry Pi 5 is on the same network as your host computer. Access your device remotely via SSH using the terminal or any SSH client.
 
 Replace `<user>` with your Pi's username (typically `pi`), and `<pi-ip>` with your Raspberry Pi 5's IP address.
 
 ```bash
 ssh <user>@<pi-ip>
 ```
 
-## Set up the dependencies
+## Install Python and system dependencies
 
 Create a directory called `smart-home` in your home directory and navigate into it:
 
 ```bash
-mkdir $HOME/smart-home
-cd $HOME/smart-home
+mkdir -p "$HOME/smart-home"
+cd "$HOME/smart-home"
 ```
 
-The Raspberry Pi 5 includes Python 3 pre-installed, but you need additional packages:
+The Raspberry Pi 5 includes Python 3 preinstalled, but you need additional packages:
 
 ```bash
-sudo apt update && sudo apt upgrade
-sudo apt install python3 python3-pip python3-venv git curl build-essential gcc python3-lgpio
+sudo apt update && sudo apt upgrade -y
+sudo apt install -y python3 python3-pip python3-venv git curl build-essential gcc python3-lgpio
 ```
 
-### Configure the virtual environment
+## Configure a virtual environment
 
-The next step is to create and activate a Python virtual environment. This approach keeps project dependencies isolated and prevents conflicts with system-wide packages:
+Create and activate a Python virtual environment to isolate project dependencies:
 
 ```bash
 python3 -m venv venv
 source venv/bin/activate
 ```
 
-Install all required libraries and dependencies:
+Install the required libraries:
 
 ```bash
-pip install ollama gpiozero lgpio psutil httpx orjson numpy fastapi uvicorn uvloop numpy
+pip install ollama gpiozero lgpio psutil httpx orjson numpy fastapi uvicorn uvloop
 ```
 
-### Install Ollama
+## Install Ollama
 
 Install Ollama using the official installation script for Linux:
 
@@ -70,27 +74,29 @@ Verify the installation:
 ```bash
 ollama --version
 ```
-If installation was successful, the output from the command should match that below.
+
+If installation was successful, the output should be similar to:
+
 ```output
 ollama version is 0.11.4
 ```
 
-## Download and Test a Language Model
+## Run a test LLM with Ollama on Raspberry Pi 5
 
-Ollama supports various models. This guide uses deepseek-r1:7b as an example, but you can also use `tinyllama:1.1b`, `qwen:0.5b`, `gemma2:2b`, or `deepseek-coder:1.3b`.
+Ollama supports various models. This guide uses `deepseek-r1:7b` as an example, but you can also use `tinyllama:1.1b`, `qwen:0.5b`, `gemma2:2b`, or `deepseek-coder:1.3b`.
 
-The `run` command will set up the model automatically. You will see download progress in the terminal, followed by the interactive prompt when ready.
+The `run` command sets up the model automatically. You will see download progress in the terminal, followed by an interactive prompt when ready.
 
 ```bash
 ollama run deepseek-r1:7b
 ```
 
 {{% notice Troubleshooting %}}
-If you run into issues with the model download, here are some things to check:
+If you run into issues with the model download, try the following:
 
-- Confirm internet access and sufficient storage space on your microSD card
-- Try downloading smaller models like `qwen:0.5b` or `tinyllama:1.1b` if you encounter memory issues. 16 GB of RAM is sufficient for running smaller to medium-sized language models. Very large models may require more memory or run slower.
-- Clear storage or connect to a more stable network if errors occur
+- Confirm internet access and sufficient storage space on your microSD card.
+- Try smaller models like `qwen:0.5b` or `tinyllama:1.1b` if you encounter memory issues. 16 GB of RAM is sufficient for small to medium models; very large models may require more memory or run slower.
+- Clear storage or connect to a more stable network if errors occur.
 {{% /notice %}}
 
-With the model set up through `ollama`, move on to the next section to start configuring the hardware.
+With the model set up through Ollama, move on to the next section to start configuring the hardware.
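The prerequisites this page establishes (Python 3.8 or newer, plus `ollama`, `git`, and `curl` on the PATH) can be verified up front with a short standard-library script. This is a sketch; the function name `check_env` and the exact set of binaries checked are illustrative choices, not part of the commit:

```python
import shutil
import sys


def check_env(min_py=(3, 8), binaries=("ollama", "git", "curl")):
    """Return a list of human-readable problems with the local setup."""
    problems = []
    # Check the interpreter version against the Learning Path's minimum.
    if sys.version_info < min_py:
        problems.append(
            f"Python {min_py[0]}.{min_py[1]}+ required, found {sys.version.split()[0]}"
        )
    # Check that each required command-line tool is on PATH.
    for name in binaries:
        if shutil.which(name) is None:
            problems.append(f"'{name}' not found on PATH")
    return problems


if __name__ == "__main__":
    issues = check_env()
    print("OK" if not issues else "\n".join(issues))
```

Running it inside the `venv` on the Pi prints `OK` when everything from this section is in place, or one line per missing piece otherwise.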
Lines changed: 22 additions & 17 deletions

@@ -1,18 +1,22 @@
 ---
-title: Test GPIO pins
+title: Test Raspberry Pi 5 GPIO pins for smart home devices
 weight: 4
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-The next step is to test the GPIO functionality. In this section, you will configure a LED light to simulate a smart-home device.
+## Overview
 
-## Verify GPIO Functionality
+The next step is to test the GPIO functionality. In this section, you configure an LED light to simulate a smart home device.
 
-Bring out your electronics components. Connect the anode (long leg) of an LED in series with a 220Ω resistor to GPIO 17 (physical pin 11). Connect the cathode (short leg) to a ground (GND) pin. See image below for the full setup:
+## Verify GPIO setup on Raspberry Pi 5
 
-![Raspberry Pi connected to a breadboard with a green LED and jumper wires](pin_layout.jpg "Raspberry Pi connected to a breadboard with a green LED and jumper wires")
+Gather your electronic components. Connect the anode (long leg) of an LED in series with a 220Ω resistor to GPIO 17 (physical pin 11). Connect the cathode (short leg) to a ground (GND) pin.
+
+See the image below for the full setup:
+
+![Raspberry Pi connected to a breadboard with a green LED and jumper wires alt-text#center](pin_layout.jpg "Raspberry Pi connected to a breadboard with a green LED and jumper wires")
 
 Create a Python script named `testgpio.py`:
 
@@ -21,7 +25,7 @@ cd $HOME/smart-home
 vim testgpio.py
 ```
 
-Copy this code into the file:
+Add the following code to the file:
 
 ```python
 #!/usr/bin/env python3
@@ -32,7 +36,7 @@ from gpiozero.pins.lgpio import LGPIOFactory
 # Set lgpio backend for Raspberry Pi 5
 Device.pin_factory = LGPIOFactory()
 
-# Setup GPIO pin 17
+# Set up GPIO pin 17
 pin1 = LED(17)
 
 try:
@@ -52,19 +56,20 @@ python testgpio.py
 The LED should blink every two seconds. If you observe this behavior, your GPIO setup works correctly.
 
 {{% notice Troubleshooting %}}
-If you run into issues with the hardware setup, here are some things to check:
-- Try fixing missing dependencies by running the following command:
-```bash
-sudo apt-get install -f
-```
-- If you're running into GPIO permission issues, run Python scripts with `sudo` or add your user to the `gpio` group. Don't forget to log out for the changes to take effect.
-```bash
-sudo usermod -a -G gpio $USER
-```
+If you run into issues with the hardware setup, check the following:
+
+- Fix missing dependencies with:
+```bash
+sudo apt-get install -f
+```
+- If you encounter GPIO permission issues, run Python scripts with `sudo` or add your user to the `gpio` group. Don’t forget to log out for the changes to take effect:
+```bash
+sudo usermod -a -G gpio $USER
+```
 - Double-check wiring and pin numbers using the Raspberry Pi 5 pinout diagram
 - Ensure proper LED and resistor connections
 - Verify GPIO enablement in `raspi-config` if needed
 - Use a high-quality power supply
 {{% /notice %}}
 
-With a way to control devices using GPIO pins, you can move on to the next section to interact with them using language models and the user interface.
+With GPIO pins working, you can now move on to the next section to interact with devices using language models and the user interface.
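The diff above shows only fragments of `testgpio.py` (the backend setup and `pin1 = LED(17)`), but the blink behavior it tests reduces to alternating on/off states on a fixed period. A hardware-free sketch of that logic, runnable anywhere; the generator name `blink_states` and the two-second period are assumptions taken from the "blinks every two seconds" description, and the gpiozero calls appear only as comments because they need the Pi:

```python
import itertools
from typing import Iterator, Tuple


def blink_states(period_s: float = 2.0) -> Iterator[Tuple[str, float]]:
    """Yield an endless (state, hold_seconds) sequence for a simple blink loop."""
    half = period_s / 2
    for state in itertools.cycle(("on", "off")):
        yield state, half


# On the Pi, the real script would consume this with gpiozero:
#   pin1.on() or pin1.off() per state, then time.sleep(hold)
for state, hold in itertools.islice(blink_states(), 4):
    print(state, hold)
```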
