Commit d1f5a4a

Merge pull request #2212 from jasonrandrews/review
spelling and link fixes
2 parents af1dbbf + cd87508

File tree

6 files changed (+9, -6 lines)

.wordlist.txt

Lines changed: 4 additions & 1 deletion

@@ -4559,7 +4559,7 @@ qdisc
 ras
 rcu
 regmap
-rgerganovs
+rgerganov's
 rotocol
 rpcgss
 rpmh
@@ -4588,3 +4588,6 @@ vmscan
 workqueue
 xdp
 xhci
+JFR
+conv
+servlet

content/learning-paths/embedded-and-microcontrollers/_index.md

Lines changed: 1 addition & 1 deletion

@@ -49,7 +49,7 @@ tools_software_languages_filter:
 - Coding: 26
 - Containerd: 1
 - DetectNet: 1
-- Docker: 9
+- Docker: 10
 - DSTREAM: 2
 - Edge AI: 1
 - Edge Impulse: 1

content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/2-overview.md

Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ TinyML is machine learning optimized to run on low-power, resource-constrained d

 This Learning Path focuses on using TinyML models with virtualized Arm hardware to simulate real-world AI workloads on microcontrollers and NPUs.

-If you're looking to build and train your own TinyML models, follow the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).
+If you're looking to build and train your own TinyML models, follow the [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/).

 ## What is ExecuTorch?

content/learning-paths/embedded-and-microcontrollers/visualizing-ethos-u-performance/_index.md

Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ operatingsystems:

 tools_software_languages:
 - Arm Virtual Hardware
-- Fixed Virtual Platform (FVP)
+- Fixed Virtual Platform
 - Python
 - PyTorch
 - ExecuTorch

content/learning-paths/servers-and-cloud-computing/_index.md

Lines changed: 1 addition & 1 deletion

@@ -47,7 +47,7 @@ tools_software_languages_filter:
 - ASP.NET Core: 2
 - Assembly: 4
 - assembly: 1
-- Async-profiler: 1
+- async-profiler: 1
 - AWS: 1
 - AWS CDK: 2
 - AWS CodeBuild: 1

content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ layout: learningpathall
 The instructions in this Learning Path are for any Arm server running Ubuntu 24.04.2 LTS. You will need at least three Arm server instances with at least 64 cores and 128GB of RAM to run this example. The instructions have been tested on an AWS Graviton4 c8g.16xlarge instance

 ## Overview
-llama.cpp is a C++ library that enables efficient inference of LLaMA and similar large language models on CPUs, optimized for local and embedded environments. Just over a year ago from its publication date, rgerganovs RPC code was merged into llama.cpp, enabling distributed inference of large LLMs across multiple CPU-based machines—even when the models don’t fit into the memory of a single machine. In this learning path, we’ll explore how to run a 405B parameter model on Arm-based CPUs.
+llama.cpp is a C++ library that enables efficient inference of LLaMA and similar large language models on CPUs, optimized for local and embedded environments. Just over a year ago from its publication date, rgerganov's RPC code was merged into llama.cpp, enabling distributed inference of large LLMs across multiple CPU-based machines—even when the models don’t fit into the memory of a single machine. In this learning path, we’ll explore how to run a 405B parameter model on Arm-based CPUs.

 For the purposes of this demonstration, the following experimental setup will be used:
 - Total number of instances: 3
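The RPC-based distributed inference that this diff's Overview paragraph describes can be sketched as shell commands. This is a hedged sketch, not part of the commit: the hostnames (worker1, worker2), the port, and the model filename are placeholders, while the `-DGGML_RPC=ON` build flag, the `rpc-server` binary, and the `--rpc` option come from llama.cpp's RPC backend.

```shell
# Build llama.cpp with the RPC backend enabled (run on every instance)
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# On each worker instance, expose local CPU and RAM over RPC
# (0.0.0.0 and 50052 are example values)
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the main instance, point llama-cli at the workers so the model's
# layers are distributed across all three machines
./build/bin/llama-cli -m llama-405b.gguf \
    --rpc worker1:50052,worker2:50052 \
    -p "Hello"
```

With this layout, the main instance streams tensor operations to the two `rpc-server` processes, which is what allows a model too large for any single machine's memory to run at all.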

0 commit comments