
Commit c2eb06c

spelling updates

1 parent 837dc04 commit c2eb06c

File tree

11 files changed: +120, -15 lines changed

.wordlist.txt

Lines changed: 105 additions & 0 deletions
@@ -5156,3 +5156,108 @@ transpile
 tsc
 typescriptlang
 vmlinux
+ATfE
+ATfL
+AlmaLinux
+Asher
+AsyncOpenAI
+Bálint
+CVE
+CircleCI's
+Couchbase
+Couchbase's
+DANDROID
+DKLEIDICV
+DataType
+EdgeXpert
+EleutherAI
+Facter
+GDDR
+GEMMs
+GSM
+Gnuplot
+HD
+HellaSwag
+Hiera
+HwCaps
+InceptionV
+Infineon
+Jett
+KRaft's
+Kiro
+KleidiCV's
+LangChain
+LlamaIndex
+MMLU
+MULTIVERSION
+MVT
+Menuconfig
+MobileNetV
+NHWC
+ORM
+Phoronix
+PointwiseConv
+PyBind
+QL
+RecursiveCharacterTextSplitter
+ResNet
+SDOT
+SMES
+SentenceTransformer
+Silabs
+TPUs
+UDOT
+XDCR
+XNNBatchMatrixMultiply
+XNNConv
+XNNFullyConnected
+XnnpackBackend
+Zhou
+acc
+agentless
+aten
+blockwise
+cbc
+couchbase
+ctrl
+datasheets
+decltype
+docx
+etdump
+etrecord
+facter
+faiss
+fg
+fibonacci
+gemm
+hiera
+ipc
+ivh
+js's
+kiro
+kirocli
+libclang
+libopencv
+llamacpp
+llmcompressor
+minmax
+mse
+multiversion
+phoronix
+pillowfight
+pkl
+pointwise
+pqs
+precisions
+proto
+pypdf
+qb
+qc
+qp
+rebalance
+rustup
+sSf
+tcmalloc
+tlsv
+vLLM's
+webp

content/learning-paths/cross-platform/multiplying-matrices-with-sme2/1-get-started.md

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ For the compiler, you can use [Clang](https://www.llvm.org/) version 18 or later
 At the time of writing, macOS ships with `clang` version 17.0.0, which doesn't support SME2. Use a newer version, such as 21.1.4, available through Homebrew.
 {{% /notice%}}
 
-You can check your compiler version using the command: `clang --version` if it's alreay installed. If not, install `clang` using the instructions below, selecting either macOS or Linux/Ubuntu, depending on your setup:
+You can check your compiler version using the command: `clang --version` if it's already installed. If not, install `clang` using the instructions below, selecting either macOS or Linux/Ubuntu, depending on your setup:
 
 {{< tabpane code=true >}}
 

content/learning-paths/cross-platform/multiplying-matrices-with-sme2/2-check-your-environment.md

Lines changed: 1 addition & 1 deletion
@@ -275,7 +275,7 @@ defined in `misc.c`.
 
 The `sme2_check` program then displays whether SVE, SME and SME2 are supported
 at line 24. The checking of SVE, SME and SME2 is done differently depending on
-`BAREMETAL`. This platform specific behaviour is abstracted by the
+`BAREMETAL`. This platform specific behavior is abstracted by the
 `display_cpu_features()`:
 - In baremetal mode, our program has access to system registers and can inspect system registers for SME2 support. The program will print the SVE field of the `ID_AA64PFR0_EL1` system register and the SME field of the `ID_AA64PFR1_EL1` system register.
 - In non baremetal mode, on an Apple platform the program needs to use a higher

content/learning-paths/embedded-and-microcontrollers/streamline-kernel-module/3_oot_module.md

Lines changed: 1 addition & 1 deletion
@@ -223,7 +223,7 @@ The module above receives the size of a 2D array as a string through the `char_d
 ssh root@<your-target-ip>
 ```
 
-4. Execute the following commads on the target to run the module:
+4. Execute the following commands on the target to run the module:
 ```bash
 insmod /root/mychardrv.ko
 mknod /dev/mychardrv c 42 0

content/learning-paths/embedded-and-microcontrollers/streamline-kernel-module/4_sl_profile_oot.md

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ If you are using an AArch32 target, use `arm` instead of `arm64`.
 
 ![Streamline command#center](./images/img04_streamline_cmd.png)
 
-8. In the Capture settings dialog, select Add image, add the absolut path of your kernel module file `mychardrv.ko` and click Save.
+8. In the Capture settings dialog, select Add image, add the absolute path of your kernel module file `mychardrv.ko` and click Save.
 ![Capture settings#center](./images/img05_capture_settings.png)
 
 9. Start the capture and enter a name and location for the capture file. Streamline will start collecting data and the charts will show activity being captured from the target.

content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ This ensures you can install the CircleCI Runner package directly using `apt`.
 curl -s https://packagecloud.io/install/repositories/circleci/runner/script.deb.sh?any=true | sudo bash
 ```
 
-- The `curl` command downloads and executes the repository setup script from CircleCIs official package server.
+- The `curl` command downloads and executes the repository setup script from CircleCI's official package server.
 - It configures the repository on your system, allowing `apt` to fetch and install the CircleCI runner package.
 - After successful execution, the CircleCI repository will be added under `/etc/apt/sources.list.d/`.
 

content/learning-paths/servers-and-cloud-computing/couchbase-on-gcp/baseline.md

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ http://<VM-Public-IP>:8091
 
 ![Finalize configuration](images/cluster-setup-4.png "Finalize configuration")
 
-Our default cluster is now created! Please retain the passord you created for your "Administrator" account... you'll need that in the next steps.
+Our default cluster is now created! Please retain the password you created for your "Administrator" account... you'll need that in the next steps.
 
 ### Verify Cluster Nodes
 This command checks if your Couchbase server (called a “node”) is running properly. Replace "password" with your specified Couchbase Administrator password.
@@ -87,4 +87,4 @@ Use the admin `username` (default is "Administrator") and `password` you created
 - The **benchmark** bucket will be used for **load testing** and **performance benchmarking**.
 - Setting the **RAM Quota** ensures Couchbase allocates sufficient memory for **in-memory data operations**, improving overall speed.
 
-You can now proceed to the next section for benchmarking to measure Couchbases performance.
+You can now proceed to the next section for benchmarking to measure Couchbase's performance.

content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ This configuration file sets up a single Kafka server to act as both a controlle
 
 ## Format the storage directory
 
-Format the metadata storage directory using the kafka-storage.sh tool. This initializes KRafts internal Raft logs with a unique cluster ID.
+Format the metadata storage directory using the kafka-storage.sh tool. This initializes KRaft's internal Raft logs with a unique cluster ID.
 
 ```console
 bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties

content/learning-paths/servers-and-cloud-computing/vllm-acceleration/1-overview-and-build.md

Lines changed: 4 additions & 4 deletions
@@ -21,7 +21,7 @@ You can use vLLM in two main ways:
 
 vLLM supports Hugging Face Transformer models out-of-the-box and scales seamlessly from single-prompt testing to production batch inference.
 
-## What'll you build
+## What you will build
 
 In this Learning Path, you'll build a CPU-optimized version of vLLM targeting the Arm64 architecture, integrated with oneDNN and the Arm Compute Library (ACL).
 This build enables high-performance LLM inference on Arm servers, leveraging specialized Arm math libraries and kernel optimizations.
@@ -39,7 +39,7 @@ vLLM achieves high performance on Arm servers by combining software and hardware
 
 These optimizations work together to deliver higher throughput and lower latency for LLM inference on Arm servers.
 
-vLLMs performance on Arm servers is driven by both software optimization and hardware-level acceleration.
+vLLM's performance on Arm servers is driven by both software optimization and hardware-level acceleration.
 Each component of this optimized build contributes to higher throughput and lower latency during inference:
 
 - Optimized kernels: the aarch64 vLLM build uses direct oneDNN with the Arm Compute Library for key operations.
@@ -74,7 +74,7 @@ sudo apt-get install -y libtcmalloc-minimal4
 ```
 
 {{% notice Note %}}
-On aarch64, vLLMs CPU backend automatically builds with the Arm Compute Library (ACL) through oneDNN.
+On aarch64, vLLM's CPU backend automatically builds with the Arm Compute Library (ACL) through oneDNN.
 This ensures optimized Arm kernels are used for matrix multiplications, layer normalization, and activation functions without additional configuration.
 {{% /notice %}}
 
@@ -160,7 +160,7 @@ python examples/offline_inference/basic/chat.py \
 Explanation:
 --dtype=bfloat16 runs inference in bfloat16 precision. Recent Arm processors support the BFloat16 (BF16) number format in PyTorch. For example, AWS Graviton3 and Graviton3 processors support BFloat16.
 --model specifies a small Hugging Face model for testing (TinyLlama-1.1B-Chat), ideal for functional validation before deploying larger models.
-You should see token streaming in the console, followed by a generated output confirming that vLLMs inference pipeline is working correctly.
+You should see token streaming in the console, followed by a generated output confirming that vLLM's inference pipeline is working correctly.
 
 ```output
 Generated Outputs:

content/learning-paths/servers-and-cloud-computing/vllm-acceleration/2-quantize-model.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ layout: learningpathall
 ## Accelerate LLMs with 4-bit quantization
 
 You can accelerate many LLMs on Arm CPUs with 4‑bit quantization. In this section, you’ll quantize the deepseek-ai/DeepSeek-V2-Lite model to 4-bit integer (INT4) weights.
-The quantized model runs efficiently through vLLMs INT4 inference path, which is accelerated by Arm KleidiAI microkernels.
+The quantized model runs efficiently through vLLM's INT4 inference path, which is accelerated by Arm KleidiAI microkernels.
 
 ## Install quantization tools
 
