Commit cb69cdc

Merge pull request #2201 from jasonrandrews/review
spelling updates
2 parents f0eecf9 + 511013c commit cb69cdc
File tree

5 files changed (+134, -15 lines)

.wordlist.txt

Lines changed: 114 additions & 1 deletion
@@ -4474,4 +4474,117 @@ AssetLib
 PerformanceStudio
 VkThread
 precompiled
-rollouts
+rollouts
+Bhusari
+DLLAMA
+FlameGraph
+FlameGraphs
+JSP
+KBC
+MMIO
+Paravirtualized
+PreserveFramePointer
+Servlet
+TDISP
+VirtIO
+WebSocket
+agentpath
+alarmtimer
+aoss
+apb
+ata
+bpf
+brendangregg
+chipidea
+clk
+cma
+counterintuitive
+cpuhp
+cros
+csd
+devfreq
+devlink
+dma
+dpaa
+dwc
+ecurity
+edma
+evice
+filelock
+filemap
+flamegraphs
+fsl
+glink
+gpu
+hcd
+hns
+hw
+hwmon
+icmp
+initcall
+iomap
+iommu
+ipi
+irq
+jbd
+jvmti
+kmem
+ksm
+kvm
+kyber
+libata
+libperf
+lockd
+mdio
+memcg
+mmc
+mtu
+musb
+napi
+ncryption
+netfs
+netlink
+nfs
+ntegrity
+nterface
+oom
+optee
+pagemap
+paravirtualized
+percpu
+printk
+pwm
+qcom
+qdisc
+ras
+rcu
+regmap
+rgerganov’s
+rotocol
+rpcgss
+rpmh
+rseq
+rtc
+sched
+scmi
+scsi
+skb
+smbus
+smp
+spi
+spmi
+sunrpc
+swiotlb
+tegra
+thp
+tlb
+udp
+ufs
+untrusted
+uring
+virtio
+vmalloc
+vmscan
+workqueue
+xdp
+xhci

content/learning-paths/servers-and-cloud-computing/_index.md

Lines changed: 15 additions & 9 deletions
@@ -8,8 +8,8 @@ key_ip:
 maintopic: true
 operatingsystems_filter:
 - Android: 2
-- Linux: 154
-- macOS: 10
+- Linux: 157
+- macOS: 11
 - Windows: 14
 pinned_modules:
 - module:
@@ -22,8 +22,8 @@ subjects_filter:
 - Containers and Virtualization: 29
 - Databases: 15
 - Libraries: 9
-- ML: 28
-- Performance and Architecture: 60
+- ML: 29
+- Performance and Architecture: 62
 - Storage: 1
 - Web: 10
 subtitle: Optimize cloud native apps on Arm for performance and cost
@@ -47,6 +47,8 @@ tools_software_languages_filter:
 - ASP.NET Core: 2
 - Assembly: 4
 - assembly: 1
+- Async-profiler: 1
+- AWS: 1
 - AWS CDK: 2
 - AWS CodeBuild: 1
 - AWS EC2: 2
@@ -65,7 +67,7 @@ tools_software_languages_filter:
 - C++: 8
 - C/C++: 2
 - Capstone: 1
-- CCA: 6
+- CCA: 7
 - Clair: 1
 - Clang: 10
 - ClickBench: 1
@@ -77,18 +79,19 @@ tools_software_languages_filter:
 - Daytona: 1
 - Demo: 3
 - Django: 1
-- Docker: 17
+- Docker: 18
 - Envoy: 2
 - ExecuTorch: 1
 - FAISS: 1
+- FlameGraph: 1
 - Flink: 1
 - Fortran: 1
 - FunASR: 1
 - FVP: 4
 - GCC: 22
 - gdb: 1
 - Geekbench: 1
-- GenAI: 11
+- GenAI: 12
 - GitHub: 6
 - GitLab: 1
 - Glibc: 1
@@ -114,7 +117,7 @@ tools_software_languages_filter:
 - Linaro Forge: 1
 - Litmus7: 1
 - Llama.cpp: 1
-- LLM: 9
+- LLM: 10
 - llvm-mca: 1
 - LSE: 1
 - MariaDB: 1
@@ -132,6 +135,7 @@ tools_software_languages_filter:
 - Ollama: 1
 - ONNX Runtime: 1
 - OpenBLAS: 1
+- OpenJDK-21: 1
 - OpenShift: 1
 - OrchardCore: 1
 - PAPI: 1
@@ -144,7 +148,7 @@ tools_software_languages_filter:
 - RAG: 1
 - Redis: 3
 - Remote.It: 2
-- RME: 6
+- RME: 7
 - Runbook: 71
 - Rust: 2
 - snappy: 1
@@ -161,6 +165,7 @@ tools_software_languages_filter:
 - TensorFlow: 2
 - Terraform: 11
 - ThirdAI: 1
+- Tomcat: 1
 - Trusted Firmware: 1
 - TSan: 1
 - TypeScript: 1
@@ -173,6 +178,7 @@ tools_software_languages_filter:
 - Whisper: 1
 - WindowsPerf: 1
 - WordPress: 3
+- wrk2: 1
 - x265: 1
 - zlib: 1
 - Zookeeper: 1

content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ If everything was built correctly, you should see a list of all the available fl
 
 Communication between the master node and the worker nodes occurs through a socket created on each worker. This socket listens for incoming data from the master—such as model parameters, tokens, hidden states, and other inference-related information.
 {{% notice Note %}}The RPC feature in llama.cpp is not secure by default, so you should never expose it to the open internet. To mitigate this risk, ensure that the security groups for all your EC2 instances are properly configured—restricting access to only trusted IPs or internal VPC traffic. This helps prevent unauthorized access to the RPC endpoints.{{% /notice %}}
-Use the following command to start the listeneing on the worker nodes:
+Use the following command to start listening on the worker nodes:
 ```bash
 bin/rpc-server -p 50052 -H 0.0.0.0 -t 64
 ```
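The `rpc-server` command above leaves each worker listening on a TCP port (50052 in this example). Before launching the master, it can be handy to confirm that each worker's RPC port is actually reachable through your security-group rules. Below is a minimal Python sketch; the worker IPs are placeholders, and this only tests TCP reachability, not the llama.cpp RPC protocol itself:

```python
import socket

def rpc_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds.

    This only confirms the rpc-server socket is reachable; it does not
    speak the llama.cpp RPC protocol.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check workers started with `bin/rpc-server -p 50052`.
# The IPs below are placeholders for your worker nodes.
workers = ["10.0.0.11", "10.0.0.12"]
for host in workers:
    status = "up" if rpc_port_open(host, 50052, timeout=0.5) else "unreachable"
    print(f"{host}:50052 is {status}")
```

Run the check from the master node so it exercises the same network path (and security-group rules) that the inference traffic will use.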

content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-2.md

Lines changed: 3 additions & 3 deletions
@@ -190,7 +190,7 @@ llama_perf_context_print: eval time = 77429.95 ms / 127 runs ( 609
 llama_perf_context_print: total time = 79394.06 ms / 132 tokens
 llama_perf_context_print: graphs reused = 0
 ```
-That's it! You have sucessfully run the llama-3.1-8B model on CPUs with the power of llama.cpp RPC functionality. The following table provides brief description of the metrics from `llama_perf`: <br><br>
+That's it! You have successfully run the llama-3.1-8B model on CPUs with the power of llama.cpp RPC functionality. The following table provides a brief description of the metrics from `llama_perf`: <br><br>
 
 | Log Line | Description |
 |-------------------|-----------------------------------------------------------------------------|
@@ -200,11 +200,11 @@ That's it! You have sucessfully run the llama-3.1-8B model on CPUs with the powe
 | eval time | Time to generate output tokens by forward-passing through the model. |
 | total time | Total time for both prompt processing and token generation (excludes model load). |
 
-Lastly to set up OpenAI compatible API, you can use the `llama-server` functionality. The process of implementing this is described [here](/learning-paths/servers-and-cloud-computing/llama-cpu) under the "Access the chatbot using the OpenAI-compatible API" section. Here is a snippet, for how to set up llama-server for disributed inference:
+Lastly, to set up an OpenAI-compatible API, you can use the `llama-server` functionality. The process of implementing this is described [here](/learning-paths/servers-and-cloud-computing/llama-cpu) under the "Access the chatbot using the OpenAI-compatible API" section. Here is a snippet showing how to set up llama-server for distributed inference:
 ```bash
 bin/llama-server -m /home/ubuntu/model.gguf --port 8080 --rpc "$worker_ips" -ngl 99
 ```
-At the very end of the output to the above command, you will see somethin like the following:
+At the very end of the output of the above command, you will see something like the following:
 ```output
 main: server is listening on http://127.0.0.1:8080 - starting the main loop
 srv update_slots: all slots are idle
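As a rough check on the metrics in the table above, the sample `llama_perf` output can be converted into tokens-per-second figures. A small sketch using the numbers printed earlier (77429.95 ms eval time for 127 generated tokens, 79394.06 ms total time for 132 tokens):

```python
# Derive throughput from the llama_perf numbers shown in the sample output.
eval_ms, eval_tokens = 77429.95, 127      # eval time: output-token generation only
total_ms, total_tokens = 79394.06, 132    # total time: prompt processing + generation

gen_tps = eval_tokens / (eval_ms / 1000.0)          # generation throughput
end_to_end_tps = total_tokens / (total_ms / 1000.0) # overall throughput (excl. model load)

print(f"generation: {gen_tps:.2f} tokens/s")
print(f"end-to-end: {end_to_end_tps:.2f} tokens/s")
```

Comparing these two figures against a single-node run is a quick way to see what the RPC distribution costs or gains you.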

content/learning-paths/servers-and-cloud-computing/java-perf-flamegraph/1_setup.md

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ Move the executable to somewhere in your PATH:
 sudo cp wrk /usr/local/bin
 ```
 
-3. Finally, you can run the benchamrk of Tomcat through wrk2.
+3. Finally, you can run the Tomcat benchmark through wrk2.
 ```bash
 wrk -c32 -t16 -R50000 -d60 http://${tomcat_ip}:8080/examples/servlets/servlet/HelloWorldExample
 ```
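For reference, the wrk2 flags in the benchmark command above describe a fixed offered load. A quick sketch of what that invocation asks for, assuming wrk2's usual flag semantics (`-c` connections, `-t` threads, `-R` target requests per second, `-d` duration in seconds):

```python
# Parameters from: wrk -c32 -t16 -R50000 -d60 <url>
connections, threads, rate_rps, duration_s = 32, 16, 50_000, 60

# wrk distributes the connections across its threads.
conns_per_thread = connections // threads

# wrk2 holds a constant throughput, so the target load over the run is:
total_requests = rate_rps * duration_s

print(f"{conns_per_thread} connections per thread")
print(f"~{total_requests:,} requests targeted over {duration_s}s")
```

If the measured request count falls well short of this target, the server (or the load generator itself) is saturated and the latency percentiles wrk2 reports should be read with that in mind.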
