
Commit 87702d4

Merge pull request #1828 from jasonrandrews/spelling
Spelling updates
2 parents: eb547ec + bc6a865

6 files changed: +17 −9 lines


.wordlist.txt

Lines changed: 11 additions & 3 deletions
@@ -3879,7 +3878,6 @@ DLRMv
 DeepSeek
 Geremy
 MERCHANTABILITY
-MLPerf’s
 MoE
 NONINFRINGEMENT
 NaN
@@ -3928,7 +3927,6 @@ HelloworldSubscriber
 IMU
 Jalisco
 LiDAR
-MLPerf’s
 OTA
 OpenAD
 OpenADKit
@@ -3958,4 +3956,14 @@ ros
 rviz
 testbed
 ug
-vnc
+vnc
+Acyclic
+Bedrust
+MLPerf's
+bedrust
+darko
+mesaros
+multilayer
+renderbuffer
+rosdep
+suboptimally
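The wordlist additions above include both capitalized and lowercase forms (Bedrust and bedrust), which suggests the spell-checker matches entries case-sensitively. A minimal sketch of that kind of allowlist check (a hypothetical helper, not the project's actual tooling):

```python
import re

def unknown_words(text, allowlist):
    """Return the words in text that are missing from a case-sensitive allowlist."""
    words = re.findall(r"[A-Za-z']+", text)
    return [w for w in words if w not in allowlist]

# Both casings must be listed separately, as in the diff above.
allowlist = {"Bedrust", "bedrust", "rosdep", "suboptimally"}
print(unknown_words("bedrust rosdep", allowlist))  # -> []
print(unknown_words("BEDRUST", allowlist))         # -> ['BEDRUST']
```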

content/learning-paths/cross-platform/llm-fine-tuning-for-web-applications/how-to-6.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ This section runs LLM inference using the fine-tuned model.
 
 Use the command below to:
 - Optimize the model for low-latency inference.
-- Use Unsloths performance improvements to speed up text generation.checkpoints in "outputs" folder.
+- Use Unsloth's performance improvements to speed up text generation.checkpoints in "outputs" folder.
 
 ```python
 FastLanguageModel.for_inference(model)

content/learning-paths/mobile-graphics-and-gaming/render-graph-optimization/generating-a-render-graph-for-your-application.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ Ask Frame Advisor to capture data relating to the problem areas you have observe
 
 ## Viewing the render graph
 
-Observe that part of the Frame Advisor window is labelled “Render Graph”. This contains the render graph relating to the frames you asked Frame Advisor to analyze.
+Observe that part of the Frame Advisor window is labeled “Render Graph”. This contains the render graph relating to the frames you asked Frame Advisor to analyze.
 
 Assume that you've captured the following render graph:

content/learning-paths/mobile-graphics-and-gaming/render-graph-optimization/inefficient-transfer-workloads.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ To find which API calls your application uses to start transfer workloads:
 
 - Open the render graph for your captured frames
 - Click a transfer node
-- Now move to the API Calls view (labelled “API Calls”)
+- Now move to the API Calls view (labeled “API Calls”)
 - Observe the API calls in use.
 
 ## Problem: inefficient clear routines

content/learning-paths/mobile-graphics-and-gaming/render-graph-optimization/understanding-your-render-graph.md

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@ Take a closer look at what can be represented in a node on the render graph.
 
 ### Render passes
 
-Most of the execution nodes in the sample graph above are colored green and labelled “RP…” These are [render passes](https://developer.arm.com/documentation/102479/0100/How-Render-Passes-Work).
+Most of the execution nodes in the sample graph above are colored green and labeled “RP…” These are [render passes](https://developer.arm.com/documentation/102479/0100/How-Render-Passes-Work).
 
 ![A render pass node#center](render-pass-node.png "Figure 2. A render pass node")
 
@@ -53,7 +53,7 @@ When you click an execution node, such as a render pass, Frame Advisor navigates
 
 ### Other types of execution node
 
-The graph also shows a transfer node, colored blue, and labelled ”Tr…”.
+The graph also shows a transfer node, colored blue, and labeled ”Tr…”.
 
 ![A transfer node#center](transfer-node.png "A transfer node")

content/learning-paths/servers-and-cloud-computing/dlrm/1-overview.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ The Arm Neoverse V2 CPU is built for high-performance computing, making it ideal
 
 In this Learning Path, you'll learn how to evaluate the performance of the [DLRM using the MLPerf Inference suite](https://github.com/mlcommons/inference/tree/master/recommendation/dlrm_v2/pytorch) in the _Offline_ scenario. The Offline scenario is a test scenario where large batches of data are processed all at once, rather than in real-time. It simulates large-scale, batch-style inference tasks commonly found in recommendation systems for e-commerce, streaming, and social platforms.
 
-You will run tests that measure throughput (samples per second) and latency, providing insights into how efficiently the model runs on the target system. By using MLPerfs standardized methodology, you'll gain reliable insights that help compare performance across different hardware and software configurations — highlighting the system’s ability to handle real-world, data-intensive AI workloads.
+You will run tests that measure throughput (samples per second) and latency, providing insights into how efficiently the model runs on the target system. By using MLPerf's standardized methodology, you'll gain reliable insights that help compare performance across different hardware and software configurations — highlighting the system’s ability to handle real-world, data-intensive AI workloads.
 
 ## Configure your environment
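As a rough illustration of the throughput metric the corrected paragraph describes (samples per second over one large batch), here is a minimal sketch with a dummy workload; it is not MLPerf's actual harness, and the function names are assumptions:

```python
import time

def offline_throughput(run_inference, samples):
    """Run every sample in one batch (Offline-style) and return samples/second."""
    start = time.perf_counter()
    for sample in samples:
        run_inference(sample)  # dummy stand-in for a real model call
    elapsed = time.perf_counter() - start
    return len(samples) / elapsed

# Example: a trivial "model" processing 10,000 samples at once.
rate = offline_throughput(lambda s: s * 2, range(10_000))
print(f"{rate:.0f} samples/sec")
```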
