Commit 8470dea

Merge pull request #1684 from jasonrandrews/spelling
Spelling and link fixes
2 parents 1922cc0 + b26910f commit 8470dea

File tree

10 files changed, +100 −22 lines changed

.wordlist.txt

Lines changed: 81 additions & 1 deletion
```diff
@@ -3688,4 +3688,84 @@ ver
 vit
 wav
 za
-zh
+zh
+ACM
+APs
+ASG
+AutoModelForSpeechSeq
+AutoProcessor
+Avin
+Bioinformatics
+CDK
+Dropbear
+DxeCore
+EFI
+FFTW
+HMMER
+ILP
+IoC
+Jython
+Khrustalev
+LCP
+Maranget
+Minimap
+NFS
+NVPL
+OMP
+OSR
+OneAPI
+OpenRNG
+OpenSSH
+Phttps
+RNG
+Runbook
+SSM
+Shrinkwrap
+Shrinkwrap's
+TSO
+TSan
+TSan's
+ThreadSanitizer
+Toolkits
+ULP
+ULPs
+VSL
+Yury
+Zarlez
+ada
+armtest
+atomicity
+autoscaling
+cmath
+cuBLAS
+cuDNN
+distro's
+dtype
+dxecore
+foir
+fourier
+getter
+libm
+miniconda
+msg
+nInferencing
+oneAPI
+oneapi
+openai
+pseudorandom
+quasirandom
+reorderings
+rootfs
+runbook
+safetensors
+superset
+sysroot
+testroot
+threadsanitizercppmanual
+toolchain's
+transpiles
+vectorstore
+vlen
+vv
+webhook
+xE
```

content/learning-paths/cross-platform/_example-learning-path/appendix-3-test.md

Lines changed: 2 additions & 4 deletions
```diff
@@ -18,9 +18,7 @@ The framework allows you to parse Learning Path articles and generate instructio
 2. [Edit Learning Path pages](#edit-learning-path-pages)
 3. [Edit metadata](#edit-metadata)
 4. [Run the framework](#run-the-framework)
-5. [Result summary](#result-summary)
-6. [Visualize results](#visualize-results)
-
+5. [Advanced usage for embedded development](#advanced-usage-for-embedded-development)
 
 ## Install dependencies
 
@@ -279,7 +277,7 @@ In the example above, the summary indicates that for this Learning Path all test
 ## Advanced usage for embedded development
 ### Using the Corstone-300 FVP
 
-By default, the framework runs instructions on the Docker images specified by the [metadata](#edit-metadata). For embedded development, it is possible to build software in a container instance and then check its behaviour on the Corstone-300 FVP.
+By default, the framework runs instructions on the Docker images specified by the [metadata](#edit-metadata). For embedded development, it is possible to build software in a container instance and then check its behavior on the Corstone-300 FVP.
 
 For this, all container instances used by the test framework mount a volume in `/shared`. This is where software for the target FVP can be stored. To check the execution, the FVP commands just need to be identified as a `fvp` section for the framework.
 
```

content/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-5.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -74,4 +74,4 @@ Your next steps depend on your hardware.
 
 If you have the Grove Vision AI Module, proceed to [Set up the Grove Vision AI Module V2 Learning Path](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/setup-7-grove/).
 
-If you do not have the Grove Vision AI Module, you can use the Corstone-320 FVP instead. See the Learning Path [Set up the Corstone-320 FVP](/learning-paths/microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).
+If you do not have the Grove Vision AI Module, you can use the Corstone-320 FVP instead. See the Learning Path [Set up the Corstone-320 FVP](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm/env-setup-6-fvp/).
```

content/learning-paths/servers-and-cloud-computing/arm-cpp-memory-model/2.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -48,7 +48,7 @@ In the pseudo code snippet above, it's possible for operation B to precede opera
 
 - `memory_order_acquire` and `memory_order_release`
 
-Acquire and release are used to synchronise atomic variables. In the example below, thread A writes to memory (allocating the string and setting data) and then uses a release-store to publish these updates. Thread B repeatedly performs an acquire-load until it sees the updated pointer. The acquire ensures that once Thread B sees a non-null pointer, all writes made by Thread A (including the update to data) become visible, synchronizing the two threads.
+Acquire and release are used to synchronize atomic variables. In the example below, thread A writes to memory (allocating the string and setting data) and then uses a release-store to publish these updates. Thread B repeatedly performs an acquire-load until it sees the updated pointer. The acquire ensures that once Thread B sees a non-null pointer, all writes made by Thread A (including the update to data) become visible, synchronizing the two threads.
 
 ```cpp
 // Thread A
````

content/learning-paths/servers-and-cloud-computing/arm-cpp-memory-model/4.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,7 +8,7 @@ layout: learningpathall
 
 ## How can I detect infrequent race conditions?
 
-ThreadSanitizer, commonly referred to as `TSan`, is a concurrency bug detection tool that identifies data races in multi-threaded programs. By instrumenting code at compile time, TSan dynamically tracks memory operations, monitoring lock usage and detecting inconsistencies in thread synchronization. When it finds a potential data race, it reports detailed information to aid debugging. TSans overhead can be significant, but it provides valuable insights into concurrency issues often missed by static analysis.
+ThreadSanitizer, commonly referred to as `TSan`, is a concurrency bug detection tool that identifies data races in multi-threaded programs. By instrumenting code at compile time, TSan dynamically tracks memory operations, monitoring lock usage and detecting inconsistencies in thread synchronization. When it finds a potential data race, it reports detailed information to aid debugging. TSan's overhead can be significant, but it provides valuable insights into concurrency issues often missed by static analysis.
 
 TSan is available through both recent `clang` and `gcc` compilers.
 
```

content/learning-paths/servers-and-cloud-computing/arm-cpp-memory-model/_index.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -27,7 +27,7 @@ armips:
 - Neoverse
 tools_software_languages:
 - C++
-- ThreadSantizer (TSan)
+- ThreadSanitizer (TSan)
 operatingsystems:
 - Linux
 - Runbook
@@ -38,7 +38,7 @@ further_reading:
 link: https://en.cppreference.com/w/cpp/atomic/memory_order
 type: documentation
 - resource:
-title: Thread Santiser Manual
+title: Thread Sanitizer Manual
 link: Phttps://github.com/google/sanitizers/wiki/threadsanitizercppmanual
 type: documentation
 
```

content/learning-paths/servers-and-cloud-computing/copilot-extension-deployment/2-cdk-services.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ layout: learningpathall
 ---
 ## Which AWS Services do I need?
 
-In the first GitHub Copilot Extension Learning Path, [Build a GitHub Copilot Extension in Python](learning-paths/servers-and-cloud-computing/gh-copilot-simple), you ran a GitHub Copilot Extension on a single Linux computer, with the public URL provided by an ngrok tunnel to your localhost.
+In the first GitHub Copilot Extension Learning Path, [Build a GitHub Copilot Extension in Python](/learning-paths/servers-and-cloud-computing/gh-copilot-simple), you ran a GitHub Copilot Extension on a single Linux computer, with the public URL provided by an ngrok tunnel to your localhost.
 
 For a production environment, you require:
 
```

content/learning-paths/servers-and-cloud-computing/cplusplus_compilers_flags/4.md

Lines changed: 8 additions & 8 deletions
````diff
@@ -128,44 +128,44 @@ Average elapsed time: 0.0420332 seconds
 Average elapsed time: 0.0155661 seconds
 ```
 
-Here we can observe a notable performance speed up from using higher levels of optimisations.
+Here we can observe a notable performance speed up from using higher levels of optimizations.
 
-Please Note: To understand which lower level optimisation are used by `-O1`, `-O2` and `-O3` we can use the `g++ <optimisatiob level> -Q --help=optimizers` command.
+Please Note: To understand which lower level optimization are used by `-O1`, `-O2` and `-O3` we can use the `g++ <optimization level> -Q --help=optimizers` command.
 
 
-### Understanding what was optimised
+### Understanding what was optimized
 
 Naturally, the next question is to understand which part of your source code was optimized between the outputs above. Full optimization reports generated by compilers like GCC provide a detailed tree of reports through various stages of the optimization process. For beginners, these reports can be overwhelming due to the sheer volume of information they contain, covering every aspect of the code's transformation and optimization.
 
 For a more manageable overview, you can enable basic optimization information (`opt-info`) reports using specific arguments such as `-fopt-info-vec`, which focuses on vectorization optimizations. The `-fopt-info` flag can be customized by changing the info bit to target different types of optimizations, making it easier to pinpoint specific areas of interest.
 
-First, to see what part of our source code was optimised between levels 1 and 2 we can run the following commands to see if our vectorisable loop was indeed vectorised.
+First, to see what part of our source code was optimized between levels 1 and 2 we can run the following commands to see if our vectorizable loop was indeed vectorized.
 
 ```bash
 g++ -O1 vectorizable_loop.cpp -o level_1 -fopt-info-vec
 ```
 
-Running the `-O1` flag led showed no terminal output indicating no vectorisation was performed. Next, run the command below with the `-O2` flag.
+Running the `-O1` flag led showed no terminal output indicating no vectorization was performed. Next, run the command below with the `-O2` flag.
 
 ```bash
 g++ -O2 vectorizable_loop.cpp -o level_2 -fopt-info-vec
 ```
 
-This time the `-O2` flag enables our loop to be vectorised as can be seen from the output below.
+This time the `-O2` flag enables our loop to be vectorized as can be seen from the output below.
 
 ```output
 vectorizable_loop.cpp:13:30: optimized: loop vectorized using 16 byte vectors
 /usr/include/c++/13/bits/stl_algobase.h:930:22: optimized: loop vectorized using 16 byte vectors
 ```
 
-To see what optimisations were performed and missed between level 2 and level 3, we could direct the terminal output from all optimisations (`-fopt-info`) to a text file with the commands below.
+To see what optimizations were performed and missed between level 2 and level 3, we could direct the terminal output from all optimizations (`-fopt-info`) to a text file with the commands below.
 
 ```bash
 g++ -O2 vectorizable_loop.cpp -o level_2 -fopt-info 2>&1 | tee level2.txt
 g++ -O3 vectorizable_loop.cpp -o level_3 -fopt-info 2>&1 | tee level3.txt
 ```
 
-Comparing the outputs between different levels can highlight where in your source code opportunities to optimise code where missed, for example with the `diff` command. This can help you write source code that is more likely to be optimised. However, source code modifications are out of scope for this learning path and we will leave it to the reader to dive into the differences if they wish to learn more.
+Comparing the outputs between different levels can highlight where in your source code opportunities to optimize code where missed, for example with the `diff` command. This can help you write source code that is more likely to be optimized. However, source code modifications are out of scope for this learning path and we will leave it to the reader to dive into the differences if they wish to learn more.
 
 
 ## Target balanced performance
````

content/learning-paths/servers-and-cloud-computing/glibc-linux-fvp/conventions.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -29,8 +29,8 @@ Table 1. Directory layout
 | `/home/user/workspace/linux` | Folder with the Linux kernel sources |
 | `/home/user/workspace/linux-headers` | Directory for installing kernel headers |
 | `/home/user/workspace/linux-build` | Folder for the Linux kernel build output |
-| `/home/user/workspace/glibc` | Foldr for the Glibc sources |
-| `/home/user/workspace/glibc-build` | Directory foir the Glibc build output |
+| `/home/user/workspace/glibc` | Folder for the Glibc sources |
+| `/home/user/workspace/glibc-build` | Directory for the Glibc build output |
 
 
 
```

content/learning-paths/servers-and-cloud-computing/using-and-porting-performance-libs/3.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -88,7 +88,7 @@ Now you observe the `libamath.so` shared object is linked:
 
 ### What about vector operations?
 
-The naming convention of the Arm Performance Library for scalar operations follows that of `libm`. Hence, you are able to simply update the header file and recompile. For vector operations, one option is to rely on the compiler autovectorisation, whereby the compiler generates the vector code. This is used in the Arm Compiler for Linux (ACfL). Alternatively, you can use vector routines, which uses name mangling. Mangling is a technique used in computer programming to modify the names of vector functions to ensure uniqueness and avoid conflicts. This is particularly important in compiled languages like C++ and in environments where multiple libraries or modules may be used together.
+The naming convention of the Arm Performance Library for scalar operations follows that of `libm`. Hence, you are able to simply update the header file and recompile. For vector operations, one option is to rely on the compiler autovectorization, whereby the compiler generates the vector code. This is used in the Arm Compiler for Linux (ACfL). Alternatively, you can use vector routines, which uses name mangling. Mangling is a technique used in computer programming to modify the names of vector functions to ensure uniqueness and avoid conflicts. This is particularly important in compiled languages like C++ and in environments where multiple libraries or modules may be used together.
 
 In the context of Arm's AArch64 architecture, vector name mangling follows the specific convention below to differentiate between scalar and vector versions of functions.
```
