
Commit 199856d

More typos and whitelist
1 parent c64ed68 commit 199856d

File tree

16 files changed: +115 -49 lines changed

.github/actions/spelling/allow.txt

Lines changed: 75 additions & 14 deletions
@@ -1,9 +1,9 @@
 ACLs
 ACR
 AMD
-AWS
 Alpstein
 Balfrin
+Besard
 Broyden
 CFLAGS
 CHARMM
@@ -17,17 +17,16 @@ Ceph
 Containerfile
 DNS
 Dockerfiles
-EDF
-EDFs
-EDFs
+Dufourspitze
 EMPA
 ETHZ
 Ehrenfest
 Errigal
 FFT
+Fawzi
 Fock
+Foket
 GAPW
-GCC
 GGA
 GPFS
 GPG
@@ -39,29 +38,41 @@ GTL
 Gaussian
 Google
 HDD
+HDDs
 HPC
 HPCP
 HPE
 HSN
 Hartree
+Invernizzi
 Jax
 Jira
 Keycloak
+Kwasniewski
 LAMMPS
+LAPACK
 LDA
+LLM
+LLMs
 LOCALID
 LUMI
 Libc
 Linaro
 Linux
+MDS
+MDSs
 MFA
 MLP
 MNDO
 MPICH
+Malvoisin
 MeteoSwiss
 NAMD
 NICs
 NVMe
+Nordend
+OSS
+OSSs
 OTP
 OTPs
 PASC
@@ -71,8 +82,10 @@ PID
 PMPI
 POSIX
 Parrinello
+Pintarelli
 Piz
 Plesset
+Podladchikov
 Pulay
 RCCL
 RDMA
@@ -83,22 +96,25 @@ Roothaan
 SSHService
 STMV
 Scopi
+Signalkuppe
 TOTP
 UANs
 UserLab
-VASP
-Waldur
 Wannier
 XDG
+Zumsteinspitz
 aarch
 aarch64
 acl
+artifactory
 autodetection
+aws
 baremetal
 biomolecular
 bristen
 bytecode
 capstor
+chatbot
 clariden
 concretise
 concretizer
@@ -112,47 +128,79 @@ diagonalisation
 dimms
 dockerhub
 dotenv
+dropbear
+edf
+edfs
 eiger
 epyc
+fftw
 filesystems
 fontawesome
+gcc
 gdrcopy
+github
 gitlab
+gpt
 gpu
 groundstate
+gsl
+hdf
+huggingface
+hwloc
+iframe
 ijulia
-julia
-linalg
-linux
-nccl
-osts
-quantumespresso
 inodes
 iopsstor
 jfrog
+jobreport
+juhpc
+julia
+juliaup
 jupyter
+kokkos
 lexer
 libfabric
+linalg
+linux
+matlab
+meteo
 miniconda
+mkl
 mpi
 mps
 multitenancy
 nanotron
+nccl
+netlib
 netrc
 nsight
 numa
+nvcr
 nvdashboard
 nvidia
+nwp
 octicons
+ofi
+omlin
+omp
 oom
+osts
+osu
+papi
+pme
+pmi
 podman
 preinstalled
+prerelease
+prereleases
 prgenv
 prioritisation
+prioritise
 prioritised
 proactively
 pyfirecrest
 pytorch
+quantumespresso
 quickstart
 rocm
 runtime
@@ -162,6 +210,7 @@ sbatch
 screenshot
 slurm
 smartphone
+sourced
 sphericart
 squashfs
 srun
@@ -188,24 +237,36 @@ torchaudio
 torchvision
 treesitter
 trilinos
+trl
 uarch
 uenv
 uenvs
 uids
+utkin
 vCluster
 vClusters
+valgrind
+vasp
+vboost
 venv
 versioned
 versioning
+waldur
+wandb
 webhooks
 webinar
 webpage
 website
 wikipedia
+wikitext
+wlcg
 workaround
 workflows
 xattr
 xattrs
+xcb
+xfer
+xname
+xpmem
 youtube
 zstd
-hdf

.github/actions/spelling/block-delimiters.list

Lines changed: 4 additions & 0 deletions
@@ -9,3 +9,7 @@
 # ignore indented code blocks
 ```
 ```
+
+# ignore embedded iframes
+<iframe
+</iframe>
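
For illustration, with `<iframe` and `</iframe>` registered as block delimiters, an embed like the following (the URL is a made-up placeholder) is now skipped wholesale by the spell checker:

```html
<!-- hypothetical docs embed; everything between the delimiters is ignored -->
<iframe src="https://www.youtube.com/embed/VIDEO_ID" title="CSCS webinar recording"></iframe>
```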

.github/actions/spelling/patterns.txt

Lines changed: 2 additions & 1 deletion
@@ -3,9 +3,10 @@
 FirecREST
 RESTful
 IPyParallel
+\`ENV\`ironment
 
 # markdown figure
-^!\[.*\]\(.*\)$ 
+^!\[.*\]\(.*\)$
 
 # Most obvious URLs
 https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)
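
As a quick sanity check (a hedged Python sketch, not part of the repository), the snippet below shows what these patterns do: any text matching an entry in patterns.txt is accepted by the spell checker, so the stylised `` `ENV`ironment `` wording in llm-inference.md and markdown figure lines stop producing false positives:

```python
import re

# Two patterns copied verbatim from patterns.txt; check-spelling treats any
# text they match as acceptable rather than feeding it to the dictionary.
env_pattern = re.compile(r"\`ENV\`ironment")
figure_pattern = re.compile(r"^!\[.*\]\(.*\)$")

# The stylised wording from llm-inference.md no longer yields the non-word
# "ironment" once the backticks are stripped.
assert env_pattern.search("Next, we set an `ENV`ironment variable")

# Markdown figures, such as the gh200 schematic in hardware.md, are ignored.
assert figure_pattern.match("![](../images/alps/gh200-schematic.svg)")
```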

docs/accounts/account-create.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ Clicking the "Create a new account" button will lead the user to the second step
 
 After submitting personal information, users have to wait for CSCS to review and approve the submission.
 
-Once accepted, you will recieve an email with a link to set your password.
+Once accepted, you will receive an email with a link to set your password.
 
 ```title="Acceptance email"
 Dear John Doe,

docs/alps/hardware.md

Lines changed: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ There are 24 cabinets, in 4 rows with 6 cabinets per row, and each cabinet conta
 !!! info "Why 7 blades per chassis?"
     A chassis can contain up to 8 blades, however Alps' gh200 chassis are underpopulated so that we can increase the amount of power delivered to each GPU.
 
-Each node contains four Grace-Hopper modules and four corresponding network interface cards (NICS) per blade, as illustrated below:
+Each node contains four Grace-Hopper modules and four corresponding network interface cards (NICs) per blade, as illustrated below:
 
 ![](../images/alps/gh200-schematic.svg)

docs/guides/mlp_tutorials/index.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 These tutorials solve simple MLP tasks using the [Container Engine][ref-container-engine] on the ML Platform.
 
 1. [LLM Inference][ref-mlp-llm-inference-tutorial]
-2. [LLM Finetuning][ref-mlp-llm-finetuning-tutorial]
+2. [LLM Fine-tuning][ref-mlp-llm-finetuning-tutorial]
 3. [Nanotron Training][ref-mlp-llm-nanotron-tutorial]

docs/guides/mlp_tutorials/llm-finetuning.md

Lines changed: 5 additions & 5 deletions
@@ -1,8 +1,8 @@
 [](){#ref-mlp-llm-finetuning-tutorial}
 
-# LLM Finetuning Tutorial
+# LLM Fine-tuning Tutorial
 
-This tutorial will take the model from the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial and show you how to perform finetuning.
+This tutorial will take the model from the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial and show you how to perform fine-tuning.
 This means that we take the model and train it on some new custom data to change its behavior.
 
 To complete the tutorial, we set up some extra libraries that will help us to update the state of the machine learning model.
@@ -38,10 +38,10 @@ $ pip install -e ./trl # install in editable mode
 
 When this step is complete, you can exit the shell by typing `exit`.
 
-### Finetune Gemma-7B
+### Fine-tune Gemma-7B
 
 At this point, we can set up a fine-tuning script and start training Gemma-7B.
-Use your favorite text editor to create the file `fine-tune-gemma.sh` just outside the trl and gemma-venv directories:
+Use your favorite text editor to create the file `fine-tune-gemma.sh` just outside the `trl` and `gemma-venv` directories:
 
 ```bash title="fine-tune-gemma.sh"
 #!/bin/bash
@@ -119,7 +119,7 @@ It should take about 10-15 minutes to fine-tune Gemma:
 $ sbatch --nodes=1 fine-tune-sft.sbatch
 ```
 
-### Compare finetuned Gemma against default Gemma
+### Compare fine-tuned Gemma against default Gemma
 
 We can reuse our python script from the first tutorial to do inference on the Gemma model that we just fine-tuned.
 Let's try out a different prompt in `gemma-inference.py`:

docs/guides/mlp_tutorials/llm-inference.md

Lines changed: 7 additions & 7 deletions
@@ -12,7 +12,7 @@ The model we will be running is Google's [Gemma-7B](https://huggingface.co/googl
 
 ## Gemma-7B Inference using NGC PyTorch
 
-### Prequisites
+### Prerequisites
 
 This tutorial assumes you are able to access the cluster via SSH. To set up access to CSCS systems, follow the guide [here][ref-ssh], and read through the documentation about the [ML Platform][ref-platform-mlp].
@@ -39,7 +39,7 @@ RUN apt-get update && apt-get install -y python3.10-venv && apt-get clean && rm
 ```
 
 The first line specifies that we are working on top of an existing container.
-In this case we start `FROM` an NGC PyTorch container. 
+In this case we start `FROM` an NGC PyTorch container.
 Next, we set an `ENV`ironment variable that helps us run `apt-get` in the container.
 Finally, we `RUN` the package installer `apt-get` to install python virtual environments.
 This will let us install python packages later on without having to rebuild the container again and again.
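
For reference, the full Dockerfile this hunk comments on likely looks something like the sketch below: the `FROM` image and the `RUN` line appear in the tutorial itself, while the exact `ENV` setting and the truncated cleanup path are assumptions.

```dockerfile
# Base image: the NGC PyTorch container referenced elsewhere in the tutorial.
FROM nvcr.io/nvidia/pytorch:24.01-py3

# Assumed: the variable that "helps us run apt-get" is most likely this one,
# which suppresses interactive prompts during the image build.
ENV DEBIAN_FRONTEND=noninteractive

# From the diff context above; the final rm path is truncated there, so the
# conventional apt list cleanup is assumed.
RUN apt-get update && apt-get install -y python3.10-venv && apt-get clean && rm -rf /var/lib/apt/lists/*
```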
@@ -76,14 +76,14 @@ $ enroot import -x mount -o pytorch-24.01-py3-venv.sqsh podman://pytorch:24.01-p
 
 where you should replace `<ACCOUNT>` with your project account ID.
 At this point, you can exit the Slurm allocation by typing `exit`.
-You should be able to see a new squashfile next to your Dockerfile:
+You should be able to see a new squashfs file next to your Dockerfile:
 
 ```console
 $ ls
 Dockerfile pytorch-24.01-py3-ven.sqsh
 ```
 
-This squashfile is essentially a compressed container image, which can be run directly by the container engine.
+This squashfs file is essentially a compressed container image, which can be run directly by the container engine.
 We will use our freshly-built container `pytorch-24.01-py3-venv.sqsh` in the following steps to run a PyTorch script that loads the Google Gemma-7B model and performs some inference with it.
 
 ### Set up an EDF
@@ -109,7 +109,7 @@ Make sure to replace `<USER>` with your actual CSCS username.
 If you've decided to build the container somewhere else, make sure to supply the correct path to the `image` variable.
 
 The `image` variable defines which container we want to load.
-This could either be a container from an online docker repository, like `nvcr.io/nvidia/pytorch:24.01-py3`, or in our case, a local squashfile which we built ourselves.
+This could either be a container from an online docker repository, like `nvcr.io/nvidia/pytorch:24.01-py3`, or in our case, a local squashfs file which we built ourselves.
 
 The `mounts` variable defines which directories we want to mount where in our container.
 In general, it's a good idea to use the scratch directory to store outputs from any scientific software.
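
To make `image` and `mounts` concrete, a minimal EDF of the kind this hunk describes might look like the following sketch. The two TOML keys come from the text above; the file name and the scratch paths are illustrative assumptions:

```toml
# Hypothetical EDF (e.g. ~/.edf/gemma-pytorch.toml); name and paths illustrative.
# image: either a registry reference such as "nvcr.io/nvidia/pytorch:24.01-py3"
# or, as in this tutorial, the locally built squashfs file.
image = "/capstor/scratch/cscs/<USER>/gemma-inference/pytorch-24.01-py3-venv.sqsh"

# mounts: host path left of the colon, path inside the container on the right.
mounts = ["/capstor/scratch/cscs/<USER>:/capstor/scratch/cscs/<USER>"]
```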
@@ -278,7 +278,7 @@ Move on to the next tutorial or try the challenge.
 
 ### Challenge
 
-Using the same approach as in the latter half of step 4, use pip to install the package `nvitop`. This is a tool that shows you a concise real-time summary of GPU activity. Then, run Gemma and launch nvitop at the same time:
+Using the same approach as in the latter half of step 4, use pip to install the package `nvitop`. This is a tool that shows you a concise real-time summary of GPU activity. Then, run Gemma and launch `nvitop` at the same time:
 
 ```console
 (gemma-venv)$ python ./gemma-inference.py > ./gemma-output.log 2>&1 & nvitop
@@ -288,7 +288,7 @@ Note the use of bash `> ./gemma-output.log 2>&1` to hide any output from Python.
 Note also the use of the single ampersand `'&'` which backgrounds the first command and runs `nvitop` on top.
 
 After a moment, you will see your Python script spawn on all four GPUs, after which the GPU activity will increase a bit and then go back to idle.
-At this point, you can hit `q` to quite nvitop and you will find the output of your Python script in `./gemma-output.log`.
+At this point, you can hit `q` to quit `nvitop` and you will find the output of your Python script in `./gemma-output.log`.
 
 ### Collaborating in Git
