
Commit dd1101c

More typos and whitelist
1 parent c64ed68


16 files changed, +116 −48 lines changed


.github/actions/spelling/allow.txt

Lines changed: 74 additions & 14 deletions
@@ -1,9 +1,9 @@
 ACLs
 ACR
 AMD
-AWS
 Alpstein
 Balfrin
+Besard
 Broyden
 CFLAGS
 CHARMM
@@ -17,17 +17,15 @@ Ceph
 Containerfile
 DNS
 Dockerfiles
-EDF
-EDFs
-EDFs
+Dufourspitze
 EMPA
 ETHZ
 Ehrenfest
 Errigal
 FFT
 Fock
+Foket
 GAPW
-GCC
 GGA
 GPFS
 GPG
@@ -39,29 +37,41 @@ GTL
 Gaussian
 Google
 HDD
+HDDs
 HPC
 HPCP
 HPE
 HSN
 Hartree
+Invernizzi
 Jax
 Jira
 Keycloak
+Kwasniewski
 LAMMPS
+LAPACK
 LDA
+LLM
+LLMs
 LOCALID
 LUMI
 Libc
 Linaro
 Linux
+MDS
+MDSs
 MFA
 MLP
 MNDO
 MPICH
+Malvoisin
 MeteoSwiss
 NAMD
 NICs
 NVMe
+Nordend
+OSS
+OSSs
 OTP
 OTPs
 PASC
@@ -71,8 +81,10 @@ PID
 PMPI
 POSIX
 Parrinello
+Pintarelli
 Piz
 Plesset
+Podladchikov
 Pulay
 RCCL
 RDMA
@@ -83,22 +95,24 @@ Roothaan
 SSHService
 STMV
 Scopi
+Signalkuppe
 TOTP
 UANs
 UserLab
-VASP
-Waldur
 Wannier
 XDG
+Zumsteinspitz
 aarch
 aarch64
 acl
 autodetection
+aws
 baremetal
 biomolecular
 bristen
 bytecode
 capstor
+chatbot
 clariden
 concretise
 concretizer
@@ -112,47 +126,77 @@ diagonalisation
 dimms
 dockerhub
 dotenv
+dropbear
+edf
+edfs
 eiger
 epyc
+fftw
 filesystems
 fontawesome
+gcc
 gdrcopy
+github
 gitlab
+gpt
 gpu
 groundstate
+gsl
+hdf
+huggingface
+hwloc
+iframe
 ijulia
-julia
-linalg
-linux
-nccl
-osts
-quantumespresso
 inodes
 iopsstor
 jfrog
+jobreport
+juhpc
+julia
+juliaup
 jupyter
+kokkos
 lexer
 libfabric
+linalg
+linux
+matlab
 miniconda
+mkl
 mpi
 mps
 multitenancy
 nanotron
+nccl
+netlib
 netrc
 nsight
 numa
+nvcr
 nvdashboard
 nvidia
+nwp
 octicons
+ofi
+omlin
+omp
 oom
+osts
+osu
+papi
+pme
+pmi
 podman
 preinstalled
+prerelease
+prereleases
 prgenv
 prioritisation
 prioritised
 proactively
 pyfirecrest
 pytorch
+quantumespresso
 quickstart
 rocm
 runtime
@@ -162,6 +206,7 @@ sbatch
 screenshot
 slurm
 smartphone
+sourced
 sphericart
 squashfs
 srun
@@ -188,24 +233,39 @@ torchaudio
 torchvision
 treesitter
 trilinos
+trl
 uarch
 uenv
 uenvs
 uids
+utkin
 vCluster
 vClusters
+valgrind
+vasp
+vboost
 venv
 versioned
 versioning
+waldur
+wandb
 webhooks
 webinar
 webpage
 website
 wikipedia
+prioritise
+wikitext
+wlcg
 workaround
 workflows
 xattr
 xattrs
+xcb
+xfer
+xname
+xpmem
 youtube
 zstd
-hdf
+Fawzi
+artifactory

.github/actions/spelling/block-delimiters.list

Lines changed: 4 additions & 0 deletions
@@ -9,3 +9,7 @@
 # ignore indented code blocks
 ```
 ```
+
+# ignore embedded iframes
+<iframe
+</iframe>
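
Each pair of entries in `block-delimiters.list` marks a begin/end delimiter, and text between the delimiters is excluded from spell checking. A minimal sketch of that behaviour (illustrative only, not the action's actual implementation), assuming the checker simply drops everything between the markers:

```python
import re

def strip_delimited_blocks(text: str, begin: str, end: str) -> str:
    # Remove everything from a begin marker through its matching end
    # marker, so embedded iframes are never spell checked.
    pattern = re.compile(re.escape(begin) + r".*?" + re.escape(end), re.DOTALL)
    return pattern.sub("", text)

page = 'Intro\n<iframe src="https://www.youtube.com/embed/x"></iframe>\nOutro'
print(strip_delimited_blocks(page, "<iframe", "</iframe>"))  # prose only
```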

.github/actions/spelling/patterns.txt

Lines changed: 4 additions & 0 deletions
@@ -3,6 +3,7 @@
 FirecREST
 RESTful
 IPyParallel
+MeteoSwiss
 
 # markdown figure
 ^!\[.*\]\(.*\)$
@@ -26,3 +27,6 @@ https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0
 
 # versions
 [0-9]+\.[.0-9]+(\+[0-9a-z]+)?
+
+# one-off whitelist
+`ENV`ironment
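
Entries in `patterns.txt` are regular expressions; matching text is masked before words are checked. As a rough illustration of the "versions" pattern (a minimal sketch, not the checker itself):

```python
import re

# The "versions" pattern added above: matches strings such as
# "24.01" or "2.2.1+cu121" so they are never flagged as typos.
VERSION = re.compile(r"[0-9]+\.[.0-9]+(\+[0-9a-z]+)?")

line = "Import the pytorch:24.01-py3 image, or pin torch 2.2.1+cu121."
print(VERSION.sub("", line))  # version tokens vanish before checking
```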

docs/accounts/account-create.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ Clicking the "Create a new account" button will lead the user to the second step
 
 After submitting personal information, users have to wait for CSCS to review and approve the submission.
 
-Once accepted, you will recieve an email with a link to set your password.
+Once accepted, you will receive an email with a link to set your password.
 
 ```title="Acceptance email"
 Dear John Doe,

docs/alps/hardware.md

Lines changed: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ There are 24 cabinets, in 4 rows with 6 cabinets per row, and each cabinet conta
 !!! info "Why 7 blades per chassis?"
     A chassis can contain up to 8 blades, however Alps' gh200 chassis are underpopulated so that we can increase the amount of power delivered to each GPU.
 
-Each node contains four Grace-Hopper modules and four corresponding network interface cards (NICS) per blade, as illustrated below:
+Each node contains four Grace-Hopper modules and four corresponding network interface cards (NICs) per blade, as illustrated below:
 
 ![](../images/alps/gh200-schematic.svg)
 

docs/guides/mlp_tutorials/index.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 These tutorials solve simple MLP tasks using the [Container Engine][ref-container-engine] on the ML Platform.
 
 1. [LLM Inference][ref-mlp-llm-inference-tutorial]
-2. [LLM Finetuning][ref-mlp-llm-finetuning-tutorial]
+2. [LLM Fine-tuning][ref-mlp-llm-finetuning-tutorial]
 3. [Nanotron Training][ref-mlp-llm-nanotron-tutorial]
 
 

docs/guides/mlp_tutorials/llm-finetuning.md

Lines changed: 5 additions & 5 deletions
@@ -1,8 +1,8 @@
 [](){#ref-mlp-llm-finetuning-tutorial}
 
-# LLM Finetuning Tutorial
+# LLM Fine-tuning Tutorial
 
-This tutorial will take the model from the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial and show you how to perform finetuning.
+This tutorial will take the model from the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial and show you how to perform fine-tuning.
 This means that we take the model and train it on some new custom data to change its behavior.
 
 To complete the tutorial, we set up some extra libraries that will help us to update the state of the machine learning model.
@@ -38,10 +38,10 @@ $ pip install -e ./trl # install in editable mode
 
 When this step is complete, you can exit the shell by typing `exit`.
 
-### Finetune Gemma-7B
+### Fine-tune Gemma-7B
 
 At this point, we can set up a fine-tuning script and start training Gemma-7B.
-Use your favorite text editor to create the file `fine-tune-gemma.sh` just outside the trl and gemma-venv directories:
+Use your favorite text editor to create the file `fine-tune-gemma.sh` just outside the `trl` and `gemma-venv` directories:
 
 ```bash title="fine-tune-gemma.sh"
 #!/bin/bash
@@ -119,7 +119,7 @@ It should take about 10-15 minutes to fine-tune Gemma:
 $ sbatch --nodes=1 fine-tune-sft.sbatch
 ```
 
-### Compare finetuned Gemma against default Gemma
+### Compare fine-tuned Gemma against default Gemma
 
 We can reuse our python script from the first tutorial to do inference on the Gemma model that we just fine-tuned.
 Let's try out a different prompt in `gemma-inference.py`:

docs/guides/mlp_tutorials/llm-inference.md

Lines changed: 7 additions & 7 deletions
@@ -12,7 +12,7 @@ The model we will be running is Google's [Gemma-7B](https://huggingface.co/googl
 
 ## Gemma-7B Inference using NGC PyTorch
 
-### Prequisites
+### Prerequisites
 
 This tutorial assumes you are able to access the cluster via SSH. To set up access to CSCS systems, follow the guide [here][ref-ssh], and read through the documentation about the [ML Platform][ref-platform-mlp].
 
@@ -39,7 +39,7 @@ RUN apt-get update && apt-get install -y python3.10-venv && apt-get clean && rm
 ```
 
 The first line specifies that we are working on top of an existing container.
-In this case we start `FROM` an NGC PyTorch container. 
+In this case we start `FROM` an NGC PyTorch container.
 Next, we set an `ENV`ironment variable that helps us run `apt-get` in the container.
 Finally, we `RUN` the package installer `apt-get` to install python virtual environments.
 This will let us install python packages later on without having to rebuild the container again and again.
@@ -76,14 +76,14 @@ $ enroot import -x mount -o pytorch-24.01-py3-venv.sqsh podman://pytorch:24.01-p
 
 where you should replace `<ACCOUNT>` with your project account ID.
 At this point, you can exit the Slurm allocation by typing `exit`.
-You should be able to see a new squashfile next to your Dockerfile:
+You should be able to see a new squashfs file next to your Dockerfile:
 
 ```console
 $ ls
 Dockerfile pytorch-24.01-py3-venv.sqsh
 ```
 
-This squashfile is essentially a compressed container image, which can be run directly by the container engine.
+This squashfs file is essentially a compressed container image, which can be run directly by the container engine.
 We will use our freshly-built container `pytorch-24.01-py3-venv.sqsh` in the following steps to run a PyTorch script that loads the Google Gemma-7B model and performs some inference with it.
 
 ### Set up an EDF
@@ -109,7 +109,7 @@ Make sure to replace `<USER>` with your actual CSCS username.
 If you've decided to build the container somewhere else, make sure to supply the correct path to the `image` variable.
 
 The `image` variable defines which container we want to load.
-This could either be a container from an online docker repository, like `nvcr.io/nvidia/pytorch:24.01-py3`, or in our case, a local squashfile which we built ourselves.
+This could either be a container from an online docker repository, like `nvcr.io/nvidia/pytorch:24.01-py3`, or in our case, a local squashfs file which we built ourselves.
 
 The `mounts` variable defines which directories we want to mount where in our container.
 In general, it's a good idea to use the scratch directory to store outputs from any scientific software.
@@ -278,7 +278,7 @@ Move on to the next tutorial or try the challenge.
 
 ### Challenge
 
-Using the same approach as in the latter half of step 4, use pip to install the package `nvitop`. This is a tool that shows you a concise real-time summary of GPU activity. Then, run Gemma and launch nvitop at the same time:
+Using the same approach as in the latter half of step 4, use pip to install the package `nvitop`. This is a tool that shows you a concise real-time summary of GPU activity. Then, run Gemma and launch `nvitop` at the same time:
 
 ```console
 (gemma-venv)$ python ./gemma-inference.py > ./gemma-output.log 2>&1 & nvitop
@@ -288,7 +288,7 @@ Note the use of bash `> ./gemma-output.log 2>&1` to hide any output from Python.
 Note also the use of the single ampersand `'&'` which backgrounds the first command and runs `nvitop` on top.
 
 After a moment, you will see your Python script spawn on all four GPUs, after which the GPU activity will increase a bit and then go back to idle.
-At this point, you can hit `q` to quite nvitop and you will find the output of your Python script in `./gemma-output.log`.
+At this point, you can hit `q` to quit `nvitop` and you will find the output of your Python script in `./gemma-output.log`.
 
 ### Collaborating in Git
 
