
Commit 7fadc71

More typos and whitelist
1 parent e4c1cc7

3 files changed (+27, -8 lines)

.github/actions/spelling/allow.txt

Lines changed: 21 additions & 2 deletions
```diff
@@ -16,6 +16,7 @@ CXI
 Ceph
 Containerfile
 DNS
+Dockerfiles
 EDF
 EDFs
 EDFs
@@ -57,11 +58,9 @@ MFA
 MLP
 MNDO
 MPICH
-MPS
 MeteoSwiss
 NAMD
 NICs
-NVIDIA
 NVMe
 OTP
 OTPs
@@ -94,6 +93,8 @@ XDG
 aarch
 aarch64
 acl
+autodetection
+baremetal
 biomolecular
 bristen
 bytecode
@@ -104,33 +105,51 @@ concretizer
 containerised
 cpe
 cscs
+cuda
 customised
 diagonalisation
+dockerhub
+dotenv
 eiger
+epyc
 filesystems
+fontawesome
+gitlab
+gpu
 groundstate
 ijulia
 inodes
 iopsstor
+jfrog
 lexer
 libfabric
 miniconda
 mpi
+mps
 multitenancy
+netrc
 nsight
+numa
+nvidia
+octicons
+oom
 podman
+preinstalled
 prgenv
 prioritised
 proactively
+pyfirecrest
 pytorch
 quickstart
+rocm
 runtime
 runtimes
 santis
 sbatch
 screenshot
 slurm
 smartphone
+sphericart
 squashfs
 srun
 ssh
```

docs/running/slurm.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -68,7 +68,7 @@ $ sbatch --account=g123 ./job.sh
 !!! note
     The flags `--account` and `-Cmc` that were required on the old [Eiger][ref-cluster-eiger] cluster are no longer required.
 
-## Prioritization and scheduling
+## Prioritisation and scheduling
 
 Job priorities are determined based on each project's resource usage relative to its quarterly allocation, as well as in comparison to other projects.
 An aging factor is also applied to each job in the queue to ensure fairness over time.
```
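
The retitled section describes usage-based prioritisation with an aging factor. On Slurm systems those factors can usually be inspected directly; a minimal sketch, assuming the multifactor priority plugin that the aging/fair-share description implies:

```bash
# List the priority factors (age, fair-share, ...) for your pending jobs:
sprio -u $USER -l

# Show your project's fair-share usage relative to its allocation:
sshare -U
```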
```diff
@@ -219,7 +219,7 @@ The build generates the following executables:
 
 1. Test GPU affinity: note how all 4 ranks see the same 4 GPUs.
 
-2. Test GPU affinity: note how the `--gpus-per-task=1` parameter assings a unique GPU to each rank.
+2. Test GPU affinity: note how the `--gpus-per-task=1` parameter assigns a unique GPU to each rank.
 
 !!! info "Quick affinity checks"
 
```
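
The corrected list item is about GPU binding; a rough sketch of the two invocations it contrasts (the `affinity.gpu` binary name is an assumption based on the docs' affinity test programs):

```bash
# 1. Without a GPU-binding flag, all 4 ranks report the same 4 GPUs:
srun -N1 -n4 ./affinity.gpu

# 2. With --gpus-per-task=1, Slurm restricts each task's visible devices,
#    so every rank is assigned a unique GPU:
srun -N1 -n4 --gpus-per-task=1 ./affinity.gpu
```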
```diff
@@ -491,7 +491,7 @@ rank 7 @ nid002199: thread 0 -> cores [112:127]
 In the above examples all threads on each -- we are effectively allowing the OS to schedule the threads on the available set of cores as it sees fit.
 This often gives the best performance, however sometimes it is beneficial to bind threads to explicit cores.
 
-The OpenMP threading runtime provides additional options for controlling the pinning of threads to the cores assinged to each MPI rank.
+The OpenMP threading runtime provides additional options for controlling the pinning of threads to the cores assigned to each MPI rank.
 
 Use the `--omp` flag with `affinity.mpi` to get more detailed information about OpenMP thread affinity.
 For example, four MPI ranks on one node with four cores and four OpenMP threads:
```
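
The fixed sentence concerns pinning OpenMP threads to the cores assigned to each MPI rank. A minimal sketch using the standard OpenMP environment variables (the `affinity.mpi --omp` invocation is from the docs; the binding choices are illustrative, not the docs' prescription):

```bash
# Four ranks, four cores and four OpenMP threads per rank:
export OMP_NUM_THREADS=4
export OMP_PLACES=cores      # one place per physical core
export OMP_PROC_BIND=close   # pin threads to adjacent cores within the rank's set
srun -N1 -n4 -c4 ./affinity.mpi --omp
```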

docs/services/cicd.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -718,7 +718,7 @@ Private projects will always get as notification a link to the CSCS pipeline ove
 To view the CSCS pipeline overview for a public project and restart / cancel jobs, follow these steps:
 
 * Copy the web link of the CSCS CI status of your project and remove the from the link the `type=gitlab`.
-* Alternativily, assemble the link yourself, it has the form `https://cicd-ext-mw.cscs.ch/ci/pipeline/results/<repository_id>/<project_id>/<pipeline_nb>` (the IDs can be found on the Gitlab page of your mirror project).
+* Alternatively, assemble the link yourself, it has the form `https://cicd-ext-mw.cscs.ch/ci/pipeline/results/<repository_id>/<project_id>/<pipeline_nb>` (the IDs can be found on the Gitlab page of your mirror project).
 * Click on `Login to restart jobs` at the bottom right and login with your CSCS credentials
 * Click `Cancel running` or `Restart jobs` or cancel individual jobs (button next to job's name)
 * Everybody that has at least *Manager* access can restart / cancel jobs (access level is managed on the CI setup page in the Admin section)
```
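
As a concrete instance of the corrected bullet, with hypothetical IDs filled into the documented URL form:

```bash
# repository_id=42, project_id=1001, pipeline_nb=7 are placeholders;
# the real values are on the Gitlab page of your mirror project.
echo "https://cicd-ext-mw.cscs.ch/ci/pipeline/results/42/1001/7"
```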
````diff
@@ -819,7 +819,7 @@ Accepted variables are documented at [Slurm's srun man page](https://slurm.sched
 
 !!! Warning "SLURM_TIMELIMIT"
     Special attention should go the variable `SLURM_TIMELIMIT`, which sets the maximum time of your Slurm job.
-    You will be billed the nodehours that your CI jobs are spending on the cluster, i.e. you want to set the `SLURM_TIMELIMIT` to the maximum time that you expect the job to run.
+    You will be billed the node hours that your CI jobs are spending on the cluster, i.e. you want to set the `SLURM_TIMELIMIT` to the maximum time that you expect the job to run.
     You should also pay attention to wrap the value in quotes, because the gitlab-runner interprets the time differently than Slurm, when it is not wrapped in quotes, i.e. This is correct:
     ```
     SLURM_TIMELIMIT: "00:30:00"
````
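
The quoting advice in this hunk likely exists because YAML 1.1 parsers read an unquoted `00:30:00` as a base-60 integer (1800) rather than the string Slurm expects. A sketch of both forms in a `.gitlab-ci.yml` variables block (the layout is illustrative):

```yaml
variables:
  # Correct: the quoted string reaches Slurm verbatim as a 30-minute limit.
  SLURM_TIMELIMIT: "00:30:00"
  # Wrong: unquoted, YAML parses 00:30:00 as the integer 1800,
  # which Slurm would then interpret as 1800 minutes.
  # SLURM_TIMELIMIT: 00:30:00
```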
```diff
@@ -1323,7 +1323,7 @@ The easiest way to use the FirecREST scheduler of ReFrame is to use the configur
 In case you want to run ReFrame for a system that is not already available in this directory, please open a ticket to the Service Desk and we will add it or help you update one of the existing ones.
 
 Something you should be aware of when running with this scheduler is that ReFrame will not have direct access to the filesystem of the cluster so the stage directory will need to be kept in sync through FirecREST.
-It is recommended to try to clean the stage directory whenever possible with the [postrun_cmds](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postrun_cmds) and [postbuild_cmds](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postbuild_cmds) and to avoid [autodetection of the processor](https://reframe-hpc.readthedocs.io/en/stable/config_reference.html#config.systems.partitions.processor) in each run.
+It is recommended to try to clean the stage directory whenever possible with the [`postrun_cmds`](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postrun_cmds) and [`postbuild_cmds`](https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.postbuild_cmds) and to avoid [autodetection of the processor](https://reframe-hpc.readthedocs.io/en/stable/config_reference.html#config.systems.partitions.processor) in each run.
 Normally ReFrame stores these files in `~/.reframe/topology/{system}-{part}/processor.json`, but you get a "clean" runner every time.
 You could either add them in the configuration files or store the files in the first run and copy them to the right directory before ReFrame runs.
 
```
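A sketch of the save-and-restore workaround the last two sentences describe (the system/partition name `mysystem-gpu`, the `topology-cache/` directory, and `config.py` are hypothetical; the topology path is the one quoted in the diff):

```bash
# First run: let ReFrame autodetect the processor, then save the result.
reframe -C config.py -c checks/ -r
mkdir -p topology-cache
cp ~/.reframe/topology/mysystem-gpu/processor.json topology-cache/

# Later CI runs: restore the file before ReFrame starts,
# so the costly autodetection step is skipped on the fresh runner.
mkdir -p ~/.reframe/topology/mysystem-gpu
cp topology-cache/processor.json ~/.reframe/topology/mysystem-gpu/
reframe -C config.py -c checks/ -r
```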