diff --git a/.github/actions/spelling/allow.txt b/.github/actions/spelling/allow.txt new file mode 100644 index 00000000..6daa6780 --- /dev/null +++ b/.github/actions/spelling/allow.txt @@ -0,0 +1,166 @@ +ACLs +ACR +AMD +AWS +Alpstein +Balfrin +Broyden +CFLAGS +CHARMM +CHF +COSMA +CPE +CPMD +CSCS +CWP +CXI +capstor +Ceph +Containerfile +DNS +EDF +EDFs +EMPA +ETHZ +Ehrenfest +Errigal +FFT +Fock +GAPW +GCC +GGA +GPFS +GPG +GPU +GPUs +GPW +GROMACS +GTL +Gaussian +Google +HDD +HPC +HPCP +HPE +HSN +Hartree +iopsstor +Jax +Jira +Keycloak +LAMMPS +LDA +LOCALID +LUMI +Libc +Linaro +Linux +MFA +MLP +MNDO +MPICH +MPS +MeteoSwiss +NAMD +NICs +NVIDIA +NVMe +OTP +OTPs +PASC +PBE +PDUs +PID +PMPI +POSIX +Parrinello +Piz +Plesset +Pulay +RCCL +RDMA +ROCm +RPA +Roboto +Roothaan +SSHService +STMV +Scopi +TOTP +UANs +UserLab +VASP +Waldur +Wannier +XDG +aarch +aarch64 +acl +biomolecular +bristen +bytecode +clariden +concretise +concretizer +containerised +customised +diagonalisation +eiger +filesystems +groundstate +inodes +lexer +libfabric +multitenancy +podman +prioritised +proactively +quickstart +santis +screenshot +slurm +smartphone +squashfs +srun +ssh +stackinator +stakeholders +subfolders +subtable +subtables +supercomputing +superlu +sysadmin +tcl +tcsh +testuser +timeframe +timelimit +tmpfs +todi +toolbar +toolset +torchaudio +torchvision +treesitter +trilinos +uarch +uenv +uenvs +uids +vCluster +vClusters +venv +versioned +versioning +webhooks +webinar +webpage +website +wikipedia +workaround +workflows +xattr +xattrs +youtube +zstd diff --git a/.github/actions/spelling/only.txt b/.github/actions/spelling/only.txt new file mode 100644 index 00000000..b197bbd1 --- /dev/null +++ b/.github/actions/spelling/only.txt @@ -0,0 +1 @@ +docs/.*\.md$ diff --git a/.github/actions/spelling/patterns.txt b/.github/actions/spelling/patterns.txt new file mode 100644 index 00000000..7ba7a54f --- /dev/null +++ b/.github/actions/spelling/patterns.txt @@ -0,0 +1,15 @@ +# Recognized as
"Firec" and "REST" with the regular rules, so in patterns.txt +# instead of allow.txt +FirecREST +RESTful + +# markdown figure +^!\[.*\]\(.*\)$ + +# Most obvious URLs +https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*) + +# Markdown references (definition and use) +^\[\]\(\){#[a-z-]+}$ +\]\(#[a-z-]+\) +\]\[[a-z-]+\] diff --git a/.github/workflows/spelling.yaml b/.github/workflows/spelling.yaml new file mode 100644 index 00000000..f2913cac --- /dev/null +++ b/.github/workflows/spelling.yaml @@ -0,0 +1,26 @@ +name: Check Spelling + +on: + pull_request: + +jobs: + spelling: + name: Check Spelling + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - name: Check spelling + id: spelling + uses: check-spelling/check-spelling@v0.0.24 + with: + check_file_names: 1 + post_comment: 0 + use_magic_file: 1 + warnings: bad-regex,binary-file,deprecated-feature,large-file,limited-references,no-newline-at-eof,noisy-file,non-alpha-in-dictionary,token-is-substring,unexpected-line-ending,whitespace-in-dictionary,minified-file,unsupported-configuration,no-files-to-check + use_sarif: 1 + extra_dictionary_limit: 20 + extra_dictionaries: + cspell:software-terms/dict/softwareTerms.txt + cspell:bash/dict/bash-words.txt + cspell:companies/dict/companies.txt + cspell:filetypes/filetypes.txt diff --git a/docs/accounts/index.md b/docs/accounts/index.md index da6c4cd5..5ba9ce80 100644 --- a/docs/accounts/index.md +++ b/docs/accounts/index.md @@ -9,7 +9,7 @@ To get an account you must be invited by a member of CSCS project adminstration CSCS issues calls for proposals that are announced via the CSCS website and e-mails. More information about upcoming calls is available on [the CSCS web site](https://www.cscs.ch/user-lab/allocation-schemes). -New PIs who have sucessfully applied for a preparatory project will receive an invitation from CSCS to get an account at CSCS. 
+New PIs who have successfully applied for a preparatory project will receive an invitation from CSCS to get an account at CSCS. PIs can then invite members of their groups to join their project. !!! info diff --git a/docs/alps/hardware.md b/docs/alps/hardware.md index f9ad3cce..c2bda378 100644 --- a/docs/alps/hardware.md +++ b/docs/alps/hardware.md @@ -28,7 +28,7 @@ This approach to cooling provides greater efficiency for the rack-level cooling, information about the network. * Details about SlingShot 11. - * how many NICS per node + * how many NICs per node * raw feeds and speeds * Some OSU benchmark results. * GPU-aware communication diff --git a/docs/alps/storage.md b/docs/alps/storage.md index 8d57b3f8..5681d445 100644 --- a/docs/alps/storage.md +++ b/docs/alps/storage.md @@ -2,15 +2,15 @@ # Alps Storage Alps has different storage attached, each with characteristics suited to different workloads and use cases. -HPC storage is manged in a separate cluster of nodes that host servers that manage the storage and the physical storage drives. +HPC storage is managed in a separate cluster of nodes that host servers that manage the storage and the physical storage drives. These separate clusters are on the same Slingshot 11 network as the Alps. -| | Capstor | IOPStor | Vast | +| | Capstor | Iopsstor | Vast | |--------------|------------------------|------------------------|---------------------| | Model | HPE ClusterStor E1000D | HPE ClusterStor E1000F | Vast | | Type | Lustre | Lustre | NFS | | Capacity | 129 PB raw GridRAID | 7.2 PB raw RAID 10 | 1 PB | -| Number of Drives | 8,480 16 TB HDD | 240 * 30 TB NVME SSD | N/A | +| Number of Drives | 8,480 16 TB HDD | 240 * 30 TB NVMe SSD | N/A | | Read Speed | 1.19 TB/s | 782 GB/s | 38 GB/s | | Write Speed | 1.09 TB/s | 393 GB/s | 11 GB/s | | IOPs | 1.5M | 8.6M read, 24M write | 200k read, 768k write | @@ -22,11 +22,11 @@ These separate clusters are on the same Slingshot 11 network as the Alps. 
Capstor is the largest file system, for storing large amounts of input and output data. It is used to provide SCRATCH and STORE for different clusters - the precise details are platform-specific. -[](){#ref-alps-iopstor} -## iopstor +[](){#ref-alps-iopsstor} +## iopsstor !!! todo - small text explaining what iopstor is designed to be used for. + small text explaining what iopsstor is designed to be used for. [](){#ref-alps-vast} ## vast @@ -34,7 +34,7 @@ It is used to provide SCRATCH and STORE for different clusters - the precise det The Vast storage is smaller capacity system that is designed for use as home folders. !!! todo - small text explaining what iopstor is designed to be used for. + small text explaining what iopsstor is designed to be used for. The mounts, and how they are used for SCRATCH, STORE, PROJECT, HOME would be in the [storage docs][ref-storage-fs] diff --git a/docs/build-install/uenv.md b/docs/build-install/uenv.md index 6552b841..b4ce3054 100644 --- a/docs/build-install/uenv.md +++ b/docs/build-install/uenv.md @@ -72,7 +72,7 @@ uenv start prgenv-gnu/24.11:v1 --view=spack ??? warning "Upstream Spack version" - It is strongly recomended that your version of Spack and the version of Spack in the uenv match when building software on top of an uenv. + It is strongly recommended that your version of Spack and the version of Spack in the uenv match when building software on top of an uenv. !!! note "Advanced Spack users" @@ -131,7 +131,7 @@ The `uenv-spack` tool can be used to create a build directory with a template [S 1. Script to build the software stack. 2. `git` clone of the required version of Spack. -3. Spack onfiguration files for the software stack. +3. Spack configuration files for the software stack. 4. Information about the uenv that was used to run `uenv-spack`. 5. Description of the software to build. 6. Template [Spack environment file]. 
diff --git a/docs/clusters/bristen.md b/docs/clusters/bristen.md index c40bbab6..d03211a0 100644 --- a/docs/clusters/bristen.md +++ b/docs/clusters/bristen.md @@ -74,7 +74,7 @@ See the SLURM documentation for instructions on how to run jobs on the [Grace-Ho ### FirecREST -Bristen can also be accessed using [FircREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint. +Bristen can also be accessed using [FirecREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint. ### Scheduled Maintenance diff --git a/docs/clusters/clariden.md b/docs/clusters/clariden.md index 1afaf0e4..9fd73b13 100644 --- a/docs/clusters/clariden.md +++ b/docs/clusters/clariden.md @@ -102,7 +102,7 @@ See the SLURM documentation for instructions on how to run jobs on the [Grace-Ho ### FirecREST -Clariden can also be accessed using [FircREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint. +Clariden can also be accessed using [FirecREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint. ## Maintenance and status diff --git a/docs/clusters/santis.md b/docs/clusters/santis.md index 73aafddf..b0366f0d 100644 --- a/docs/clusters/santis.md +++ b/docs/clusters/santis.md @@ -48,7 +48,7 @@ Currently, the following uenv are provided for the climate and weather community * `icon/25.1` * `climana/25.1` -In adition to the climate and weather uenv, all of the +In addition to the climate and weather uenv, all of the ??? example "using uenv provided for other clusters" You can run uenv that were built for other Alps clusters using the `@` notation. @@ -102,11 +102,11 @@ See the SLURM documentation for instructions on how to run jobs on the [Grace-Ho | normal | 1266 | 1-infinite | 1-00:00:00 | 812/371 | | xfer | 2 | 1 | 1-00:00:00 | 1/1 | ``` - The last column shows the number of nodes that have been allocted in currently running jobs (`A`) and the number of jobs that are idle (`I`). 
+ The last column shows the number of nodes that have been allocated in currently running jobs (`A`) and the number of nodes that are idle (`I`). ### FirecREST -Santis can also be accessed using [FircREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint. +Santis can also be accessed using [FirecREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint. ## Maintenance and status diff --git a/docs/guides/terminal.md b/docs/guides/terminal.md index b131fcf3..92660933 100644 --- a/docs/guides/terminal.md +++ b/docs/guides/terminal.md @@ -48,7 +48,7 @@ Binary applications are generally not portable, for example if you compile or in A common pattern for installing local software, for example some useful command line utilities like [ripgrep](https://github.com/BurntSushi/ripgrep), is to install them in `$HOME/.local/bin`. This approach won't work if the same home directory is mounted on two different clusters with different architectures: the version of ripgrep in our example would crash with `Exec format error` on one of the clusters. -Care needs to be taken to store executables, configuration and data for different architecures in separate locations, and automatically configure the login environment to use the correct location when you log into different systems.
The following example: diff --git a/docs/index.md b/docs/index.md index 2168b13b..55718632 100644 --- a/docs/index.md +++ b/docs/index.md @@ -32,7 +32,7 @@ The Alps Research infrastructure hosts multiple platforms and clusters targeting [:octicons-arrow-right-24: Alps Overview](alps/index.md) - Get detailed information about the main components of the infrastructre + Get detailed information about the main components of the infrastructure [:octicons-arrow-right-24: Alps Clusters](alps/clusters.md) diff --git a/docs/platforms/cwp/index.md b/docs/platforms/cwp/index.md index f0745a71..79960e61 100644 --- a/docs/platforms/cwp/index.md +++ b/docs/platforms/cwp/index.md @@ -14,7 +14,7 @@ Project administrators (PIs and deputy PIs) of projects on the CWP can to invite This is performed using the [project management tool][ref-account-waldur] -Once invited to a project, you will receive an email, which you can need to create an account and configure [multi-factor authentification][ref-mfa] (MFA). +Once invited to a project, you will receive an email, which you will need to create an account and configure [multi-factor authentication][ref-mfa] (MFA). ## Systems @@ -62,7 +62,7 @@ Scratch is per user - each user gets separate scratch path and quota. !!! warning "scratch cleanup policy" Files that have not been accessed in 30 days are automatically deleted. - **Scratch is not intended for permanant storage**: transfer files back to the capstor project storage after job runs. + **Scratch is not intended for permanent storage**: transfer files back to the capstor project storage after job runs.
### Project diff --git a/docs/platforms/mlp/index.md b/docs/platforms/mlp/index.md index c657e65d..ad17f6eb 100644 --- a/docs/platforms/mlp/index.md +++ b/docs/platforms/mlp/index.md @@ -36,7 +36,7 @@ There are three main file systems mounted on the MLP clusters Clariden and Brist | type |mount | filesystem | | -- | -- | -- | | Home | /users/$USER | [VAST][ref-alps-vast] | -| Scratch | `/iopstor/scratch/cscs/$USER` | [Iopstor][ref-alps-iopstor] | +| Scratch | `/iopsstor/scratch/cscs/$USER` | [Iopsstor][ref-alps-iopsstor] | | Project | `/capstor/store/cscs/swissai/` | [Capstor][ref-alps-capstor] | ### Home @@ -50,15 +50,15 @@ Scratch filesystems provide temporary storage for high-performance I/O for execu Use scratch to store datasets that will be accessed by jobs, and for job output. Scratch is per user - each user gets separate scratch path and quota. -* The environment variable `SCRATCH=/iopstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch. +* The environment variable `SCRATCH=/iopsstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch. !!! warning "scratch cleanup policy" Files that have not been accessed in 30 days are automatically deleted. - **Scratch is not intended for permanant storage**: transfer files back to the capstor project storage after job runs. + **Scratch is not intended for permanent storage**: transfer files back to the capstor project storage after job runs. !!! note - There is an additional scratch path mounted on [Capstor][ref-alps-capstor] at `/capstor/scratch/cscs/$USER`, however this is not reccomended for ML workloads for performance reasons. + There is an additional scratch path mounted on [Capstor][ref-alps-capstor] at `/capstor/scratch/cscs/$USER`, however this is not recommended for ML workloads for performance reasons. 
### Project diff --git a/docs/services/cicd.md b/docs/services/cicd.md index da531580..463190bc 100644 --- a/docs/services/cicd.md +++ b/docs/services/cicd.md @@ -94,7 +94,7 @@ If you don't already know how to obtain FirecREST credentials, you can find more 1. **Default trusted users and default CI-enabled branches**: Provide the default list of trusted users and CI-enabled branches. The global configuration will apply to all pipelines that do not overwrite it explicitly. -1. **Pipeline default**: Your first pipeline has the name `default`. Click on `Pipeline default` to see the pipeline setup details. The name can be chosen freely but it cannot contain whitespaces (a short descriptive name). Update the entry point, trusted users and CI-enabled branches. +1. **Pipeline default**: Your first pipeline has the name `default`. Click on `Pipeline default` to see the pipeline setup details. The name can be chosen freely but it cannot contain whitespace (a short descriptive name). Update the entry point, trusted users and CI-enabled branches. 1. **Submit your changes** diff --git a/docs/software/sciapps/quantumespresso.md b/docs/software/sciapps/quantumespresso.md index 6697f0eb..80957ee7 100644 --- a/docs/software/sciapps/quantumespresso.md +++ b/docs/software/sciapps/quantumespresso.md @@ -35,7 +35,7 @@ The following sbatch script can be used as a template. srun -u --cpu-bind=socket /user-environment/env/default/bin/pw.x < pw.in ``` - Current observation is that best perfomance is achieved using [one MPI rank per GPU][ref-slurm-gh200-single-rank-per-gpu]. How to run multiple ranks per GPU is described [here][ref-slurm-gh200-multi-rank-per-gpu]. + Current observation is that best performance is achieved using [one MPI rank per GPU][ref-slurm-gh200-single-rank-per-gpu]. How to run multiple ranks per GPU is described [here][ref-slurm-gh200-multi-rank-per-gpu]. 
=== "Eiger" @@ -134,7 +134,7 @@ spack -e $SCRATCH/qe-env config add packages:all:prefer:cuda_arch=90 spack -e $SCRATCH/qe-env develop -p /path/to/your/QE-src quantum-espresso@=develop spack -e $SCRATCH/qe-env concretize -f ``` -Check the output of `spack concretize -f`. All dependencies should have been picked up from spack upstream, marked eiter by a green `[^]` or `[e]`. +Check the output of `spack concretize -f`. All dependencies should have been picked up from spack upstream, marked either by a green `[^]` or `[e]`. Next we create a local filesystem view, this instructs spack to create symlinks for binaries and libraries in a local directory `view`. ```bash spack -e $SCRATCH/qe-env env view enable view diff --git a/docs/software/sciapps/vasp.md b/docs/software/sciapps/vasp.md index d5fe1ad0..7f168e08 100644 --- a/docs/software/sciapps/vasp.md +++ b/docs/software/sciapps/vasp.md @@ -195,7 +195,7 @@ Examples for makefiles that set the necessary rpath and link options on GH200: #OFLAG_IN = -fast -Mwarperf #SOURCE_IN := nonlr.o - # Software emulation of quadruple precsion (mandatory) + # Software emulation of quadruple precision (mandatory) QD ?= $(NVROOT)/compilers/extras/qd LLIBS += -L$(QD)/lib -lqdmod -lqd -Wl,-rpath,$(QD)/lib INCS += -I$(QD)/include/qd @@ -322,7 +322,7 @@ Examples for makefiles that set the necessary rpath and link options on GH200: #OFLAG_IN = -fast -Mwarperf #SOURCE_IN := nonlr.o - # Software emulation of quadruple precsion (mandatory) + # Software emulation of quadruple precision (mandatory) QD ?= $(NVROOT)/compilers/extras/qd LLIBS += -L$(QD)/lib -lqdmod -lqd -Wl,-rpath,$(QD)/lib INCS += -I$(QD)/include/qd diff --git a/docs/software/uenv.md b/docs/software/uenv.md index 05cc38e3..c6ae64d3 100644 --- a/docs/software/uenv.md +++ b/docs/software/uenv.md @@ -2,7 +2,7 @@ # uenv Uenv are user environments that provide scientific applications, libraries and tools. 
-This page will explain how to find, dowload and use uenv on the command line, and how to enable them in SLURM jobs. +This page will explain how to find, download and use uenv on the command line, and how to enable them in SLURM jobs. Uenv are typically application-specific, domain-specific or tool-specific - each uenv contains only what is required for the application or tools that it provides. @@ -308,7 +308,7 @@ The image can be a label, the hash/id of the uenv, or a file: # start the image using the name of the uenv $ uenv start netcdf-tools/2024:v1 - # or use the unqique id of the uenv + # or use the unique id of the uenv $ uenv start 499c886f2947538e # or provide the path to a squashfs file diff --git a/docs/storage/filesystems.md b/docs/storage/filesystems.md index b578d420..a61b3c7e 100644 --- a/docs/storage/filesystems.md +++ b/docs/storage/filesystems.md @@ -97,12 +97,12 @@ Expiration !!! warning All data will be deleted 3 months after the closure of the user account without further warning. -## Store on Capstore +## Store on Capstor The `/capstor/store` mount point of the Lustre file system `capstor` is intended for high-performance per-project storage on Alps. The mount point is accessible from the User Access Nodes (UANs) of Alps vClusters. !!! note - Capstore store is not yet mounted on Eiger. + Capstor store is not yet mounted on Eiger. !!! info `/capstor/store` is equivalent to the `/project` and `/store` GPFS mounts on the old Daint system. diff --git a/docs/storage/transfer.md b/docs/storage/transfer.md index a62c26a3..802b33c5 100644 --- a/docs/storage/transfer.md +++ b/docs/storage/transfer.md @@ -41,7 +41,7 @@ Currently Globus provide the following mount points at CSCS: ## Internal Transfer The Slurm queue `xfer` is available on Alps clusters to address data transfers between internal CSCS file systems. 
-The queue has been created to transfer files and folders from `/users`, `/capstor/store` or `/iopstor/store` to the `/capstor/scratch` and `/iopstor/scratch` file systems (stage-in) and vice versa (stage-out). +The queue has been created to transfer files and folders from `/users`, `/capstor/store` or `/iopsstor/store` to the `/capstor/scratch` and `/iopsstor/scratch` file systems (stage-in) and vice versa (stage-out). Currently the following commands are available on the cluster supporting the queue xfer: ```
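The regexes added in `patterns.txt` above can be sanity-checked locally before the workflow runs. A minimal sketch, assuming check-spelling's matcher follows standard regular-expression semantics (the sample strings are illustrative lines of the kind that appear in the docs):

```python
import re

# Patterns copied verbatim from .github/actions/spelling/patterns.txt
url = re.compile(
    r"https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}"
    r"\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)"
)
ref_def = re.compile(r"^\[\]\(\){#[a-z-]+}$")  # markdown reference definition
ref_use = re.compile(r"\]\(#[a-z-]+\)")        # markdown reference use

# Lines like these occur in the docs and should be masked before spell-checking
assert url.search("https://www.cscs.ch/user-lab/allocation-schemes")
assert ref_def.search("[](){#ref-alps-vast}")
assert ref_use.search("[storage docs](#ref-storage-fs)")

# An ordinary token must not be swallowed by the URL pattern
assert not url.search("iopsstor")
```

Note that `ref_use` only admits lowercase letters and hyphens in the anchor, so references containing digits would need a wider character class.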