
Commit f687e4f: Minor fixes (#243)
Parent: 11573d7
4 files changed: +15 -15 lines

docs/platforms/mlp/index.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ The Machine Learning Platform (MLP) provides compute, storage and expertise to t
 ### Getting access
 
 Project administrators (PIs and deputy PIs) of projects on the MLP can invite users to join their project, before they can use the project's resources on Alps.
-This is performed using the [project management tool][ref-account-waldur]
+This is performed using the [project management tool][ref-account-waldur].
 
 Once invited to a project, you will receive an email, which you need in order to create an account and configure [multi-factor authentication][ref-mfa] (MFA).

docs/software/sciapps/cp2k.md

Lines changed: 7 additions & 8 deletions
@@ -54,20 +54,19 @@ On our systems, CP2K is built with the following dependencies:
 
 !!! note "GPU-aware MPI"
     [COSMA] and [DLA-Future] are built with [GPU-aware MPI][ref-communication-cray-mpich-gpu-aware], which requires setting `MPICH_GPU_SUPPORT_ENABLED=1`.
-    On the HPC platform, `MPICH_GPU_SUPPORT_ENABLED=1` is set by default.
+    On [Daint][ref-cluster-daint], `MPICH_GPU_SUPPORT_ENABLED=1` is set by default.
 
 !!! note "CUDA cache path for JIT compilation"
     [DBCSR] uses JIT compilation for CUDA kernels.
     The default location is in the home directory, which can put unnecessary burden on the filesystem and lead to performance degradation.
     Because of this we set `CUDA_CACHE_PATH` to point to the in-memory filesystem in `/dev/shm`.
-    On the HPC platform, `CUDA_CACHE_PATH` is set to a directory under `/dev/shm` by default.
+    On [Daint][ref-cluster-daint], `CUDA_CACHE_PATH` is set to a directory under `/dev/shm` by default.
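On systems where these defaults are not already in place, the two notes above translate into a couple of lines in a job script. A minimal sketch (the cache directory name under `/dev/shm` is an illustrative assumption, not the documented default):

```shell
#!/bin/bash
# Enable GPU-aware (CUDA-aware) communication in Cray MPICH,
# required by COSMA and DLA-Future as described in the note above.
export MPICH_GPU_SUPPORT_ENABLED=1

# Redirect DBCSR's CUDA JIT cache to the in-memory filesystem,
# keeping JIT artifacts off the shared home filesystem.
export CUDA_CACHE_PATH="/dev/shm/$USER/cuda-cache"   # directory name is an assumption
mkdir -p "$CUDA_CACHE_PATH"
```

On Daint both variables are preset, so this is only needed where the platform does not set them for you.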

 
 ## Running CP2K
 
-### Running on the HPC platform
+### Running on Daint
 
+[Daint][ref-cluster-daint] nodes have [four GH200 GPUs][ref-alps-gh200-node] that have to be configured properly for best performance.
 To start a job, two bash scripts are potentially required: a [Slurm] submission script, and a wrapper to start the [CUDA MPS] daemon so that multiple MPI ranks can use the same GPU.
 
@@ -107,10 +106,10 @@ srun --cpu-bind=socket ./mps-wrapper.sh cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPU
     `SLURM_CPUS_PER_TASK`.
 
 5. [DBCSR] relies on extensive JIT compilation, and we store the cache in memory to avoid I/O overhead.
-    This is set by default on the HPC platform, but it's set here explicitly as it's essential to avoid performance degradation.
+    This is set by default on [Daint][ref-cluster-daint], but it's set here explicitly as it's essential to avoid performance degradation.
 
 6. CP2K's dependencies use GPU-aware MPI, which requires enabling support at runtime.
-    This is set by default on the HPC platform, but it's set here explicitly as it's a requirement in general for enabling GPU-aware MPI.
+    This is set by default on [Daint][ref-cluster-daint], but it's set here explicitly as it's a requirement in general for enabling GPU-aware MPI.
 
 * Change <ACCOUNT> to your project account name
@@ -348,7 +347,7 @@ srun --cpu-bind=socket cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPUT>
 for good performance. With [Intel MKL], this is not necessary and one can set `OMP_NUM_THREADS` to
 `SLURM_CPUS_PER_TASK`.
 
-* Change <ACCOUNT> to your project account name
+* Change `<ACCOUNT>` to your project account name
 * Change `<CP2K_UENV>` to the name (or path) of the actual CP2K uenv you want to use
 * Change `<PATH_TO_CP2K_DATA_DIR>` to the actual path to the CP2K data directory
 * Change `<CP2K_INPUT>` and `<CP2K_OUTPUT>` to the actual input and output files
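Putting the placeholders and notes above together, a CP2K submission script might look like the following sketch. Node, rank, and thread counts are illustrative assumptions, not documented defaults; `mps-wrapper.sh` is the CUDA MPS wrapper referenced in the diff:

```shell
#!/bin/bash -l
# Illustrative sketch only: account, uenv name, sizes, and file names are placeholders.
#SBATCH --job-name=cp2k
#SBATCH --account=<ACCOUNT>        # your project account name
#SBATCH --nodes=1                  # assumption: a single GH200 node
#SBATCH --ntasks-per-node=8        # assumption: two MPI ranks per GPU via CUDA MPS
#SBATCH --cpus-per-task=16
#SBATCH --uenv=<CP2K_UENV>         # name (or path) of the CP2K uenv

export CP2K_DATA_DIR=<PATH_TO_CP2K_DATA_DIR>
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # appropriate with Intel MKL (see note above)
export MPICH_GPU_SUPPORT_ENABLED=1            # GPU-aware MPI (note 6)
export CUDA_CACHE_PATH="/dev/shm/$USER"       # DBCSR JIT cache in memory (note 5)

# mps-wrapper.sh starts the CUDA MPS daemon so several ranks can share one GPU.
srun --cpu-bind=socket ./mps-wrapper.sh cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPUT>
```

The `#SBATCH` values are the parts to adapt per the bullet list above; the `export` lines mirror the defaults Daint already sets.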

docs/software/sciapps/lammps.md

Lines changed: 5 additions & 4 deletions
@@ -69,9 +69,10 @@ A development view is also provided, which contains all libraries and command-li
 uenv start --view develop-gpu lammps/2024:v2
 ```
 
-### Running LAMMPS with Kokkos on the HPC Platform
+### Running LAMMPS with Kokkos on Daint
 
-To start a job, the following bash [Slurm ] submission script is required:
+[Daint][ref-cluster-daint] nodes have [four GH200 GPUs][ref-alps-gh200-node] that have to be configured properly for best performance.
+To start a job, the following bash [Slurm] submission script is required:
 
 ```bash title="run_lammps_kokkos.sh"
 #!/bin/bash -l
@@ -150,7 +151,7 @@ sbatch run_lammps_kokkos.sh
 run $t
 ```
 
-### Running LAMMPS + GPU on the HPC Platform
+### Running LAMMPS + GPU on Daint
 
 To start a job, two bash scripts are required: a [Slurm][ref-slurm] submission script, and a wrapper for [CUDA MPS][ref-slurm-gh200-multi-rank-per-gpu].
@@ -257,7 +258,7 @@ srun --cpu-bind=cores lmp -k on t $OMP_NUM_THREADS -sf kk -in lj_kokkos.in
 5. Change `<LAMMPS_UENV>` to the name (or path) of the LAMMPS uenv you want to use.
 6. Enable the `kokkos` uenv view.
 
-Note that the same input file `lj_kokkos.in` can be used as with running LAMMPS with Kokkos on the HPC Platform.
+Note that the same input file `lj_kokkos.in` can be used as when running LAMMPS with Kokkos on [Daint][ref-cluster-daint].
262263
### Building LAMMPS from source

docs/software/sciapps/vasp.md

Lines changed: 2 additions & 2 deletions
@@ -26,7 +26,7 @@ These are coupled to highly efficient Broyden and Pulay density mixing schemes t
 
 ## Running VASP
 
-### Running on the HPC platform
+### Running on Daint
 A precompiled uenv containing VASP with MPI, OpenMP, OpenACC, HDF5 and Wannier90 support is available.
 Due to license restrictions, the VASP images are not directly accessible in the same way as other applications.

@@ -64,7 +64,7 @@ srun vasp_std
 This is not required when using the CUDA MPS wrapper for oversubscription of GPUs.
 
 !!! note
-    VASP relies on CUDA-aware MPI, which requires `MPICH_GPU_SUPPORT_ENABLED=1` to be set when using Cray MPICH. On the HPC platform including `daint`, this is set by default and does not have to be included in Slurm scripts.
+    VASP relies on CUDA-aware MPI, which requires `MPICH_GPU_SUPPORT_ENABLED=1` to be set when using Cray MPICH. On [Daint][ref-cluster-daint], this is set by default and does not have to be included in Slurm scripts.
0 commit comments
