The other suggestions will be implemented in follow-up commits.
Co-authored-by: Mikael Simberg <[email protected]>
Co-authored-by: Rocco Meli <[email protected]>
docs/software/sciapps/gromacs.md
8 additions & 10 deletions
@@ -26,7 +26,8 @@ GROMACS is a joint effort, with contributions from developers around the world:
 ## Daint on Alps (GH200)
 
 ### Setup
-On ALPS, we provide pre-built user environments containing GROMACS alongside all the required dependencies for the GH200 hardware setup. To access the `gmx_mpi` executable, we do the following:
+
+On Alps, we provide pre-built user environments containing GROMACS alongside all the required dependencies for the GH200 hardware setup. To access the `gmx_mpi` executable, we do the following:
 
 ```bash
 uenv image find # list available images
@@ -40,7 +41,7 @@ gmx_mpi --version # check GROMACS version
 The images also provide two alternative views, namely `plumed` and `develop`.
 After starting the pulled image using `uenv start ...`, one may do the following to see the available views.
 
-```bash
+```console
 $ uenv status
 /user-environment:gromacs-gh200
 GPU-optimised GROMACS with and without PLUMED, and the toolchain to build your own GROMACS.
@@ -51,7 +52,7 @@ $ uenv status
 plumed
 ```
 
-The `develop` view has all the required dependencies or GROMACS without the program itself. This is meant for those users who want to use a customized variant of GROMACS for their simulation which they build from source. This view makes it convenient for users as it provides the required compilers (GCC 12) along with the dependencies such as CMake, CUDA, hwloc, Cray MPICH, among many others which their GROMACS can use during build and installation. Users must enable this view each time they want to use their **custom GROMACS installation**.
+The `develop` view has all the required dependencies for GROMACS without the program itself. It is meant for users who want to build a customized variant of GROMACS from source for their simulations. The view provides the required compilers (GCC) along with dependencies such as CMake, CUDA, hwloc, and Cray MPICH, among many others, which a custom GROMACS build can use during compilation and installation. Users must enable this view each time they want to use their **custom GROMACS installation**.
 
 The `plumed` view contains GROMACS 2022.5 (older version) with PLUMED 2.9.0. This is due to the compatibility requirements of PLUMED. CSCS will periodically update these user environment images to feature newer versions as they are made available.
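To make the `develop` workflow concrete, a minimal build sketch follows; the image name, source directory, and CMake options here are illustrative assumptions, not values taken from this page:

```bash
# Illustrative sketch only: image name, paths, and flags are assumptions.
uenv start gromacs/2024:v1 --view=develop  # start the uenv with the develop view
cd gromacs-2024.1                          # your GROMACS source tree
mkdir -p build && cd build
cmake .. -DGMX_MPI=ON -DGMX_GPU=CUDA \
         -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-custom
make -j 16 && make install
```

`GMX_MPI=ON` and `GMX_GPU=CUDA` select the MPI-parallel, CUDA-accelerated build appropriate for GH200 nodes; the `develop` view must be active again in any job that runs the resulting binary.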
@@ -61,9 +62,7 @@ Use `exit` to leave the user environment and return to the original shell.
 
 ### How to Run
 
-To start a job, 2 bash scripts are required: a standard SLURM submission script, and a wrapper to start the CUDA MPS daemon (in order to have multiple MPI ranks per GPU).
-
-The CUDA MPS wrapper here: [Oversubscription of GPU cards](https://confluence.cscs.ch/spaces/KB/pages/859015189/Oversubscription+of+GPU+cards)
+To start a job, two bash scripts are required: a standard SLURM submission script and a [wrapper to start the CUDA MPS daemon][ref-slurm-gh200-single-rank-per-gpu] (in order to run multiple MPI ranks per GPU).
 
 The wrapper script above needs to be made executable with `chmod +x mps-wrapper.sh`.
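As rough orientation for how the two scripts fit together, a minimal SLURM submission script could look like the following; the job name, rank and GPU counts, and input file are placeholders, not values from this page:

```bash
#!/bin/bash
#SBATCH --job-name=gmx-bench       # all values below are placeholders
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8        # several MPI ranks per GPU, multiplexed by MPS
#SBATCH --gpus-per-node=4
#SBATCH --cpus-per-task=8
#SBATCH --time=00:30:00

# mps-wrapper.sh starts the CUDA MPS daemon on each node before launching the ranks
srun ./mps-wrapper.sh gmx_mpi mdrun -s input.tpr -ntomp $SLURM_CPUS_PER_TASK
```

Tune the ranks-per-GPU and OpenMP-threads-per-rank split for your own system size; the right balance depends on the simulation.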
@@ -108,7 +107,7 @@ This submission script is only representative. Users must run their input files
 
 ## Scaling
 
-Benchmarking done with large MD simulation problems of 1.4 million and 3 million atom systems in order to fully saturate the GPUs from the [HECBioSim Benchmark Suite](https://www.hecbiosim.ac.uk/access-hpc/benchmarks).
+Benchmarking was done with large MD simulations of 1.4-million- and 3-million-atom systems from the [HECBioSim Benchmark Suite](https://www.hecbiosim.ac.uk/access-hpc/benchmarks), in order to fully saturate the GPUs.
 
 In addition, the STMV (~1 million atom) benchmark that NVIDIA publishes on its [website](https://developer.nvidia.com/hpc-application-performance) was also tested for comparison.
@@ -156,6 +155,5 @@ Protein atoms = 86,996 Lipid atoms = 867,784 Water atoms = 2,041,230 Ions = 1