[](){#ref-uenv-lammps}
# LAMMPS

[LAMMPS] is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state.
It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.
The current version of LAMMPS is written in C++.

!!! note "uenvs"
    [LAMMPS] is provided on [Alps][platforms-on-alps] via [uenv][ref-uenv].
    Please see the [uenv documentation][ref-uenv] for more information about uenvs and how to use them.

??? note "Licensing terms and conditions"
    [LAMMPS] is a freely available open-source code, distributed under the terms of the [GNU Public License].

## Running LAMMPS

### Loading LAMMPS Interactively

On Alps, LAMMPS is precompiled and available in a [uenv][ref-uenv].
LAMMPS has been built separately with the [Kokkos](https://docs.lammps.org/Speed_kokkos.html) and [GPU](https://docs.lammps.org/Speed_gpu.html) packages.

To find which LAMMPS uenv images are provided, use the following command:

```bash
uenv image find lammps
```

which will list the available LAMMPS uenv images.
We recommend that you regularly check for the latest version; see the [uenv documentation](https://eth-cscs.github.io/cscs-docs/software/uenv/#finding-uenv) for further details on finding images.

To obtain this image, run:

```bash
uenv image pull lammps/2024:v2
```

To start the uenv for this specific version of LAMMPS, use:

```bash
uenv start --view kokkos lammps/2024:v2
```

You can load either the `kokkos` or the `gpu` view from the uenv to make the `lmp` executable available.
The executables in both of these views support GPUs:

=== "Kokkos"
    ```bash
    # lammps +kokkos package
    uenv start --view kokkos lammps/2024:v2
    ```
=== "GPU"
    ```bash
    # lammps +gpu package
    uenv start --view gpu lammps/2024:v2
    ```

A development view is also provided, which contains all libraries and command-line tools necessary to build LAMMPS from source, without including the LAMMPS executable:

=== "Kokkos"
    ```bash
    # build environment for the lammps +kokkos package, without the lmp executable
    uenv start --view develop-kokkos lammps/2024:v2
    ```
=== "GPU"
    ```bash
    # build environment for the lammps +gpu package, without the lmp executable
    uenv start --view develop-gpu lammps/2024:v2
    ```

### Running LAMMPS with Kokkos on the HPC Platform

To start a job, two bash scripts are potentially required: a [SLURM] submission script, and a wrapper for `numactl` which sets up CPU and memory binding.

The submission script is the following:

```bash title="run_lammps_kokkos.sh"
#!/bin/bash -l
#SBATCH --job-name=<JOB_NAME>
#SBATCH --time=01:00:00 (1)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4 (2)
#SBATCH --gpus-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=per_task:1
#SBATCH --account=<ACCOUNT> (3)
#SBATCH --uenv=<LAMMPS_UENV>:/user-environment (4)
#SBATCH --view=kokkos (5)

export MPICH_GPU_SUPPORT_ENABLED=1

ulimit -s unlimited

srun lmp -in lj_kokkos.in -k on g 1 -sf kk -pk kokkos gpu/aware on
```

1. Time format: `HH:MM:SS`.
2. For LAMMPS + Kokkos it is typical to use only 1 MPI rank per GPU.
3. Change `<ACCOUNT>` to your project account name.
4. Change `<LAMMPS_UENV>` to the name (or path) of the LAMMPS uenv you want to use.
5. Load the `kokkos` uenv view.

!!! note
    Using `-k on g 1` specifies that we want 1 GPU per MPI rank.
    This differs from the official LAMMPS documentation, but it is required to achieve the proper configuration on Alps.

With the above script, you can launch a LAMMPS + Kokkos calculation on 2 nodes, using 4 MPI ranks per node and 1 GPU per MPI rank, with:

```bash
sbatch run_lammps_kokkos.sh
```

??? example "LAMMPS + Kokkos input file, defining a 3D Lennard-Jones melt"

    The following input file for LAMMPS + Kokkos defines a 3D Lennard-Jones melt.
    Save it as `lj_kokkos.in` to match the submission script above.

    ```
    # 3d Lennard-Jones melt
    variable x index 200
    variable y index 200
    variable z index 200
    variable t index 1000

    variable xx equal 1*$x
    variable yy equal 1*$y
    variable zz equal 1*$z

    variable interval equal $t/2

    units lj
    atom_style atomic/kk

    lattice fcc 0.8442
    region box block 0 ${xx} 0 ${yy} 0 ${zz}
    create_box 1 box
    create_atoms 1 box
    mass 1 1.0

    velocity all create 1.44 87287 loop geom

    pair_style lj/cut/kk 2.5
    pair_coeff 1 1 1.0 1.0 2.5

    neighbor 0.3 bin
    neigh_modify delay 0 every 20 check no

    fix 1 all nve

    thermo ${interval}
    thermo_style custom step time temp press pe ke etotal density
    run_style verlet/kk
    run $t
    ```

### Running LAMMPS + GPU on the HPC Platform

To start a job, two bash scripts are required: a [Slurm][ref-slurm] submission script, and a wrapper for [CUDA MPS][ref-slurm-gh200-multi-rank-per-gpu].

```bash title="run_lammps_gpu.sh"
#!/bin/bash -l
#SBATCH --job-name=<JOB_NAME>
#SBATCH --time=01:00:00 (1)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32 (2)
#SBATCH --gpus-per-node=4
#SBATCH --account=<ACCOUNT> (3)
#SBATCH --uenv=<LAMMPS_UENV>:/user-environment (4)
#SBATCH --view=gpu (5)

export MPICH_GPU_SUPPORT_ENABLED=1

ulimit -s unlimited

srun ./mps-wrapper.sh lmp -sf gpu -pk gpu 4 -in lj.in
```

1. Time format: `HH:MM:SS`.
2. For LAMMPS + GPU it is often beneficial to use more than 1 MPI rank per GPU.
3. Change `<ACCOUNT>` to your project account name.
4. Change `<LAMMPS_UENV>` to the name (or path) of the LAMMPS uenv you want to use.
5. Enable the `gpu` uenv view.

To enable oversubscription of MPI ranks per GPU, use the `mps-wrapper.sh` script provided in [NVIDIA GH200 GPU nodes: multiple ranks per GPU][ref-slurm-gh200-multi-rank-per-gpu].

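With this layout, the 32 ranks on each node share its 4 GPUs, i.e. 8 ranks per GPU. A quick sanity check of that arithmetic (the variable names below are illustrative; the values are copied from the script above):

```bash
# Ranks sharing each GPU, per the submission script above:
# 32 MPI ranks per node divided across 4 GPUs per node
RANKS_PER_NODE=32
GPUS_PER_NODE=4
echo $((RANKS_PER_NODE / GPUS_PER_NODE))
```

Adjust `--ntasks-per-node` to tune this ratio for your workload.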
??? example "LAMMPS + GPU input file, defining a 3D Lennard-Jones melt"

    The following input file for LAMMPS + GPU defines a 3D Lennard-Jones melt.
    Save it as `lj.in` to match the submission script above.

    ```
    # 3d Lennard-Jones melt
    variable x index 200
    variable y index 200
    variable z index 200
    variable t index 1000

    variable xx equal 1*$x
    variable yy equal 1*$y
    variable zz equal 1*$z

    variable interval equal $t/2

    units lj
    atom_style atomic

    lattice fcc 0.8442
    region box block 0 ${xx} 0 ${yy} 0 ${zz}
    create_box 1 box
    create_atoms 1 box
    mass 1 1.0

    velocity all create 1.44 87287 loop geom

    pair_style lj/cut 2.5
    pair_coeff 1 1 1.0 1.0 2.5

    neighbor 0.3 bin
    neigh_modify delay 0 every 20 check no

    fix 1 all nve

    thermo ${interval}
    thermo_style custom step time temp press pe ke etotal density
    run_style verlet
    run $t
    ```

### Running on Eiger

On Eiger, the following sbatch script can be used:

```bash title="run_lammps_eiger.sh"
#!/bin/bash -l
#SBATCH --job-name=<JOB_NAME>
#SBATCH --time=01:00:00 (1)
#SBATCH --nodes=2
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=32 (2)
#SBATCH --cpus-per-task=4 (3)
#SBATCH --account=<ACCOUNT> (4)
#SBATCH --hint=nomultithread
#SBATCH --hint=exclusive
#SBATCH --constraint=mc
#SBATCH --uenv=<LAMMPS_UENV>:/user-environment (5)
#SBATCH --view=kokkos (6)

ulimit -s unlimited

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

srun --cpu-bind=cores lmp -k on t $OMP_NUM_THREADS -sf kk -in lj_kokkos.in
```

1. Time format: `HH:MM:SS`.
2. Number of MPI ranks per node.
3. Number of OpenMP threads per MPI rank.
4. Change `<ACCOUNT>` to your project account name.
5. Change `<LAMMPS_UENV>` to the name (or path) of the LAMMPS uenv you want to use.
6. Enable the `kokkos` uenv view.

Note that the same input file `lj_kokkos.in` can be used as when running LAMMPS with Kokkos on the HPC Platform.
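
As a quick sanity check of the resource request above (illustrative arithmetic only; the values are copied from the `#SBATCH` directives):

```bash
# Total cores used by the Eiger script above:
# nodes x ranks-per-node x OpenMP-threads-per-rank
NODES=2
RANKS_PER_NODE=32
THREADS_PER_RANK=4
echo $((NODES * RANKS_PER_NODE * THREADS_PER_RANK))
```

With `--hint=nomultithread`, this product should not exceed the total physical core count of the allocated nodes.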

### Building LAMMPS from source

#### Using CMake

If you'd like to rebuild LAMMPS from source to add additional packages or to use your own customized code, the develop views contained within the uenv image provide all the necessary libraries and command-line tools.
For the following, we recommend obtaining an interactive node and building inside the in-memory tmpfs directory:

```bash
salloc -N1 -t 60 -A <account>
srun --pty bash
mkdir /dev/shm/lammps_build; cd /dev/shm/lammps_build
```

After you've obtained a version of LAMMPS you'd like to build, extract it in the above temporary folder and create a build directory.
Load one of the two following views:

=== "Kokkos"
    ```bash
    # build environment for the lammps +kokkos package, without the lmp executable
    uenv start --view develop-kokkos lammps/2024:v2
    ```
=== "GPU"
    ```bash
    # build environment for the lammps +gpu package, without the lmp executable
    uenv start --view develop-gpu lammps/2024:v2
    ```

You can now build your local copy of LAMMPS.
For example, to build with Kokkos and the `MOLECULE` package enabled:

```bash
CC=mpicc CXX=mpic++ cmake \
  -DCMAKE_CXX_FLAGS=-DCUDA_PROXY \
  -DBUILD_MPI=yes \
  -DBUILD_OMP=no \
  -DPKG_MOLECULE=yes \
  -DPKG_KOKKOS=yes \
  -DEXTERNAL_KOKKOS=yes \
  -DKokkos_ARCH_NATIVE=yes \
  -DKokkos_ARCH_HOPPER90=yes \
  -DKokkos_ARCH_PASCAL60=no \
  -DKokkos_ENABLE_CUDA=yes \
  -DKokkos_ENABLE_OPENMP=yes \
  -DCUDPP_OPT=no \
  -DCUDA_MPS_SUPPORT=yes \
  -DCUDA_ENABLE_MULTIARCH=no \
  ../cmake
```

!!! warning

    If you are downloading LAMMPS from GitHub or from the LAMMPS website and intend to use Kokkos for acceleration, there is an issue with Cray MPICH and `Kokkos <= 4.3`.
    For LAMMPS to work correctly on our system, you need a LAMMPS version which provides `Kokkos >= 4.4`.
    Alternatively, the CMake variable `-DEXTERNAL_KOKKOS=yes` forces CMake to use the Kokkos version provided by the uenv, rather than the one contained within the LAMMPS distribution.

#### Using the LAMMPS uenv as an Upstream Spack Instance

If you'd like to extend the existing uenv with additional packages (or your own), you can use the LAMMPS uenv to provide all the dependencies needed to build your customization.
See the [uenv Spack tutorial](https://eth-cscs.github.io/alps-uenv/tutorial-spack) for more information.
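
Spack discovers the packages installed in a uenv through an upstreams configuration. A minimal sketch of such a configuration is shown below; the scope name and the exact install-tree path inside the mounted uenv are assumptions, so check the tutorial linked above and the contents of the mounted image for the actual path:

```yaml
# upstreams.yaml (hypothetical): register the mounted uenv as an upstream Spack instance
upstreams:
  lammps-uenv:
    install_tree: /user-environment
```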

[LAMMPS]: https://www.lammps.org
[GNU Public License]: http://www.gnu.org/copyleft/gpl.html
[uenv]: https://eth-cscs.github.io/cscs-docs/software/uenv
[SLURM]: https://eth-cscs.github.io/cscs-docs/running/slurm