Commit 0d2f0bb

LAMMPS docs (#80)

* added initial lammps docs.
* update to codeowners

1 parent 4dd99e4 commit 0d2f0bb

File tree: 2 files changed (+342, -2 lines)

.github/CODEOWNERS

Lines changed: 1 addition & 0 deletions

```diff
@@ -4,3 +4,4 @@ docs/software/communication @msimberg
 docs/software/devtools/linaro @jgphpc
 docs/software/prgenv/linalg.md @finkandreas @msimberg
 docs/software/sciapps/cp2k.md @abussy @RMeli
+docs/software/sciapps/lammps.md @nickjbrowning
```

docs/software/sciapps/lammps.md

Lines changed: 341 additions & 2 deletions (the previous `!!! todo` placeholder is replaced by the content below)

[](){#ref-uenv-lammps}

# LAMMPS
[LAMMPS](https://www.lammps.org/) is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions. The current version of LAMMPS is written in C++.

## Licensing Terms and Conditions

[LAMMPS] is a freely available open-source code, distributed under the terms of the [GNU General Public License](http://www.gnu.org/copyleft/gpl.html).

## Running LAMMPS

### Loading LAMMPS Interactively

On Alps, [LAMMPS] is precompiled and available in a user environment (uenv). Two builds are provided: one with the kokkos package, and one with the GPU package.

To find which LAMMPS uenv images are provided, you can use the following command:

```
uenv image find lammps
uenv/version:tag    uarch  system  id                size   date
lammps/2024:v1      gh200  daint   3483b476b75a1801  3,713  2024-06-03
lammps/2024:v2-rc1  gh200  daint   fc5aafe8f327553c  3,625  2025-02-05
```

We recommend using `lammps/2024:v2-rc1`, as it is the latest build. To obtain this image, run:

```bash
uenv image pull lammps/2024:v2-rc1
```

To start the uenv for this specific version of LAMMPS, you can use:

```bash
uenv start --view kokkos lammps/2024:v2-rc1
```

Load the view from the uenv that contains the `lmp` executable you need; the executables in both views support GPUs:

```bash
# lammps +kokkos package
uenv start --view kokkos lammps/2024:v2-rc1
# lammps +gpu package, kokkos disabled
uenv start --view gpu lammps/2024:v2-rc1
```
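
Once one of these views is active, a quick sanity check is to look for `lmp` on the `PATH`. A minimal sketch (the diagnostic messages below are ours, not part of the uenv tooling):

```shell
# check whether the active view put lmp on the PATH
if command -v lmp >/dev/null 2>&1; then
    echo "lmp found at: $(command -v lmp)"
else
    echo "lmp not found - start the uenv with --view kokkos or --view gpu"
fi
```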

A development view is also provided, which contains all libraries and command-line tools necessary to build LAMMPS from source, without including the LAMMPS executable:

```bash
# build environment for lammps +kokkos package, without providing the lmp executable
uenv start --view develop-kokkos lammps/2024:v2-rc1
# build environment for lammps +gpu package, without providing the lmp executable
uenv start --view develop-gpu lammps/2024:v2-rc1
```

### Running LAMMPS+kokkos on the HPC Platform

To start a job, two bash scripts are potentially required: a [slurm] submission script, and a `numactl` wrapper which sets up CPU and memory binding.

Submission script:

```bash title="run_lammps_kokkos.sh"
#!/bin/bash -l
#SBATCH --job-name=<JOB_NAME>
#SBATCH --time=01:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
#SBATCH --account=<ACCOUNT>
#SBATCH --uenv=<LAMMPS_UENV>:/user-environment
#SBATCH --view=kokkos

export MPICH_GPU_SUPPORT_ENABLED=1

ulimit -s unlimited

srun ./wrapper.sh lmp -in lj_kokkos.in -k on g 1 -sf kk -pk kokkos gpu/aware on
```

* Time format: `HH:MM:SS`.
* For LAMMPS+kokkos it is typical to use only one MPI rank per GPU.
* Change `<ACCOUNT>` to your project account name.
* Change `<LAMMPS_UENV>` to the name (or path) of the LAMMPS uenv you want to use.

`numactl` wrapper:

```bash title="wrapper.sh"
#!/bin/bash

export LOCAL_RANK=$SLURM_LOCALID
export GLOBAL_RANK=$SLURM_PROCID
export GPUS=(0 1 2 3)
export NUMA_NODE=$(echo "$LOCAL_RANK % 4" | bc)
export CUDA_VISIBLE_DEVICES=${GPUS[$NUMA_NODE]}

export MPICH_GPU_SUPPORT_ENABLED=1

numactl --cpunodebind=$NUMA_NODE --membind=$NUMA_NODE "$@"
```
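
The binding logic in the wrapper maps each local rank to its own NUMA domain and to the GPU attached to that domain. As a sketch, the mapping for the four local ranks can be reproduced with plain shell arithmetic (equivalent to the `bc` call in the script):

```shell
# reproduce the wrapper's rank -> NUMA/GPU mapping for the 4 local ranks
GPUS=(0 1 2 3)
for LOCAL_RANK in 0 1 2 3; do
    NUMA_NODE=$(( LOCAL_RANK % 4 ))   # same value the bc call computes
    echo "local rank $LOCAL_RANK -> NUMA node $NUMA_NODE, GPU ${GPUS[$NUMA_NODE]}"
done
```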

With the above scripts, you can launch a [LAMMPS] + kokkos calculation on 2 nodes, using 4 MPI ranks per node and 4 GPUs per node, with:

```bash
sbatch run_lammps_kokkos.sh
```

You may need to make the `wrapper.sh` script executable via `chmod +x wrapper.sh`.

#### LAMMPS + kokkos input file

Below is the input file used in the above script, defining a 3d Lennard-Jones melt.

```title="lj_kokkos.in"
variable     x index 200
variable     y index 200
variable     z index 200
variable     t index 1000

variable     xx equal 1*$x
variable     yy equal 1*$y
variable     zz equal 1*$z

variable     interval equal $t/2

units        lj
atom_style   atomic/kk

lattice      fcc 0.8442
region       box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box   1 box
create_atoms 1 box
mass         1 1.0

velocity     all create 1.44 87287 loop geom

pair_style   lj/cut/kk 2.5
pair_coeff   1 1 1.0 1.0 2.5

neighbor     0.3 bin
neigh_modify delay 0 every 20 check no

fix          1 all nve

thermo       ${interval}
thermo_style custom step time temp press pe ke etotal density
run_style    verlet/kk
run          $t
```

### Running LAMMPS+GPU on the HPC Platform

To start a job, two bash scripts are required:

```bash title="run_lammps_gpu.sh"
#!/bin/bash -l
#SBATCH --job-name=<JOB_NAME>
#SBATCH --time=01:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:4
#SBATCH --account=<ACCOUNT>
#SBATCH --uenv=<LAMMPS_UENV>:/user-environment
#SBATCH --view=gpu

export MPICH_GPU_SUPPORT_ENABLED=1

ulimit -s unlimited

srun ./mps-wrapper.sh lmp -sf gpu -pk gpu 4 -in lj.in
```

* Time format: `HH:MM:SS`.
* For LAMMPS+gpu it is often beneficial to use more than 1 MPI rank per GPU. To enable oversubscription of MPI ranks per GPU, you'll need to use the `mps-wrapper.sh` script provided on the following page: [NVIDIA GH200 GPU nodes: multiple ranks per GPU][ref-slurm-gh200-multi-rank-per-gpu]
* Change `<ACCOUNT>` to your project account name.
* Change `<LAMMPS_UENV>` to the name (or path) of the LAMMPS uenv you want to use.
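
With the settings above (`--ntasks-per-node=32`, 4 GPUs per node), eight ranks share each GPU, while `-pk gpu 4` tells LAMMPS to use all four devices on the node. The arithmetic, as a quick sketch:

```shell
# ranks sharing each GPU, given the submission script's settings
NTASKS_PER_NODE=32
GPUS_PER_NODE=4
echo "ranks per GPU: $(( NTASKS_PER_NODE / GPUS_PER_NODE ))"
```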

#### LAMMPS + GPU input file

Below is the input file used in the above script, defining a 3d Lennard-Jones melt.

```title="lj.in"
# 3d Lennard-Jones melt
variable     x index 200
variable     y index 200
variable     z index 200
variable     t index 1000

variable     xx equal 1*$x
variable     yy equal 1*$y
variable     zz equal 1*$z

variable     interval equal $t/2

units        lj
atom_style   atomic

lattice      fcc 0.8442
region       box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box   1 box
create_atoms 1 box
mass         1 1.0

velocity     all create 1.44 87287 loop geom

pair_style   lj/cut 2.5
pair_coeff   1 1 1.0 1.0 2.5

neighbor     0.3 bin
neigh_modify delay 0 every 20 check no

fix          1 all nve

thermo       ${interval}
thermo_style custom step time temp press pe ke etotal density
run_style    verlet
run          $t
```

### Running on Eiger

!!! todo

### Building LAMMPS from source

#### Using CMake

If you'd like to rebuild LAMMPS from source to add additional packages or to use your own customized code, you can use the develop views contained within the uenv image to provide all the necessary libraries and command-line tools. For the following, we recommend obtaining an interactive node and building inside a tmpfs directory.

```bash
salloc -N1 -t 60 -A <account>
...
srun --pty bash
...
mkdir /dev/shm/lammps_build; cd /dev/shm/lammps_build
```

After you've obtained a version of LAMMPS you'd like to build, extract it in the above temporary folder and create a build directory. Load one of the two following views:

```bash
# build environment for lammps +kokkos package, without providing the lmp executable
uenv start --view develop-kokkos lammps/2024:v2-rc1
# build environment for lammps +gpu package, without providing the lmp executable
uenv start --view develop-gpu lammps/2024:v2-rc1
```

and now you can build your local copy of LAMMPS. For example, to build with kokkos and the `MOLECULE` package enabled:

```bash
CC=mpicc CXX=mpic++ cmake \
    -DCMAKE_CXX_FLAGS=-DCUDA_PROXY \
    -DBUILD_MPI=yes \
    -DBUILD_OMP=no \
    -DPKG_MOLECULE=yes \
    -DPKG_KOKKOS=yes \
    -DEXTERNAL_KOKKOS=yes \
    -DKokkos_ARCH_NATIVE=yes \
    -DKokkos_ARCH_HOPPER90=yes \
    -DKokkos_ARCH_PASCAL60=no \
    -DKokkos_ENABLE_CUDA=yes \
    -DKokkos_ENABLE_OPENMP=yes \
    -DCUDPP_OPT=no \
    -DCUDA_MPS_SUPPORT=yes \
    -DCUDA_ENABLE_MULTIARCH=no \
    ../cmake
```
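
After the configure step succeeds, compile in parallel. A minimal sketch (run inside the build directory; assumes GNU `nproc` is available to count cores; the `cmake --build` line is left commented so the sketch is safe to run anywhere):

```shell
# pick a parallel build width from the cores available on the node
JOBS=$(nproc)
echo "building with $JOBS parallel jobs"
# then, inside the build directory configured above:
# cmake --build . -j "$JOBS"
```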

!!! warning
    If you are downloading LAMMPS from GitHub or the LAMMPS website and intend to use kokkos for acceleration, be aware of an incompatibility between cray-mpich and kokkos versions <= 4.3. For LAMMPS to work correctly on our system, you need a LAMMPS version that ships kokkos >= 4.4. Alternatively, the cmake variable `-DEXTERNAL_KOKKOS=yes` forces cmake to use the kokkos version (4.5.01) provided by the uenv, rather than the one bundled with the LAMMPS distribution.

#### Using the LAMMPS uenv as an upstream Spack Instance

If you'd like to extend the existing uenv with additional packages (or your own), you can use the provided LAMMPS uenv to supply all dependencies needed to build your customization. See https://eth-cscs.github.io/alps-uenv/uenv-compilation-spack/ for more information.

First, set up an environment:

```bash
uenv start --view develop-gpu lammps/2024:v2-rc1

git clone -b v0.23.0 https://github.com/spack/spack.git
source spack/share/spack/setup-env.sh
export SPACK_SYSTEM_CONFIG_PATH=/user-environment/config/
```

Then create the path and file `$SCRATCH/custom_env/spack.yaml`. We'll disable the KOKKOS package (and enable the GPU package via the `+cuda` spec), and add the CG-SPICA package (via the `+cg-spica` spec) as an example. You can find the full list of options at https://packages.spack.io/package.html?name=lammps.

```yaml title="spack.yaml"
spack:
  specs:
  - lammps@20240417 ~kokkos +cuda cuda_arch=90 +python +extra-dump +cuda_mps +cg-spica
  packages:
    all:
      prefer:
      - +cuda cuda_arch=90
    mpi:
      require: cray-mpich +cuda
  view: true
  concretizer:
    unify: true
```

Then concretize and build (note: you will of course be using a different path):

```bash
spack -e $SCRATCH/custom_env/ concretize -f
spack -e $SCRATCH/custom_env/ install
```

During concretization, you'll notice a hash printed alongside the LAMMPS package name. Take note of this hash. If you now try to load LAMMPS:

```bash
# naively try to load LAMMPS
# it shows two versions installed (the one in the uenv, and the one we just built)
spack load lammps
==> Error: lammps matches multiple packages.
  Matching packages:
    rd2koe3 lammps@20240417%gcc@12.3.0 arch=linux-sles15-neoverse_v2
    zoo2p63 lammps@20240417%gcc@12.3.0 arch=linux-sles15-neoverse_v2
  Use a more specific spec (e.g., prepend '/' to the hash).
# use the hash that's listed in the output of the build
# and load using the hash
spack load /zoo2p63
# check the lmp executable:
which lmp
/capstor/scratch/cscs/browning/SD-61924/spack/opt/spack/linux-sles15-neoverse_v2/gcc-12.3.0/lammps-20240417-zoo2p63rzyuleogzn4a2h6yj7u3vhyy2/bin/lmp
```

You should now see the CG-SPICA package in the list of installed packages:

```
> lmp -h
...
Installed packages:

CG-SPICA GPU KSPACE MANYBODY MOLECULE PYTHON RIGID
```
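
To check for a compiled-in package non-interactively, grepping the help text is sufficient. A sketch, fed with sample output so it can run outside the environment (in practice you would pipe `lmp -h` directly into `grep`):

```shell
# scan (sample) lmp -h output for a compiled-in package
printf 'Installed packages:\n\nCG-SPICA GPU KSPACE MANYBODY MOLECULE PYTHON RIGID\n' \
  | grep -qw 'CG-SPICA' && echo "CG-SPICA enabled"
```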

## Scaling

!!! todo