@@ -22,6 +22,10 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI backend.
 !!! note "Prefer the single-node build and exploit GPU-resident mode"
     Unless you have good reasons to use the multi-node build, we recommend using the single-node build with the GPU-resident mode.

+!!! warning "Eiger"
+
+    The multi-node version is the only version of NAMD available on [Eiger][ref-cluster-eiger]; the single-node build is not provided.
+
 ## Single-node build

 The single-node build provides the following views:
@@ -37,7 +41,7 @@ The following sbatch script shows how to run NAMD on a single node with 4 GPUs:
 #!/bin/bash
 #SBATCH --job-name="namd-example"
 #SBATCH --time=00:10:00
-#SBATCH --account=<ACCOUNT>
+#SBATCH --account=<ACCOUNT> (6)
 #SBATCH --nodes=1 (1)
 #SBATCH --ntasks-per-node=1 (2)
 #SBATCH --cpus-per-task=288
@@ -46,19 +50,17 @@ The following sbatch script shows how to run NAMD on a single node with 4 GPUs:
 #SBATCH --view=namd-single-node (5)


-srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE>
+srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE> # (7)!
 ```

 1. You can only use one node with the `single-node` build
 2. You can only use one task per node with the `single-node` build
 3. Make all GPUs visible to NAMD (by automatically setting `CUDA_VISIBLE_DEVICES=0,1,2,3`)
-4. Load the NAMD UENV (UENV name or path to the UENV)
+4. Load the NAMD UENV (UENV name or path to the UENV). Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
 5. Load the `namd-single-node` view
-
-* Change `<ACCOUNT>` to your project account
-* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
-* Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation
-* Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation
+6. Change `<ACCOUNT>` to your project account
+7. Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation.
+   Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation

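The GPU-resident mode recommended above is switched on in the NAMD configuration file rather than on the `srun` command line. The fragment below is only an illustrative sketch and is not part of the original example: the input file names are placeholders, it assumes the `GPUresident` keyword documented for NAMD 3.0, and a complete input still needs the usual structure, force-field, and cutoff settings.

```
# Illustrative NAMD 3.0 input fragment (placeholder file names)
structure      stmv.psf          ;# placeholder topology
coordinates    stmv.pdb          ;# placeholder coordinates
outputName     stmv_gpu_resident
timestep       2.0
GPUresident    on                ;# enable the GPU-resident integration path (NAMD 3.0+)
```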
 ??? example "Scaling of STMV benchmark with GPU-resident mode from 1 to 4 GPUs"

@@ -205,52 +207,137 @@ The multi-node build provides the following views:
 !!! note "GPU-resident mode"
     The multi-node build based on [Charm++]'s MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node
     build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode.
+
+
+### Running NAMD on Eiger
+
+The following sbatch script shows how to run NAMD on Eiger:
+
+```bash
+#!/bin/bash -l
+#SBATCH --job-name=namd-test
+#SBATCH --time=00:30:00
+#SBATCH --nodes=4
+#SBATCH --ntasks-per-core=1
+#SBATCH --ntasks-per-node=128
+#SBATCH --account=<ACCOUNT> (1)
+#SBATCH --hint=nomultithread
+#SBATCH --exclusive
+#SBATCH --constraint=mc
+#SBATCH --uenv=namd/3.0:v1 (2)
+#SBATCH --view=namd (3)
+
+export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+export OMP_PROC_BIND=spread
+export OMP_PLACES=threads
+
+srun --cpu-bind=cores namd3 +setcpuaffinity ++ppn 4 <NAMD_CONFIG_FILE> # (4)!
+```
+
+1. Change `<ACCOUNT>` to your project account
+2. Load the NAMD UENV (UENV name or path to the UENV); adjust `namd/3.0:v1` if you want to use a different NAMD UENV
+3. Load the `namd` view
+4. Make sure you set `++ppn` and other NAMD options optimally for your calculation.
+   Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation
+

 ### Building NAMD from source with Charm++'s MPI backend

 !!! warning "TCL Version"
-    According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded
-    flags for TCL `8.5`. The UENV provides TCL `8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows:
-    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.
+    According to the NAMD 3.0 release notes, TCL `8.6` is required.
+    However, the source code for some (beta) releases still contains hard-coded flags for TCL `8.5`.
+    The UENV provides TCL `8.6`, therefore you need to manually modify NAMD's `arch/Linux-<ARCH>.tcl` file:
+    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.

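If the substitution turns out to be necessary, it can be applied with a one-line `sed` command. This is just one convenient way to make the change described in the warning above; `Linux-<ARCH>.tcl` stands for the architecture file of your target (for example `Linux-ARM64.tcl` or `Linux-x86_64.tcl`), and the command is assumed to be run from the NAMD source root:

```bash
# Point the TCLLIB link flag at TCL 8.6 instead of the hard-coded 8.5
sed -i 's/-ltcl8\.5/-ltcl8.6/g' arch/Linux-<ARCH>.tcl
```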
 The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. You can follow these steps:

-```bash
-export DEV_VIEW_NAME="develop"
-export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE>
+=== "gh200 build"

-# Start uenv and load develop view
-uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV>
+    ```bash
+    export DEV_VIEW_NAME="develop"
+    export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE> # (1)!

-# Set variable VIEW_PATH to the view
-export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}
+    # Start uenv and load develop view
+    uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV> # (2)!

-cd ${PATH_TO_NAMD_SOURCE}
-```
+    # Set variable VIEW_PATH to the view
+    export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}

-!!! info "Action required"
-    Modify the `<PATH_TO_NAMD_SOURCE>/arch/Linux-ARM64.tcl` file now.
-    Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable.
+    cd ${PATH_TO_NAMD_SOURCE}
+    ```

-```bash
-# Build bundled Charm++
-tar -xvf charm-8.0.0.tar && cd charm-8.0.0
-env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32
-
-# Configure NAMD build for GPU
-cd ..
-./config Linux-ARM64-g++.cuda \
-    --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \
-    --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
-    --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
-    --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
-cd Linux-ARM64-g++.cuda && make -j 32
-
-# The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory
-```
+    1. Substitute `<PATH_TO_NAMD_SOURCE>` with the actual path to the NAMD source code
+    2. Substitute `<NAMD_UENV>` with the actual name (or path) of the NAMD UENV you want to use
+
+
+    !!! info "Action required"
+        Modify the `${PATH_TO_NAMD_SOURCE}/arch/Linux-ARM64.tcl` file now.
+        Change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.
+
+
+    Build [Charm++] bundled with NAMD:
+
+    ```bash
+    tar -xvf charm-8.0.0.tar && cd charm-8.0.0
+    env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32
+    ```
+
+    Finally, you can configure and build NAMD (with GPU acceleration):
+
+    ```bash
+    cd ..
+    ./config Linux-ARM64-g++.cuda \
+        --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \
+        --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
+        --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
+        --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
+    cd Linux-ARM64-g++.cuda && make -j 32
+    ```
+
+    The `namd3` executable (GPU-accelerated) will be built in the `Linux-ARM64-g++.cuda` directory.
+
+=== "zen2 build"
+
+    ```bash
+    export DEV_VIEW_NAME="develop"
+    export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE> # (1)!
+
+    # Start uenv and load develop view
+    uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV> # (2)!
+
+    # Set variable VIEW_PATH to the view
+    export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}
+
+    cd ${PATH_TO_NAMD_SOURCE}
+    ```
+
+    1. Substitute `<PATH_TO_NAMD_SOURCE>` with the actual path to the NAMD source code
+    2. Substitute `<NAMD_UENV>` with the actual name (or path) of the NAMD UENV you want to use
+
+
+    !!! info "Action required"
+        Modify the `${PATH_TO_NAMD_SOURCE}/arch/Linux-x86_64.tcl` file now.
+        Change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable, if needed.
+
+    Build [Charm++] bundled with NAMD:
+
+    ```bash
+    tar -xvf charm-8.0.0.tar && cd charm-8.0.0
+    env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 smp --with-production -j 32
+    ```
+
+    Finally, you can configure and build NAMD:
+
+    ```bash
+    cd ..
+    ./config Linux-x86_64-g++ \
+        --charm-arch mpi-linux-x86_64-smp --charm-base $PWD/charm-8.0.0 \
+        --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
+        --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH}
+    cd Linux-x86_64-g++ && make -j 32
+    ```

-* Change `<PATH_TO_NAMD_SOURCE>` to the path where you have the NAMD source code
-* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
+    The `namd3` executable will be built in the `Linux-x86_64-g++` directory.

 To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable:
