From 6861b46f9e2d6623d36b0ee1f996e4970fdc32b9 Mon Sep 17 00:00:00 2001 From: Rocco Meli Date: Tue, 18 Feb 2025 15:39:06 +0100 Subject: [PATCH 1/7] namd docs --- docs/software/sciapps/namd.md | 274 +++++++++++++++++++++++++++++++++- mkdocs.yml | 1 + 2 files changed, 273 insertions(+), 2 deletions(-) diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md index 503f3965..10abc67e 100644 --- a/docs/software/sciapps/namd.md +++ b/docs/software/sciapps/namd.md @@ -1,4 +1,274 @@ [](){#uenv-namd} # NAMD -!!! todo - complete docs + +[NAMD] is a parallel molecular dynamics code based on [Charm++], designed for high-performance simulations of large biomolecular systems. + +!!! danger "Licensing Terms and Conditions" + [NAMD] is distributed free of charge for research purposes only and not for commercial use: users must agree to [NAMD license] in order to use it at [CSCS]. Users agree to acknowledge use of [NAMD] in any reports or publications of results obtained with the Software (see [NAMD Homepage] for details). + +[NAMD] is provided in two flavours on [CSCS] systems: + +* Single-node build +* Multi-node build + +The single-node build works on a single node and benefits from the new GPU-resident mode (see [NAMD 3.0b6 GPU-Resident benchmarking results] for more details). +The multi-node build works on multiple nodes and is based on [Charm++] MPI backend. + +!!! note "Prefer the single-node build and exploit GPU-resident mode" + Unless you have good reasons to use the multi-node build, we recommend using the single-node build with the GPU-resident mode. + +=== "Single-node build" + + The single-node build provides the following views: + + * `namd-single-node` (standard view, with NAMD) + * `develop-single-node` (development view, without NAMD) + + ## Running NAMD + + The following SBATCH script shows how to run NAMD on a single node with 4 GPUs: + + ```bash + #!/bin/bash + #SBATCH --job-name="namd-example" + #SBATCH --time=00:10:00 + #SBATCH --nodes=1 (1) + #SBATCH --ntasks-per-node=1 (2) + #SBATCH --cpus-per-task=288 + #SBATCH --gres=gpu:4 (3) + #SBATCH --uenv= (4) + #SBATCH --view=namd-single-node (5) + + + srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 + ``` + + 1. You can only use one node with the `single-node` build + 2. You can only use one task per node with the `single-node` build + 3. Make all GPUs visible to NAMD (by automatically setting `CUDA_VISIBLE_DEVICES=0,1,2,3`) + 4. Load the NAMD UENV (UENV name or path to the UENV) + 5. Load the `namd-single-node` view + + * Change `` to the name (or path) of the actual NAMD UENV you want to use + * Change `` to the name (or path) of the NAMD configuration file for your simulation + * Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation + + ??? example "Scaling of STMV benchmark with GPU-resident mode from 1 to 4 GPUs" + + Scaling of the tobacco mosaic virus (STMV) benchmark with GPU-resident mode on our system is the following: + + | GPUs | ns/day | Speedup | Parallel efficiency | + |:----:|:------:|:-------:|:-------------------:| + | 1 | 31.1 | - | - | + | 2 | 53.7 | 1.9 | 86% | + | 4 | 92.7 | 3.5 | 74% | + + === "1 GPU" + + ```bash + srun namd3 +p 8 +setcpuaffinity +devices 0 + ``` + === "2 GPUs" + + ```bash + srun namd3 +p 15 +pmeps 7 +setcpuaffinity +devices 0,1 + ``` + + === "4 GPUs" + + ```bash + srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 + ``` + + ## Building NAMD from source + + !!! 
warning "Action required" + According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded + flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows: + change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable. + + The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. + + === "GPU Build" + + + Build NAMD: + + ```bash + export DEV_VIEW_NAME="develop-single-node" + export PATH_TO_NAMD_SOURCE= + + # Start uenv and load develop view + uenv start --view=${DEV_VIEW_NAME} + + # Set variable VIEW_PATH to the view + export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME} + + cd ${PATH_TO_NAMD_SOURCE} + ``` + + !!! info "Action required" + Modify the `/arch/Linux-ARM64.tcl` file now. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the "TCLLIB" variable. + + ```bash + # Build bundled Charm++ + tar -xvf charm-8.0.0.tar && cd charm-8.0.0 + ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32 + + # Configure NAMD build for GPU + cd .. + ./config Linux-ARM64-g++.cuda \ + --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \ + --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ + --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \ + --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH} + cd Linux-ARM64-g++.cuda && make -j 32 + + # The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory + ``` + + * Change `` to the path where you have the NAMD source code + * Change `` to the name (or path) of the actual NAMD UENV you want to use + + To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: + + ```bash + export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" + ``` + + === "CPU Build" + + Some workflows, such as constant pH MD simulations, might require a CPU-only NAMD build which is used to drive the simulation. + + !!! warning "Use the CPU-only build only if needed" + The CPU-only build is optional and should be used only if needed. You should use it in conjunction with the GPU build to drive the simulation. + Do not use the CPU-only build for actual simulations as it will be slower than the GPU build. + + You can build a CPU-only version of NAMD as follows: + + ```bash + export DEV_VIEW_NAME="develop-single-node" + export PATH_TO_NAMD_SOURCE= + + # Start uenv and load develop view + uenv start --view=${DEV_VIEW_NAME} + + # Set variable VIEW_PATH to the view + export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME} + + cd ${PATH_TO_NAMD_SOURCE} + ``` + + !!! info "Action required" + Modify the `/arch/Linux-ARM64.tcl` file now. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the "TCLLIB" variable. + + ```bash + # Build bundled Charm++ + tar -xvf charm-8.0.0.tar && cd charm-8.0.0 + ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32 + + # Configure NAMD build for GPU + cd .. 
+ ./config Linux-ARM64-g++ \ + --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \ + --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ + --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} + cd Linux-ARM64-g++ && make -j 32 + + # The namd3 executable (CPU-only) will be built in the Linux-ARM64-g++ directory + ``` + + * Change `` to the path where you have the NAMD source code + + To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: + + ```bash + export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" + ``` + +=== "Multi-node build" + + The multi-node build provides the following views: + + * `namd` + * `develop` (development view, without NAMD) + + !!! note "GPU-resident mode" + The multi-node build based on [Charm++] MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node + build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode. + + ## Building NAMD from source + + !!! warning "TCL Version" + According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded + flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows: + change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable. + + The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. You can follow these steps to build [NAMD] from source: + + ```bash + export DEV_VIEW_NAME="develop" + export PATH_TO_NAMD_SOURCE= + + # Start uenv and load develop view + uenv start --view=${DEV_VIEW_NAME} + + # Set variable VIEW_PATH to the view + export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME} + + cd ${PATH_TO_NAMD_SOURCE} + ``` + + !!! info "Action required" + Modify the `/arch/Linux-ARM64.tcl` file now. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the "TCLLIB" variable. + + ```bash + # Build bundled Charm++ + tar -xvf charm-8.0.0.tar && cd charm-8.0.0 + env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32 + + # Configure NAMD build for GPU + cd .. + ./config Linux-ARM64-g++.cuda \ + --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \ + --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ + --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \ + --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH} + cd Linux-ARM64-g++.cuda && make -j 32 + + # The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory + ``` + + * Change `` to the path where you have the NAMD source code + * Change `` to the name (or path) of the actual NAMD UENV you want to use + + To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: + + ```bash + export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" + ``` + +## Useful Links + +* [NAMD Homepage] +* [NAMD Tutorials] +* [Running Charm++ Programs] +* [What you should know about NAMD and Charm++ but were hoping to ignore] by J. C. 
Phillips +* [NAMD Spack package] +* [Charm++ Spack package] + +[Charm++]: https://charm.cs.uiuc.edu/ +[Charm++ Spack package]: https://packages.spack.io/package.html?name=charmpp +[CSCS]: https://www.cscs.ch +[NAMD]: http://www.ks.uiuc.edu/Research/namd/ +[NAMD Homepage]: http://www.ks.uiuc.edu/Research/namd/ +[NAMD license]: http://www.ks.uiuc.edu/Research/namd/license.html +[NAMD Tutorials]: http://www.ks.uiuc.edu/Training/Tutorials/index.html#namd +[NAMD Spack package]: https://packages.spack.io/package.html?name=namd +[Running Charm++ Programs]: https://charm.readthedocs.io/en/latest/charm++/manual.html#running-charm-programs +[What you should know about NAMD and Charm++ but were hoping to ignore]: https://dl.acm.org/doi/pdf/10.1145/3219104.3219134 +[NAMD 3.0 new features]: https://www.ks.uiuc.edu/Research/namd/3.0/features.html +[NAMD 3.0b6 GPU-Resident benchmarking results]: https://www.ks.uiuc.edu/Research/namd/benchmarks/ diff --git a/mkdocs.yml b/mkdocs.yml index 866a8329..45f3e1c6 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -86,6 +86,7 @@ theme: - navigation.tracking - navigation.path - navigation.top + - content.code.annotate icon: # set the icon for "edit this page" edit: material/pencil From ca081436feb16568645edf820dd03cf70c18028e Mon Sep 17 00:00:00 2001 From: Rocco Meli Date: Tue, 18 Feb 2025 15:47:55 +0100 Subject: [PATCH 2/7] small fix --- docs/software/sciapps/namd.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md index 10abc67e..d9aa3e03 100644 --- a/docs/software/sciapps/namd.md +++ b/docs/software/sciapps/namd.md @@ -109,7 +109,7 @@ The multi-node build works on multiple nodes and is based on [Charm++] MPI backe !!! info "Action required" Modify the `/arch/Linux-ARM64.tcl` file now. - Change `-ltcl8.5` with `-ltcl8.6` in the definition of the "TCLLIB" variable. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. ```bash # Build bundled Charm++ @@ -162,7 +162,7 @@ The multi-node build works on multiple nodes and is based on [Charm++] MPI backe !!! info "Action required" Modify the `/arch/Linux-ARM64.tcl` file now. - Change `-ltcl8.5` with `-ltcl8.6` in the definition of the "TCLLIB" variable. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. ```bash # Build bundled Charm++ @@ -223,7 +223,7 @@ The multi-node build works on multiple nodes and is based on [Charm++] MPI backe !!! info "Action required" Modify the `/arch/Linux-ARM64.tcl` file now. - Change `-ltcl8.5` with `-ltcl8.6` in the definition of the "TCLLIB" variable. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. 
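    The edit can also be scripted; the following one-liner is a sketch (assuming GNU `sed` and the stock NAMD 3.0 source layout) to run from the top of the NAMD source tree, after which you should verify the resulting `TCLLIB` line:

    ```bash
    # Swap the hard-coded TCL 8.5 link flag for TCL 8.6 in the ARM64 arch file
    sed -i 's/-ltcl8\.5/-ltcl8.6/' arch/Linux-ARM64.tcl
    ```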
```bash # Build bundled Charm++ From 8fa317bb767999b2af432386f8e5e7a6c2e68c93 Mon Sep 17 00:00:00 2001 From: Rocco Meli Date: Wed, 19 Feb 2025 09:54:08 +0100 Subject: [PATCH 3/7] Update docs/software/sciapps/namd.md Co-authored-by: Mikael Simberg --- docs/software/sciapps/namd.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md index d9aa3e03..9c9f2650 100644 --- a/docs/software/sciapps/namd.md +++ b/docs/software/sciapps/namd.md @@ -26,7 +26,7 @@ The multi-node build works on multiple nodes and is based on [Charm++] MPI backe ## Running NAMD - The following SBATCH script shows how to run NAMD on a single node with 4 GPUs: + The following sbatch script shows how to run NAMD on a single node with 4 GPUs: ```bash #!/bin/bash From 033a4ceba6fa46c7ac193ccd9bcb399b526a0caf Mon Sep 17 00:00:00 2001 From: Rocco Meli Date: Wed, 19 Feb 2025 09:54:22 +0100 Subject: [PATCH 4/7] Update docs/software/sciapps/namd.md Co-authored-by: Mikael Simberg --- docs/software/sciapps/namd.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md index 9c9f2650..1913ac33 100644 --- a/docs/software/sciapps/namd.md +++ b/docs/software/sciapps/namd.md @@ -192,7 +192,7 @@ The multi-node build works on multiple nodes and is based on [Charm++] MPI backe The multi-node build provides the following views: - * `namd` + * `namd` (standard view, with NAMD) * `develop` (development view, without NAMD) !!! note "GPU-resident mode" From f7991ded82bd2afa9c5caca757a8b2fff16bc005 Mon Sep 17 00:00:00 2001 From: Rocco Meli Date: Wed, 19 Feb 2025 09:54:56 +0100 Subject: [PATCH 5/7] Update docs/software/sciapps/namd.md Co-authored-by: Mikael Simberg --- docs/software/sciapps/namd.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md index 1913ac33..5eab669b 100644 --- a/docs/software/sciapps/namd.md +++ b/docs/software/sciapps/namd.md @@ -4,7 +4,7 @@ [NAMD] is a parallel molecular dynamics code based on [Charm++], designed for high-performance simulations of large biomolecular systems. !!! danger "Licensing Terms and Conditions" - [NAMD] is distributed free of charge for research purposes only and not for commercial use: users must agree to [NAMD license] in order to use it at [CSCS]. Users agree to acknowledge use of [NAMD] in any reports or publications of results obtained with the Software (see [NAMD Homepage] for details). + [NAMD] is distributed free of charge for research purposes only and not for commercial use: users must agree to the [NAMD license] in order to use it at [CSCS]. Users agree to acknowledge use of [NAMD] in any reports or publications of results obtained with the Software (see [NAMD Homepage] for details). [NAMD] is provided in two flavours on [CSCS] systems: From f5e00d6f7a8134e7ccfa6bbf6176243158111154 Mon Sep 17 00:00:00 2001 From: Rocco Meli Date: Wed, 19 Feb 2025 11:23:36 +0100 Subject: [PATCH 6/7] fix --- docs/software/sciapps/namd.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md index 5eab669b..6476cf72 100644 --- a/docs/software/sciapps/namd.md +++ b/docs/software/sciapps/namd.md @@ -12,7 +12,7 @@ * Multi-node build The single-node build works on a single node and benefits from the new GPU-resident mode (see [NAMD 3.0b6 GPU-Resident benchmarking results] for more details). 
-The multi-node build works on multiple nodes and is based on [Charm++] MPI backend. +The multi-node build works on multiple nodes and is based on [Charm++]'s MPI backend. !!! note "Prefer the single-node build and exploit GPU-resident mode" Unless you have good reasons to use the multi-node build, we recommend using the single-node build with the GPU-resident mode. @@ -196,7 +196,7 @@ The multi-node build works on multiple nodes and is based on [Charm++] MPI backe * `develop` (development view, without NAMD) !!! note "GPU-resident mode" - The multi-node build based on [Charm++] MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node + The multi-node build based on [Charm++]'s MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode. ## Building NAMD from source From cdd5d4e843104c95a0c5bfe068666ca6e485077c Mon Sep 17 00:00:00 2001 From: Rocco Meli Date: Wed, 19 Feb 2025 14:48:52 +0100 Subject: [PATCH 7/7] use sections --- docs/software/sciapps/namd.md | 324 +++++++++++++++++----------------- 1 file changed, 162 insertions(+), 162 deletions(-) diff --git a/docs/software/sciapps/namd.md b/docs/software/sciapps/namd.md index 6476cf72..1b981fff 100644 --- a/docs/software/sciapps/namd.md +++ b/docs/software/sciapps/namd.md @@ -17,199 +17,138 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI bac !!! note "Prefer the single-node build and exploit GPU-resident mode" Unless you have good reasons to use the multi-node build, we recommend using the single-node build with the GPU-resident mode. -=== "Single-node build" +## Single-node build - The single-node build provides the following views: +The single-node build provides the following views: - * `namd-single-node` (standard view, with NAMD) - * `develop-single-node` (development view, without NAMD) +* `namd-single-node` (standard view, with NAMD) +* `develop-single-node` (development view, without NAMD) - ## Running NAMD +### Running NAMD on a single node - The following sbatch script shows how to run NAMD on a single node with 4 GPUs: +The following sbatch script shows how to run NAMD on a single node with 4 GPUs: - ```bash - #!/bin/bash - #SBATCH --job-name="namd-example" - #SBATCH --time=00:10:00 - #SBATCH --nodes=1 (1) - #SBATCH --ntasks-per-node=1 (2) - #SBATCH --cpus-per-task=288 - #SBATCH --gres=gpu:4 (3) - #SBATCH --uenv= (4) - #SBATCH --view=namd-single-node (5) - - - srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 - ``` - - 1. You can only use one node with the `single-node` build - 2. You can only use one task per node with the `single-node` build - 3. Make all GPUs visible to NAMD (by automatically setting `CUDA_VISIBLE_DEVICES=0,1,2,3`) - 4. Load the NAMD UENV (UENV name or path to the UENV) - 5. Load the `namd-single-node` view - - * Change `` to the name (or path) of the actual NAMD UENV you want to use - * Change `` to the name (or path) of the NAMD configuration file for your simulation - * Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation - - ??? 
example "Scaling of STMV benchmark with GPU-resident mode from 1 to 4 GPUs" - - Scaling of the tobacco mosaic virus (STMV) benchmark with GPU-resident mode on our system is the following: - - | GPUs | ns/day | Speedup | Parallel efficiency | - |:----:|:------:|:-------:|:-------------------:| - | 1 | 31.1 | - | - | - | 2 | 53.7 | 1.9 | 86% | - | 4 | 92.7 | 3.5 | 74% | - - === "1 GPU" - - ```bash - srun namd3 +p 8 +setcpuaffinity +devices 0 - ``` - === "2 GPUs" - - ```bash - srun namd3 +p 15 +pmeps 7 +setcpuaffinity +devices 0,1 - ``` +```bash +#!/bin/bash +#SBATCH --job-name="namd-example" +#SBATCH --time=00:10:00 +#SBATCH --nodes=1 (1) +#SBATCH --ntasks-per-node=1 (2) +#SBATCH --cpus-per-task=288 +#SBATCH --gres=gpu:4 (3) +#SBATCH --uenv= (4) +#SBATCH --view=namd-single-node (5) - === "4 GPUs" - ```bash - srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 - ``` +srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 +``` - ## Building NAMD from source +1. You can only use one node with the `single-node` build +2. You can only use one task per node with the `single-node` build +3. Make all GPUs visible to NAMD (by automatically setting `CUDA_VISIBLE_DEVICES=0,1,2,3`) +4. Load the NAMD UENV (UENV name or path to the UENV) +5. Load the `namd-single-node` view - !!! warning "Action required" - According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded - flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows: - change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable. +* Change `` to the name (or path) of the actual NAMD UENV you want to use +* Change `` to the name (or path) of the NAMD configuration file for your simulation +* Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation - The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. +??? example "Scaling of STMV benchmark with GPU-resident mode from 1 to 4 GPUs" - === "GPU Build" + Scaling of the tobacco mosaic virus (STMV) benchmark with GPU-resident mode on our system is the following: + | GPUs | ns/day | Speedup | Parallel efficiency | + |:----:|:------:|:-------:|:-------------------:| + | 1 | 31.1 | - | - | + | 2 | 53.7 | 1.9 | 86% | + | 4 | 92.7 | 3.5 | 74% | - Build NAMD: + === "1 GPU" ```bash - export DEV_VIEW_NAME="develop-single-node" - export PATH_TO_NAMD_SOURCE= - - # Start uenv and load develop view - uenv start --view=${DEV_VIEW_NAME} - - # Set variable VIEW_PATH to the view - export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME} - - cd ${PATH_TO_NAMD_SOURCE} + srun namd3 +p 8 +setcpuaffinity +devices 0 ``` + === "2 GPUs" - !!! info "Action required" - Modify the `/arch/Linux-ARM64.tcl` file now. - Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. - ```bash - # Build bundled Charm++ - tar -xvf charm-8.0.0.tar && cd charm-8.0.0 - ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32 - - # Configure NAMD build for GPU - cd .. 
- ./config Linux-ARM64-g++.cuda \ - --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \ - --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ - --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \ - --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH} - cd Linux-ARM64-g++.cuda && make -j 32 - - # The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory + srun namd3 +p 15 +pmeps 7 +setcpuaffinity +devices 0,1 ``` - * Change `` to the path where you have the NAMD source code - * Change `` to the name (or path) of the actual NAMD UENV you want to use - - To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: + === "4 GPUs" ```bash - export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" + srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 ``` - === "CPU Build" +### Building NAMD from source with Charm++'s multicore backend - Some workflows, such as constant pH MD simulations, might require a CPU-only NAMD build which is used to drive the simulation. +!!! warning "Action required" + According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded + flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows: + change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable. - !!! warning "Use the CPU-only build only if needed" - The CPU-only build is optional and should be used only if needed. You should use it in conjunction with the GPU build to drive the simulation. - Do not use the CPU-only build for actual simulations as it will be slower than the GPU build. +The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. - You can build a CPU-only version of NAMD as follows: - - ```bash - export DEV_VIEW_NAME="develop-single-node" - export PATH_TO_NAMD_SOURCE= +=== "GPU Build" - # Start uenv and load develop view - uenv start --view=${DEV_VIEW_NAME} - # Set variable VIEW_PATH to the view - export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME} + Build NAMD: - cd ${PATH_TO_NAMD_SOURCE} - ``` + ```bash + export DEV_VIEW_NAME="develop-single-node" + export PATH_TO_NAMD_SOURCE= - !!! info "Action required" - Modify the `/arch/Linux-ARM64.tcl` file now. - Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. - - ```bash - # Build bundled Charm++ - tar -xvf charm-8.0.0.tar && cd charm-8.0.0 - ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32 - - # Configure NAMD build for GPU - cd .. - ./config Linux-ARM64-g++ \ - --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \ - --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ - --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} - cd Linux-ARM64-g++ && make -j 32 - - # The namd3 executable (CPU-only) will be built in the Linux-ARM64-g++ directory - ``` + # Start uenv and load develop view + uenv start --view=${DEV_VIEW_NAME} - * Change `` to the path where you have the NAMD source code + # Set variable VIEW_PATH to the view + export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME} - To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: + cd ${PATH_TO_NAMD_SOURCE} + ``` - ```bash - export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" - ``` + !!! 
info "Action required" + Modify the `/arch/Linux-ARM64.tcl` file now. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. + + ```bash + # Build bundled Charm++ + tar -xvf charm-8.0.0.tar && cd charm-8.0.0 + ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32 + + # Configure NAMD build for GPU + cd .. + ./config Linux-ARM64-g++.cuda \ + --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \ + --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ + --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \ + --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH} + cd Linux-ARM64-g++.cuda && make -j 32 + + # The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory + ``` -=== "Multi-node build" + * Change `` to the path where you have the NAMD source code + * Change `` to the name (or path) of the actual NAMD UENV you want to use - The multi-node build provides the following views: + To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: - * `namd` (standard view, with NAMD) - * `develop` (development view, without NAMD) + ```bash + export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" + ``` - !!! note "GPU-resident mode" - The multi-node build based on [Charm++]'s MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node - build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode. - - ## Building NAMD from source +=== "CPU Build" - !!! warning "TCL Version" - According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded - flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows: - change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable. + Some workflows, such as constant pH MD simulations, might require a CPU-only NAMD build which is used to drive the simulation. - The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. You can follow these steps to build [NAMD] from source: + !!! warning "Use the CPU-only build only if needed" + The CPU-only build is optional and should be used only if needed. You should use it in conjunction with the GPU build to drive the simulation. + Do not use the CPU-only build for actual simulations as it will be slower than the GPU build. + You can build a CPU-only version of NAMD as follows: + ```bash - export DEV_VIEW_NAME="develop" + export DEV_VIEW_NAME="develop-single-node" export PATH_TO_NAMD_SOURCE= # Start uenv and load develop view @@ -224,26 +163,24 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI bac !!! info "Action required" Modify the `/arch/Linux-ARM64.tcl` file now. Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. - + ```bash # Build bundled Charm++ tar -xvf charm-8.0.0.tar && cd charm-8.0.0 - env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32 + ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32 # Configure NAMD build for GPU - cd .. - ./config Linux-ARM64-g++.cuda \ - --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \ + cd .. 
+ ./config Linux-ARM64-g++ \ + --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \ --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ - --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \ - --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH} - cd Linux-ARM64-g++.cuda && make -j 32 - - # The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory + --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} + cd Linux-ARM64-g++ && make -j 32 + + # The namd3 executable (CPU-only) will be built in the Linux-ARM64-g++ directory ``` * Change `` to the path where you have the NAMD source code - * Change `` to the name (or path) of the actual NAMD UENV you want to use To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: @@ -251,6 +188,69 @@ The multi-node build works on multiple nodes and is based on [Charm++]'s MPI bac export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" ``` +## Multi-node build + +The multi-node build provides the following views: + +* `namd` (standard view, with NAMD) +* `develop` (development view, without NAMD) + +!!! note "GPU-resident mode" + The multi-node build based on [Charm++]'s MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node + build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode. + +### Building NAMD from source with Charm++'s MPI backend + +!!! warning "TCL Version" + According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded + flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows: + change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable. + +The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. You can follow these steps to build [NAMD] from source: + +```bash +export DEV_VIEW_NAME="develop" +export PATH_TO_NAMD_SOURCE= + +# Start uenv and load develop view +uenv start --view=${DEV_VIEW_NAME} + +# Set variable VIEW_PATH to the view +export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME} + +cd ${PATH_TO_NAMD_SOURCE} +``` + +!!! info "Action required" + Modify the `/arch/Linux-ARM64.tcl` file now. + Change `-ltcl8.5` with `-ltcl8.6` in the definition of the `TCLLIB` variable. + +```bash +# Build bundled Charm++ +tar -xvf charm-8.0.0.tar && cd charm-8.0.0 +env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32 + +# Configure NAMD build for GPU +cd .. +./config Linux-ARM64-g++.cuda \ + --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \ + --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \ + --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \ + --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH} +cd Linux-ARM64-g++.cuda && make -j 32 + +# The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory +``` + +* Change `` to the path where you have the NAMD source code +* Change `` to the name (or path) of the actual NAMD UENV you want to use + +To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable: + +```bash +export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/" +``` + ## Useful Links * [NAMD Homepage]