[](){#uenv-namd}
# NAMD

[NAMD] is a parallel molecular dynamics code based on [Charm++], designed for high-performance simulations of large biomolecular systems.

!!! danger "Licensing Terms and Conditions"
    [NAMD] is distributed free of charge for research purposes only and not for commercial use: users must agree to the [NAMD license] in order to use it at [CSCS]. Users agree to acknowledge use of [NAMD] in any reports or publications of results obtained with the Software (see the [NAMD Homepage] for details).

[NAMD] is provided in two flavours on [CSCS] systems:

* Single-node build
* Multi-node build

The single-node build runs on a single node and benefits from the new GPU-resident mode (see the [NAMD 3.0b6 GPU-Resident benchmarking results] for more details).
The multi-node build runs on multiple nodes and is based on [Charm++]'s MPI backend.

!!! note "Prefer the single-node build and exploit GPU-resident mode"
    Unless you have good reasons to use the multi-node build, we recommend using the single-node build with the GPU-resident mode.

## Single-node build

The single-node build provides the following views:

* `namd-single-node` (standard view, with NAMD)
* `develop-single-node` (development view, without NAMD)

### Running NAMD on a single node

The following sbatch script shows how to run NAMD on a single node with 4 GPUs:

```bash
#!/bin/bash
#SBATCH --job-name="namd-example"
#SBATCH --time=00:10:00
#SBATCH --nodes=1 (1)
#SBATCH --ntasks-per-node=1 (2)
#SBATCH --cpus-per-task=288
#SBATCH --gres=gpu:4 (3)
#SBATCH --uenv=<NAMD_UENV> (4)
#SBATCH --view=namd-single-node (5)

srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE>
```

1. You can only use one node with the `single-node` build
2. You can only use one task per node with the `single-node` build
3. Make all GPUs visible to NAMD (by automatically setting `CUDA_VISIBLE_DEVICES=0,1,2,3`)
4. Load the NAMD UENV (UENV name or path to the UENV)
5. Load the `namd-single-node` view

* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use
* Change `<NAMD_CONFIG_FILE>` to the name (or path) of the NAMD configuration file for your simulation
* Make sure you set `+p`, `+pmeps`, and other NAMD options optimally for your calculation

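All the launch lines on this page happen to use `+p` equal to `7 × (number of GPUs) + 1`. As a starting point only (not a tuned setting), the flags can be derived from the allocation with a small shell sketch like the following; benchmark your own case before settling on values:

```bash
#!/bin/bash
# Sketch: derive NAMD launch flags from the number of GPUs in the allocation.
# The 7*N+1 thread heuristic merely reproduces the examples on this page.
NGPUS=${SLURM_GPUS_ON_NODE:-4}
DEVICES=$(seq -s, 0 $((NGPUS - 1)))   # e.g. "0,1,2,3" for 4 GPUs
NPES=$((7 * NGPUS + 1))               # e.g. +p 29 for 4 GPUs
echo "+p ${NPES} +setcpuaffinity +devices ${DEVICES}"
```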
??? example "Scaling of STMV benchmark with GPU-resident mode from 1 to 4 GPUs"

    Scaling of the tobacco mosaic virus (STMV) benchmark with GPU-resident mode on our system is the following:

    | GPUs | ns/day | Speedup | Parallel efficiency |
    |:----:|:------:|:-------:|:-------------------:|
    | 1    | 31.1   | -       | -                   |
    | 2    | 53.7   | 1.7     | 86%                 |
    | 4    | 92.7   | 3.0     | 74%                 |

    === "1 GPU"

        ```bash
        srun namd3 +p 8 +setcpuaffinity +devices 0 <NAMD_CONFIG_FILE>
        ```

    === "2 GPUs"

        ```bash
        srun namd3 +p 15 +pmeps 7 +setcpuaffinity +devices 0,1 <NAMD_CONFIG_FILE>
        ```

    === "4 GPUs"

        ```bash
        srun namd3 +p 29 +pmeps 5 +setcpuaffinity +devices 0,1,2,3 <NAMD_CONFIG_FILE>
        ```

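In the benchmark table, speedup is the ratio of ns/day to the single-GPU figure, and parallel efficiency is that speedup divided by the number of GPUs (both rounded). A quick check of the arithmetic:

```python
# ns/day measured for the STMV benchmark (figures from the table above)
ns_per_day = {1: 31.1, 2: 53.7, 4: 92.7}

for gpus, perf in sorted(ns_per_day.items()):
    speedup = perf / ns_per_day[1]      # relative to the 1-GPU run
    efficiency = speedup / gpus         # fraction of ideal linear scaling
    print(f"{gpus} GPU(s): speedup {speedup:.1f}, efficiency {efficiency:.0%}")
```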
### Building NAMD from source with Charm++'s multicore backend

!!! warning "Action required"
    According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded
    flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows:
    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.

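If you prefer to script this edit (for example in a build script), a `sed` one-liner does the job; this assumes GNU `sed` and the stock `arch/Linux-ARM64.tcl` from the 3.0 release:

```bash
# Replace the hard-coded TCL 8.5 link flag with the 8.6 one, in place
sed -i 's/-ltcl8\.5/-ltcl8.6/g' arch/Linux-ARM64.tcl
```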
The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source.

=== "GPU Build"

    Build NAMD:

    ```bash
    export DEV_VIEW_NAME="develop-single-node"
    export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE>

    # Start the uenv and load the develop view
    uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV>

    # Set DEV_VIEW_PATH to the view path
    export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}

    cd ${PATH_TO_NAMD_SOURCE}
    ```

    !!! info "Action required"
        Modify the `<PATH_TO_NAMD_SOURCE>/arch/Linux-ARM64.tcl` file now:
        change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.

    ```bash
    # Build the bundled Charm++
    tar -xvf charm-8.0.0.tar && cd charm-8.0.0
    ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32

    # Configure the NAMD build for GPU
    cd ..
    ./config Linux-ARM64-g++.cuda \
        --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \
        --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
        --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
        --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
    cd Linux-ARM64-g++.cuda && make -j 32

    # The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory
    ```

    * Change `<PATH_TO_NAMD_SOURCE>` to the path where you have the NAMD source code
    * Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use

    To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable:

    ```bash
    export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/"
    ```

=== "CPU Build"

    Some workflows, such as constant-pH MD simulations, might require a CPU-only NAMD build to drive the simulation.

    !!! warning "Use the CPU-only build only if needed"
        The CPU-only build is optional and should be used only if needed, in conjunction with the GPU build, to drive the simulation.
        Do not use the CPU-only build for actual simulations, as it will be slower than the GPU build.

    You can build a CPU-only version of NAMD as follows:

    ```bash
    export DEV_VIEW_NAME="develop-single-node"
    export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE>

    # Start the uenv and load the develop view
    uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV>

    # Set DEV_VIEW_PATH to the view path
    export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}

    cd ${PATH_TO_NAMD_SOURCE}
    ```

    !!! info "Action required"
        Modify the `<PATH_TO_NAMD_SOURCE>/arch/Linux-ARM64.tcl` file now:
        change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.

    ```bash
    # Build the bundled Charm++
    tar -xvf charm-8.0.0.tar && cd charm-8.0.0
    ./build charm++ multicore-linux-arm8 gcc --with-production --enable-tracing -j 32

    # Configure the NAMD build for CPU
    cd ..
    ./config Linux-ARM64-g++ \
        --charm-arch multicore-linux-arm8-gcc --charm-base $PWD/charm-8.0.0 \
        --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
        --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH}
    cd Linux-ARM64-g++ && make -j 32

    # The namd3 executable (CPU-only) will be built in the Linux-ARM64-g++ directory
    ```

    * Change `<PATH_TO_NAMD_SOURCE>` to the path where you have the NAMD source code

    To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable:

    ```bash
    export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/"
    ```

## Multi-node build

The multi-node build provides the following views:

* `namd` (standard view, with NAMD)
* `develop` (development view, without NAMD)

!!! note "GPU-resident mode"
    The multi-node build based on [Charm++]'s MPI backend can't take advantage of the new GPU-resident mode. Unless you require the multi-node
    build or you can prove it is faster for your use case, we recommend using the single-node build with the GPU-resident mode.

### Building NAMD from source with Charm++'s MPI backend

!!! warning "TCL Version"
    According to the NAMD 3.0 release notes, TCL `8.6` is required. However, the source code for the `3.0` release still contains hard-coded
    flags for TCL `8.5`. The UENV provides `tcl@8.6`, therefore you need to manually modify NAMD 3.0's `arch/Linux-ARM64.tcl` file as follows:
    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.

The [NAMD] `uenv` provides all the dependencies required to build [NAMD] from source. You can follow these steps:

```bash
export DEV_VIEW_NAME="develop"
export PATH_TO_NAMD_SOURCE=<PATH_TO_NAMD_SOURCE>

# Start the uenv and load the develop view
uenv start --view=${DEV_VIEW_NAME} <NAMD_UENV>

# Set DEV_VIEW_PATH to the view path
export DEV_VIEW_PATH=/user-environment/env/${DEV_VIEW_NAME}

cd ${PATH_TO_NAMD_SOURCE}
```

!!! info "Action required"
    Modify the `<PATH_TO_NAMD_SOURCE>/arch/Linux-ARM64.tcl` file now:
    change `-ltcl8.5` to `-ltcl8.6` in the definition of the `TCLLIB` variable.

```bash
# Build the bundled Charm++
tar -xvf charm-8.0.0.tar && cd charm-8.0.0
env MPICXX=mpicxx ./build charm++ mpi-linux-arm8 smp --with-production -j 32

# Configure the NAMD build for GPU
cd ..
./config Linux-ARM64-g++.cuda \
    --charm-arch mpi-linux-arm8-smp --charm-base $PWD/charm-8.0.0 \
    --with-tcl --tcl-prefix ${DEV_VIEW_PATH} \
    --with-fftw --with-fftw3 --fftw-prefix ${DEV_VIEW_PATH} \
    --cuda-gencode arch=compute_90,code=sm_90 --with-single-node-cuda --with-cuda --cuda-prefix ${DEV_VIEW_PATH}
cd Linux-ARM64-g++.cuda && make -j 32

# The namd3 executable (GPU-accelerated) will be built in the Linux-ARM64-g++.cuda directory
```

* Change `<PATH_TO_NAMD_SOURCE>` to the path where you have the NAMD source code
* Change `<NAMD_UENV>` to the name (or path) of the actual NAMD UENV you want to use

To run NAMD, make sure you load the same UENV and view you used to build NAMD, and set the following variable:

```bash
export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/"
```

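For reference, a multi-node launch can follow the same pattern as the single-node sbatch script. The sketch below is an untested illustration: the node and task counts are placeholders, one MPI rank per GPU is an assumption, and the Charm++ SMP launch options (e.g. `+ppn`) depend on your setup (see [Running Charm++ Programs]):

```bash
#!/bin/bash
#SBATCH --job-name="namd-multi-node"
#SBATCH --time=00:30:00
#SBATCH --nodes=2                 # illustrative node count
#SBATCH --ntasks-per-node=4       # assumption: one MPI rank per GPU
#SBATCH --gres=gpu:4
#SBATCH --uenv=<NAMD_UENV>
#SBATCH --view=develop

# Make the view's libraries visible to the binary built above
export DEV_VIEW_PATH=/user-environment/env/develop
export LD_LIBRARY_PATH="${DEV_VIEW_PATH}/lib/"

srun <PATH_TO_NAMD_SOURCE>/Linux-ARM64-g++.cuda/namd3 +setcpuaffinity <NAMD_CONFIG_FILE>
```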
## Useful Links

* [NAMD Homepage]
* [NAMD Tutorials]
* [Running Charm++ Programs]
* [What you should know about NAMD and Charm++ but were hoping to ignore] by J. C. Phillips
* [NAMD Spack package]
* [Charm++ Spack package]

[Charm++]: https://charm.cs.uiuc.edu/
[Charm++ Spack package]: https://packages.spack.io/package.html?name=charmpp
[CSCS]: https://www.cscs.ch
[NAMD]: http://www.ks.uiuc.edu/Research/namd/
[NAMD Homepage]: http://www.ks.uiuc.edu/Research/namd/
[NAMD license]: http://www.ks.uiuc.edu/Research/namd/license.html
[NAMD Tutorials]: http://www.ks.uiuc.edu/Training/Tutorials/index.html#namd
[NAMD Spack package]: https://packages.spack.io/package.html?name=namd
[Running Charm++ Programs]: https://charm.readthedocs.io/en/latest/charm++/manual.html#running-charm-programs
[What you should know about NAMD and Charm++ but were hoping to ignore]: https://dl.acm.org/doi/pdf/10.1145/3219104.3219134
[NAMD 3.0 new features]: https://www.ks.uiuc.edu/Research/namd/3.0/features.html
[NAMD 3.0b6 GPU-Resident benchmarking results]: https://www.ks.uiuc.edu/Research/namd/benchmarks/