svMultiPhysics is a C++ implementation of the Fortran [svFSI](https://github.com/SimVascular/svFSI) solver.
The [SimVascular svMultiPhysics Documentation](https://simvascular.github.io/documentation/multi_physics.html) describes how to use the svMultiPhysics solver. It also includes a developer guide describing the code organization and some implementation details.

The [svMultiPhysics Internal Code Documentation](https://simvascular.github.io/multi_physics/index.html) documents the svMultiPhysics source code. It is automatically generated using [Doxygen](https://www.doxygen.nl).

The [svMultiPhysics Developer Guide Documentation](https://simvascular.github.io/documentation/multi_physics.html#developer-guide) describes some of the implementation details of the svMultiPhysics code.
## Test Data
The `svMultiPhysics/test/cases` directory contains mesh, data, and solver XML files that are used for testing changes made to the svMultiPhysics code base. The VTK files located in these directories don't initially contain data after cloning the svMultiPhysics repository. They are stored using `git lfs` and must be downloaded separately. See [Testing](tests/README.md) for details about how to use `git lfs`.
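If `git lfs` is not already set up, the data can typically be fetched with the standard Git LFS commands below; this is a minimal sketch, and the tests README linked above is the authoritative reference.

```
# One-time setup: install the Git LFS hooks for your user
git lfs install

# Download the large files (meshes and data) referenced by the repository
git lfs pull
```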
<h1 id="building"> Building the svMultiPhysics Program from Source </h1>
The svMultiPhysics program can be compiled and linked from the GitHub source using a CMake build process. The build process creates a binary executable file named <b>svmultiphysics</b>.

The svMultiPhysics source code can be downloaded from this GitHub repository by anyone who wants to inspect, modify, and enhance the code. The svMultiPhysics program is built using the [CMake](https://cmake.org) build system.
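For reference, a typical way to obtain the source is to clone the repository with Git; the URL below assumes the standard SimVascular GitHub organization.

```
# Clone the svMultiPhysics source code (assumed repository URL)
git clone https://github.com/SimVascular/svMultiPhysics.git
cd svMultiPhysics
```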
## Supported Platforms
The following software packages are required to be installed in order to build svMultiPhysics:

[LAPACK](https://www.netlib.org/lapack/) - Used for solving systems of simultaneous linear equations (optional but may be needed for external linear algebra packages)

These software packages are installed using a package-management system.
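As an illustration only, on an Ubuntu-like system a partial set of these dependencies could be installed with `apt`; the package names below are assumptions and do not cover the full dependency list.

```
# Example (assumed Ubuntu/Debian package names); adjust for your platform
sudo apt update
sudo apt install build-essential cmake liblapack-dev
```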
Installing VTK on a high-performance computing (HPC) cluster is typically not supported and may require building it from source. See [Building Visualization Toolkit (VTK) Libraries](#building_vtk).
## Build Process
svMultiPhysics is built using the following steps, which produce the executable

```
build/svMultiPhysics-build/bin/svmultiphysics
```

To rebuild the program after making code changes:

1) cd `build/svMultiPhysics-build/`

2) `make`
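For reference, a minimal out-of-source CMake build consistent with the paths above might look like the following sketch; the exact steps and options described in the documentation are authoritative.

```
# From the top-level svMultiPhysics source directory (sketch; options may vary)
mkdir build
cd build
cmake ..
make

# The executable is then located at
# build/svMultiPhysics-build/bin/svmultiphysics
```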
<h2 id="building_vtk"> Building Visualization Toolkit (VTK) Libraries </h2>
svMultiPhysics uses VTK to read finite element mesh data (created by the SimVascular mesh generation software), fiber geometry, and initial conditions, and to write simulation results. Building the complete VTK library requires certain graphics libraries to be installed (e.g. OpenGL, X11), which makes it difficult to build on an HPC cluster.

However, a subset of the complete VTK library can be built to include just the reading/writing functionality without graphics.
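As an illustration only, a graphics-free VTK configuration might look roughly like the following; the option values, source path, and install prefix here are assumptions, not the project's documented settings.

```
# Assumed example: configure a reduced VTK build without rendering/graphics
cd VTK
mkdir build && cd build
cmake \
  -DBUILD_SHARED_LIBS=OFF \
  -DVTK_GROUP_ENABLE_Rendering=NO \
  -DCMAKE_INSTALL_PREFIX=$HOME/vtk-install \
  ..
make -j4
make install
```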
A simulation can be run in parallel on four processors using

```
mpiexec -np 4 svmultiphysics fluid3.xml
```

In this case a directory named `4-procs` containing the simulation results output will be created. Results from different processors will be combined into a single file for a given time step.
The preferred way to use svMultiPhysics on an HPC system is to take advantage of the provided Docker container, which includes the latest version of svMultiPhysics pre-compiled. To use this option, Docker must be installed first. Please refer to the [Docker webpage](https://www.docker.com/products/docker-desktop/) to learn more about Docker and how to install it on your machine. The following steps describe how to build a Docker image or pull an existing one from DockerHub, and how to run a Docker container. The last section is a brief guide to performing the same steps with Singularity, since HPC systems usually use Singularity to handle containers.
## Docker image
A Docker image is a read-only template that may contain dependencies, libraries, and everything needed to run a program. It is like a snapshot of a particular environment.

A Docker image can be created directly from a [dockerfile](https://docs.docker.com/reference/dockerfile/) or an existing image can be pulled from [DockerHub](https://hub.docker.com). For this repository, both options are available.

The latest version of the svMultiPhysics program is pre-compiled in a Docker image built from a dockerfile provided in Docker/solver. The Docker image includes two different types of builds: one where the solver is compiled with Trilinos and one where the solver is compiled with PETSc.

This Docker image can be downloaded (pulled) from the DockerHub simvascular repository [simvascular/solver](https://registry.hub.docker.com/u/simvascular). To pull an image, run the command:

```
docker pull simvascular/solver:latest
```

Note that this image was built for the AMD64 (x86) architecture, and it will not work on other architectures such as ARM64 (AArch64; note that the Apple M-series processors are based on ARM-type architectures). In this case, the image has to be built from the provided dockerfile. Please refer to the README inside the Docker/ folder for more information on how to build images from the provided dockerfiles.
## Docker container
A Docker container is a running instance of a Docker image. It is a lightweight, isolated, and executable unit.

Once the image is created, it can be run interactively with the following command:

```
docker run -it -v FolderToUpload:/NameOfFolder solver:latest
```

In this command:

- `-it`: runs the Docker image interactively
- `-v`: mounts the directory 'FolderToUpload' from the host machine into the container, where it has the name '/NameOfFolder'. For example, the folder containing the mesh and the input file necessary to run a simulation should be mounted. Once inside the container, we can move into the folder just mounted and run the simulation, for example with the command sketched below.
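This is a minimal sketch; it assumes `mpirun` and the solver executable `svmultiphysics` are available on the container's PATH and that the input file is named `solver.xml`.

```
# Inside the container: move into the mounted folder and run on 4 processors
cd /NameOfFolder
mpirun -n 4 svmultiphysics solver.xml
```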
The previous command will run the solver on 4 processors using the input file solver.xml and the mesh in the folder 'FolderToUpload' mounted inside the container.

As an example, if we want to run the test case in tests/cases/fluid/pipe_RCR_3d, we can proceed as follows:

```
docker run -it -v ~/full_path_to/tests/cases/fluid/pipe_RCR_3d:/case solver:latest
```
Now we are inside the container and can run the simulation, for example as sketched below.
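Again as a sketch, assuming the test case's input file is named `solver.xml` and the solver executable is on the container's PATH:

```
# Inside the container: the test case was mounted at /case
cd /case
mpirun -n 4 svmultiphysics solver.xml
```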
## Singularity

Most HPC systems (if not all) are based on the AMD64 architecture, and the solver image can be pulled directly from [simvascular/solver](https://hub.docker.com/r/simvascular/solver). First of all, make sure the singularity module is loaded on the HPC system. Then pull the solver image (it is recommended to run the following command on a compute node, for example through an interactive job):
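With Singularity (or Apptainer), the pull would typically look like the following; the image name and tag are assumed to match the Docker image above.

```
# Pull the Docker image and convert it to a Singularity image file (.sif)
singularity pull docker://simvascular/solver:latest
```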
After the pull is complete, you should have a file with the extension .sif (the solver image). This image contains the two executables of the svMultiPhysics program, built with PETSc and Trilinos support, respectively.

In the following, we provide two examples of job submission scripts that can be used as a reference to run a simulation with the svMultiPhysics solver on an HPC cluster.

1) single-node job script:
```
#!/bin/bash

#SBATCH --job-name=
#SBATCH --output=
#SBATCH --partition=
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=
#SBATCH --mem=0
#SBATCH -t 48:00:00

NTASKS=          # number of tasks
FOLDER_TO_BIND1= # path to folder to bind to the container (it will be accessible to the container)
FOLDER_TO_BIND2= # path to folder to bind to the container (it will be accessible to the container)
PATH_TO_IMAGE=   # full path to image, including the image name (*.sif file)

# For a single node, no modules should be loaded to avoid inconsistencies between the HPC and container environments
module purge

# Bind additional folders by appending them, comma-separated, to --bind
singularity run --bind $FOLDER_TO_BIND1,$FOLDER_TO_BIND2 \
```
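The remainder of the `singularity run` command is not shown above; as a hedged sketch of how it might be completed, assuming the solver executable is invoked as `svmultiphysics` with an input file `solver.xml` (`singularity exec` is used here to invoke an explicit command):

```
# Assumed sketch: run the solver inside the container on NTASKS processes
singularity exec --bind $FOLDER_TO_BIND1,$FOLDER_TO_BIND2 $PATH_TO_IMAGE \
  mpirun -n $NTASKS svmultiphysics solver.xml
```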
Since the multi-node case relies on both MPI installations, the one on the HPC system and the one inside the container, there may be some problems. In the following, we give a workaround for two common problems:

- If the HPC OpenMPI was built with CUDA support, it may expect the OpenMPI inside the container to be built with CUDA support too, which is not the case. A possible solution is to add `--mca mpi_cuda_support 0`, for example as sketched below.
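Purely as an illustration of where the flag goes (the executable name and input file are assumptions), a multi-node launch driven by the host MPI might look like:

```
# Assumed sketch: host OpenMPI launches the containerized solver with CUDA support disabled
mpirun --mca mpi_cuda_support 0 -n $NTASKS \
  singularity exec --bind $FOLDER_TO_BIND1,$FOLDER_TO_BIND2 $PATH_TO_IMAGE \
  svmultiphysics solver.xml
```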