This will create the `HADDOCK3-antibody-antigen` directory with all the necessary data, scripts, and job examples ready for submission to the batch system.
HADDOCK3 has been pre-installed. To activate the HADDOCK3 environment type:
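The activation command is cluster-specific; assuming a conda-based installation with an environment named `haddock3` (both are assumptions, the actual command may differ):

<a class="prompt prompt-cmd">
conda activate haddock3
</a>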
<hr>
#### Execution of HADDOCK3 on the TRUBA resources (EuroCC Istanbul April 2024 workshop)
To execute the HADDOCK3 workflow on the computational resources provided for this workshop,
you should create an execution script containing the specific requirements for the queueing system together with the HADDOCK3 setup and execution commands.
Two scripts are provided with the data you unzipped, one for execution on the hamsri cluster and one for the barbun cluster (a sketch of their general structure is shown below).
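As an illustration only, here is a minimal sketch of such a SLURM execution script; the partition name, wall-clock limit, and environment activation line are assumptions and will differ in the provided scripts:

{% highlight shell %}
#!/bin/bash
#SBATCH --job-name=haddock3      # job name shown in the queue
#SBATCH --partition=hamsri       # assumed partition name
#SBATCH --nodes=1                # run on a single node
#SBATCH --ntasks=50              # matches ncores=50 in the HADDOCK3 config file
#SBATCH --time=01:00:00          # assumed wall-clock limit

# activate the pre-installed HADDOCK3 environment (activation command is an assumption)
conda activate haddock3

# batch jobs must run from the scratch partition
cd /arf/scratch/$USER/HADDOCK3-antibody-antigen

# execute the HADDOCK3 workflow
haddock3 ./workflows/docking-antibody-antigen.cfg
{% endhighlight %}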
The relevant script should be submitted to the batch system using the `sbatch` command:
<a class="prompt prompt-cmd">
sbatch run-haddock3-hamsri.sh
</a>
**_Note_** that batch submission is only possible from the `scratch` partition (`/arf/scratch/<my-home-directory>`).
You can check the status of your job in the queue using the `squeue` command.
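For example, to list only your own jobs:

<a class="prompt prompt-cmd">
squeue -u $USER
</a>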
This example run should take about 20 minutes to complete on a single node using 50 cores.
<hr>
#### Execution of HADDOCK3 on Fugaku (ASEAN 2025 HPC school)
<details style="background-color:#DAE4E7">
<summary style="bold">
<b><i>Execution instructions for running HADDOCK3 on Fugaku</i></b> <i class="material-icons">expand_more</i>
</summary>
To execute the workflow on Fugaku, you can either start an interactive session or create a job file that will execute HADDOCK3 on a node,
with HADDOCK3 running in local mode (the setup in the above configuration file with <i>mode="local"</i>) and harvesting all cores of that node (<i>ncores=48</i>).
For this execution mode you should create an execution script containing the specific requirements for the queueing system together with the HADDOCK3 setup and execution commands.
Here is an example of such an execution script (also provided in the `HADDOCK3-antibody-antigen` directory as `run-haddock3-fugaku.sh`):
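As a rough sketch of its structure, a Fugaku job script uses PJM directives along the following lines; the resource group, wall-clock limit, and activation path are assumptions, so refer to the provided `run-haddock3-fugaku.sh` for the actual settings:

{% highlight shell %}
#!/bin/bash
#PJM -L "node=1"            # request one compute node (48 cores)
#PJM -L "rscgrp=small"      # assumed resource group
#PJM -L "elapse=01:00:00"   # assumed wall-clock limit

# activate the pre-installed HADDOCK3 environment (path is an assumption)
source /path/to/haddock3-env/bin/activate

cd $HOME/HADDOCK3-antibody-antigen

# run HADDOCK3 in local mode, harvesting all 48 cores of the node
haddock3 ./workflows/docking-antibody-antigen.cfg
{% endhighlight %}

Note that on Fugaku such a script is submitted with the `pjsub` command rather than `sbatch`.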
In this mode HADDOCK3 will run on the current system, using the defined number of cores (<i>ncores</i>)
in the config file to a maximum of the total number of available cores on the system minus one.
An example of the relevant parameters to be defined in the first section of the config file is:
{% highlight toml %}
# compute mode
mode = "local"
ncores = 50
{% endhighlight %}
In this mode HADDOCK3 can be started from the command line with the configuration file of the defined workflow as argument.
{% highlight shell %}
haddock3 <my-workflow-configuration-file>
{% endhighlight %}
Alternatively, redirect the output to a log file and send haddock3 to the background:
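For example, using the workflow file from this tutorial:

{% highlight shell %}
haddock3 ./workflows/docking-antibody-antigen.cfg > haddock3.log 2>&1 &
{% endhighlight %}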
As an indication, running locally on an Apple M2 laptop using 10 cores, this workflow completed in 7 minutes.
<b>Note</b>: This is also the execution mode that should be used, for example, when submitting the HADDOCK3 job to a node of a cluster, requesting X number of cores.
</details>

For example, when submitting the HADDOCK3 job to a cluster node, the execution part of such a script would contain:

{% highlight shell %}
cd $HOME/HADDOCK3-antibody-antigen
# execute
haddock3 <my-workflow-configuration-file>
{% endhighlight %}
<br>
In this mode HADDOCK3 will typically be started on your local server (e.g. the login node) and will dispatch jobs to the batch system of your cluster.
1140
+
Two batch systems are currently supported: <i>slurm</i> and <i>torque</i> (defined by the <i>batch_type</i> parameter).
1141
+
In the configuration file you will have to define the <i>queue</i> name and the maximum number of concurrent jobs sent to the queue (<i>queue_limit</i>).
Since HADDOCK3 single model calculations are quite fast, it is recommended to calculate multiple models within one job submitted to the batch system.
1144
+
The number of models per job is defined by the <i>concat</i> parameter in the configuration file.
1145
+
You want to avoid sending thousands of very short jobs to the batch system if you want to remain friends with your system administrators...
An example of the relevant parameters to be defined in the first section of the config file is:
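A minimal sketch of such a section; the queue name and the numeric values are assumptions and should be adapted to your cluster:

{% highlight toml %}
# compute mode
mode = "batch"
# batch system to dispatch jobs to (slurm or torque)
batch_type = "slurm"
# name of the queue to submit to (cluster-specific, assumed here)
queue = "short"
# maximum number of concurrent jobs sent to the queue
queue_limit = 100
# number of models to calculate within one job
concat = 5
{% endhighlight %}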

<details style="background-color:#DAE4E7">
<summary style="bold">
<b><i>Execution instructions for running HADDOCK3 in MPI mode</i></b> <i class="material-icons">expand_more</i>
</summary>
HADDOCK3 supports a parallel pseudo-MPI implementation. For this to work, the <i>mpi4py</i> library must have been installed at installation time.
Refer to the <a href="https://www.bonvinlab.org/haddock3/tutorials/mpi.html" target="_blank">MPI-related instructions</a>.
The execution mode should be set to `mpi` and the total number of cores should match the requested resources when submitting to the batch system.
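A minimal sketch of the corresponding config section; the core count here is an assumption and must match the resources you request from the batch system:

{% highlight toml %}
# compute mode
mode = "mpi"
# total number of cores, matching the requested resources
ncores = 96
{% endhighlight %}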