@@ -71,7 +71,7 @@ a local system.
 
 Optional arguments::
 
-    n_procs : Number of processes to launch in parallel, if not set number of
+    n_procs : Number of processes to launch in parallel, if not set number of
     processors/threads will be automatically detected
 
 To distribute processing on a multicore machine, simply call::
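The literal block that follows "simply call::" lies outside this hunk; as a
sketch, using the ``MultiProc`` plugin name from the Nipype documentation::

    # Run the workflow locally with four worker processes; omitting
    # n_procs lets Nipype auto-detect the processor/thread count.
    workflow.run(plugin='MultiProc', plugin_args={'n_procs': 4})
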
@@ -118,7 +118,7 @@ Optional arguments::
 
 For example, the following snippet executes the workflow on myqueue with
 a custom template::
-
+
     workflow.run(plugin='SGE',
                  plugin_args=dict(template='mytemplate.sh', qsub_args='-q myqueue'))
 
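If only custom scheduler flags are needed, the template can be left at its
default; a minimal sketch, reusing the ``qsub_args`` plugin argument shown
above::

    # Submit to 'myqueue' with the stock submission template.
    workflow.run(plugin='SGE', plugin_args={'qsub_args': '-q myqueue'})
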
@@ -136,18 +136,18 @@ particular node might use more resources than other nodes in a workflow.
 this local configuration::
 
     node.plugin_args = {'qsub_args': '-l nodes=1:ppn=3', 'overwrite': True}
-
+
 SGEGraph
 ~~~~~~~~
 SGEGraph_ is an execution plugin working with Sun Grid Engine that allows for
-submitting entire graph of dependent jobs at once. This way Nipype does not
+submitting an entire graph of dependent jobs at once. This way Nipype does not
 need to run a monitoring process - SGE takes care of this. The use of SGEGraph_
-is preferred over SGE_ since the latter adds unnecessary load on the submit
+is preferred over SGE_ since the latter adds unnecessary load on the submit
 machine.
 
 .. note::
 
-   When rerunning unfinished workflows using SGEGraph you may decide not to
+   When rerunning unfinished workflows using SGEGraph you may decide not to
    submit jobs for Nodes that previously finished running. This can speed up
    execution, but new or modified inputs that would previously trigger a Node
    to rerun will be ignored. The following option turns on this functionality::
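The literal block that completes this note lies outside the hunk; a minimal
sketch, assuming the ``dont_resubmit_completed_jobs`` plugin argument
documented in recent Nipype versions::

    # Rerun an unfinished workflow without resubmitting jobs for nodes
    # that already completed. The 'dont_resubmit_completed_jobs' key is
    # assumed from recent Nipype documentation; verify it against your
    # installed version.
    workflow.run(plugin='SGEGraph',
                 plugin_args={'dont_resubmit_completed_jobs': True})
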
@@ -177,20 +177,20 @@ Optional arguments::
 
     template: custom template file to use
     sbatch_args: any other command line args to be passed to sbatch.
-
-
+
+
 SLURMGraph
 ~~~~~~~~~~
 SLURMGraph_ is an execution plugin working with SLURM that allows for
-submitting entire graph of dependent jobs at once. This way Nipype does not
+submitting an entire graph of dependent jobs at once. This way Nipype does not
 need to run a monitoring process - SLURM takes care of this. The use of the SLURMGraph_
-plugin is preferred over the vanilla SLURM_ plugin since the latter adds
-unnecessary load on the submit machine.
+plugin is preferred over the vanilla SLURM_ plugin since the latter adds
+unnecessary load on the submit machine.
 
 
 .. note::
 
-   When rerunning unfinished workflows using SLURMGraph you may decide not to
+   When rerunning unfinished workflows using SLURMGraph you may decide not to
    submit jobs for Nodes that previously finished running. This can speed up
    execution, but new or modified inputs that would previously trigger a Node
    to rerun will be ignored. The following option turns on this functionality::
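The same option applies here; a sketch for SLURM, again assuming the
``dont_resubmit_completed_jobs`` argument::

    # As with SGEGraph, skip resubmission of already-finished nodes.
    workflow.run(plugin='SLURMGraph',
                 plugin_args={'dont_resubmit_completed_jobs': True})
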
@@ -205,11 +205,11 @@ DAGMan
 ~~~~~~
 
 With its DAGMan_ component HTCondor_ (previously Condor) allows for submitting
-entire graphs of dependent jobs at once (similar to SGEGraph_ and SLURMGaaoh_).
-With the ``CondorDAGMan`` plug-in Nipype can utilize this functionality to
-submit complete workflows directly and in a single step. Consequently, and
-in contrast to other plug-ins, workflow execution returns almost
-instantaneously -- Nipype is only used to generate the workflow graph,
+entire graphs of dependent jobs at once (similar to SGEGraph_ and SLURMGraph_).
+With the ``CondorDAGMan`` plug-in Nipype can utilize this functionality to
+submit complete workflows directly and in a single step. Consequently, and
+in contrast to other plug-ins, workflow execution returns almost
+instantaneously -- Nipype is only used to generate the workflow graph,
 while job scheduling and dependency resolution are entirely managed by HTCondor_.
 
 Please note that although DAGMan_ supports specification of data dependencies
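Invoking the ``CondorDAGMan`` plug-in described in this hunk follows the same
pattern as the other plug-ins; a minimal sketch::

    # Returns almost immediately: Nipype only writes the DAG, while
    # HTCondor's DAGMan handles scheduling and dependency resolution.
    workflow.run(plugin='CondorDAGMan')
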
@@ -320,4 +320,3 @@ Optional arguments::
 .. _HTCondor documentation: http://research.cs.wisc.edu/htcondor/manual
 .. _DMTCP: http://dmtcp.sourceforge.net
 .. _SLURM: http://slurm.schedmd.com/
-