@@ -9,7 +9,7 @@ available plugins allow local and distributed execution of workflows and
debugging. Each available plugin is described below.
Current plugins are available for Linear, Multiprocessing, IPython_ distributed
- processing platforms and for direct processing on SGE_, PBS_, HTCondor_, LSF_, and SLURM_. We
+ processing platforms and for direct processing on SGE_, PBS_, HTCondor_, LSF_, OAR_, and SLURM_. We
anticipate future plugins for the Soma_ workflow.
.. note::
@@ -276,6 +276,34 @@ for all nodes could look like this::
wrapper_args=shim_args)
)
+ OAR
+ ---
+
+ In order to use nipype with OAR_ you simply need to call::
+
+ workflow.run(plugin='OAR')
+
+ Optional arguments::
+
+ template: custom template file to use
+ oarsub_args: any other command line args to be passed to oarsub.
+ max_jobname_len: maximum length of the job name. Default 15.
+
+ For example, the following snippet executes the workflow on myqueue with
+ a custom template::
+
+ workflow.run(plugin='OAR',
+ plugin_args=dict(template='mytemplate.sh', oarsub_args='-q myqueue'))
+
+ In addition to overall workflow configuration, you can use node level
+ configuration for OAR::
+
+ node.plugin_args = {'oarsub_args': '-l "nodes=1/cores=3"'}
+
+ This would apply only to that node and is useful in situations where a
+ particular node might use more resources than other nodes in a workflow.
+
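+ For reference, here is a minimal end-to-end sketch that combines the
+ workflow-level and node-level settings above. The queue name, function, and
+ node names are illustrative placeholders, not part of the OAR plugin itself::
+
+ import nipype.pipeline.engine as pe
+ import nipype.interfaces.utility as niu
+
+ def double(x):
+     return 2 * x
+
+ # hypothetical node; any interface would do
+ n1 = pe.Node(niu.Function(input_names=['x'], output_names=['out'],
+                           function=double), name='double')
+ n1.inputs.x = 3
+ # node-level resource request, applies only to this node
+ n1.plugin_args = {'oarsub_args': '-l "nodes=1/cores=3"'}
+
+ wf = pe.Workflow(name='oar_demo')
+ wf.add_nodes([n1])
+ # workflow-level plugin arguments passed to oarsub for each job
+ wf.run(plugin='OAR', plugin_args=dict(oarsub_args='-q myqueue'))
+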
+
``qsub`` emulation
~~~~~~~~~~~~~~~~~~