Commit d017734

further doc fixes - removed parallelization for now.

1 parent f33fa27 commit d017734

6 files changed: +28 -43 lines changed

circle.yml

Lines changed: 4 additions & 4 deletions

@@ -21,17 +21,17 @@ test:
   override:
     - set -o pipefail && cd doc && make html 2>&1 | tee ~/log.txt
     - cat ~/log.txt && if grep -q "ERROR" ~/log.txt; then false; else true; fi
-    - if [ "$CIRCLE_NODE_INDEX" = "0" ]; then . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_fsl_feeds Linear l1pipeline; fi:
+    - . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_fsl_feeds Linear l1pipeline:
         pwd: ../examples
-    - if [ "$CIRCLE_NODE_INDEX" = "1" ]; then . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_spm_dartel Linear level1 l2pipeline; fi:
+    - . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_spm_dartel Linear level1 l2pipeline:
         pwd: ../examples
         environment:
           SPMMCRCMD: "$HOME/spm12/run_spm12.sh $HOME/mcr/v85/ script"
           FORCE_SPMMCR: 1
         timeout: 1600
-    - if [ "$CIRCLE_NODE_INDEX" = "0" ]; then . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_fsl_reuse Linear level1_workflow; fi:
+    - . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_fsl_reuse Linear level1_workflow:
         pwd: ../examples
-    - if [ "$CIRCLE_NODE_INDEX" = "0" ]; then . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_spm_nested Linear level1 l2pipeline; fi:
+    - . /usr/share/fsl/5.0/etc/fslconf/fsl.sh && python ~/nipype/tools/run_examples.py fmri_spm_nested Linear level1 l2pipeline:
         pwd: ../examples
         environment:
           SPMMCRCMD: "$HOME/spm12/run_spm12.sh $HOME/mcr/v85/ script"
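The removed guards implement CircleCI's container-splitting pattern: each of N containers gets a CIRCLE_NODE_INDEX and runs only the commands gated on its index. A rough Python sketch of that pattern, for reference only — the example list and the modulo assignment below are illustrative, not what circle.yml did (circle.yml pinned each example to a hard-coded index):

    # Illustrative sketch of CIRCLE_NODE_INDEX-style work splitting; the
    # example list and modulo scheme are hypothetical, not from circle.yml.
    import os
    import subprocess

    EXAMPLES = [
        ["fmri_fsl_feeds", "Linear", "l1pipeline"],
        ["fmri_spm_dartel", "Linear", "level1", "l2pipeline"],
        ["fmri_fsl_reuse", "Linear", "level1_workflow"],
        ["fmri_spm_nested", "Linear", "level1", "l2pipeline"],
    ]

    index = int(os.environ.get("CIRCLE_NODE_INDEX", "0"))
    total = int(os.environ.get("CIRCLE_NODE_TOTAL", "1"))

    # Each container runs the examples whose position matches its index.
    for i, args in enumerate(EXAMPLES):
        if i % total == index:
            subprocess.check_call(["python", "tools/run_examples.py"] + args)

Dropping the guards means every container runs every example, which is slower but simpler — consistent with the commit message "removed parallelization for now."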

doc/users/config_file.rst

Lines changed: 6 additions & 6 deletions

@@ -80,8 +80,8 @@ Execution
 *remove_unnecessary_outputs*
    This will remove any interface outputs not needed by the workflow. If the
    required outputs from a node changes, rerunning the workflow will rerun the
-   node. Outputs of leaf nodes (nodes whose outputs are not connected to any 
-   other nodes) will never be deleted independent of this parameter. (possible 
+   node. Outputs of leaf nodes (nodes whose outputs are not connected to any
+   other nodes) will never be deleted independent of this parameter. (possible
    values: ``true`` and ``false``; default value: ``true``)

 *try_hard_link_datasink*

@@ -129,7 +129,7 @@ Execution
    If this is set to True, the node's output directory will contain full
    parameterization of any iterable, otherwise parameterizations over 32
    characters will be replaced by their hash. (possible values: ``true`` and
-   ``false``; default value: ``true``) 
+   ``false``; default value: ``true``)

 *poll_sleep_duration*
    This controls how long the job submission loop will sleep between submitting

@@ -146,7 +146,7 @@ Example

    [logging]
    workflow_level = DEBUG
-   
+
    [execution]
    stop_on_first_crash = true
    hash_method = timestamp

@@ -156,9 +156,9 @@ Workflow.config property has a form of a nested dictionary reflecting the
 structure of the .cfg file.

 ::
-   
+
     myworkflow = pe.Workflow()
-    myworkflow.config['execution'] = {'stop_on_first_rerun': 'True', 
+    myworkflow.config['execution'] = {'stop_on_first_rerun': 'True',
                                       'hash_method': 'timestamp'}

 You can also directly set global config options in your workflow script. An
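For context, the snippet being cleaned up above runs as-is once a workflow exists; a minimal self-contained version follows (note that ``Workflow`` requires a ``name`` argument, which the doc snippet omits):

    # Minimal runnable version of the documented pattern: set execution
    # options on the Workflow object instead of through a .cfg file.
    import nipype.pipeline.engine as pe

    myworkflow = pe.Workflow(name='myworkflow')  # Workflow requires a name
    myworkflow.config['execution'] = {'stop_on_first_rerun': 'True',
                                      'hash_method': 'timestamp'}
    print(myworkflow.config['execution'])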

doc/users/plugins.rst

Lines changed: 17 additions & 18 deletions

@@ -71,7 +71,7 @@ a local system.

 Optional arguments::

-    n_procs : Number of processes to launch in parallel, if not set number of 
+    n_procs : Number of processes to launch in parallel, if not set number of
               processors/threads will be automatically detected

 To distribute processing on a multicore machine, simply call::

@@ -118,7 +118,7 @@ Optional arguments::

 For example, the following snippet executes the workflow on myqueue with
 a custom template::
-   
+
     workflow.run(plugin='SGE',
                  plugin_args=dict(template='mytemplate.sh', qsub_args='-q myqueue')

@@ -136,18 +136,18 @@ particular node might use more resources than other nodes in a workflow.
 this local configuration::

     node.plugin_args = {'qsub_args': '-l nodes=1:ppn=3', 'overwrite': True}
-   
+
 SGEGraph
 ~~~~~~~~
 SGEGraph_ is an execution plugin working with Sun Grid Engine that allows for
-submitting entire graph of dependent jobs at once. This way Nipype does not 
+submitting entire graph of dependent jobs at once. This way Nipype does not
 need to run a monitoring process - SGE takes care of this. The use of SGEGraph_
-is preferred over SGE_ since the latter adds unnecessary load on the submit 
+is preferred over SGE_ since the latter adds unnecessary load on the submit
 machine.

 .. note::

-   When rerunning unfinished workflows using SGEGraph you may decide not to 
+   When rerunning unfinished workflows using SGEGraph you may decide not to
    submit jobs for Nodes that previously finished running. This can speed up
    execution, but new or modified inputs that would previously trigger a Node
    to rerun will be ignored. The following option turns on this functionality::

@@ -177,20 +177,20 @@ Optional arguments::

     template: custom template file to use
     sbatch_args: any other command line args to be passed to bsub.
-   
-   
+
+
 SLURMGraph
 ~~~~~~~~~~
 SLURMGraph_ is an execution plugin working with SLURM that allows for
-submitting entire graph of dependent jobs at once. This way Nipype does not 
+submitting entire graph of dependent jobs at once. This way Nipype does not
 need to run a monitoring process - SLURM takes care of this. The use of SLURMGraph_
-plugin is preferred over the vanilla SLURM_ plugin since the latter adds 
-unnecessary load on the submit machine. 
+plugin is preferred over the vanilla SLURM_ plugin since the latter adds
+unnecessary load on the submit machine.


 .. note::

-   When rerunning unfinished workflows using SLURMGraph you may decide not to 
+   When rerunning unfinished workflows using SLURMGraph you may decide not to
    submit jobs for Nodes that previously finished running. This can speed up
    execution, but new or modified inputs that would previously trigger a Node
    to rerun will be ignored. The following option turns on this functionality::

@@ -205,11 +205,11 @@ DAGMan
 ~~~~~~

 With its DAGMan_ component HTCondor_ (previously Condor) allows for submitting
-entire graphs of dependent jobs at once (similar to SGEGraph_ and SLURMGaaoh_). 
-With the ``CondorDAGMan`` plug-in Nipype can utilize this functionality to 
-submit complete workflows directly and in a single step. Consequently, and 
-in contrast to other plug-ins, workflow execution returns almost 
-instantaneously -- Nipype is only used to generate the workflow graph, 
+entire graphs of dependent jobs at once (similar to SGEGraph_ and SLURMGraph_).
+With the ``CondorDAGMan`` plug-in Nipype can utilize this functionality to
+submit complete workflows directly and in a single step. Consequently, and
+in contrast to other plug-ins, workflow execution returns almost
+instantaneously -- Nipype is only used to generate the workflow graph,
 while job scheduling and dependency resolution are entirely managed by HTCondor_.

 Please note that although DAGMan_ supports specification of data dependencies

@@ -320,4 +320,3 @@ Optional arguments::
 .. _HTCondor documentation: http://research.cs.wisc.edu/htcondor/manual
 .. _DMTCP: http://dmtcp.sourceforge.net
 .. _SLURM: http://slurm.schedmd.com/
-
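Taken together, the plugin hooks this page documents look roughly like the sketch below. The workflow and node are placeholders, and actually running it requires a working SGE installation; the ``dont_resubmit_completed_jobs`` option is the one named in the notes above.

    # Sketch combining the documented plugin hooks: a per-node resource
    # override plus graph-at-once submission. Placeholder workflow/node.
    import nipype.pipeline.engine as pe
    import nipype.interfaces.utility as niu

    node = pe.Node(niu.IdentityInterface(fields=['x']), name='identity')
    node.inputs.x = 1
    # Per-node override of the qsub arguments used for this node only:
    node.plugin_args = {'qsub_args': '-l nodes=1:ppn=3', 'overwrite': True}

    wf = pe.Workflow(name='demo')
    wf.add_nodes([node])

    # Submit the whole dependency graph to SGE in one step, skipping
    # nodes that already finished in a previous run:
    wf.run(plugin='SGEGraph',
           plugin_args={'dont_resubmit_completed_jobs': True})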

examples/fmri_ants_openfmri.py

Lines changed: 1 addition & 8 deletions

@@ -106,8 +106,6 @@ def create_reg_workflow(name='registration'):
     outputspec.transformed_files : transformed files in target space
     outputspec.transformed_mean : mean image in target space

-    Example
-    -------
     """

     register = pe.Workflow(name=name)

@@ -304,8 +302,6 @@ def create_fs_reg_workflow(name='registration'):
     Parameters
     ----------

-    ::
-
     name : name of workflow (default: 'registration')

     Inputs::

@@ -321,9 +317,6 @@ def create_fs_reg_workflow(name='registration'):
     outputspec.transformed_files : transformed files in target space
     outputspec.transformed_mean : mean image in target space

-    Example
-    -------
-
     """

     register = Workflow(name=name)

@@ -1106,7 +1099,7 @@ def get_subs(subject_id, conds, run_id, model_id, task_id):
                          'task%03d' % int(args.task))
     derivatives = args.derivatives
     if derivatives is None:
-       derivatives = False
+        derivatives = False
     wf = analyze_openfmri_dataset(data_dir=os.path.abspath(args.datasetdir),
                                   subject=args.subject,
                                   model_id=int(args.model),
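The ``derivatives`` hunk re-indents the standard argparse idiom of defaulting an option to ``None`` so that "flag absent" can be told apart from an explicit value. A standalone sketch of the idiom (this parser is illustrative, not the script's actual CLI):

    # Standalone sketch of the None-default idiom touched in the hunk
    # above; illustrative only, not fmri_ants_openfmri.py's real CLI.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--derivatives', default=None,
                        help='use model derivatives (defaults to False)')
    args = parser.parse_args([])  # simulate: flag not supplied

    derivatives = args.derivatives
    if derivatives is None:
        derivatives = False
    print(derivatives)  # -> False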

examples/rsfmri_vol_surface_preprocessing.py

Lines changed: 0 additions & 2 deletions

@@ -370,8 +370,6 @@ def create_reg_workflow(name='registration'):
     Parameters
     ----------

-    ::
-
     name : name of workflow (default: 'registration')

     Inputs::

examples/rsfmri_vol_surface_preprocessing_nipy.py

Lines changed: 0 additions & 5 deletions

@@ -348,8 +348,6 @@ def create_reg_workflow(name='registration'):
     Parameters
     ----------

-    ::
-
     name : name of workflow (default: 'registration')

     Inputs::

@@ -366,9 +364,6 @@ def create_reg_workflow(name='registration'):
     outputspec.transformed_files : transformed files in target space
     outputspec.transformed_mean : mean image in target space

-    Example
-    -------
-
     """

     register = Workflow(name=name)
