
Commit 3c3bb8f

Author: bpinsard
Merge branch 'master' into enh/freesurfer
* master: (178 commits)
  fix: ridiculous mistakes
  fix: make sure mandatory=False items are displayed
  fix: mesh import to enable doc building
  doctest fixes
  docs and changelog
  changelog
  Fixed tests
  enh: added options to execute script from command line
  fix: fix env again
  fix: proper environment access
  fix: tests and added new data files
  fix: updated checking script
  fix: updating fsl metadata
  fix: updated all inappropriate metadata (fsl update in separate commit)
  Added nocheck option in fielmap_correction
  enh: added script to check metadata
  Fixed cmdline commands in docstrings
  Revert "Run autopep8 on misc.py"
  Fixed failed test
  Fixed errors in Doctest (?)
  ...
2 parents d0e4612 + 509b5de commit 3c3bb8f


101 files changed: 5294 additions, 1866 deletions

CHANGES

Lines changed: 26 additions & 2 deletions
@@ -3,12 +3,36 @@ Next release
 
 * ENH: SelectFiles: a streamlined version of DataGrabber
 * ENH: New interfaces: spm.ResliceToReference, FuzzyOverlap, afni.AFNItoNIFTI
-  spm.DicomImport
+  spm.DicomImport, P2PDistance
 * ENH: W3C PROV support with optional RDF export built into Nipype
-
+* ENH: Added support for Simple Linux Utility Resource Management (SLURM)
+
+* ENH: Several new interfaces related to Camino were added:
+  - camino.SFPICOCalibData
+  - camino.Conmat
+  - camino.QBallMX
+  - camino.LinRecon
+  - camino.SFPeaks
+  One outdated interface no longer part of Camino was removed:
+  - camino.Conmap
+
+* FIX: Several fixes related to Camino interfaces:
+  - ProcStreamlines would ignore many arguments silently (target, waypoint, exclusion ROIS, etc.)
+  - DTLUTGen would silently round the "step", "snr" and "trace" parameters to integers
+  - PicoPDFs would not accept more than one lookup table
+  - PicoPDFs default pdf did not correspond to Camino default
+  - Track input model names were outdated (and would generate an error)
+  - Track numpds parameter could not be set for deterministic tractography
+  - FA created output files with erroneous extension
+
 * FIX: Deals properly with 3d files in SPM Realign
+* FIX: SPM with MCR fixed
 
 * API: 'name' is now a positional argument for Workflow, Node, and MapNode constructors
+* API: SPM now defaults to SPM8 or SPM12b job format
+
+* ENH: New FSL interfaces: fsl.PrepareFieldmap, fsl.TOPUP, fsl.ApplyTOPUP, fsl.Eddy
+* ENH: New workflows: nipype.workflows.dmri.fsl.epi.[fieldmap_correction&topup_correction]
 
 Release 0.8.0 (May 8, 2013)
 ===========================

doc/devel/provenance.rst

Lines changed: 6 additions & 1 deletion
@@ -18,4 +18,9 @@ write out a provenance of the workflow if instructed.
 
 This is very much an experimental feature as we continue to refine how exactly
 the provenance should be stored and how such information can be used for
-reporting or reconstituting workflows.
+reporting or reconstituting workflows. By default provenance writing is disabled
+for the 0.9 release; to enable it, insert the following code at the top of your
+script::
+
+    >>> from nipype import config
+    >>> config.enable_provenance()

doc/users/index.rst

Lines changed: 2 additions & 1 deletion
@@ -31,9 +31,10 @@
    select_files
    function_interface
    mapnode_and_iterables
+   joinnode_and_itersource
    model_specification
    saving_workflows
-
+   spmmcr
 
 
 
doc/users/joinnode_and_itersource.rst

Lines changed: 175 additions & 0 deletions
@@ -0,0 +1,175 @@
+.. _joinnode_and_itersource:
+
+====================================
+JoinNode, synchronize and itersource
+====================================
+The previous :doc:`mapnode_and_iterables` chapter described how to
+fork and join nodes using MapNode and iterables. In this chapter, we
+introduce features which build on these concepts to add workflow
+flexibility.
+
+JoinNode, joinsource and joinfield
+==================================
+
+A :class:`nipype.pipeline.engine.JoinNode` generalizes MapNode to
+operate in conjunction with an upstream iterable node to reassemble
+downstream results, e.g.:
+
+.. digraph:: joinnode_ex
+
+  "A" -> "B1" -> "C1" -> "D";
+  "A" -> "B2" -> "C2" -> "D";
+  "A" -> "B3" -> "C3" -> "D";
+
+The code to achieve this is as follows:
+
+::
+
+  import nipype.pipeline.engine as pe
+  a = pe.Node(interface=A(), name="a")
+  b = pe.Node(interface=B(), name="b")
+  b.iterables = ("in_file", images)
+  c = pe.Node(interface=C(), name="c")
+  d = pe.JoinNode(interface=D(), joinsource="b",
+                  joinfield="in_files", name="d")
+
+  my_workflow = pe.Workflow(name="my_workflow")
+  my_workflow.connect([(a, b, [('subject', 'subject')]),
+                       (b, c, [('out_file', 'in_file')]),
+                       (c, d, [('out_file', 'in_files')])
+                       ])
+
+This example assumes that interface "A" has one output *subject*,
+interface "B" has two inputs *subject* and *in_file* and one output
+*out_file*, interface "C" has one input *in_file* and one output
+*out_file*, and interface "D" has one list input *in_files*. The
+*images* variable is a list of three input image file names.
+
+As with *iterables* and the MapNode *iterfield*, the *joinfield*
+can be a list of fields. Thus, the declaration in the previous example
+is equivalent to the following:
+
+::
+
+  d = pe.JoinNode(interface=D(), joinsource="b",
+                  joinfield=["in_files"], name="d")
+
+The *joinfield* defaults to all of the JoinNode input fields, so the
+declaration is also equivalent to the following:
+
+::
+
+  d = pe.JoinNode(interface=D(), joinsource="b", name="d")
+
+In this example, the node "c" *out_file* outputs are collected into
+the JoinNode "d" *in_files* input list. The *in_files* order is the
+same as the upstream "b" node iterables order.
+
+The JoinNode input can be filtered for unique values by specifying
+the *unique* flag, e.g.:
+
+::
+
+  d = pe.JoinNode(interface=D(), joinsource="b", unique=True, name="d")
+
+synchronize
+===========
+
+The :class:`nipype.pipeline.engine.Node` *iterables* parameter can
+be a single field or a list of fields. If it is a list, then execution
+is performed over all permutations of the list items. For example:
+
+::
+
+  b.iterables = [("m", [1, 2]), ("n", [3, 4])]
+
+results in the execution graph:
+
+.. digraph:: multiple_iterables_ex
+
+  "A" -> "B13" -> "C";
+  "A" -> "B14" -> "C";
+  "A" -> "B23" -> "C";
+  "A" -> "B24" -> "C";
+
+where "B13" has inputs *m* = 1, *n* = 3, "B14" has inputs *m* = 1,
+*n* = 4, etc.
+
+The *synchronize* parameter synchronizes the iterables lists, e.g.:
+
+::
+
+  b.iterables = [("m", [1, 2]), ("n", [3, 4])]
+  b.synchronize = True
+
+results in the execution graph:
+
+.. digraph:: synchronize_ex
+
+  "A" -> "B13" -> "C";
+  "A" -> "B24" -> "C";
+
+where the iterable inputs are selected in lock-step by index, i.e.:
+
+(*m*, *n*) = (1, 3) and (2, 4)
+
+for "B13" and "B24", resp.
+
+itersource
+==========
+
+The *itersource* feature allows you to expand a downstream iterable
+based on a mapping of an upstream iterable. For example:
+
+::
+
+  a = pe.Node(interface=A(), name="a")
+  b = pe.Node(interface=B(), name="b")
+  b.iterables = ("m", [1, 2])
+  c = pe.Node(interface=C(), name="c")
+  d = pe.Node(interface=D(), name="d")
+  d.itersource = ("b", "m")
+  d.iterables = [("n", {1: [3, 4], 2: [5, 6]})]
+  my_workflow = pe.Workflow(name="my_workflow")
+  my_workflow.connect([(a, b, [('out_file', 'in_file')]),
+                       (b, c, [('out_file', 'in_file')]),
+                       (c, d, [('out_file', 'in_file')])
+                       ])
+
+results in the execution graph:
+
+.. digraph:: itersource_ex
+
+  "A" -> "B1" -> "C1" -> "D13";
+  "C1" -> "D14";
+  "A" -> "B2" -> "C2" -> "D25";
+  "C2" -> "D26";
+
+In this example, all interfaces have input *in_file* and output
+*out_file*. In addition, interface "B" has input *m* and interface "D"
+has input *n*. A Python dictionary associates the "b" node input
+value with the downstream "d" node *n* iterable values.
+
+This example can be extended with a summary JoinNode:
+
+::
+
+  e = pe.JoinNode(interface=E(), joinsource="d",
+                  joinfield="in_files", name="e")
+  my_workflow.connect(d, 'out_file',
+                      e, 'in_files')
+
+resulting in the graph:
+
+.. digraph:: itersource_with_join_ex
+
+  "A" -> "B1" -> "C1" -> "D13" -> "E";
+  "C1" -> "D14" -> "E";
+  "A" -> "B2" -> "C2" -> "D25" -> "E";
+  "C2" -> "D26" -> "E";
+
+The combination of iterables, MapNode, JoinNode, synchronize and
+itersource enables the creation of arbitrarily complex workflow graphs.
+The astute workflow builder will recognize that this flexibility is
+both a blessing and a curse. These advanced features are handy additions
+to the Nipype toolkit when used judiciously.
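
The interfaces "A" through "E" in the document above are placeholders. For readers who want to run the join pattern end to end, here is a minimal sketch using utility interfaces; the ``square``/``total`` functions, the node names, and the ``join_demo`` workflow are illustrative choices, not part of this commit:

::

  import nipype.pipeline.engine as pe
  from nipype.interfaces.utility import Function, IdentityInterface

  def square(x):
      return x ** 2

  def total(values):
      return sum(values)

  # "b" iterates over three values; "c" squares each one in its own branch
  b = pe.Node(IdentityInterface(fields=['x']), name='b')
  b.iterables = ('x', [1, 2, 3])
  c = pe.Node(Function(input_names=['x'], output_names=['y'],
                       function=square), name='c')
  # the JoinNode gathers the three "y" outputs back into a single list
  d = pe.JoinNode(Function(input_names=['values'], output_names=['out'],
                           function=total),
                  joinsource='b', joinfield='values', name='d')

  wf = pe.Workflow(name='join_demo')
  wf.connect([(b, c, [('x', 'x')]),
              (c, d, [('y', 'values')])])
  wf.run()  # "d" receives values=[1, 4, 9] and returns 14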

doc/users/plugins.rst

Lines changed: 3 additions & 1 deletion
@@ -9,7 +9,7 @@ available plugins allow local and distributed execution of workflows and
 debugging. Each available plugin is described below.
 
 Current plugins are available for Linear, Multiprocessing, IPython_ distributed
-processing platforms and for direct processing on SGE_, PBS_, HTCondor_, and LSF_. We
+processing platforms and for direct processing on SGE_, PBS_, HTCondor_, LSF_, and SLURM_. We
 anticipate future plugins for the Soma_ workflow.
 
 .. note::
@@ -270,3 +270,5 @@ Optional arguments::
 .. _DAGMan: http://research.cs.wisc.edu/htcondor/dagman/dagman.html
 .. _HTCondor documentation: http://research.cs.wisc.edu/htcondor/manual
 .. _DMTCP: http://dmtcp.sourceforge.net
+.. _SLURM: http://slurm.schedmd.com/
+
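
This change adds SLURM to the list of batch plugins. As a rough sketch of what this looks like for an already-built workflow ``wf``, assuming the SLURM plugin accepts an ``sbatch_args`` option analogous to ``qsub_args`` in the SGE/PBS plugins (the resource values are illustrative):

::

  # submit each node of the workflow as a SLURM job
  wf.run(plugin='SLURM',
         plugin_args={'sbatch_args': '--time=01:00:00 --mem=4G'})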

doc/users/spmmcr.rst

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
+.. _spmmcr:
+
+====================================
+Using SPM with MATLAB Common Runtime
+====================================
+
+In order to use the standalone MCR version of SPM, you need to ensure that
+the following commands are executed at the beginning of your script:
+
+.. testcode::
+
+  from nipype import spm
+  matlab_cmd = '/path/to/run_spm8.sh /path/to/Compiler_Runtime/v713/ script'
+  spm.SPMCommand.set_mlab_paths(matlab_cmd=matlab_cmd, use_mcr=True)
+
+You can test the configuration by calling:
+
+.. testcode::
+
+  spm.SPMCommand().version
+
+Information about the MCR version of SPM8 can be found at:
+
+http://en.wikibooks.org/wiki/SPM/Standalone

examples/fmri_openfmri.py

Lines changed: 33 additions & 20 deletions
@@ -12,6 +12,9 @@
 python fmri_openfmri.py --datasetdir ds107
 """
 
+from nipype import config
+config.enable_provenance()
+
 from glob import glob
 import os
 
@@ -84,7 +87,7 @@ def get_subjectinfo(subject_id, base_dir, task_id, model_id):
 
 
 def analyze_openfmri_dataset(data_dir, subject=None, model_id=None,
-                             task_id=None, work_dir=None):
+                             task_id=None, output_dir=None):
     """Analyzes an open fmri dataset
 
     Parameters
@@ -379,21 +382,8 @@ def get_subs(subject_id, conds, model_id, task_id):
     modelfit.inputs.inputspec.model_serial_correlations = True
     modelfit.inputs.inputspec.film_threshold = 1000
 
-    if work_dir is None:
-        work_dir = os.path.join(os.getcwd(), 'working')
-    wf.base_dir = work_dir
-    datasink.inputs.base_directory = os.path.join(work_dir, 'output')
-    wf.config['execution'] = dict(crashdump_dir=os.path.join(work_dir,
-                                                             'crashdumps'),
-                                  stop_on_first_crash=True)
-    #wf.run('MultiProc', plugin_args={'n_procs': 4})
-    eg = wf.run('Linear')
-    wf.export('openfmri.py')
-    wf.write_graph(dotfilename='hgraph.dot', graph2use='hierarchical')
-    wf.write_graph(dotfilename='egraph.dot', graph2use='exec')
-    wf.write_graph(dotfilename='fgraph.dot', graph2use='flat')
-    wf.write_graph(dotfilename='ograph.dot', graph2use='orig')
-    return eg
+    datasink.inputs.base_directory = output_dir
+    return wf
 
 if __name__ == '__main__':
     import argparse
@@ -403,10 +393,33 @@ def get_subs(subject_id, conds, model_id, task_id):
     parser.add_argument('-s', '--subject', default=None)
     parser.add_argument('-m', '--model', default=1)
     parser.add_argument('-t', '--task', default=1)
+    parser.add_argument("-o", "--output_dir", dest="outdir",
+                        help="Output directory base")
+    parser.add_argument("-w", "--work_dir", dest="work_dir",
+                        help="Working directory base")
+    parser.add_argument("-p", "--plugin", dest="plugin",
+                        default='Linear',
+                        help="Plugin to use")
+    parser.add_argument("--plugin_args", dest="plugin_args",
+                        help="Plugin arguments")
     args = parser.parse_args()
-    eg = analyze_openfmri_dataset(data_dir=os.path.abspath(args.datasetdir),
+    outdir = args.outdir
+    work_dir = os.getcwd()
+    if args.work_dir:
+        work_dir = os.path.abspath(args.work_dir)
+    if outdir:
+        outdir = os.path.abspath(outdir)
+    else:
+        outdir = os.path.join(work_dir, 'output')
+    outdir = os.path.join(outdir, 'model%02d' % int(args.model),
+                          'task%03d' % int(args.task))
+    wf = analyze_openfmri_dataset(data_dir=os.path.abspath(args.datasetdir),
                                   subject=args.subject,
                                   model_id=int(args.model),
-                                  task_id=int(args.task))
-    from nipype.pipeline.utils import write_prov
-    g = write_prov(eg, format='turtle')
+                                  task_id=int(args.task),
+                                  output_dir=outdir)
+    wf.base_dir = work_dir
+    if args.plugin_args:
+        wf.run(args.plugin, plugin_args=eval(args.plugin_args))
+    else:
+        wf.run(args.plugin)
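
With the new arguments, the example can be driven entirely from the command line. An illustrative invocation (the subject label, paths, and plugin arguments are hypothetical):

::

  python fmri_openfmri.py --datasetdir ds107 -s sub001 -m 1 -t 1 \
      -o /scratch/openfmri/output -w /scratch/openfmri/work \
      -p MultiProc --plugin_args "{'n_procs': 4}"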

examples/fmri_spm.py

Lines changed: 10 additions & 5 deletions
@@ -15,10 +15,14 @@
 
 Import necessary modules from nipype."""
 
+from nipype import spm, fsl
+
+# In order to use this example with SPM's matlab common runtime
+# matlab_cmd = ('/Users/satra/Downloads/spm8/run_spm8.sh '
+#               '/Applications/MATLAB/MATLAB_Compiler_Runtime/v713/ script')
+# spm.SPMCommand.set_mlab_paths(matlab_cmd=matlab_cmd, use_mcr=True)
+
 import nipype.interfaces.io as nio           # Data i/o
-import nipype.interfaces.spm as spm          # spm
-import nipype.interfaces.matlab as mlab      # how to run matlab
-import nipype.interfaces.fsl as fsl          # fsl
 import nipype.interfaces.utility as util     # utility
 import nipype.pipeline.engine as pe          # pypeline engine
 import nipype.algorithms.rapidart as ra      # artifact detection
@@ -40,7 +44,8 @@
 fsl.FSLCommand.set_default_output_type('NIFTI')
 
 # Set the way matlab should be called
-mlab.MatlabCommand.set_default_matlab_cmd("matlab -nodesktop -nosplash")
+# import nipype.interfaces.matlab as mlab      # how to run matlab
+# mlab.MatlabCommand.set_default_matlab_cmd("matlab -nodesktop -nosplash")
 
 """The nipype tutorial contains data for two subjects. Subject data
 is in two subdirectories, ``s1`` and ``s2``. Each subject directory
@@ -382,7 +387,7 @@ def getstripdir(subject_id):
     """
 
 if __name__ == '__main__':
-    l1pipeline.run()
+    l1pipeline.run('MultiProc')
     # l2pipeline.run()
     # l1pipeline.write_graph()
 