Once conda finishes creating the virtual environment, activate `dsstdeface`.

```bash
conda activate dsstdeface
```

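If you want to confirm that the environment is active before launching any defacing jobs, a quick check such as the following can help. This is plain conda usage rather than part of the pipeline itself:

```bash
# the active environment is marked with an asterisk in this listing
conda env list

# the interpreter should resolve to the dsstdeface environment
which python
```
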
## Usage

To deface anatomical scans in the dataset, run the `src/run.py` script. From within the `dsst-defacing-pipeline` cloned directory, run the following command to see the help message.

```text
% python src/run.py -h

usage: run.py [-h] [-n N_CPUS] [-p PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
              [-s SESSION_ID [SESSION_ID ...]] [--no-clean]
              bids_dir output_dir

Deface anatomical scans for a given BIDS dataset or a subject directory in
BIDS format.

positional arguments:
  bids_dir              The directory with the input dataset formatted
                        according to the BIDS standard.
  output_dir            The directory where the output files should be stored.

optional arguments:
  -h, --help            show this help message and exit
  -n N_CPUS, --n-cpus N_CPUS
                        Number of parallel processes to run when there is more
                        than one folder. Defaults to 1, meaning "serial
                        processing".
  -p PARTICIPANT_LABEL [PARTICIPANT_LABEL ...], --participant-label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
                        The label(s) of the participant(s) that should be
                        defaced. The label corresponds to
                        sub-<participant_label> from the BIDS spec (so it does
                        not include "sub-"). If this parameter is not provided
                        all subjects should be analyzed. Multiple participants
                        can be specified with a space separated list.
  -s SESSION_ID [SESSION_ID ...], --session-id SESSION_ID [SESSION_ID ...]
                        The ID(s) of the session(s) that should be defaced.
                        The label corresponds to ses-<session_id> from the
                        BIDS spec (so it does not include "ses-"). If this
                        parameter is not provided all subjects should be
                        analyzed. Multiple sessions can be specified with a
                        space separated list.
  --no-clean            If this argument is provided, then AFNI intermediate
                        files are preserved.
```

The script can be run serially on a BIDS dataset or in parallel at the subject/session level. Both methods are described below with example commands. For readability, the example commands use two bash variables, `INPUT_DIR` and `OUTPUT_DIR`, as placeholders for the paths to the BIDS input dataset and the desired defacing output directory:

```bash
INPUT_DIR="<path/to/BIDS/input/dataset>"
OUTPUT_DIR="<path/to/desired/defacing/output/directory>"
```

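For orientation, the pipeline expects `INPUT_DIR` to point to a BIDS dataset in which anatomical scans live under `sub-<participant_label>` (and, where applicable, `ses-<session_id>`) directories. A minimal input layout might look like the sketch below; the filenames and the presence of session folders are illustrative and will differ across datasets:

```text
${INPUT_DIR}/
├── dataset_description.json
├── sub-01/
│   └── ses-01/
│       └── anat/
│           └── sub-01_ses-01_T1w.nii.gz
└── sub-02/
    └── ses-01/
        └── anat/
            └── sub-02_ses-01_T1w.nii.gz
```
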
### Option 1: Serial defacing

If you have a small dataset with fewer than 10 subjects, it might be easiest to run the defacing algorithm serially.

```bash
# activate your conda environment
conda activate dsstdeface

# once your conda environment is active, execute the following
python src/run.py ${INPUT_DIR} ${OUTPUT_DIR}
```

### Option 2: Parallel defacing

If you have a dataset with over 10 subjects, since each defacing job is independent of the others, it might be faster to run the pipeline in parallel for every subject/session in the dataset using the `-n/--n-cpus` option. The following example command runs the pipeline occupying 10 processors at a time.

```bash
# activate your conda environment
conda activate dsstdeface

# once your conda environment is active, execute the following
python src/run.py ${INPUT_DIR} ${OUTPUT_DIR} -n 10
```

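The value passed to `-n/--n-cpus` should generally not exceed the number of processors available on the machine. On Linux systems, one way to match it to the hardware automatically is sketched below; the use of `nproc` is a convenience, not something the pipeline requires:

```bash
# use as many parallel processes as there are available cores
python src/run.py ${INPUT_DIR} ${OUTPUT_DIR} -n "$(nproc)"
```
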
Additionally, the pipeline can be run on a single subject or session using the `-p/--participant-label` and `-s/--session-id` options, respectively.
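
For example, to deface every session of a single participant, or just one session of that participant, commands along the following lines can be used. The labels `01` here are placeholders; substitute the participant and session labels used in your dataset (without the `sub-`/`ses-` prefixes):

```bash
# deface all sessions of a single participant
python src/run.py ${INPUT_DIR} ${OUTPUT_DIR} -p 01

# deface a single session of a single participant
python src/run.py ${INPUT_DIR} ${OUTPUT_DIR} -p 01 -s 01
```

Per-participant runs also make it easy to fan the work out on an HPC cluster. On systems that provide the `swarm` utility (for example, the NIH HPC), a rough sketch of that workflow is shown below; the repository path and swarm options are assumptions that may need to be adapted to your setup:

```bash
# build a swarm file with one defacing command per subject directory
for i in ${INPUT_DIR}/sub-*; do
  SUBJ=$(basename "$i" | sed 's/^sub-//')
  echo "python dsst-defacing-pipeline/src/run.py ${INPUT_DIR} ${OUTPUT_DIR} -p ${SUBJ}"
done > defacing_parallel_subject_level.swarm

# submit the swarm file, merging stdout/stderr into one log directory
swarm -f defacing_parallel_subject_level.swarm --merge-output --logdir ${OUTPUT_DIR}/swarm_log
```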

## Visual Inspection
