Commit b448a26

📝 Document "👽 Update CLI to accept dashes and underscores where appropriate" (#303)

2 parents 2050552 + c0cd87a commit b448a26
13 files changed: +56 -57 lines changed

‎.circleci/config.yml‎

Lines changed: 8 additions & 5 deletions

@@ -33,7 +33,7 @@ commands:
             cd /home/circleci/build
             python -m venv ~/simple
             source ~/simple/bin/activate
-            pip install git+https://github.com/FCP-INDI/cpac.git@cabac8b820075876cfc28966071306b9314c139e semver
+            pip install git+https://github.com/FCP-INDI/cpac.git@81c33b52d72478bef54ccfdce05577d52ebf714c semver
             deactivate

   run-cpac-commands:
@@ -53,6 +53,9 @@ commands:
             mkdir -p docs/_sources/user/utils
             printf "Usage: cpac utils\n\`\`\`\`\`\`\`\`\`\`\`\`\`\`\`\`\`\n.. code-block:: console\n\n $ cpac utils --help\n\n" > docs/_sources/user/utils/help.rst
             cpac utils --help | sed -e "s/.*/ &/" >> docs/_sources/user/utils/help.rst
+            mkdir -p docs/_sources/user/group/feat/load-preset/unpaired-two
+            printf ".. code-block:: console\n\n $ cpac group feat load-preset unpaired-two --help\n\n" > docs/_sources/user/group/feat/load-preset/unpaired-two/help.rst
+            cpac group feat load-preset unpaired-two --help | sed -e "s/.*/ &/" >> docs/_sources/user/group/feat/load-preset/unpaired-two/help.rst
             deactivate
   prep-deploy:
     steps:
@@ -115,7 +118,7 @@ jobs:
   build-nightly:
     working_directory: /home/circleci/build
     docker:
-      - image: cimg/python:3.7
+      - image: cimg/python:3.10
     steps:
       - checkout:
           path: /home/circleci/build
@@ -131,7 +134,7 @@ jobs:
   build-version:
     working_directory: /home/circleci/build
     docker:
-      - image: cimg/python:3.7
+      - image: cimg/python:3.10
     steps:
       - checkout:
           path: /home/circleci/build
@@ -148,7 +151,7 @@ jobs:
   deploy-nightly:
     working_directory: /home/circleci/
     docker:
-      - image: cimg/python:3.7
+      - image: cimg/python:3.10
     steps:
       - attach_workspace:
           at: /home/circleci/
@@ -157,7 +160,7 @@ jobs:
   deploy-version:
     working_directory: /home/circleci/
     docker:
-      - image: cimg/python:3.7
+      - image: cimg/python:3.10
     steps:
       - attach_workspace:
           at: /home/circleci/

‎docs/_sources/developer/nodes.rst‎

Lines changed: 3 additions & 3 deletions

@@ -7,9 +7,9 @@ Nodes

 .. _n_cpus:

-A Nipype :py:class:`~nipype.pipeline.engine.nodes.Node` has an initialization parameter ``mem_gb`` that differs from the :doc:`commandline option </user/run/help>` ``--mem_gb``. While the commandline option is **a limit**, the Node initialization parameter is **an estimate** of the most memory that Node will consume when run. The Node parameter is not a limit; rather, this value is used to allocate system resources at runtime.
+A Nipype :py:class:`~nipype.pipeline.engine.nodes.Node` has an initialization parameter ``mem_gb`` that differs from the :doc:`commandline option </user/run/help>` ``--mem-gb``. While the commandline option is **a limit**, the Node initialization parameter is **an estimate** of the most memory that Node will consume when run. The Node parameter is not a limit; rather, this value is used to allocate system resources at runtime.

-Conversely, the commandline option ``--n_cpus`` is **a limit** and the Node initialization parameter ``n_procs`` is **also a limit** of the maximum number of threads a Node will be permmitted to consume.
+Conversely, the commandline option ``--n-cpus`` is **a limit** and the Node initialization parameter ``n_procs`` is **also a limit** of the maximum number of threads a Node will be permmitted to consume.

 C-PAC automatically creates a JSON-like file called ``callback.log`` (via the function :py:func:`~CPAC.utils.monitoring.log_nodes_cb`) when running. This file includes for each Node:

@@ -18,7 +18,7 @@ C-PAC automatically creates a JSON-like file called ``callback.log`` (via the fu
 * specified maximum number of threads per Node, and
 * threads used at runtime.

-A ``callback.log`` can be provided to the pipeline configuration file (see :doc:`/user/compute_config`) or with the commandline flag ``--runtime_usage``. If a callback log is provided in the pipeline configuration, nodes with names that match nodes recorded in that pipeline log will have their memory estimates overridden by the values in the callback log plus a buffer percent (provided with the ``--runtime_buffer`` flag or in the pipeline configuration file).
+A ``callback.log`` can be provided to the pipeline configuration file (see :doc:`/user/compute_config`) or with the commandline flag ``--runtime-usage``. If a callback log is provided in the pipeline configuration, nodes with names that match nodes recorded in that pipeline log will have their memory estimates overridden by the values in the callback log plus a buffer percent (provided with the ``--runtime-buffer`` flag or in the pipeline configuration file).

 When a developer creates or modifies a Node in C-PAC, a ``mem_gb`` and ``n_procs`` argument should be provided unless the respective defaults of 0.2 and None (number of available system cores) are expected to be sufficient. When testing, the ``mem_gb`` and ``n_procs`` arguments should be adjusted if the observed memory and/or thread usage of a Node exceeds the estimate.
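The estimate-override arithmetic described in the ``nodes.rst`` changes above is simple enough to sketch in Python. This is an illustrative reconstruction only (the function and parameter names are made up, not C-PAC's actual API): when a node appears in a provided callback log, its memory estimate is replaced by the previously observed usage plus a buffer percent (default 10).

```python
def adjusted_estimate(default_estimate_gb, observed_gb=None, buffer_percent=10):
    """Return the memory estimate (in GB) to allocate for a node.

    If the node was found in a callback log from a previous run, its
    observed usage plus a buffer percent overrides the default estimate;
    otherwise the default estimate is kept as-is.
    Hypothetical helper -- names are illustrative only.
    """
    if observed_gb is None:
        return default_estimate_gb
    return observed_gb * (1 + buffer_percent / 100)

# A node at the 0.2 GB default estimate that actually used 1.0 GB
# last run, with the default 10% buffer:
print(adjusted_estimate(0.2, observed_gb=1.0))  # -> 1.1
```

Note that, per the text above, this buffered value is still only an allocation estimate; the ``--mem-gb`` commandline option remains the hard limit.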

‎docs/_sources/user/compute_config.rst‎

Lines changed: 2 additions & 2 deletions

@@ -20,8 +20,8 @@ Computer Settings

 `Open image <../_static/flowcharts/observed-usage.svg>`_

-#. **Callback log - [text]:** The path to a callback log file from a previous run, including any resource-management parameters that will be applied in this run, like ``n_cpus`` and ``num_ants_threads``. This file is used override memory estimates with previously observed memory usage. Can be overridden with the commandline flag ``--runtime_usage``.
-#. **Buffer - [percent]:** A percent of the previously observed memory usage that is to be added to the memory estimate. Default: 10. Can be overridden with the commandline flag ``--runtime_buffer``.
+#. **Callback log - [text]:** The path to a callback log file from a previous run, including any resource-management parameters that will be applied in this run, like ``n_cpus`` and ``num_ants_threads``. This file is used override memory estimates with previously observed memory usage. Can be overridden with the commandline flag ``--runtime-usage``.
+#. **Buffer - [percent]:** A percent of the previously observed memory usage that is to be added to the memory estimate. Default: 10. Can be overridden with the commandline flag ``--runtime-buffer``.

 #. **Number of Participants to Run Simultaneously - [integer]:** This number depends on computing resources.

‎docs/_sources/user/cpac.rst‎

Lines changed: 8 additions & 7 deletions

@@ -88,32 +88,33 @@ To run C-PAC with a pipeline configuration file other than one of the pre-config

 .. code-block:: console

-    cpac run /Users/You/local_bids_data /Users/You/some_folder_for_outputs participant --pipeline_file /Users/You/Documents/pipeline_config.yml
+    cpac run /Users/You/local_bids_data /Users/You/some_folder_for_outputs participant --pipeline-file /Users/You/Documents/pipeline_config.yml

 Finally, to run C-PAC with a specific data configuration file (instead of providing a BIDS data directory):

 .. code-block:: console

-    cpac run /Users/You/any_directory /Users/You/some_folder_for_outputs participant --data_config_file /Users/You/Documents/data_config.yml
+    cpac run /Users/You/any_directory /Users/You/some_folder_for_outputs participant --data-config-file /Users/You/Documents/data_config.yml

-Note: we are still providing the postionally-required ``bids_dir`` input parameter. However C-PAC will not look for data in this directory when you provide a data configuration YAML with the ``--data_config_file`` flag. Providing ``.`` or ``$PWD`` will simply pass the present working directory. In addition, if the dataset in your data configuration file is not in BIDS format, just make sure to add the ``--skip_bids_validator`` flag at the end of your command to bypass the BIDS validation process.
+Note: we are still providing the postionally-required ``bids_dir`` input parameter. However C-PAC will not look for data in this directory when you provide a data configuration YAML with the ``--data-config-file`` flag. Providing ``.`` or ``$PWD`` will simply pass the present working directory. In addition, if the dataset in your data configuration file is not in BIDS format, just make sure to add the ``--skip-bids-validator`` flag at the end of your command to bypass the BIDS validation process.

 The full list of parameters and options that can be passed to C-PAC are shown below:

 .. include:: /user/run/help.rst

 .. include:: /user/utils/help.rst

-Note that any of the optional arguments above will over-ride any pipeline settings in the default pipeline or in the pipeline configuration file you provide via the ``--pipeline_file`` parameter.
+Note that any of the optional arguments above will over-ride any pipeline settings in the default pipeline or in the pipeline configuration file you provide via the ``--pipeline-file`` parameter.

 **Further usage notes:**

-* You can run only anatomical preprocessing easily, without modifying your data or pipeline configuration files, by providing the ``--anat_only`` flag.
+* You can run only anatomical preprocessing easily, without modifying your data or pipeline configuration files, by providing the ``--anat-only`` flag.

-* As stated, the default behavior is to read data that is organized in the BIDS format. This includes data that is in Amazon AWS S3 by using the format ``s3://<bucket_name>/<bids_dir>`` for the ``bids_dir`` command line argument. Outputs can be written to S3 using the same format for the ``output_dir``. Credentials for accessing these buckets can be specified on the command line (using ``--aws_input_creds`` or ``--aws_output_creds``).
+* As stated, the default behavior is to read data that is organized in the BIDS format. This includes data that is in Amazon AWS S3 by using the format ``s3://<bucket_name>/<bids_dir>`` for the ``bids_dir`` command line argument. Outputs can be written to S3 using the same format for the ``output_dir``. Credentials for accessing these buckets can be specified on the command line (using ``--aws-input-creds`` or ``--aws-output-creds``).

-* When the app is run, a data configuration file is written to the working directory. This directory can be specified with ``--working_dir`` or the directory from which you run ``cpac`` will be used. This file can be passed into subsequent runs, which avoids the overhead of re-parsing the BIDS input directory on each run (i.e. for cluster or cloud runs). These files can be generated without executing the C-PAC pipeline using the ``test_run`` command line argument.
+* When the app is run, a data configuration file is written to the working directory. This directory can be specified with ``--working-dir`` or the directory from which you run ``cpac`` will be used. This file can be passed into subsequent runs, which avoids the overhead of re-parsing the BIDS input directory on each run (i.e. for cluster or cloud runs). These files can be generated without executing the C-PAC pipeline using ``test_config`` as the analysis level.

 * The ``participant_label`` and ``participant_ndx`` arguments allow the user to specify which of the many datasets should be processed, which is useful when parallelizing the run of multiple participants.

 * If you want to pass runtime options to your container plaform (Docker or Singularity), you can pass them with ``-o`` or ``--container_options``.
+.. TODO: Update cpac to handle `-`s and `_`s like C-PAC
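The ``TODO`` note above concerns making ``cpac`` treat dashes and underscores interchangeably, as C-PAC's own CLI now does. One common approach is to normalize long-flag names before handing them to the parser; the sketch below is a hypothetical illustration of that technique, not the actual ``cpac`` implementation.

```python
import argparse

def normalize_flags(argv):
    """Rewrite long flags so underscores and dashes are interchangeable.

    Hypothetical helper: '--pipeline_file' becomes '--pipeline-file',
    leaving positional arguments and values after '=' untouched.
    """
    out = []
    for arg in argv:
        if arg.startswith("--"):
            name, sep, value = arg.partition("=")
            arg = name.replace("_", "-") + sep + value
        out.append(arg)
    return out

parser = argparse.ArgumentParser()
parser.add_argument("--pipeline-file")
# A user typing the old underscore spelling still hits the dashed flag:
args = parser.parse_args(normalize_flags(["--pipeline_file", "pipe.yml"]))
print(args.pipeline_file)  # -> pipe.yml
```

Normalizing only tokens that start with ``--`` keeps positional arguments such as ``bids_dir`` paths, which may legitimately contain underscores, out of the rewrite.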

‎docs/_sources/user/docker.rst‎

Lines changed: 8 additions & 8 deletions

@@ -58,9 +58,9 @@ To run the C-PAC Docker container with a pipeline configuration file other than
     -v /tmp:/tmp \
     -v /Users/You/Documents:/configs \
     -v /Users/You/resources:/resources \
-    fcpindi/c-pac:latest /bids_dataset /outputs participant --pipeline_file /configs/pipeline_config.yml
+    fcpindi/c-pac:latest /bids_dataset /outputs participant --pipeline-file /configs/pipeline_config.yml

-In this case, we need to map the directory containing the pipeline configuration file ``/Users/You/Documents`` to a Docker image virtual directory ``/configs``. Note we are using this ``/configs`` directory in the ``--pipeline_file`` input flag. In addition, if there are any ROIs, masks, or input files listed in your pipeline configuration file, the directory these are in must be mapped as well- assuming ``/Users/You/resources`` is your directory of ROI and/or mask files, we map it with ``-v /Users/You/resources:/resources``. In the pipeline configuration file you are providing, these ROI and mask files must be listed as ``/resources/ROI.nii.gz`` (etc.) because we have mapped ``/Users/You/resources`` to ``/resources``.
+In this case, we need to map the directory containing the pipeline configuration file ``/Users/You/Documents`` to a Docker image virtual directory ``/configs``. Note we are using this ``/configs`` directory in the ``--pipeline-file`` input flag. In addition, if there are any ROIs, masks, or input files listed in your pipeline configuration file, the directory these are in must be mapped as well- assuming ``/Users/You/resources`` is your directory of ROI and/or mask files, we map it with ``-v /Users/You/resources:/resources``. In the pipeline configuration file you are providing, these ROI and mask files must be listed as ``/resources/ROI.nii.gz`` (etc.) because we have mapped ``/Users/You/resources`` to ``/resources``.

 Finally, to run the Docker container with a specific data configuration file (instead of providing a BIDS data directory):

@@ -71,22 +71,22 @@ Finally, to run the Docker container with a specific data configuration file (in
     -v /Users/You/some_folder:/outputs \
     -v /tmp:/tmp \
     -v /Users/You/Documents:/configs \
-    fcpindi/c-pac:latest /bids_dataset /outputs participant --data_config_file /configs/data_config.yml
+    fcpindi/c-pac:latest /bids_dataset /outputs participant --data-config-file /configs/data_config.yml

-Note: we are still providing ``/bids_dataset`` to the ``bids_dir`` input parameter. However, we have mapped this to any directory on your machine, as C-PAC will not look for data in this directory when you provide a data configuration YAML with the ``--data_config_file`` flag. In addition, if the dataset in your data configuration file is not in BIDS format, just make sure to add the ``--skip_bids_validator`` flag at the end of your command to bypass the BIDS validation process.
+Note: we are still providing ``/bids_dataset`` to the ``bids_dir`` input parameter. However, we have mapped this to any directory on your machine, as C-PAC will not look for data in this directory when you provide a data configuration YAML with the ``--data-config-file`` flag. In addition, if the dataset in your data configuration file is not in BIDS format, just make sure to add the ``--skip-bids-validator`` flag at the end of your command to bypass the BIDS validation process.

 The full list of parameters and options that can be passed to the Docker container are shown below:

 .. include:: /user/run/help.rst

-Note that any of the optional arguments above will over-ride any pipeline settings in the default pipeline or in the pipeline configuration file you provide via the ``--pipeline_file`` parameter.
+Note that any of the optional arguments above will over-ride any pipeline settings in the default pipeline or in the pipeline configuration file you provide via the ``--pipeline-file`` parameter.

 **Further usage notes:**

-* You can run only anatomical preprocessing easily, without modifying your data or pipeline configuration files, by providing the ``--anat_only`` flag.
+* You can run only anatomical preprocessing easily, without modifying your data or pipeline configuration files, by providing the ``--anat-only`` flag.

-* As stated, the default behavior is to read data that is organized in the BIDS format. This includes data that is in Amazon AWS S3 by using the format ``s3://<bucket_name>/<bids_dir>`` for the ``bids_dir`` command line argument. Outputs can be written to S3 using the same format for the ``output_dir``. Credentials for accessing these buckets can be specified on the command line (using ``--aws_input_creds`` or ``--aws_output_creds``).
+* As stated, the default behavior is to read data that is organized in the BIDS format. This includes data that is in Amazon AWS S3 by using the format ``s3://<bucket_name>/<bids_dir>`` for the ``bids_dir`` command line argument. Outputs can be written to S3 using the same format for the ``output_dir``. Credentials for accessing these buckets can be specified on the command line (using ``--aws-input-creds`` or ``--aws-output-creds``).

-* When the app is run, a data configuration file is written to the working directory. This file can be passed into subsequent runs, which avoids the overhead of re-parsing the BIDS input directory on each run (i.e. for cluster or cloud runs). These files can be generated without executing the C-PAC pipeline using the test_run command line argument.
+* When the app is run, a data configuration file is written to the working directory. This file can be passed into subsequent runs, which avoids the overhead of re-parsing the BIDS input directory on each run (i.e. for cluster or cloud runs). These files can be generated without executing the C-PAC pipeline using ``test_config`` as the analysis level.

 * The ``participant_label`` and ``participant_ndx`` arguments allow the user to specify which of the many datasets should be processed, which is useful when parallelizing the run of multiple participants.
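The ``s3://<bucket_name>/<bids_dir>`` convention mentioned in the usage notes above can be split with Python's standard library. This is a minimal sketch under stated assumptions (the helper name is made up, and C-PAC itself may parse these paths differently):

```python
from urllib.parse import urlparse

def split_s3_path(path):
    """Split 's3://bucket/prefix' into (bucket, prefix).

    Raises ValueError for non-S3 paths so local directories can be
    handled by a separate code path. Illustrative helper only.
    """
    parsed = urlparse(path)
    if parsed.scheme != "s3":
        raise ValueError(f"not an S3 path: {path}")
    return parsed.netloc, parsed.path.lstrip("/")

print(split_s3_path("s3://my-bucket/bids_dir"))  # -> ('my-bucket', 'bids_dir')
```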

‎docs/_sources/user/group_fsl_feat.rst‎

Lines changed: 8 additions & 13 deletions

@@ -45,7 +45,7 @@ You can generate any of these presets using the C-PAC command-line interface (CL

 .. code-block:: console

-    cpac group feat load_preset <preset type>
+    cpac group feat load-preset <preset type>

 Enter any of the following in place of :file:`<preset type>` for the type of analysis you want to run:

@@ -59,26 +59,21 @@ You can get more information about the required inputs for each preset with the

 .. code-block:: console

-    cpac group feat load_preset unpaired_two --help
+    cpac group feat load-preset unpaired-two --help

 This will produce:

-.. code-block:: console
-
-    Usage: cpac group feat load_preset unpaired_two [OPTIONS] GROUP_PARTICIPANTS
-                                                    Z_THRESH P_THRESH PHENO_FILE
-                                                    PHENO_SUB COVARIATE MODEL_NAME
-    Options:
-      --output_dir TEXT
-      --help  Show this message and exit.
+.. literalinclude:: /user/group/feat/load-preset/unpaired-two/help.rst
+    :language: bash
+    :start-at: Usage:

 Following this, you could generate a ready-to-run two-sample unpaired t-test by running the following, assuming the phenotype CSV has a column of participant IDs named "subject_id" and a column named "diagnosis", which is the covariate you wish to test:

 .. code-block:: console

-    cpac group feat load_preset unpaired_two /path/to/group_participant_list.txt 2.3 0.05
+    cpac group feat load-preset unpaired-two /path/to/group_participant_list.txt 2.3 0.05
     /path/to/phenotypic_file.csv subject_id diagnosis grp_analysis1
-    --output_dir /path/to/output_dir
+    --output-dir /path/to/output_dir

 You will receive a message like this shortly after:

@@ -133,7 +128,7 @@ From the terminal

 Similar to the pipeline configuration YAML file, the group configuration YAML file allows you to configure your runs with key-value combinations. From terminal, you can quickly generate a default group configuration YAML file template in the directory you are in: ::

-    cpac utils group_config new_template
+    cpac utils group-config new-template

 This will generate a group configuration file that you can then modify to make your selections. See below: ::

‎docs/_sources/user/group_pybasc.rst‎

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ From terminal

 Similar to the pipeline configuration YAML file, the group configuration YAML file allows you to configure your runs with key-value combinations. From terminal, you can quickly generate a default group configuration YAML file template in the directory you are in: ::

-    cpac utils group_config new_template
+    cpac utils group-config new-template

 This will generate a group configuration file that you can then modify to make your selections. See below: ::
