docs/aws.md (+3 -3)
@@ -39,7 +39,7 @@ SSO credentials and instance profile credentials are the most recommended becaus

 ## AWS IAM policies

-[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to defines permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.
+[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to define permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated with the AWS credentials.

 Minimal permissions policies to be attached to the AWS account used by Nextflow are:
@@ -366,7 +366,7 @@ sudo service docker start
 sudo usermod -a -G docker ec2-user
 ```

-You must logging out and logging back in again to use the new `ec2-user` permissions.
+You must log out and log back in to use the new `ec2-user` permissions.

 These steps must be done *before* creating the AMI from the current EC2 instance.
docs/azure.md (+5 -5)
@@ -29,7 +29,7 @@ To run pipelines with Azure Batch:

 - Set `process.executor` to `azurebatch` to make Nextflow submit tasks to Azure Batch.

-- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_STORAGE>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.
+- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_CONTAINER>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.

 5. Launch your pipeline with the above configuration:
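Taken together, the two settings above amount to a short configuration file. The following is a minimal sketch, not from the PR itself; the placeholder names in angle brackets follow the document's own convention and must be replaced with real values:

```nextflow
// Sketch of a nextflow.config for Azure Batch (placeholders are hypothetical).
process.executor = 'azurebatch'

// Work directory on a blob container in your storage account.
workDir = 'az://<BLOB_CONTAINER>/work'

azure {
    storage {
        accountName = '<STORAGE_ACCOUNT_NAME>'
    }
    batch {
        accountName = '<BATCH_ACCOUNT_NAME>'
        location    = '<BATCH_ACCOUNT_LOCATION>'
    }
}
```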
@@ -152,7 +152,7 @@ azure {

 Replace the following:

-- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity object ID
+- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity client ID
 - `STORAGE_ACCOUNT_NAME`: your Azure Storage account name
 - `BATCH_ACCOUNT_NAME`: your Azure Batch account name
 - `BATCH_ACCOUNT_LOCATION`: your Azure Batch account location
@@ -289,7 +289,7 @@ This section describes how to configure and use Azure Batch with Nextflow for ef

 Nextflow integrates with Azure Batch by mapping its execution model to Azure Batch's structure. A Nextflow process corresponds to an Azure Batch job, and every execution of that process (a Nextflow task) becomes an Azure Batch task. These Azure Batch tasks are executed on compute nodes within an Azure Batch pool, which is a collection of virtual machines that can scale up or down based on an autoscale formula.

-Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.
+Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create the pool if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.

 An Azure Batch task is created for each Nextflow task. This task first downloads the necessary input files from Azure Blob Storage to its assigned compute node. It then runs the process script. Finally, it uploads any output files back to Azure Blob Storage.
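The pool assignment described above can be sketched in configuration. This is a hedged example, not taken from the PR; the pool name `my-pool` and the process name pattern are hypothetical:

```nextflow
// Route matching processes to a named Azure Batch pool (names are placeholders).
process {
    withName: 'align.*' {
        queue = 'my-pool'
    }
}

// Allow Nextflow to create the pool if it does not already exist.
azure.batch.allowPoolCreation = true

// Alternatively, let Nextflow create pools sized to each process's
// CPU and memory requirements:
// azure.batch.autoPoolMode = true
```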
@@ -492,7 +492,7 @@ The `azure.batch.pools.<POOL_NAME>.scaleFormula` setting can be used to specify

 ### Task authentication

-By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and cat be configured using `azure.storage.tokenDuration` in your configuration.
+By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and can be configured using `azure.storage.tokenDuration` in your configuration.

 :::{versionadded} 25.05.0-edge
 :::
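For long-running tasks, the default 48-hour token lifetime mentioned above can be extended. A minimal sketch (the chosen duration is illustrative):

```nextflow
// Extend the SAS token lifetime beyond the 48-hour default.
azure {
    storage {
        tokenDuration = '96h'
    }
}
```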
@@ -529,7 +529,7 @@ For example, consider a *Standard_D4d_v5* machine with 4 vCPUs, 16 GB of memory,

 - If a process requests `cpus 4`, `memory 16.GB`, or `disk 150.GB`, four task slots are allocated (100% of resources), allowing one task to run on the node.

-Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above my become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.
+Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above may become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.

 :::{warning}
 Azure virtual machines come with fixed storage disks that are not expandable. Tasks will fail if the tasks running concurrently on a node use more storage than the machine has available.
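Under the slot model above, resource requests that mirror actual usage keep two tasks from overcommitting a node. A hedged sketch for the *Standard_D4d_v5* example (the process name is hypothetical):

```nextflow
// Two such tasks fit on a 4-vCPU / 16 GB / 150 GB node without
// overcommitting memory or disk (process name is a placeholder).
process {
    withName: 'sort_reads' {
        cpus   = 2       // two of the four task slots
        memory = 8.GB    // half of the node's 16 GB
        disk   = 75.GB   // half of the node's 150 GB
    }
}
```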
docs/cache-and-resume.md (+1 -1)
@@ -32,7 +32,7 @@ The task hash is computed from the following metadata:
 - Whether the task is a {ref}`stub run <process-stub>`

 :::{note}
-Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically usually aligns with task retries (i.e., task attempts), however this is not guaranteed.
+Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically aligns with task retries (i.e., task attempts); however, this is not guaranteed.
docs/cli.md (+4 -4)
@@ -2,7 +2,7 @@

 # Command line

-Nextflow provides a robust command line interface (CLI) for the management and execution pipelines.
+Nextflow provides a robust command line interface (CLI) for the management and execution of pipelines.

 Simply run `nextflow` with no options or `nextflow -h` to see the list of available top-level options and commands. See {ref}`cli-reference` for the full list of subcommands with examples.
@@ -36,7 +36,7 @@ Set JVM properties.
 $ nextflow -Dkey=value COMMAND [arg...]
 ```

-This options allows the definition of custom Java system properties that can be used to properly configure or fine tuning the JVM instance used by the Nextflow runtime.
+This option allows you to define custom Java system properties that can be used to configure or fine-tune the JVM instance used by the Nextflow runtime.

 For specifying other JVM level options, please refer to the {ref}`config-env-vars` section.
@@ -96,7 +96,7 @@ Sets the path of the nextflow log file.
 $ nextflow -log custom.log COMMAND [arg...]
 ```

-The `-log` option takes a path of the new log file which to be used instead of the default `.nextflow.log` or to save logs files to another directory.
+The `-log` option takes the path of a log file to use instead of the default `.nextflow.log`, for example to save log files to another directory.

 - Save all execution logs to the custom `/var/log/nextflow.log` file:
@@ -144,7 +144,7 @@ Print the Nextflow version information.
 $ nextflow -v
 ```

-The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option in addition prints out the citation reference and official website.
+The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option additionally prints out the citation reference and official website.
docs/conda.md (+3 -3)
@@ -43,7 +43,7 @@ Alternatively, it can be specified by setting the variable `NXF_CONDA_ENABLED=tr

 ### Use Conda package names

-Conda package names can specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
+Conda package names can be specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:

 ```nextflow
 process hello {
@@ -144,9 +144,9 @@ If you're using Mamba or Micromamba, use this command instead:
 micromamba env export --explicit > spec-file.txt
 ```

-You can also download Conda lock files from [Wave](https://seqera.io/wave/) build pages.
+You can also download Conda lock files from [Wave](https://seqera.io/wave/) container build pages.

-These files list every package and its dependencies, so Conda doesn't need to resolve the environment. This makes environment setup faster and more reproducible.
+These files list every package and its dependencies, so Conda doesn't need to perform dependency resolution. This makes environment setup faster and more reproducible.

 Each file includes package URLs and, optionally, an MD5 hash for verifying file integrity:
docs/config.md (+2 -2)
@@ -221,7 +221,7 @@ process {
 }
 ```

-The above configuration snippet sets 2 cpus for every process labeled as `hello` and 4 cpus to every process *not*label as `hello`. It also specifies the `long` queue for every process whose name does *not* start with `align`.
+The above configuration snippet sets 2 cpus for every process labeled as `hello` and 4 cpus for every process *not* labeled as `hello`. It also specifies the `long` queue for every process whose name does *not* start with `align`.
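The configuration snippet this paragraph describes is truncated in the diff. As a sketch, selectors along these lines produce the described behavior, using the `!` prefix that Nextflow supports for negating `withLabel` and `withName` selectors:

```nextflow
process {
    withLabel: 'hello'    { cpus = 2 }      // processes labeled hello
    withLabel: '!hello'   { cpus = 4 }      // processes NOT labeled hello
    withName:  '!align.*' { queue = 'long' } // names not starting with "align"
}
```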
docs/container.md (+6 -6)
@@ -23,7 +23,7 @@ You will need Apptainer installed on your execution environment e.g. your comput

 ### Images

-Apptainer makes use of a container image file, which physically contains the container. Refer to the [Apptainer documentation](https://apptainer.org/docs) to learn how create Apptainer images.
+Apptainer makes use of a container image file, which physically contains the container. Refer to the [Apptainer documentation](https://apptainer.org/docs) to learn how to create Apptainer images.

 Apptainer allows paths that do not currently exist within the container to be created and mounted dynamically by specifying them on the command line. However this feature is only supported on hosts that support the [Overlay file system](https://en.wikipedia.org/wiki/OverlayFS) and is not enabled by default.
@@ -41,10 +41,10 @@ The integration for Apptainer follows the same execution model implemented for D
 nextflow run <your script> -with-apptainer [apptainer image file]
 ```

-Every time your script launches a process execution, Nextflow will run it into a Apptainer container created by using the specified image. In practice Nextflow will automatically wrap your processes and launch them by running the `apptainer exec` command with the image you have provided.
+Every time your script launches a process execution, Nextflow will run it in an Apptainer container created from the specified image. In practice, Nextflow will automatically wrap your processes and launch them by running the `apptainer exec` command with the image you have provided.

 :::{note}
-A Apptainer image can contain any tool or piece of software you may need to carry out a process execution. Moreover, the container is run in such a way that the process result files are created in the host file system, thus it behaves in a completely transparent manner without requiring extra steps or affecting the flow in your pipeline.
+An Apptainer image can contain any tool or piece of software you may need to carry out a process execution. Moreover, the container is run in such a way that the process result files are created in the host file system, thus it behaves in a completely transparent manner without requiring extra steps or affecting the flow in your pipeline.
 :::

 If you want to avoid entering the Apptainer image as a command line parameter, you can define it in the Nextflow configuration file. For example you can add the following lines in the configuration file:
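The configuration-file alternative mentioned above can be sketched as follows; the image path is a hypothetical placeholder:

```nextflow
// Sketch: declare the container image in nextflow.config instead of
// passing it on the command line (image path is a placeholder).
process.container = '/path/to/image.sif'
apptainer.enabled = true
```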
@@ -124,7 +124,7 @@ Nextflow caches Apptainer images in the `apptainer` directory, in the pipeline w

 Nextflow uses the library directory to determine the location of Apptainer containers. The library directory can be defined using the `apptainer.libraryDir` configuration setting or the `NXF_APPTAINER_LIBRARYDIR` environment variable. The configuration file option overrides the environment variable if both are set.

-Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be writable path where container images can added for caching purposes.
+Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be a writable path where container images can be added for caching purposes.

 :::{warning}
 When using a compute cluster, the Apptainer cache directory must reside in a shared filesystem accessible to all compute nodes.
@@ -573,7 +573,7 @@ In the above example replace `/path/to/singularity.img` with any Singularity ima
 Read the {ref}`config-page` page to learn more about the configuration file and how to use it to configure your pipeline execution.

 :::{note}
-Unlike Docker, Nextflow does not automatically mount host paths in the container when using Singularity. It expects that the paths are configure and mounted system wide by the Singularity runtime. If your Singularity installation allows user defined bind points, read the {ref}`Singularity configuration <config-singularity>` section to learn how to enable Nextflow auto mounts.
+Unlike Docker, Nextflow does not automatically mount host paths in the container when using Singularity. It expects that the paths are configured and mounted system wide by the Singularity runtime. If your Singularity installation allows user defined bind points, read the {ref}`Singularity configuration <config-singularity>` section to learn how to enable Nextflow auto mounts.
 :::

 :::{warning}
@@ -657,7 +657,7 @@ Nextflow caches Singularity images in the `singularity` directory, in the pipeli

 Nextflow uses the library directory to determine the location of Singularity images. The library directory can be defined using the `singularity.libraryDir` configuration setting or the `NXF_SINGULARITY_LIBRARYDIR` environment variable. The configuration file option overrides the environment variable if both are set.

-Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be writable path where container images can added for caching purposes.
+Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be a writable path where container images can be added for caching purposes.

 :::{warning}
 When using a compute cluster, the Singularity cache directory must reside in a shared filesystem accessible to all compute nodes.
docs/developer-env.md (+3 -3)
@@ -9,7 +9,7 @@ Setting up a Nextflow development environment is a prerequisite for creating, te
 - {ref}`devenv-vscode`: A versatile code editor that enhances your Nextflow development with features like syntax highlighting and debugging.
 - {ref}`devenv-extensions`: The VS Code marketplace offers a variety of extensions to enhance development. The {ref}`Nextflow extension <devenv-nextflow>` is specifically designed to enhance Nextflow development with diagnostics, hover hints, code navigation, code completion, and more.
 - {ref}`devenv-docker`: A containerization platform that ensures your Nextflow workflows run consistently across different environments by packaging dependencies into isolated containers.
-- {ref}`devenv-git`: A version control system that helps manage and track changes in your Nextflow projects, making collaboration, and code management more efficient.
+- {ref}`devenv-git`: A version control system that helps manage and track changes in your Nextflow projects, making collaboration and code management more efficient.

 The sections below outline the steps for setting up these tools.
@@ -37,7 +37,7 @@ To install VS Code on Windows:

 1. Visit the [VS Code](https://code.visualstudio.com/download) website.
 1. Download VS Code for Windows.
-1. Double-click the installer executable (`.exe`) file and follow the set up steps.
+1. Double-click the installer executable (`.exe`) file and follow the setup steps.

 Git provides powerful version control that helps track code changes. Git operates locally, meaning you don't need an internet connection to track changes, but it can also be used with remote platforms like GitHub, GitLab, or Bitbucket for collaborative development.

-Nextflow seamlessly integrates with Git for source code management providers for managing pipelines as version-controlled Git repositories.
+Nextflow seamlessly integrates with Git source code management providers to manage pipelines as version-controlled Git repositories.