docs/aws.md (3 additions, 3 deletions)
@@ -39,7 +39,7 @@ SSO credentials and instance profile credentials are the most recommended becaus
 ## AWS IAM policies
-[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to defines permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.
+[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to define permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.
 Minimal permissions policies to be attached to the AWS account used by Nextflow are:
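The hunk above concerns the IAM policies that grant Nextflow access to AWS services. As a purely illustrative sketch of the policy format being discussed (the action names are real IAM actions, but this is not the documented minimal policy — consult the linked IAM documentation for the actual list):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "batch:SubmitJob",
        "batch:DescribeJobs",
        "batch:CancelJob",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}
```

In practice the `Resource` field would be narrowed to the specific Batch queues and S3 buckets the pipeline uses rather than `"*"`.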
@@ -366,7 +366,7 @@ sudo service docker start
 sudo usermod -a -G docker ec2-user
 ```
-You must logging out and logging back in again to use the new `ec2-user` permissions.
+You must log out and log back in again to use the new `ec2-user` permissions.
 These steps must be done *before* creating the AMI from the current EC2 instance.
docs/azure.md (5 additions, 5 deletions)
@@ -29,7 +29,7 @@ To run pipelines with Azure Batch:
 - Set `process.executor` to `azurebatch` to make Nextflow submit tasks to Azure Batch.
-- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_STORAGE>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.
+- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_CONTAINER>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.
 5. Launch your pipeline with the above configuration:
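The two settings touched by this hunk can be combined in `nextflow.config`. A minimal sketch, assuming a hypothetical blob container named `my-container`:

```nextflow
// Submit tasks to Azure Batch and stage work files in Blob Storage.
// 'my-container' is a hypothetical blob container name.
process.executor = 'azurebatch'
workDir = 'az://my-container/work'
```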
@@ -152,7 +152,7 @@ azure {
 Replace the following:
-- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity object ID
+- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity client ID
 - `STORAGE_ACCOUNT_NAME`: your Azure Storage account name
 - `BATCH_ACCOUNT_NAME`: your Azure Batch account name
 - `BATCH_ACCOUNT_LOCATION`: your Azure Batch account location
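Putting the placeholders above in context, a sketch of the corresponding `azure` configuration block (setting names follow Nextflow's Azure configuration scope; the bracketed values are placeholders to be replaced):

```nextflow
azure {
    managedIdentity {
        clientId = '<USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID>'  // client ID, not object ID
    }
    storage {
        accountName = '<STORAGE_ACCOUNT_NAME>'
    }
    batch {
        accountName = '<BATCH_ACCOUNT_NAME>'
        location = '<BATCH_ACCOUNT_LOCATION>'
    }
}
```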
@@ -289,7 +289,7 @@ This section describes how to configure and use Azure Batch with Nextflow for ef
 Nextflow integrates with Azure Batch by mapping its execution model to Azure Batch's structure. A Nextflow process corresponds to an Azure Batch job, and every execution of that process (a Nextflow task) becomes an Azure Batch task. These Azure Batch tasks are executed on compute nodes within an Azure Batch pool, which is a collection of virtual machines that can scale up or down based on an autoscale formula.
-Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.
+Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create it if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.
 An Azure Batch task is created for each Nextflow task. This task first downloads the necessary input files from Azure Blob Storage to its assigned compute node. It then runs the process script. Finally, it uploads any output files back to Azure Blob Storage.
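The pool assignment described in this hunk can be sketched in configuration. The pool and process names below are hypothetical:

```nextflow
// Allow Nextflow to create the named pool if it doesn't already exist.
azure.batch.allowPoolCreation = true

process {
    // Route a hypothetical process to a specific (possibly pre-existing) pool
    // via the queue directive.
    withName: 'ALIGN' {
        queue = 'my-compute-pool'
    }
}
```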
@@ -492,7 +492,7 @@ The `azure.batch.pools.<POOL_NAME>.scaleFormula` setting can be used to specify
 ### Task authentication
-By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and cat be configured using `azure.storage.tokenDuration` in your configuration.
+By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and can be configured using `azure.storage.tokenDuration` in your configuration.
 :::{versionadded} 25.05.0-edge
 :::
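The token lifetime mentioned above is controlled by a single setting. For example, to shorten it from the 48-hour default (the value shown is illustrative):

```nextflow
azure {
    storage {
        tokenDuration = '12h'  // SAS tokens passed to tasks expire after 12 hours
    }
}
```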
@@ -525,7 +525,7 @@ For example, consider a *Standard_D4d_v5* machine with 4 vCPUs, 16 GB of memory,
 - If a process requests `cpus 4`, `memory 16.GB`, or `disk 150.GB`, four task slots are allocated (100% of resources), allowing one task to run on the node.
-Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above my become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.
+Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above may become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.
 :::{warning}
 Azure virtual machines come with fixed storage disks that are not expandable. Tasks will fail if the tasks running concurrently on a node use more storage than the machine has available.
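The slot arithmetic above can be made concrete with process directives. For the *Standard_D4d_v5* example (4 vCPUs, 16 GB memory, 150 GB disk), a process declaring half of each resource would occupy two of the four task slots, so two such tasks fit per node:

```nextflow
process {
    cpus = 2        // half of the 4 vCPUs
    memory = 8.GB   // half of the 16 GB of memory
    disk = 75.GB    // half of the 150 GB of disk
}
```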
docs/cache-and-resume.md (1 addition, 1 deletion)
@@ -32,7 +32,7 @@ The task hash is computed from the following metadata:
 - Whether the task is a {ref}`stub run <process-stub>`
 :::{note}
-Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically usually aligns with task retries (i.e., task attempts), however this is not guaranteed.
+Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically aligns with task retries (i.e., task attempts), however this is not guaranteed.
docs/cli.md (4 additions, 4 deletions)
@@ -2,7 +2,7 @@
 # Command line
-Nextflow provides a robust command line interface (CLI) for the management and execution pipelines.
+Nextflow provides a robust command line interface (CLI) for the management and execution of pipelines.
 Simply run `nextflow` with no options or `nextflow -h` to see the list of available top-level options and commands. See {ref}`cli-reference` for the full list of subcommands with examples.
@@ -36,7 +36,7 @@ Set JVM properties.
 $ nextflow -Dkey=value COMMAND [arg...]
 ```
-This options allows the definition of custom Java system properties that can be used to properly configure or fine tuning the JVM instance used by the Nextflow runtime.
+This option allows the definition of custom Java system properties that can be used to properly configure or fine-tune the JVM instance used by the Nextflow runtime.
 For specifying other JVM level options, please refer to the {ref}`config-env-vars` section.
@@ -96,7 +96,7 @@ Sets the path of the nextflow log file.
 $ nextflow -log custom.log COMMAND [arg...]
 ```
-The `-log` option takes a path of the new log file which to be used instead of the default `.nextflow.log` or to save logs files to another directory.
+The `-log` option takes the path of a log file which will be used instead of the default `.nextflow.log`, or to save log files to another directory.
 - Save all execution logs to the custom `/var/log/nextflow.log` file:
@@ -144,7 +144,7 @@ Print the Nextflow version information.
 $ nextflow -v
 ```
-The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option in addition prints out the citation reference and official website.
+The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option, in addition, prints out the citation reference and official website.
docs/conda.md (3 additions, 3 deletions)
@@ -43,7 +43,7 @@ Alternatively, it can be specified by setting the variable `NXF_CONDA_ENABLED=tr
 ### Use Conda package names
-Conda package names can specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
+Conda package names can be specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
 ```nextflow
 process hello {
@@ -144,9 +144,9 @@ If you're using Mamba or Micromamba, use this command instead:
 micromamba env export --explicit > spec-file.txt
 ```
-You can also download Conda lock files from [Wave](https://seqera.io/wave/) build pages.
+You can also download Conda lock files from [Wave](https://seqera.io/wave/) container build pages.
-These files list every package and its dependencies, so Conda doesn't need to resolve the environment. This makes environment setup faster and more reproducible.
+These files list every package and its dependencies, so Conda doesn't need to perform dependency resolution. This makes environment setup faster and more reproducible.
 Each file includes package URLs and, optionally, an MD5 hash for verifying file integrity:
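For illustration, an explicit spec file produced by `conda env export --explicit` uses the `@EXPLICIT` marker and lists one package URL per line, with an optional trailing MD5 hash after `#`. The package URL and hash below are invented for illustration:

```text
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
@EXPLICIT
https://conda.anaconda.org/conda-forge/linux-64/python-3.10.12-hd12c33a_0.conda#a1b2c3d4e5f60718293a4b5c6d7e8f90
```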