
Commit a4e2639

Fix mistakes
Signed-off-by: Christopher Hakkaart <[email protected]>
1 parent 14be407 commit a4e2639

7 files changed: +18, -19 lines changed

7 files changed

+18
-19
lines changed

docs/aws.md

Lines changed: 3 additions & 3 deletions

@@ -39,7 +39,7 @@ SSO credentials and instance profile credentials are the most recommended becaus
 
 ## AWS IAM policies
 
-[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to defines permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.
+[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to define permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.
 
 Minimal permissions policies to be attached to the AWS account used by Nextflow are:

@@ -366,7 +366,7 @@ sudo service docker start
 sudo usermod -a -G docker ec2-user
 ```
 
-You must logging out and logging back in again to use the new `ec2-user` permissions.
+You must log out and log back in again to use the new `ec2-user` permissions.
 
 These steps must be done *before* creating the AMI from the current EC2 instance.

@@ -386,7 +386,7 @@ sudo systemctl enable --now ecs
 To test the installation:
 
 ```bash
-curl -s http://localhost:51678/v1/metadata | python -mjson.tool (test)
+curl -s http://localhost:51678/v1/metadata | python -mjson.tool
 ```
 
 :::{note}
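The corrected test command pipes the ECS agent's metadata through `python -mjson.tool`, which simply pretty-prints JSON read from stdin. A minimal sketch of what that stage does; the metadata fields below are illustrative, not the agent's exact schema:

```python
import json

# Stand-in for the agent's reply; real field names and values will differ.
raw = '{"Cluster": "default", "Version": "Amazon ECS Agent - v1.0.0"}'

# json.tool parses its input and re-serializes it with indentation, the
# same effect as json.loads followed by json.dumps(..., indent=4).
print(json.dumps(json.loads(raw), indent=4))
```

If the agent is running, the `curl` command returns a JSON document like this; a connection error indicates the ECS agent is not active.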

docs/azure.md

Lines changed: 5 additions & 5 deletions

@@ -29,7 +29,7 @@ To run pipelines with Azure Batch:
 
 - Set `process.executor` to `azurebatch` to make Nextflow submit tasks to Azure Batch.
 
-- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_STORAGE>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.
+- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_CONTAINER>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.
 
 5. Launch your pipeline with the above configuration:

@@ -152,7 +152,7 @@ azure {
 
 Replace the following:
 
-- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity object ID
+- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity client ID
 - `STORAGE_ACCOUNT_NAME`: your Azure Storage account name
 - `BATCH_ACCOUNT_NAME`: your Azure Batch account name
 - `BATCH_ACCOUNT_LOCATION`: your Azure Batch account location

@@ -289,7 +289,7 @@ This section describes how to configure and use Azure Batch with Nextflow for ef
 
 Nextflow integrates with Azure Batch by mapping its execution model to Azure Batch's structure. A Nextflow process corresponds to an Azure Batch job, and every execution of that process (a Nextflow task) becomes an Azure Batch task. These Azure Batch tasks are executed on compute nodes within an Azure Batch pool, which is a collection of virtual machines that can scale up or down based on an autoscale formula.
 
-Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.
+Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create it if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.
 
 An Azure Batch task is created for each Nextflow task. This task first downloads the necessary input files from Azure Blob Storage to its assigned compute node. It then runs the process script. Finally, it uploads any output files back to Azure Blob Storage.
@@ -492,7 +492,7 @@ The `azure.batch.pools.<POOL_NAME>.scaleFormula` setting can be used to specify
 
 ### Task authentication
 
-By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and cat be configured using `azure.storage.tokenDuration` in your configuration.
+By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and can be configured using `azure.storage.tokenDuration` in your configuration.
 
 :::{versionadded} 25.05.0-edge
 :::
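The task-authentication paragraph above gives SAS tokens a default 48-hour lifetime, overridable via `azure.storage.tokenDuration`. A small sketch of the expiry arithmetic; the issue timestamp is made up for illustration:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_TOKEN_DURATION_HOURS = 48  # default SAS lifetime described above

# Hypothetical issue time; real tokens are stamped when Nextflow creates them.
issued = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
expiry = issued + timedelta(hours=DEFAULT_TOKEN_DURATION_HOURS)

print(expiry.isoformat())  # 2025-01-03T12:00:00+00:00
```

A long-running task that outlives this window loses access to Azure Storage, which is why the duration is configurable.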
@@ -525,7 +525,7 @@ For example, consider a *Standard_D4d_v5* machine with 4 vCPUs, 16 GB of memory,
 
 - If a process requests `cpus 4`, `memory 16.GB`, or `disk 150.GB`, four task slots are allocated (100% of resources), allowing one task to run on the node.
 
-Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above my become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.
+Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above may become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.
 
 :::{warning}
 Azure virtual machines come with fixed storage disks that are not expandable. Tasks will fail if the tasks running concurrently on a node use more storage than the machine has available.
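The slot-allocation rule in the hunk above can be expressed numerically. A sketch under the stated *Standard_D4d_v5* figures (4 vCPUs, 16 GB memory, 150 GB disk); `task_slots` is a hypothetical helper illustrating the described rule, not Azure Batch's actual implementation:

```python
import math

NODE_CPUS, NODE_MEM_GB, NODE_DISK_GB = 4, 16, 150  # Standard_D4d_v5 figures

def task_slots(cpus, mem_gb, disk_gb):
    """Slots scale with the largest requested fraction of the node's
    resources, expressed in units of vCPUs."""
    fraction = max(cpus / NODE_CPUS, mem_gb / NODE_MEM_GB, disk_gb / NODE_DISK_GB)
    return math.ceil(fraction * NODE_CPUS)

print(task_slots(4, 16, 150))  # 4 slots: 100% of the node, one task at a time
print(task_slots(2, 8, 75))    # 2 slots: half the node, so the task's fair
                               # share is 8 GB of memory and 75 GB of disk
```

The second case is the overprovisioning example from the text: a `cpus 2` task that exceeds its 8 GB / 75 GB share can overload the node.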

docs/cache-and-resume.md

Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ The task hash is computed from the following metadata:
 - Whether the task is a {ref}`stub run <process-stub>`
 
 :::{note}
-Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically usually aligns with task retries (i.e., task attempts), however this is not guaranteed.
+Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically aligns with task retries (i.e., task attempts), however this is not guaranteed.
 :::
 
 :::{versionchanged} 23.09.2-edge
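The note above describes an incrementing component mixed into the task hash until a hash with no existing execution directory is found. A toy illustration of that probing loop; the hash inputs and algorithm here are invented, not Nextflow's actual ones:

```python
import hashlib

def task_hash(metadata: str, increment: int) -> str:
    # Toy stand-in: real task hashes are built from the metadata listed above.
    return hashlib.md5(f"{metadata}:{increment}".encode()).hexdigest()

# Pretend the directory for increment 0 already exists from a prior attempt.
existing_dirs = {task_hash("task-metadata", 0)}

increment = 0
while task_hash("task-metadata", increment) in existing_dirs:
    increment += 1  # probe the next candidate hash

print(increment)  # 1: first hash with no matching execution directory
```

As the note says, the increment often tracks retry attempts, but the loop only guarantees an unused directory, not a one-to-one mapping to attempts.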

docs/channel.md

Lines changed: 1 addition & 2 deletions

@@ -68,8 +68,7 @@ Commonly used operators include:
 
 - {ref}`operator-filter`: select the values in a channel that satisfy a condition
 
-- {ref}`operator-flatMap`: transform each value from a channel into a list and emit each list
-element separately
+- {ref}`operator-flatMap`: transform each value from a channel into a list and emit each list element separately
 
 - {ref}`operator-grouptuple`: group the values from a channel based on a grouping key
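The `flatMap` behaviour described above (each value becomes a list whose elements are emitted one by one) has a direct analogue in ordinary Python; the doubling transform below is an arbitrary example:

```python
from itertools import chain

values = [1, 2, 3]

# Each value maps to a small list; chain.from_iterable emits the elements
# separately, mirroring flatMap's "flatten one level" behaviour.
flat = list(chain.from_iterable([v, v * 10] for v in values))

print(flat)  # [1, 10, 2, 20, 3, 30]
```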

docs/cli.md

Lines changed: 4 additions & 4 deletions

@@ -2,7 +2,7 @@
 
 # Command line
 
-Nextflow provides a robust command line interface (CLI) for the management and execution pipelines.
+Nextflow provides a robust command line interface (CLI) for the management and execution of pipelines.
 
 Simply run `nextflow` with no options or `nextflow -h` to see the list of available top-level options and commands. See {ref}`cli-reference` for the full list of subcommands with examples.

@@ -36,7 +36,7 @@ Set JVM properties.
 $ nextflow -Dkey=value COMMAND [arg...]
 ```
 
-This options allows the definition of custom Java system properties that can be used to properly configure or fine tuning the JVM instance used by the Nextflow runtime.
+This option allows the definition of custom Java system properties that can be used to properly configure or fine tuning the JVM instance used by the Nextflow runtime.
 
 For specifying other JVM level options, please refer to the {ref}`config-env-vars` section.

@@ -96,7 +96,7 @@ Sets the path of the nextflow log file.
 $ nextflow -log custom.log COMMAND [arg...]
 ```
 
-The `-log` option takes a path of the new log file which to be used instead of the default `.nextflow.log` or to save logs files to another directory.
+The `-log` option takes a path of the new log file which will be used instead of the default `.nextflow.log` or to save logs files to another directory.
 
 - Save all execution logs to the custom `/var/log/nextflow.log` file:

@@ -144,7 +144,7 @@ Print the Nextflow version information.
 $ nextflow -v
 ```
 
-The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option in addition prints out the citation reference and official website.
+The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option, in addition, prints out the citation reference and official website.
 
 - The short version:

docs/conda.md

Lines changed: 3 additions & 3 deletions

@@ -43,7 +43,7 @@ Alternatively, it can be specified by setting the variable `NXF_CONDA_ENABLED=tr
 
 ### Use Conda package names
 
-Conda package names can specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
+Conda package names can be specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
 
 ```nextflow
 process hello {

@@ -144,9 +144,9 @@ If you're using Mamba or Micromamba, use this command instead:
 micromamba env export --explicit > spec-file.txt
 ```
 
-You can also download Conda lock files from [Wave](https://seqera.io/wave/) build pages.
+You can also download Conda lock files from [Wave](https://seqera.io/wave/) container build pages.
 
-These files list every package and its dependencies, so Conda doesn't need to resolve the environment. This makes environment setup faster and more reproducible.
+These files list every package and its dependencies, so Conda doesn't need to perform dependency resolution. This makes environment setup faster and more reproducible.
 
 Each file includes package URLs and, optionally, an MD5 hash for verifying file integrity:
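Explicit spec files like the one exported above list one package URL per line, optionally suffixed with `#<md5>`. A sketch of how such a line could be split and the hash checked against downloaded bytes; the URL is invented, and the "package" is a one-byte stand-in whose MD5 is known:

```python
import hashlib

# Invented spec-file line; real files carry actual package URLs and hashes.
line = ("https://conda.anaconda.org/conda-forge/linux-64/"
        "example-1.0-h0_0.tar.bz2#0cc175b9c0f1b6a831c399e269772661")
url, _, expected_md5 = line.partition("#")

# Stand-in for the downloaded package bytes; b"a" hashes to the value above.
downloaded = b"a"
actual_md5 = hashlib.md5(downloaded).hexdigest()

print(actual_md5 == expected_md5)  # True: integrity check passes
```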

docs/install.md

Lines changed: 1 addition & 1 deletion

@@ -118,7 +118,7 @@ To install Nextflow with Conda:
 
 ```{code-block} bash
 :class: copyable
-source activate nf_env
+source activate nf-env
 ```
 
 3. Confirm Nextflow is installed correctly:
