diff --git a/platform-cloud/docs/reports/overview.md b/platform-cloud/docs/reports/overview.md index c4ba99b1c..f4e0acc2d 100644 --- a/platform-cloud/docs/reports/overview.md +++ b/platform-cloud/docs/reports/overview.md @@ -1,17 +1,18 @@ --- title: "Reports" description: "Overview of pipeline reports in Seqera Platform." -date: "24 Apr 2023" -tags: [pipeline, schema] +date: "2023-04-24" +last_update: "2025-09-12" +tags: [pipeline, schema, reports, configuration] --- -Most Nextflow pipelines will generate reports or output files which are useful to inspect at the end of the pipeline execution. Reports may be in various formats (e.g. HTML, PDF, TXT) and would typically contain quality control (QC) metrics that would be important to assess the integrity of the results. +Most Nextflow pipelines generate reports and output files that you can inspect after pipeline execution. Reports may be in various formats (e.g. HTML, PDF, TXT) and typically contain quality control (QC) metrics that are important for assessing the integrity of the results. -**Reports** allow you to directly visualise supported file types or to download them via the user interface (see [Limitations](#limitations)). This saves users the time and effort of having to retrieve and visualize output files from their local storage. +**Reports** allow you to directly visualize supported file types or to download them via the user interface (see [Limitations](#limitations)). This saves users the time and effort of having to retrieve and visualize output files from their local storage. ### Visualize reports -Available reports are listed in a **Reports** tab on the **Runs** page. You can select a report from the table to view or download it (see [Limitations](#limitations) for supported file types and sizes). +You can find available reports in the **Reports** tab on the **Runs** page. You can select a report from the table to view or download it (see [Limitations](#limitations) for supported file types and sizes). To open a report preview, the file must be smaller than 10 MB. @@ -19,7 +20,7 @@ You can download a report directly or from the provided file path. Reports large ### Configure reports -Create a config file that defines the paths to a selection of output files published by the pipeline for Seqera to render reports. There are 2 ways to provide the config file, both of which have to be in YAML format: +Create a configuration file that defines paths to output files that Seqera uses to render reports. There are two ways to provide the config file, both of which must be in YAML format: 1. **Pipeline repository**: If a file called `tower.yml` exists in the root of the pipeline repository then this will be fetched automatically before the pipeline execution. 2. **Seqera Platform interface**: Provide the YAML definition within the **Advanced options > Seqera Cloud config file** box when: @@ -30,11 +31,11 @@ Create a config file that defines the paths to a selection of output files publi Any configuration provided in the interface will override configuration supplied in the pipeline repository. ::: -### Configure reports for Nextflow CLI runs +### Configure reports for Nextflow CLI runs -The reports and log files for pipeline runs launched with Nextflow CLI (`nextflow run -with-tower`) can be accessed directly in the Seqera UI. The files generated by the run must be accessible to your Seqera workspace primary compute environment. Specify your workspace prior to launch by setting the `TOWER_WORKSPACE_ID` environment variable.
Reports are listed under the **Reports** tab on the run details page. +You can access reports and log files for pipeline runs launched with Nextflow CLI (`nextflow run -with-tower`) in the Seqera UI. The files generated by the run must be accessible to your Seqera workspace primary compute environment. Specify your workspace prior to launch by setting the `TOWER_WORKSPACE_ID` environment variable. Reports are listed under the **Reports** tab on the run details page. -Execution logs are available in the **Logs** tab by default, provided the output files are accessible to your workspace primary compute environment. To specify additional report files to be made available, your pipeline repository root folder must include a `tower.yml` file that specifies the files to be included (see below). +You can view execution logs in the **Logs** tab by default if output files are accessible to your workspace primary compute environment. To make additional report files available, include a `tower.yml` file in your pipeline repository root folder that specifies the files to include (see below). ### Reports implementation @@ -42,9 +43,9 @@ Pipeline reports need to be specified using YAML syntax: ```yaml reports: - <path pattern>: - display: text to display (required) - mimeType: file mime type (optional) + <path pattern>: + display: <text to display> (required) + mimeType: <file mime type> (optional) ``` ### Path pattern @@ -56,7 +57,7 @@ Examples of valid path patterns are: - `multiqc.html`: This will match all the published files with this name. - `**/multiqc.html`: This is a glob expression that matches any subfolder. It's equivalent to the previous expression. - `results/output.txt`: This will match all the `output.txt` files inside any `results` folder. -- `*_output.tsv`: This will match any file that ends with `\_output.tsv`. +- `*_output.tsv`: This will match any file that ends with `_output.tsv`. :::caution To use `*` in your path pattern, you must wrap the pattern in double quotes for valid YAML syntax. ::: @@ -72,11 +73,11 @@ reports: display: "Data sheet" ``` -For paths `/workdir/sample1/out/sheet.tsv` and `/workdir/sample2/out/sheet.tsv`, both match the path pattern. The final display name will for these paths will be _Data sheet (sample1)_ and _Data sheet (sample2)_. +Both `/workdir/sample1/out/sheet.tsv` and `/workdir/sample2/out/sheet.tsv` match the path pattern. The final display name for these paths will be _Data sheet (sample1)_ and _Data sheet (sample2)_. ### MIME type -By default, the MIME type is deduced from the file extension, so you don't need to explicitly define it. Optionally, you can define it to force a viewer, for example showing a `txt` file as a `tsv`. It is important that it is a valid MIME-type text, otherwise it will be ignored and the extension will be used instead. +By default, Seqera deduces the MIME type from the file extension, so you don't need to explicitly define it. Optionally, you can define it to force a viewer, for example showing a `txt` file as a `tsv`. The value must be a valid MIME type; otherwise, it is ignored and the file extension is used instead. ### Built-in reports @@ -85,9 +86,9 @@ Nextflow can generate a number of built-in reports: - [Execution report](https://nextflow.io/docs/latest/tracing.html#execution-report) - [Execution timeline](https://nextflow.io/docs/latest/tracing.html#timeline-report) - [Trace file](https://nextflow.io/docs/latest/tracing.html#trace-report) -- [Workflow diagram](https://nextflow.io/docs/latest/tracing.html#dag-visualisation) (i.e.
DAG) +- [Workflow diagram](https://nextflow.io/docs/latest/tracing.html#dag-visualisation) -In Nextflow version 24.03.0-edge and later, these reports can be included as pipeline reports in Seqera Platform. Specify them in `tower.yml` like any other file: +In Nextflow version 24.03.0-edge and later, you can include these reports as pipeline reports in Seqera Platform. Specify them in `tower.yml` like any other file: ```yaml reports: @@ -113,8 +114,8 @@ The filenames must match any custom filenames defined in the Nextflow config: ### Limitations -The current reports implementation limits rendering to the following formats: `HTML`, `csv`, `tsv`, `pdf`, and `txt`. In-page rendering/report preview is restricted to files smaller than 10 MB. Larger files need to be downloaded first. +Seqera currently limits report rendering to the following formats: `HTML`, `CSV`, `TSV`, `PDF`, and `TXT`. In-page rendering/report preview is restricted to files smaller than 10 MB. Larger files need to be downloaded first. The download is restricted to files smaller than 25 MB. Files larger than 25 MB need to be downloaded from the path. -YAML formatting validation checks both the `tower.yml` file inside the repository and the UI configuration box. The validation phase will produce an error message if you try to launch a pipeline with non-compliant YAML definitions. +Seqera validates YAML formatting for both the `tower.yml` file in the repository and the UI configuration box. Seqera displays an error message if you try to launch a pipeline with invalid YAML definitions. diff --git a/platform-cloud/docs/resource-labels/overview.md b/platform-cloud/docs/resource-labels/overview.md index 805884609..a9be6a460 100644 --- a/platform-cloud/docs/resource-labels/overview.md +++ b/platform-cloud/docs/resource-labels/overview.md @@ -1,7 +1,8 @@ --- title: "Resource labels" description: "Instructions to use resource labels in Seqera Platform." -date: "24 Apr 2023" +date: "2023-04-24" +last_update: "2025-08-12" tags: [resource labels, labels] --- @@ -21,7 +22,7 @@ Resource labels can be created, applied, and edited by a workspace admin or owne Admins can assign a set of resource labels when creating a compute environment. All runs executed using the compute environment will be tagged with its resource labels. Resource labels applied to a compute environment are displayed on the compute environment details page. -Apply resource labels when you create a new compute environment. +To apply resource labels, add them when you create a new compute environment. :::info Once the compute environment has been created, its resource labels cannot be edited. @@ -45,11 +46,11 @@ If a maintainer changes the compute environment associated with a pipeline or ru ### Search and filter with resource labels -Search and filter pipelines and runs using one or more resource labels. The resource label search uses a `label:key=value` format. +To search and filter pipelines and runs, use one or more resource labels in the `label:key=value` format. ### Manage workspace resource labels -Select a workspace's **Settings** tab to view all the resource labels used in that workspace. All users can add resource labels, but only admins can edit or delete them, provided they're not already associated with **any** resource. This applies to resource labels associated with compute environments and runs. +Select a workspace's **Settings** tab to view all the resource labels used in that workspace. 
All users can add resource labels, but only admins can edit or delete them, provided they're not already associated with **any** resource. This applies to resource labels associated with compute environments and runs. When you add or edit a resource label, you can optionally set **Use as default in compute environment form**. Workspace default resource labels are prefilled in the **Resource labels** field when you create a new compute environment in that workspace. @@ -118,14 +119,14 @@ To include the cost information associated with your resource labels in your AWS #### AWS limits - Resource label keys and values must contain a minimum of 2 and a maximum of 39 alphanumeric characters (each), separated by dashes or underscores. -- The key and value cannot begin or end with dashes `-` or underscores `_`. -- The key and value cannot contain a consecutive combination of `-` or `_` characters (`--`, `__`, `-_`, etc.) +- Keys and values cannot begin or end with dashes `-` or underscores `_`. +- Keys and values cannot contain consecutive `-` or `_` characters (`--`, `__`, `-_`, etc.) - A maximum of 25 resource labels can be applied to each resource. - A maximum of 1000 resource labels can be used in each workspace. - Keys and values cannot start with `aws` or `user`, as these are reserved prefixes appended to tags by AWS. - Keys and values are case-sensitive in AWS. -See [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions) for more information on AWS resource tagging. +For more resource tagging information, see [Tag restrictions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions). ### Google Cloud @@ -143,13 +144,13 @@ The following resources are tagged using the labels associated with the compute #### GCP limits - Resource label keys and values must contain a minimum of 2 and a maximum of 39 alphanumeric characters (each), separated by dashes or underscores. -- The key and value cannot begin or end with dashes `-` or underscores `_`. -- The key and value cannot contain a consecutive combination of `-` or `_` characters (`--`, `__`, `-_`, etc.) +- Keys and values cannot begin or end with dashes `-` or underscores `_`. +- Keys and values cannot contain consecutive `-` or `_` characters (`--`, `__`, `-_`, etc.) - A maximum of 25 resource labels can be applied to each resource. - A maximum of 1000 resource labels can be used in each workspace. - Keys and values in Google Cloud Resource Manager may contain only lowercase letters. Resource labels created with uppercase characters are changed to lowercase before propagating to Google Cloud. -See [here](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements) for more information on Google Cloud Resource Manager labeling. +For more Google Cloud Resource Manager labeling information, see [Requirements](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements). ### Azure @@ -157,19 +158,19 @@ See [here](https://cloud.google.com/resource-manager/docs/creating-managing-labe The labeling system on Azure Cloud uses the term metadata to refer to resource and other labels ::: -When creating an Azure Batch compute environment with Forge, resource labels are added to the Pool parameters — this adds set of `key=value` metadata pairs to the Azure Batch Pool. +When you create an Azure Batch compute environment with Forge, resource labels are added to the Pool parameters. 
This adds a set of `key=value` metadata pairs to the Azure Batch Pool. #### Azure limits - Resource label keys and values must contain a minimum of 2 and a maximum of 39 alphanumeric characters (each), separated by dashes or underscores. -- The key and value cannot begin or end with dashes `-` or underscores `_`. -- The key and value cannot contain a consecutive combination of `-` or `_` characters (`--`, `__`, `-_`, etc.) +- Keys and values cannot begin or end with dashes `-` or underscores `_`. +- Keys and values cannot contain consecutive `-` or `_` characters (`--`, `__`, `-_`, etc.) - A maximum of 25 resource labels can be applied to each resource. - A maximum of 1000 resource labels can be used in each workspace. - Keys are case-insensitive, but values are case-sensitive. -- Microsoft advises against using a non-English language in your resource labels, as this can lead to decoding progress failure while loading your VM's metadata. +- Microsoft advises against using a non-English language in your resource labels, as this can cause decoding failures when loading your VM metadata. -See [here](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources?tabs=json) for more information on Azure Resource Manager tagging. +For more information, see [Azure Resource Manager tagging](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources?tabs=json). ### Kubernetes @@ -197,9 +198,9 @@ The following resources will be tagged using the labels associated with the comp #### Kubernetes limits - Resource label keys and values must contain a minimum of 2 and a maximum of 39 alphanumeric characters (each), separated by dashes or underscores. -- The key and value cannot begin or end with dashes `-` or underscores `_`. -- The key and value cannot contain a consecutive combination of `-` or `_` characters (`--`, `__`, `-_`, etc.) +- Keys and values cannot begin or end with dashes `-` or underscores `_`. +- Keys and values cannot contain consecutive `-` or `_` characters (`--`, `__`, `-_`, etc.) - A maximum of 25 resource labels can be applied to each resource. - A maximum of 1000 resource labels can be used in each workspace. -See [Syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) for more information on Kubernetes object labeling. +For more information, see [Kubernetes label syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). diff --git a/platform-cloud/docs/secrets/overview.md b/platform-cloud/docs/secrets/overview.md index 35c05b2ec..2a48c4dfd 100644 --- a/platform-cloud/docs/secrets/overview.md +++ b/platform-cloud/docs/secrets/overview.md @@ -1,11 +1,12 @@ --- title: "Secrets" description: "Instructions to use secrets in Seqera Platform." -date: "24 Apr 2023" -tags: [pipeline, secrets] +date: "2023-04-24" +last_update: "2025-09-12" +tags: [pipeline, secrets, security, authentication] --- -**Secrets** store the keys and tokens used by workflow tasks to interact with external systems, such as a password to connect to an external database or an API token. Seqera Platform relies on third-party secret manager services to maintain security between the workflow execution context and the secret container. This means that no secure data is transmitted from your Seqera instance to the compute environment. 
+**Secrets** store keys and tokens that your workflow tasks use to interact with external systems, such as passwords for external databases or API tokens. Seqera Platform relies on third-party secret manager services to maintain security between the workflow execution context and the secret container. This means that no secure data is transmitted from your Seqera instance to the compute environment. :::note AWS, Google Cloud, and HPC compute environments are currently supported. See [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/index.html) and [Google Secret Manager](https://cloud.google.com/secret-manager/docs/overview) for more information. @@ -13,17 +14,17 @@ AWS, Google Cloud, and HPC compute environments are currently supported. See [AW ::: ## Pipeline secrets -To create a pipeline secret, go to a workspace (private or shared) and select the **Secrets** tab in the navigation bar. Available secrets are listed here and users with appropriate [permissions](../orgs-and-teams/roles) (maintainer, admin, or owner) can create or update secret values. +Create a pipeline secret by navigating to a workspace (private or shared) and selecting the **Secrets** tab. Available secrets are listed here, and users with appropriate [permissions](../orgs-and-teams/roles) (maintainer, admin, or owner) can create or update secret values. :::note Multi-line secrets must be base64-encoded. ::: -Select **Add Pipeline Secret** and enter a name and value for the secret. Then select **Add**. +Select **Add Pipeline Secret**, enter a name and value for the secret, then select **Add**. ## User secrets -Listing, creating, and updating secrets for users is the same as secrets in a workspace. You can access user secrets from **Your secrets** in the user menu. +You can list, create, and update user secrets the same way you manage workspace secrets. You can access user secrets from **Your secrets** in the user menu. :::caution Secrets defined by a user have higher priority and will override any secrets with the same name defined in a workspace. ::: ## Use secrets in workflows -When you launch a new workflow, all secrets are sent to the corresponding secrets manager for the compute environment. Nextflow downloads these secrets internally when they're referenced in the pipeline code. See [Nextflow secrets](https://www.nextflow.io/docs/edge/secrets.html#process-secrets) for more information. +When you launch a new workflow, Seqera Platform sends all secrets to the corresponding secrets manager for the compute environment. Nextflow downloads these secrets internally when they're referenced in the pipeline code. See [Nextflow secrets](https://www.nextflow.io/docs/edge/secrets.html#process-secrets) for more information. -Secrets are automatically deleted from the secret manager when the pipeline completes, successfully or unsuccessfully. +Seqera Platform automatically deletes secrets from the secret manager when your pipeline completes, whether successfully or unsuccessfully. -:::note -In AWS Batch compute environments, Seqera passes stored secrets to jobs as part of the Seqera-created job definition. Seqera secrets cannot be used in Nextflow processes that use a [custom job definition](https://www.nextflow.io/docs/latest/aws.html#custom-job-definition). ::: +:::note +In AWS Batch compute environments, Seqera passes stored secrets to jobs as part of the Seqera-created job definition.
Seqera secrets cannot be used in Nextflow processes that use a [custom job definition](https://www.nextflow.io/docs/latest/aws.html#custom-job-definition). ::: ## AWS Secrets Manager integration @@ -45,11 +46,11 @@ Seqera and associated AWS Batch IAM Roles require additional permissions to inte ### Seqera instance permissions -Augment the existing instance [permissions](https://github.com/seqeralabs/nf-tower-aws) with this policy: +You need to augment the existing instance [permissions](https://github.com/seqeralabs/nf-tower-aws) with this policy: **IAM Permissions** -Augment the permissions given to Seqera with the following Sid: +Add the following policy statement to your Seqera permissions: ```json { @@ -79,7 +80,7 @@ The ECS Agent uses the [Batch Execution role](https://docs.aws.amazon.com/batch/ **IAM permissions** 1. Add the [`AmazonECSTaskExecutionRolePolicy` managed policy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonECSTaskExecutionRolePolicy.html). -1. Add this inline policy (specifying ``): +1. Add this inline policy, replacing `<YOUR_COMPUTE_REGION>` with your compute region: ```json { @@ -89,14 +90,14 @@ The ECS Agent uses the [Batch Execution role](https://docs.aws.amazon.com/batch/ "Sid": "AllowECSAgentToRetrieveSecrets", "Effect": "Allow", "Action": "secretsmanager:GetSecretValue", - "Resource": "arn:aws:secretsmanager::*:secret:tower-*" + "Resource": "arn:aws:secretsmanager:<YOUR_COMPUTE_REGION>:*:secret:tower-*" } ] } ``` :::note -Including `tower-*` in the Resource ARN above limits access to Platform secrets only (as opposed to all secrets in the given region). +Including `tower-*` in the Resource ARN limits access to Platform secrets only (as opposed to all secrets in the given region). ::: **IAM trust relationship** @@ -121,7 +122,7 @@ Including `tower-*` in the Resource ARN above limits access to Platform secrets The Nextflow head job must communicate with AWS Secrets Manager. Its permissions are inherited either from a custom role assigned during the [AWS Batch CE creation process](../compute-envs/aws-batch#advanced-options), or from its host [EC2 instance](https://docs.aws.amazon.com/batch/latest/userguide/instance_IAM_role.html). -Augment your Nextflow head job permissions source with one of the following policies: +Add one of the following policies to your Nextflow head job permissions source: **EC2 Instance role** @@ -143,7 +144,7 @@ Add this policy to your EC2 Instance role: **Custom IAM role** -Add this policy to your custom IAM role (specifying `YOUR_ACCOUNT` and `YOUR_BATCH_CLUSTER`): +Add this policy to your custom IAM role, replacing `<YOUR_ACCOUNT>` with your AWS account ID and `<YOUR_BATCH_CLUSTER>` with your Batch cluster name: ```json { @@ -162,7 +163,7 @@ Add this policy to your custom IAM role (specifying `YOUR_ACCOUNT` and `YOUR_BAT "iam:GetRole", "iam:PassRole" ], - "Resource": "arn:aws:iam::YOUR_ACCOUNT:role/YOUR_BATCH_CLUSTER-ExecutionRole" + "Resource": "arn:aws:iam::<YOUR_ACCOUNT>:role/<YOUR_BATCH_CLUSTER>-ExecutionRole" } ] } ``` @@ -188,10 +189,10 @@ Add this trust policy to your custom IAM role: ## Google Secret Manager integration -You must [enable Google Secret Manager](https://cloud.google.com/secret-manager/docs/configuring-secret-manager) in the same project that your Google compute environment credentials have access to. Your compute environment credentials require additional IAM permissions to interact with Google Secret Manager. +Enable [Google Secret Manager](https://cloud.google.com/secret-manager/docs/configuring-secret-manager) in the same project that contains your Google compute environment credentials.
Your compute environment credentials require additional IAM permissions to interact with Google Secret Manager. ### IAM permissions See the [Google documentation](https://cloud.google.com/secret-manager/docs/access-control) for permission configuration instructions to integrate with Google Secret Manager. -Seqera Platform requires `roles/secretmanager.admin` permissions in the project where it will manage your secrets. Ensure that your compute environment contains credentials with this access role for the same `project_id` listed in the service account JSON file. +Seqera Platform requires `roles/secretmanager.admin` permissions in the project where it manages your secrets. Ensure that your compute environment contains credentials with this access role for the same `project_id` listed in the service account JSON file.
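+
+For illustration only, the following sketch shows the kind of IAM policy binding this grant amounts to on the project. The shape matches the YAML that `gcloud projects get-iam-policy` returns; the service account email and project ID are hypothetical placeholders, not values taken from this page:
+
+```yaml
+# Hypothetical example: grant Secret Manager admin to the compute environment's
+# service account in the project whose project_id appears in its JSON key file.
+bindings:
+  - members:
+      - serviceAccount:<your-service-account>@<your-project-id>.iam.gserviceaccount.com
+    role: roles/secretmanager.admin
+```
+
+A binding along these lines must exist for the service account whose JSON key is attached to your compute environment credentials, in the project matching its `project_id`.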