From 2c74c7d43bbc91754cfcedb4bad9c21353087cc9 Mon Sep 17 00:00:00 2001
From: Christopher Hakkaart
Date: Thu, 27 Nov 2025 14:55:06 +1300
Subject: [PATCH 1/2] General POC

---
 .../troubleshooting.md | 288 +++++++++++++-----
 1 file changed, 207 insertions(+), 81 deletions(-)

diff --git a/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md b/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md
index d1cf45c80..c32de70e9 100644
--- a/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md
+++ b/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md
@@ -1,43 +1,86 @@
 ---
-title: "General troubleshooting"
-description: "Troubleshooting Seqera Platform"
-date: "24 Apr 2023"
+title: "General"
+description: "General troubleshooting for Seqera Platform"
+date: "2023-04-24"
+toc_max_heading_level: 2
 tags: [troubleshooting, help]
 ---
 
 ## Common errors
 
-**_timeout is not an integer or out of range_** or **_ERR timeout is not an integer or out of range_**
+### Redis timeout error
 
-This error can occur if you're using Seqera Platfrom v24.2 upwards and have an outdated version of Redis. From v24.2 Redis version 6.2 or greater is required. Follow your cloud provider specifications to upgrade your instance.
+**Error message:**
 
-**_Unknown pipeline repository or missing credentials_ error from public GitHub repositories**
+```
+timeout is not an integer or out of range
+```
+
+or
+
+```
+ERR timeout is not an integer or out of range
+```
+
+**Cause:** This error occurs when using Seqera Platform v24.2 or later with an outdated version of Redis.
+
+**Solution:** Upgrade your Redis instance to version 6.2 or greater. Follow your cloud provider's specifications to upgrade your instance.
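A quick way to verify the requirement before upgrading is to compare the version your Redis instance reports against 6.2. A minimal sketch (the example version string and the `redis-cli` lookup shown in the comment are assumptions; substitute the endpoint your installation uses):

```shell
# Seqera Platform v24.2+ requires Redis 6.2 or greater.
# Obtain the real value with: redis-cli -h <your-redis-host> INFO server | grep redis_version
version="6.0.16"   # example value; replace with the version your instance reports
major=${version%%.*}   # text before the first dot
rest=${version#*.}     # text after the first dot
minor=${rest%%.*}      # text before the next dot
if [ "$major" -gt 6 ] || { [ "$major" -eq 6 ] && [ "$minor" -ge 2 ]; }; then
  echo "Redis $version meets the 6.2 requirement"
else
  echo "Redis $version is below 6.2; upgrade before moving to Platform v24.2"
fi
```

With the example value above, the script reports that the instance is below 6.2.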
-GitHub imposes [rate limits](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting) on repository pulls (including public repositories), where unauthenticated requests are capped at 60 requests/hour and authenticated requests are capped at 5000 requests/hour. Seqera Platform users tend to encounter this error due to the 60 requests/hour cap. +### GitHub repository access error -Try the following: +**Error message:** + +``` +Unknown pipeline repository or missing credentials +``` + +**Cause:** GitHub imposes [rate limits](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting) on repository pulls (including public repositories). Unauthenticated requests are capped at 60 requests/hour and authenticated requests are capped at 5000 requests/hour. This error typically occurs when hitting the 60 requests/hour cap for unauthenticated requests. + +**Solution:** 1. Ensure there's at least one GitHub credential in your workspace's **Credentials** tab. 2. Ensure that the **Access token** field of all GitHub credential objects is populated with a [Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) value and **not** a user password. GitHub PATs are typically longer than passwords and include a `ghp_` prefix. For example: `ghp*IqIMNOZH6zOwIEB4T9A2g4EHMy8Ji42q4HA` 3. 
Confirm that your PAT is providing the elevated threshold and transactions are being charged against it: - `curl -H "Authorization: token ghp_LONG_ALPHANUMERIC_PAT" -H "Accept: application/vnd.github.v3+json" https://api.github.com/rate_limit` + ```bash + curl -H "Authorization: token ghp_LONG_ALPHANUMERIC_PAT" -H "Accept: application/vnd.github.v3+json" https://api.github.com/rate_limit + ``` + +### DSL1 variable error + +**Error message:** + +``` +No such variable +``` + +**Cause** -**_No such variable_ error** +This error occurs when executing a DSL1-based Nextflow workflow using [Nextflow 22.03.0-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.03.0-edge) or later. -This error can occur if you execute a DSL1-based Nextflow workflow using [Nextflow 22.03.0-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.03.0-edge) or later. +**Solution** -**Sleep commands in Nextflow workflows** +Upgrade your workflow to use DSL2 syntax, or use a Nextflow version earlier than 22.03.0-edge. -The `sleep` commands in your Nextflow workflows may differ in behavior depending on where they are: +### Sleep commands in Nextflow workflows -- If used within an `errorStrategy` block, the Groovy sleep function will be used (which takes its value in milliseconds). -- If used within a process script block, that language's sleep binary/method will be used. For example, [this bash script](https://www.nextflow.io/docs/latest/metrics.html?highlight=sleep) uses the bash sleep binary, which takes its value in seconds. +**Problem:** Sleep commands in Nextflow workflows may behave differently than expected. +**Cause:** The `sleep` command behavior differs depending on where it's used in your workflow: -**Large number of batch job definitions** +- If used within an `errorStrategy` block, the Groovy sleep function is used (which takes its value in milliseconds). +- If used within a process script block, that language's sleep binary/method is used. 
For example, [this bash script](https://www.nextflow.io/docs/latest/metrics.html?highlight=sleep) uses the bash sleep binary, which takes its value in seconds. -Platform normally looks for an existing job definition that matches your workflow requirement. If nothing matches, it recreates the job definition. You can use a simple bash script to clear job definitions. You can tailor this according to your needs, e.g., deregister only job definitions older than x days. +**Solution:** Be aware of the context where you're using `sleep` and adjust the time value accordingly (milliseconds for `errorStrategy` blocks, seconds for bash process scripts). + + +### Large number of batch job definitions + +**Problem:** Your AWS Batch account has accumulated a large number of job definitions. + +**Cause:** Platform looks for an existing job definition that matches your workflow requirement. If nothing matches, it recreates the job definition. Over time, this can lead to a buildup of job definitions. + +**Solution:** Use a bash script to clear job definitions. You can tailor this according to your needs, e.g., deregister only job definitions older than x days. ```bash jobs=$(aws --region eu-west-1 batch describe-job-definitions | jq -r .jobDefinitions[].jobDefinitionArn) @@ -51,15 +94,17 @@ done ## Containers -**Use rootless containers in Nextflow pipelines** +### Rootless container permission errors -Most containers use the root user by default. However, some users prefer to define a non-root user in the container to minimize the risk of privilege escalation. Because Nextflow and its tasks use a shared work directory to manage input and output data, using rootless containers can lead to file permissions errors in some environments: +**Error message:** ``` touch: cannot touch '/fsx/work/ab/27d78d2b9b17ee895b88fcee794226/.command.begin': Permission denied ``` -This should not occur when using AWS Batch from Seqera version 22.1.0. 
In other situations, you can avoid this issue by forcing all task containers to run as root. Add one of the following snippets to your [Nextflow configuration](../launch/advanced#nextflow-config-file): +**Cause:** Most containers use the root user by default. However, some users prefer to define a non-root user in the container to minimize the risk of privilege escalation. Because Nextflow and its tasks use a shared work directory to manage input and output data, using rootless containers can lead to file permissions errors in some environments. + +**Solution:** This should not occur when using AWS Batch from Seqera version 22.1.0. In other situations, you can avoid this issue by forcing all task containers to run as root. Add one of the following snippets to your [Nextflow configuration](../launch/advanced#nextflow-config-file): ``` // cloud executors @@ -74,18 +119,34 @@ k8s.securityContext = [ ## Git integration -**BitBucket authentication failure: _Can't retrieve revisions for pipeline - https://my.bitbucketserver.com/path/to/pipeline/repo - Cause: Get branches operation not supported by BitbucketServerRepositoryProvider provider_** +### BitBucket authentication failure + +**Error message:** + +``` +Can't retrieve revisions for pipeline - https://my.bitbucketserver.com/path/to/pipeline/repo - Cause: Get branches operation not supported by BitbucketServerRepositoryProvider provider +``` -If you supplied the correct BitBucket credentials and URL details in your `tower.yml` and still experience this error, update your version to at least v22.3.0. This version addresses SCM provider authentication issues and is likely to resolve the retrieval failure described here. +**Cause:** This error can occur due to SCM provider authentication issues in Seqera Platform versions earlier than v22.3.0. + +**Solution:** Update your Seqera Platform version to at least v22.3.0. This version addresses SCM provider authentication issues and is likely to resolve the retrieval failure. 
 ## Optimization
 
-**Optimized task failures: _OutOfMemoryError: Container killed due to memory usage_ error**
+### Out of memory error
+
+**Error message:**
+
+```
+OutOfMemoryError: Container killed due to memory usage
+```
 
-Improvements are being made to the way Nextflow calculates the optimal memory needed for containerized tasks, which will resolve issues with underestimating memory allocation in an upcoming release.
+**Cause:** Nextflow may underestimate the optimal memory needed for containerized tasks, leading to out-of-memory errors.
 
-A temporary workaround for this issue is to implement a `retry` error strategy in the failing process that will increase the allocated memory each time the failed task is retried. Add the following `errorStrategy` block to the failing process:
+**Solution:** An upcoming release will improve the way Nextflow calculates the optimal memory needed for containerized tasks and resolve issues with underestimated memory allocation.
+
+As a temporary workaround, implement a `retry` error strategy in the failing process that will increase the allocated memory each time the failed task is retried. Add the following `errorStrategy` block to the failing process:
 
 ```bash
 process {
@@ -97,9 +158,13 @@ process {
 
 ## Plugins
 
-**Use the Nextflow SQL DB plugin to query AWS Athena**
+### Query AWS Athena with the Nextflow SQL DB plugin
+
+**Problem:** You need to query data from AWS Athena in your Nextflow pipelines.
 
-From [Nextflow 22.05.0-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.05.0-edge), your Nextflow pipelines can query data from AWS Athena. Add these configuration items to your `nextflow.config`. The use of secrets is optional:
+**Requirements:** [Nextflow 22.05.0-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.05.0-edge) or later.
+
+**Solution:** Add these configuration items to your `nextflow.config`.
The use of secrets is optional: ``` plugins { @@ -128,65 +193,108 @@ See [here](https://github.com/nextflow-io/nf-sqldb/discussions/5) for more infor ## Repositories -**Private Docker registry integration** +### Private Docker registry integration + +**Problem:** You need Seqera-invoked jobs to pull container images from private Docker registries, such as JFrog Artifactory. -Seqera-invoked jobs can pull container images from private Docker registries, such as JFrog Artifactory. The method to enable this depends on your computing platform. +**Solution:** The method to enable this depends on your computing platform. -For **AWS Batch**, modify your EC2 Launch Template using [these AWS instructions](https://aws.amazon.com/blogs/compute/how-to-authenticate-private-container-registries-using-aws-batch/). +- **AWS Batch** -:::note -This solution requires Docker Engine [17.07 or greater](https://docs.docker.com/engine/release-notes/17.07/), to use `--password-stdin`.
+  Modify your EC2 Launch Template using [these AWS instructions](https://aws.amazon.com/blogs/compute/how-to-authenticate-private-container-registries-using-aws-batch/).
+
+  :::note
+  This solution requires Docker Engine [17.07 or greater](https://docs.docker.com/engine/release-notes/17.07/) to use `--password-stdin`.
You may need to add additional commands to your Launch template, depending on your security posture:
`cp /root/.docker/config.json /home/ec2-user/.docker/config.json && chmod 777 /home/ec2-user/.docker/config.json` -::: + ::: + +- **Azure Batch** + + Create a **Container registry**-type credential in your Seqera workspace and associate it with the Azure Batch compute environment defined in the same workspace. + +- **Kubernetes** + + Use an `imagePullSecret`, per [#2827](https://github.com/nextflow-io/nextflow/issues/2827). -For **Azure Batch**, create a **Container registry**-type credential in your Seqera workspace and associate it with the Azure Batch compute environment defined in the same workspace. +### Remote resource not found -For **Kubernetes**, use an `imagePullSecret`, per [#2827](https://github.com/nextflow-io/nextflow/issues/2827). +**Error message:** -**Nextflow error: _Remote resource not found_** +``` +Remote resource not found +``` + +**Cause:** This error can occur if the Nextflow head job fails to retrieve the necessary repository credentials from Seqera. This typically happens when the `TOWER_SERVER_URL` configuration is using the wrong protocol. -This error can occur if the Nextflow head job fails to retrieve the necessary repository credentials from Seqera. If your Nextflow log contains an entry like `DEBUG nextflow.scm.RepositoryProvider - Request [credentials -:-]`, check the protocol of your instance's `TOWER_SERVER_URL` configuration value. This must be set to `https` rather than `http` (unless you are using `TOWER_ENABLE_UNSAFE_MODE` to allow HTTP connections to Seqera in a test environment). +**Solution:** Check your Nextflow log for an entry like `DEBUG nextflow.scm.RepositoryProvider - Request [credentials -:-]`. If present, verify that your instance's `TOWER_SERVER_URL` configuration value is set to `https` rather than `http` (unless you are using `TOWER_ENABLE_UNSAFE_MODE` to allow HTTP connections to Seqera in a test environment). 
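Both checks in the solution above can be scripted. A minimal sketch (assumes `.nextflow.log` is in the current directory and `TOWER_SERVER_URL` is exported in the environment):

```shell
# Look for the log entry that indicates the head job received empty repository
# credentials, then confirm the URL scheme Platform is configured with.
if grep -qF 'RepositoryProvider - Request [credentials -:-]' .nextflow.log 2>/dev/null; then
  echo "Head job fetched empty repository credentials"
fi
case "${TOWER_SERVER_URL:-}" in
  https://*) echo "TOWER_SERVER_URL uses https" ;;
  http://*)  echo "TOWER_SERVER_URL uses http; switch to https unless TOWER_ENABLE_UNSAFE_MODE is set" ;;
  *)         echo "TOWER_SERVER_URL is empty or malformed" ;;
esac
```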
## Secrets -**_Missing AWS execution role arn_ error during Seqera launch** +### Missing AWS execution role -The [ECS Agent must have access](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) to retrieve secrets from the AWS Secrets Manager. Secrets-using pipelines launched from your instance in an AWS Batch compute environment will encounter this error if an IAM Execution Role is not provided. See [Secrets](../secrets/overview) for more information. +**Error message:** -**AWS Batch task failures with secrets** +``` +Missing AWS execution role arn +``` -You may encounter errors when executing pipelines that use secrets via AWS Batch: +**Cause:** The [ECS Agent must have access](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) to retrieve secrets from the AWS Secrets Manager. This error occurs when pipelines that use secrets are launched in an AWS Batch compute environment without an IAM Execution Role. -- If you use `nf-sqldb` version 0.4.1 or earlier and have secrets in your `nextflow.config`, you may encounter _nextflow.secret.MissingSecretException: Unknown config secret_ errors in your Nextflow log. - Resolve this error by explicitly defining the `xpack-amzn` plugin in your configuration: +**Solution:** Provide an IAM Execution Role when configuring your AWS Batch compute environment. See [Secrets](../secrets/overview) for more information. - ``` - plugins { - id 'xpack-amzn' - id 'nf-sqldb' - } - ``` +### AWS Batch task failures with secrets -- If you have two or more processes that use the same container image, but only a subset of these processes use secrets, your secret-using processes may fail during the initial run and then succeed when resumed. This is due to a bug in how Nextflow (22.07.1-edge and earlier) registers jobs with AWS Batch. +**Problem:** Pipelines that use secrets fail when executed via AWS Batch. - To resolve the issue, upgrade your Nextflow to version 22.08.0-edge or later. 
If you cannot upgrade, use the following as workarounds: +**Cause:** There are two common causes: + +1. **nf-sqldb plugin version 0.4.1 or earlier**: If you use `nf-sqldb` version 0.4.1 or earlier and have secrets in your `nextflow.config`, you may encounter _nextflow.secret.MissingSecretException: Unknown config secret_ errors in your Nextflow log. + +2. **Nextflow job registration bug (22.07.1-edge and earlier)**: If you have two or more processes that use the same container image, but only a subset of these processes use secrets, your secret-using processes may fail during the initial run and then succeed when resumed. This is due to a bug in how Nextflow registers jobs with AWS Batch. + +**Solution:** To resolve this issue: + +For nf-sqldb, explicitly define the `xpack-amzn` plugin in your configuration: + +``` +plugins { + id 'xpack-amzn' + id 'nf-sqldb' +} +``` - - Use a different container image for each process. - - Define the same set of secrets in each process that uses the same container image. +For Nextflow job registration, upgrade your Nextflow to version 22.08.0-edge or later. + +If you cannot upgrade, use the following as workarounds: + +- Use a different container image for each process. +- Define the same set of secrets in each process that uses the same container image. ## Tower Agent -**"_Unexpected Exception in WebSocket [...]: Operation timed out java.io.IOException: Operation timed out_" error** +### WebSocket operation timeout + +**Error message:** -We have improved Tower Agent reconnection logic with the release of version 0.5.0. [Update your Tower Agent version](https://github.com/seqeralabs/tower-agent) before relaunching your pipeline. +``` +Unexpected Exception in WebSocket [...]: Operation timed out java.io.IOException: Operation timed out +``` + +**Cause:** This error occurs due to connection timeout issues in Tower Agent versions earlier than 0.5.0. 
+ +**Solution:** [Update your Tower Agent version](https://github.com/seqeralabs/tower-agent) to 0.5.0 or later, which includes improved reconnection logic. Then relaunch your pipeline. ## Google -**VM preemption causes task interruptions** +### VM preemption causes task interruptions + +**Problem:** Tasks are interrupted before completion when running pipelines on Google Cloud preemptible VMs. + +**Cause:** Running pipelines on preemptible VMs provides significant cost savings, but increases the likelihood that a task will be interrupted before completion due to VM preemption. -Running your pipelines on preemptible VMs provides significant cost savings, but increases the likelihood that a task will be interrupted before completion. It is a recommended best practice to implement a retry strategy when you encounter [exit codes](https://cloud.google.com/life-sciences/docs/troubleshooting#retrying_after_encountering_errors) that are commonly related to preemption. For example: +**Solution:** Implement a retry strategy when you encounter [exit codes](https://cloud.google.com/life-sciences/docs/troubleshooting#retrying_after_encountering_errors) that are commonly related to preemption. For example: ```config process { @@ -196,24 +304,33 @@ process { } ``` -**Seqera Service account permissions for Google Life Sciences and GKE** +### Seqera Service account permissions for Google Life Sciences and GKE -The following roles must be granted to the `nextflow-service-account`: +**Problem:** You need to configure service account permissions for Google Life Sciences and GKE. + +**Requirements:** The following roles must be granted to the `nextflow-service-account`: 1. Cloud Life Sciences Workflows Runner 2. Service Account User 3. Service Usage Consumer 4. Storage Object Admin -For detailed information, see [this guide](https://cloud.google.com/life-sciences/docs/tutorials/nextflow#create_a_service_account_and_add_roles). 
+**Solution:** Grant the required roles to your service account. For detailed information, see [this guide](https://cloud.google.com/life-sciences/docs/tutorials/nextflow#create_a_service_account_and_add_roles). ## Kubernetes -**_Invalid value: "xxx": must be less or equal to memory limit_ error** +### Kubernetes memory limit error + +**Error message:** + +``` +field: spec.containers[x].resources.requests +message: Invalid value: "xxx": must be less than or equal to memory limit +``` -This error may be encountered when you specify a value in the **Head Job memory** field during the creation of a Kubernetes-type compute environment. +**Cause:** This error is encountered when you specify a value in the **Head Job memory** field during the creation of a Kubernetes-type compute environment that exceeds the [system resource limits](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/) configured on your Kubernetes cluster. The cluster's resource limits deny the Nextflow head job's resource request. -If you receive an error that includes _field: spec.containers[x].resources.requests_ and _message: Invalid value: "xxx": must be less than or equal to memory limit_, your Kubernetes cluster may be configured with [system resource limits](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/) which deny the Nextflow head job's resource request. To isolate the component causing the problem, try to launch a pod directly on your cluster via your Kubernetes administration solution. For example: +**Solution:** To isolate the component causing the problem, try to launch a pod directly on your cluster via your Kubernetes administration solution. For example: ```yaml --- @@ -234,11 +351,13 @@ spec: restartPolicy: Never ``` +If this test pod also fails, adjust your cluster's resource limits or reduce the memory request to comply with the limits. 
+ ## On-prem HPC -**_java: command not found_ error** +### _java: command not found_ error -When submitting jobs to your on-prem HPC (using either SSH or Tower Agent authentication), the following error may appear in your Nextflow logs, even with Java on your `PATH` environment variable: +**Error message:** When submitting jobs to your on-prem HPC (using either SSH or Tower Agent authentication), the following error may appear in your Nextflow logs, even with Java on your `PATH` environment variable: ``` java: command not found @@ -247,36 +366,43 @@ Nextflow is trying to use the Java VM defined for the following environment vari NXF_OPTS: ``` -Possible reasons for this error: +**Cause:** There are two possible causes: 1. The queue where the Nextflow head job runs is in a different environment/node than your login node userspace. 2. If your HPC cluster uses modules, the Java module may not be loaded by default. -To troubleshoot: +**Solution:** To resolve this issue: 1. Open an interactive session with the head job queue. -2. Launch the Nextflow job from the interactive session. +2. Launch the Nextflow job from the interactive session to verify Java availability. 3. If your cluster uses modules: - Add `module load ` in the **Advanced Features > Pre-run script** field when creating your HPC compute environment in Seqera. 4. If your cluster doesn't use modules: - 1. Source an environment with Java and Nextflow using the **Advanced Features > Pre-run script** field when creating your HPC compute environment in Seqera. + - Source an environment with Java and Nextflow using the **Advanced Features > Pre-run script** field when creating your HPC compute environment in Seqera. + +### Pipeline submissions to HPC clusters fail for some users -**Pipeline submissions to HPC clusters fail for some users** +**Error message:** -Nextflow launcher scripts will fail if processed by a non-Bash shell (e.g., `zsh`, `tcsh`). This problem can be identified from certain error entries: +1. 
Your _.nextflow.log_ contains an error like: + ``` + Invalid workflow status - expected: SUBMITTED; current: FAILED + ``` -1. Your _.nextflow.log_ contains an error like _Invalid workflow status - expected: SUBMITTED; current: FAILED_. 2. Your Seqera **Error report** tab contains an error like: + ```yaml + Slurm job submission failed + - command: mkdir -p /home//\//scratch; cd /home//\//scratch; echo | base64 -d > nf-.launcher.sh; sbatch ./nf-.launcher.sh + - exit : 1 + - message: Submitted batch job <#> + ``` -```yaml -Slurm job submission failed -- command: mkdir -p /home//\//scratch; cd /home//\//scratch; echo | base64 -d > nf-.launcher.sh; sbatch ./nf-.launcher.sh -- exit : 1 -- message: Submitted batch job <#> -``` +**Cause:** Nextflow launcher scripts will fail if processed by a non-Bash shell (e.g., `zsh`, `tcsh`). -Connect to the head node via SSH and run `ps -p $$` to verify your default shell. If you see an entry other than Bash, fix as follows: +**Solution:** To resolve this issue: -1. Check which shells are available to you: `cat /etc/shells` -2. Change your shell: `chsh -s /usr/bin/bash` (the path to the binary may differ, depending on your HPC configuration) +1. Connect to the head node via SSH and run `ps -p $$` to verify your default shell. +2. If you see an entry other than Bash: + - Check which shells are available to you: `cat /etc/shells` + - Change your shell: `chsh -s /usr/bin/bash` (the path to the binary may differ, depending on your HPC configuration) 3. If submissions continue to fail after this shell change, ask your Seqera Platform admin to restart the **backend** and **cron** containers, then submit again. 
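The default-shell check in step 1 can be sketched as a short script. Here `$SHELL` stands in for the login shell reported by `ps -p $$` (an approximation; on the head node the `ps` output is authoritative):

```shell
# Nextflow launcher scripts must be processed by Bash; warn if the login shell differs.
login_shell="${SHELL:-/bin/sh}"
case "$(basename "$login_shell")" in
  bash) echo "Login shell is Bash; launcher scripts should run as-is" ;;
  *)    echo "Login shell is $login_shell; change it with: chsh -s /usr/bin/bash" ;;
esac
```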
From aaae2c08c863ef376d1ea29352b8015b559af8ba Mon Sep 17 00:00:00 2001 From: Christopher Hakkaart Date: Thu, 27 Nov 2025 14:57:16 +1300 Subject: [PATCH 2/2] General POC --- .../docs/troubleshooting_and_faqs/troubleshooting.md | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md b/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md index c32de70e9..008c1deb4 100644 --- a/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md +++ b/platform-cloud/docs/troubleshooting_and_faqs/troubleshooting.md @@ -36,7 +36,7 @@ Unknown pipeline repository or missing credentials **Cause:** GitHub imposes [rate limits](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting) on repository pulls (including public repositories). Unauthenticated requests are capped at 60 requests/hour and authenticated requests are capped at 5000 requests/hour. This error typically occurs when hitting the 60 requests/hour cap for unauthenticated requests. -**Solution:** +**Solution:** To resolve this issue: 1. Ensure there's at least one GitHub credential in your workspace's **Credentials** tab. 2. Ensure that the **Access token** field of all GitHub credential objects is populated with a [Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) value and **not** a user password. GitHub PATs are typically longer than passwords and include a `ghp_` prefix. For example: `ghp*IqIMNOZH6zOwIEB4T9A2g4EHMy8Ji42q4HA` @@ -54,13 +54,9 @@ Unknown pipeline repository or missing credentials No such variable ``` -**Cause** - -This error occurs when executing a DSL1-based Nextflow workflow using [Nextflow 22.03.0-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.03.0-edge) or later. 
+**Cause:** This error occurs when executing a DSL1-based Nextflow workflow using [Nextflow 22.03.0-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.03.0-edge) or later. -**Solution** - -Upgrade your workflow to use DSL2 syntax, or use a Nextflow version earlier than 22.03.0-edge. +**Solution:** Upgrade your workflow to use DSL2 syntax, or use a Nextflow version earlier than 22.03.0-edge. ### Sleep commands in Nextflow workflows @@ -73,7 +69,6 @@ Upgrade your workflow to use DSL2 syntax, or use a Nextflow version earlier than **Solution:** Be aware of the context where you're using `sleep` and adjust the time value accordingly (milliseconds for `errorStrategy` blocks, seconds for bash process scripts). - ### Large number of batch job definitions **Problem:** Your AWS Batch account has accumulated a large number of job definitions.