-
Bug report

Expected behavior and actual behavior

A pipeline written by a colleague of mine fails to run when using SGE as the executor; the local executor works fine. The pipeline hangs, although I can see that the output files for each process are produced correctly. I believe the hang is due to the .exitcode file not being written correctly.

Steps to reproduce the problem

Run the pipeline with the SGE executor.

Program output

The nextflow log file can be found at the following gist: https://gist.github.com/snalty/1a02fc0a8b0b3cf0f814d16f604fb743

In addition to the nextflow log, the SGE job log contains the following:

Environment

Additional context

Thanks for the help.
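For anyone debugging a similar hang: Nextflow's grid executors detect task completion via the .exitcode file in the task's work directory, so checking for it by hand is a quick sanity test. A minimal sketch, assuming the standard work/ layout; the hashed directory name is a placeholder and the real path should be taken from .nextflow.log:

```bash
# Inspect a hung task's work directory by hand.
# The aa/bb... hash is illustrative; the real path is in .nextflow.log.
cd work/aa/bb1234567890abcdef
ls -la .command.run .command.sh .command.out .command.err .exitcode
cat .exitcode   # missing or empty while Nextflow still thinks the task is running
```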
-
Interestingly, when I try to run nextflow with LD_DEBUG=all, I get the following error message:
although that might be a red herring.
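As an aside, that kind of dynamic-linker trace is easier to read when written to files. A sketch, assuming the pipeline entry script is called main.nf (a placeholder); LD_DEBUG_OUTPUT is the standard glibc variable for redirecting the trace:

```bash
# Write the (very verbose) linker trace to files instead of the terminal.
# glibc appends the PID, producing e.g. /tmp/ld-trace.12345.
LD_DEBUG=all LD_DEBUG_OUTPUT=/tmp/ld-trace nextflow run main.nf
```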
-
Furthermore, here is the .command.log output after adding set -x to the .command.run file:
Running the commands that fail from the shell works perfectly fine, and I can see that the file that apparently doesn't exist:

does in fact exist on the file system. This seems to me like it could be a permissions issue.
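For context, this is the kind of manual replay described above; a sketch assuming the usual Nextflow task work-directory layout (the hashed path is a placeholder):

```bash
# Replay the task wrapper by hand with shell tracing enabled.
cd work/aa/bb1234567890abcdef   # real path comes from .nextflow.log
bash -x .command.run            # same script SGE executes, traced step by step
ls -l .exitcode                 # written by the wrapper when the task ends
stat .command.sh                # check ownership/permissions on the "missing" file
```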
-
Ugh, I'm so dumb; it genuinely was looking me right in the face:

```
/var/spool/gridengine/execd/brain/job_scripts/28254: xrealloc: cannot allocate 33554432 bytes
```

The job simply needed more memory: 33554432 bytes is 32 MiB, which the shell couldn't allocate under the cluster's existing limit. Adding

```
process.clusterOptions = '-l h_vmem=100M'
```

to the config fixed it.
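As a follow-up check, SGE's accounting can show how much memory a finished job actually used, which helps pick a sensible h_vmem value. A sketch; the job ID is the one from the error message above:

```bash
# Query SGE accounting for the finished job's peak memory and exit status.
qacct -j 28254 | grep -E 'maxvmem|exit_status'
```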