Description
I ran `rattle correct` on my input files through Snakemake and got the following error:
```
Error in rule cluster_correction:
    jobid: 13
    input: data/.../.../samplefile.fastq
    output: data/RATTLE_out/samplefile/corrected.fq, data/RATTLE_out/samplefile/uncorrected.fq, data/RATTLE_out/samplefile/consensi.fq
    log: log/RATTLE_log/samplefile_correct.out, log/RATTLE_log/samplefile_correct.err (check log file(s) for error details)
    shell:
        /storage/.../.../bin/RATTLE/rattle correct -i data/.../.../samplefile.fastq -c data/RATTLE_out/samplefile/clusters.out -o data/RATTLE_out/samplefile/corrected.fq data/RATTLE_out/samplefile/uncorrected.fq data/RATTLE_out/samplefile/consensi.fq -t 48 > log/RATTLE_log/samplefile.out 2> log/RATTLE_log/samplefile.err
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

Error executing rule cluster_correction on cluster (jobid: 13, external: 2761217, jobscript: /storage/.../.../.../.snakemake/tmp.tz0fhacf/snakejob.cluster_correction.13.sh). For error details see the cluster log and the log files of the involved rule(s).
```
When I open samplefile.err, it only contains "Reading fasta file... Done", and samplefile.out is empty.
I also get the message below:
```
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
slurmstepd: error: Detected 1 oom-kill event(s) in StepId=2761217.0 cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
srun: error: valiant1: task 0: Out Of Memory
slurmstepd: error: Detected 1 oom-kill event(s) in StepId=2761217.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
```
I allocated 100 GB of RAM to begin with, but apparently that wasn't enough. Is there a way to estimate how much RAM the job will need before I run the snakemake command?
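In the meantime, as a workaround, I'm considering letting Snakemake escalate the memory request on each retry, since resource values can be callables that receive the 1-based `attempt` number. A rough sketch (the doubling schedule and the `base_gb` starting point are my own guesses, not anything RATTLE documents):

```python
# Sketch of a "retry with more memory" pattern for Snakemake.
# The callable below doubles the memory request on each attempt:
# 100 GB on attempt 1, 200 GB on attempt 2, 400 GB on attempt 3.

def mem_mb(wildcards, attempt, base_gb=100):
    """Memory request in MB for a given Snakemake attempt number."""
    return base_gb * 1024 * (2 ** (attempt - 1))

# In the Snakefile this would be hooked up roughly as:
#
# rule cluster_correction:
#     ...
#     retries: 2
#     resources:
#         mem_mb=mem_mb
```

This doesn't answer how much RAM RATTLE actually needs up front, but at least the workflow would recover from an OOM kill instead of aborting.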