Fastp hangs when out of memory instead of throwing OOM #657

@James-S-Santangelo

Description

Hello!

I just finished running fastp on around 1000 samples on a SLURM-based cluster, and noticed that 3 of them failed to finish within 2 hours and were killed with a TIMEOUT error. These were the 3 largest samples, with read 1 and read 2 files anywhere from 13 GB to 27 GB each. I resubmitted these 3 jobs with twice the time (4 hours) and 1.5X the memory (12 GB): the two smaller samples then finished in under an hour and the largest took around 1.5 hours. It seems that when there is insufficient memory, fastp hangs rather than throwing an OOM error, and the job eventually just hits the TIMEOUT. Not a huge deal since the issue is easily overcome, but I thought I would mention it. Here is the command I ran, for posterity (note that variables in {} are Snakemake syntax):

fastp --in1 {input.read1} \
    --in2 {input.read2} \
    --out1 {output.r1_trim} \
    --out2 {output.r2_trim} \
    --unpaired1 {output.unp} \
    --unpaired2 {output.unp} \
    --json {output.json} \
    --thread 4 \
    --detect_adapter_for_pe \
    --trim_poly_g \
    --dont_eval_duplication \
    --overrepresentation_analysis 2> {log}
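For reference, a minimal sketch of the resubmission that worked for me. The file names and the original 8 GB request are placeholders I'm assuming here (12 GB was 1.5X my original allocation); the #SBATCH values reflect the doubled time and increased memory described above:

```shell
#!/bin/bash
#SBATCH --time=04:00:00      # doubled from the original 2 h limit
#SBATCH --mem=12G            # 1.5X the original 8 GB request
#SBATCH --cpus-per-task=4    # matches fastp --thread 4

# Placeholder paths; in practice Snakemake substitutes the {input}/{output} variables.
fastp --in1 sample_R1.fastq.gz \
    --in2 sample_R2.fastq.gz \
    --out1 sample_R1.trim.fastq.gz \
    --out2 sample_R2.trim.fastq.gz \
    --unpaired1 sample_unpaired.fastq.gz \
    --unpaired2 sample_unpaired.fastq.gz \
    --json sample.fastp.json \
    --thread 4 \
    --detect_adapter_for_pe \
    --trim_poly_g \
    --dont_eval_duplication \
    --overrepresentation_analysis 2> sample.fastp.log
```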

James
