Conversation


@mo-mahe mo-mahe commented Dec 16, 2025

Workflow for processing amplicon pool sequencing data with reference.

This workflow reconstructs a sequence from an amplicon pool using a reference sequence. To run it, you need the reads from the pool library you want to analyse in FASTQ format, split into two files (forward and reverse), as well as your reference sequence in FASTA format. The workflow produces a consensus sequence and a metadata file containing the length of the consensus sequence, the number of reads mapped to it, and the mean, minimum, and maximum coverage depth. You can also retrieve a file containing the unmapped reads.

@mvdbeek mvdbeek requested a review from Copilot December 16, 2025 15:43

mo-mahe commented Dec 16, 2025

@yvanlebras

@yvanlebras (Collaborator) commented
Youhou! Thank you Molène! If you need any help, don't hesitate to ask! Don't hesitate either, Marius: we are trying to contribute to IWC thanks to Molène's and Pauline's work! We have around 10 workflows to push!

@yvanlebras yvanlebras changed the title Add map to reference workflow Add ecology "pool of amplicons map to reference" workflow Dec 16, 2025

Copilot AI left a comment


Pull request overview

This PR adds a new workflow for processing amplicon pool sequencing data with a reference sequence. The workflow performs quality control, read pairing, filtering, and reference-based mapping to generate a consensus sequence and associated metadata.

Key changes:

  • Introduces a complete map-to-reference workflow for amplicon pool sequencing
  • Includes comprehensive documentation and testing infrastructure
  • Implements a metadata-generation step that tracks coverage and mapping statistics
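The metadata columns produced by the workflow can be sketched as a small computation: given a mapped-read count and per-position coverage depths (e.g. as reported by a depth tool), emit the header and row seen in `metadata.tabular`. This is an illustrative sketch, not the workflow's actual tool chain; the function name and input format are assumptions.

```python
# Illustrative sketch of the metadata computation (not the workflow's actual tools).
# `depths` stands in for per-position coverage values over the consensus.

def metadata_row(mapped_reads, depths):
    """Return (header, row) matching the columns shown in the test data."""
    header = ["mapped_reads", "consensus_length", "min_depth", "max_depth", "mean_depth"]
    row = [
        mapped_reads,
        len(depths),                # consensus length = number of covered positions
        min(depths),
        max(depths),
        sum(depths) / len(depths),  # mean coverage depth
    ]
    return header, row

if __name__ == "__main__":
    header, row = metadata_row(116, [22, 72, 40, 38])
    print("\t".join(header))
    print("\t".join(str(v) for v in row))
```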

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 11 comments.

Summary per file:

Map-to-reference.ga: Main workflow file defining the complete amplicon pool processing pipeline
Map-to-reference-tests.yml: Test configuration with input files and output assertions
README.md: Workflow documentation explaining purpose, inputs, steps, and outputs
.dockstore.yml: Dockstore configuration for workflow publication
CHANGELOG.md: Version history tracking the initial release
metadata.tabular: Test data file containing the expected metadata output

Comment on lines +357 to +358
"label": "fastqc_forward",
"output_name": "html_file",

Copilot AI Dec 16, 2025


The workflow_outputs label "fastqc_forward" uses underscores. According to IWC guidelines, workflow output labels should be human-readable with spaces. Consider "FastQC forward" or "FastQC forward reads".

Copilot generated this review using guidance from repository custom instructions.
Comment on lines +414 to +415
"label": "fastqc_reverse",
"output_name": "html_file",

Copilot AI Dec 16, 2025


The workflow_outputs label "fastqc_reverse" uses underscores. According to IWC guidelines, workflow output labels should be human-readable with spaces. Consider "FastQC reverse" or "FastQC reverse reads".

Comment on lines +1056 to +1057
"label": "visualize_consensus",
"output_name": "output",

Copilot AI Dec 16, 2025


The workflow_outputs label "visualize_consensus" uses underscores. According to IWC guidelines, workflow output labels should be human-readable with spaces. Consider "Consensus visualization" or "Visualize consensus".

Forward primer: AGTGAGTTTCAACAAAACAYAAGGNCATNGG
Reverse primer: AGTGAGTAAACTTCAGGGTGTCCRAARAATCA
outputs:
fastqc_forward:

Copilot AI Dec 16, 2025


Test file output names must match the workflow_outputs labels exactly. If the workflow labels are updated to human-readable format (e.g., "FastQC forward"), these test output names must be updated accordingly.

asserts:
has_text:
text: html
fastqc_reverse:

Copilot AI Dec 16, 2025


Test file output names must match the workflow_outputs labels exactly. If the workflow labels are updated to human-readable format (e.g., "FastQC forward"), these test output names must be updated accordingly.

asserts:
has_text:
text: html
fastqc_paired:

Copilot AI Dec 16, 2025


Test file output names must match the workflow_outputs labels exactly. If the workflow labels are updated to human-readable format (e.g., "FastQC forward"), these test output names must be updated accordingly.

asserts:
has_text:
text: html
visualize_consensus:

Copilot AI Dec 16, 2025


Test file output names must match the workflow_outputs labels exactly. If the workflow labels are updated to human-readable format (e.g., "FastQC forward"), these test output names must be updated accordingly.

@github-actions

Test Results (powered by Planemo)

Test Summary

Total: 1
Passed: 0
Error: 0
Failure: 1
Skipped: 0
Failed Tests
  • ❌ Map-to-reference.ga_0

    Problems:

    • Output with path /tmp/tmpukg932j0/table rename column on dataset 43__f5ff3eed-6fb5-4bfd-9ee9-74cca15e4269.tabular different than expected, difference (using diff):
      ( /home/runner/work/iwc/iwc/workflows/ecology/map-to-reference-workflow/test-data/metadata.tabular v. /tmp/tmpti3benfkmetadata.tabular )
      --- local_file
      +++ history_data
      @@ -1,2 +1,2 @@
       mapped_reads	consensus_length	min_depth	max_depth	mean_depth
      -116	663	22	72	43.095744680851
      +116	669	22	72	43.095744680851
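The failure is an exact-diff mismatch on a single field (consensus_length: 663 expected vs 669 produced). A quick way to see which columns actually differ between two such tabular rows is a field-by-field comparison; this sketch just pastes the two rows from the diff above, so the values and column names are taken from the report, while the comparison code itself is illustrative:

```python
# Field-by-field comparison of the two metadata rows from the failing diff.
expected = "116\t663\t22\t72\t43.095744680851"
observed = "116\t669\t22\t72\t43.095744680851"
columns = ["mapped_reads", "consensus_length", "min_depth", "max_depth", "mean_depth"]

# Keep only the columns whose values disagree.
diffs = [
    (name, e, o)
    for name, e, o in zip(columns, expected.split("\t"), observed.split("\t"))
    if e != o
]
for name, e, o in diffs:
    print(f"{name}: expected {e}, got {o}")
```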
      
      

    Workflow invocation details

    • Invocation Messages

    • Steps
      • Step 1: FASTQ - Forward:

        • step_state: scheduled
      • Step 2: FASTQ - Reverse:

        • step_state: scheduled
      • Step 11: __ZIP_COLLECTION__:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
      • Step 12: toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_text_file_with_recurring_lines/9.5+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • times=1; yes -- 'EONForward' 2>/dev/null | head -n $times >> '/tmp/tmpa0q9y2tx/job_working_directory/000/8/outputs/dataset_4d69773e-dc80-4acb-9c96-88c61ce4d77c.dat'; times=1; yes -- 'AGTGAGTTTCAACAAAACAYAAGGNCATNGG' 2>/dev/null | head -n $times >> '/tmp/tmpa0q9y2tx/job_working_directory/000/8/outputs/dataset_4d69773e-dc80-4acb-9c96-88c61ce4d77c.dat';

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              token_set [{"__index__": 0, "line": "EONForward", "repeat_select": {"__current_case__": 0, "repeat_select_opts": "user", "times": "1"}}, {"__index__": 1, "line": "AGTGAGTTTCAACAAAACAYAAGGNCATNGG", "repeat_select": {"__current_case__": 0, "repeat_select_opts": "user", "times": "1"}}]
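Step 12 builds a two-line text file (a placeholder header token plus the forward primer) using the `yes … | head -n $times` idiom visible in the command line, with each token written `times` times. A Python equivalent of that file-construction step, with the function name being an illustrative assumption, might look like this:

```python
# Mirror of step 12's command line:
#   yes -- 'EONForward' | head -n 1; yes -- 'AGTG...' | head -n 1
# i.e. write each token the requested number of times, one per line.

def write_recurring_lines(tokens):
    """tokens: list of (line, times) pairs, mirroring the tool's token_set."""
    return "".join(f"{line}\n" * times for line, times in tokens)

forward = write_recurring_lines([
    ("EONForward", 1),
    ("AGTGAGTTTCAACAAAACAYAAGGNCATNGG", 1),
])
print(forward, end="")
```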
      • Step 13: toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_text_file_with_recurring_lines/9.5+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • times=1; yes -- 'EONReverse' 2>/dev/null | head -n $times >> '/tmp/tmpa0q9y2tx/job_working_directory/000/9/outputs/dataset_d584e290-ca04-4c42-9787-cc2119453c50.dat'; times=1; yes -- 'AGTGAGTAAACTTCAGGGTGTCCRAARAATCA' 2>/dev/null | head -n $times >> '/tmp/tmpa0q9y2tx/job_working_directory/000/9/outputs/dataset_d584e290-ca04-4c42-9787-cc2119453c50.dat';

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              token_set [{"__index__": 0, "line": "EONReverse", "repeat_select": {"__current_case__": 0, "repeat_select_opts": "user", "times": "1"}}, {"__index__": 1, "line": "AGTGAGTAAACTTCAGGGTGTCCRAARAATCA", "repeat_select": {"__current_case__": 0, "repeat_select_opts": "user", "times": "1"}}]
      • Step 14: toolshed.g2.bx.psu.edu/repos/devteam/fastq_filter/fastq_filter/1.1.5+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • gx-fastq-filter '/tmp/tmpa0q9y2tx/files/d/e/0/dataset_de06ef6d-54ae-458f-9eef-f90881962286.dat' '/tmp/tmpa0q9y2tx/job_working_directory/000/10/configs/tmp3swl894b' '/tmp/tmpa0q9y2tx/job_working_directory/000/10/outputs/dataset_9530dc79-ec79-49e2-83cd-1fa429d434e1.dat' '/tmp/tmpa0q9y2tx/job_working_directory/000/10/outputs/dataset_9530dc79-ec79-49e2-83cd-1fa429d434e1_files' 'sanger'

            Exit Code:

            • 0

            Standard Output:

            • Kept 8562 of 8562 reads (100.00%).
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              fastq_filters []
              max_num_deviants "0"
              max_quality "0.0"
              max_size "0"
              min_quality "0.0"
              min_size "0"
              paired_end false
      • Step 15: toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.4+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -o | grep -q pipefail && set -o pipefail; bowtie2-build --threads ${GALAXY_SLOTS:-4} '/tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat' genome && ln -s -f '/tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat' genome.fa &&   ln -f -s '/tmp/tmpa0q9y2tx/files/4/a/8/dataset_4a899d09-4768-4202-9869-e6dfb1a74628.dat' input_f.fastq &&  ln -f -s '/tmp/tmpa0q9y2tx/files/d/3/d/dataset_d3d260e7-8cd9-481c-bfc2-7811867e7278.dat' input_r.fastq &&   THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi &&   bowtie2  -p "$THREADS"  -x 'genome'   -1 'input_f.fastq' -2 'input_r.fastq' --un-conc 'unaligned_reads' -I 0 -X 500 --fr                   --skip 0 --qupto 100000000 --trim5 0 --trim3 0 --phred33    -N 0 -L 28 -i 'S,1,1.15' --n-ceil 'L,0,0.15' --dpad 15 --gbar 4     --local --score-min 'G,20,8'  --ma 2 --mp '6,2' --np 1 --rdg 5,3 --rfg 5,3   -D 15 -R 2  --seed 0     | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmpa0q9y2tx/job_working_directory/000/11/outputs/dataset_ed6edf45-c0a2-4a55-a1cd-5d5870d70357.dat'

            Exit Code:

            • 0

            Standard Error:

            • Building a SMALL index
              Renaming genome.3.bt2.tmp to genome.3.bt2
              Renaming genome.4.bt2.tmp to genome.4.bt2
              Renaming genome.1.bt2.tmp to genome.1.bt2
              Renaming genome.2.bt2.tmp to genome.2.bt2
              Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
              Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
              8839 reads; of these:
                8839 (100.00%) were paired; of these:
                  8741 (98.89%) aligned concordantly 0 times
                  98 (1.11%) aligned concordantly exactly 1 time
                  0 (0.00%) aligned concordantly >1 times
                  ----
                  8741 pairs aligned concordantly 0 times; of these:
                    23 (0.26%) aligned discordantly 1 time
                  ----
                  8718 pairs aligned 0 times concordantly or discordantly; of these:
                    17436 mates make up the pairs; of these:
                      17432 (99.98%) aligned 0 times
                      2 (0.01%) aligned exactly 1 time
                      2 (0.01%) aligned >1 times
              1.39% overall alignment rate
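The bowtie2 summary above is internally consistent and can be cross-checked: the 98 concordant and 23 discordant pairs contribute two aligned mates each, plus the 2 + 2 individually aligned mates, out of 8839 × 2 total mates:

```python
# Cross-check of bowtie2's reported "1.39% overall alignment rate"
# from the counts in the summary above.
total_mates = 8839 * 2                 # 8839 read pairs, two mates each
aligned = 98 * 2 + 23 * 2 + 2 + 2      # concordant + discordant pairs, + single mates
rate = 100 * aligned / total_mates
print(f"{rate:.2f}% overall alignment rate")  # matches the reported 1.39%
```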
              

            Standard Output:

            • Settings:
                Output files: "genome.*.bt2"
                Line rate: 6 (line is 64 bytes)
                Lines per side: 1 (side is 64 bytes)
                Offset rate: 4 (one in 16)
                FTable chars: 10
                Strings: unpacked
                Max bucket size: default
                Max bucket size, sqrt multiplier: default
                Max bucket size, len divisor: 4
                Difference-cover sample period: 1024
                Endianness: little
                Actual local endianness: little
                Sanity checking: disabled
                Assertions: disabled
                Random seed: 0
                Sizeofs: void*:8, int:4, long:8, size_t:8
              Input files DNA, FASTA:
                /tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
              bmax according to bmaxDivN setting: 164
              Using parameters --bmax 123 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 123 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 7; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 1; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 93.1429 (target: 122)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (123) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 118 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 119 for bucket 1
              Getting block 2 of 7
                Reserving size (123) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 115 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 116 for bucket 2
              Getting block 3 of 7
                Reserving size (123) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 96 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 97 for bucket 3
              Getting block 4 of 7
                Reserving size (123) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 77 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 78 for bucket 4
              Getting block 5 of 7
                Reserving size (123) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 66 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 67 for bucket 5
              Getting block 6 of 7
                Reserving size (123) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 95 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 96 for bucket 6
              Getting block 7 of 7
                Reserving size (123) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 85 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 86 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 219
              fchr[G]: 358
              fchr[T]: 469
              fchr[$]: 658
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4194838 bytes to primary EBWT file: genome.1.bt2.tmp
              Wrote 172 bytes to secondary EBWT file: genome.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 658
                  bwtLen: 659
                  sz: 165
                  bwtSz: 165
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 42
                  offsSz: 168
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 4
                  numLines: 4
                  ebwtTotLen: 256
                  ebwtTotSz: 256
                  color: 0
                  reverse: 0
              Total time for call to driver() for forward index: 00:00:00
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
                Time to reverse reference sequence: 00:00:00
              bmax according to bmaxDivN setting: 164
              Using parameters --bmax 123 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 123 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 2, merged 8; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 0; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 0; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 93.1429 (target: 122)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (123) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 67 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 68 for bucket 1
              Getting block 2 of 7
                Reserving size (123) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 88 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 89 for bucket 2
              Getting block 3 of 7
                Reserving size (123) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 103 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 104 for bucket 3
              Getting block 4 of 7
                Reserving size (123) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 121 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 122 for bucket 4
              Getting block 5 of 7
                Reserving size (123) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 90 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 91 for bucket 5
              Getting block 6 of 7
                Reserving size (123) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 95 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 96 for bucket 6
              Getting block 7 of 7
                Reserving size (123) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 88 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 89 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 219
              fchr[G]: 358
              fchr[T]: 469
              fchr[$]: 658
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4194838 bytes to primary EBWT file: genome.rev.1.bt2.tmp
              Wrote 172 bytes to secondary EBWT file: genome.rev.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 658
                  bwtLen: 659
                  sz: 165
                  bwtSz: 165
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 42
                  offsSz: 168
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 4
                  numLines: 4
                  ebwtTotLen: 256
                  ebwtTotSz: 256
                  color: 0
                  reverse: 1
              Total time for backward call to driver() for mirror index: 00:00:00
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              analysis_type {"__current_case__": 1, "alignment_options": {"L": "28", "N": "0", "__current_case__": 0, "align_mode": {"__current_case__": 1, "align_mode_selector": "local", "score_min_loc": "G,20,8"}, "alignment_options_selector": "yes", "dpad": "15", "gbar": "4", "i": "S,1,1.15", "ignore_quals": false, "n_ceil": "L,0,0.15", "no_1mm_upfront": false, "nofw": false, "norc": false}, "analysis_type_selector": "full", "effort_options": {"D": "15", "R": "2", "__current_case__": 0, "effort_options_selector": "yes"}, "input_options": {"__current_case__": 0, "input_options_selector": "yes", "int_quals": false, "qupto": "100000000", "qv_encoding": "--phred33", "skip": "0", "solexa_quals": false, "trim3": "0", "trim5": "0"}, "other_options": {"__current_case__": 0, "non_deterministic": false, "other_options_selector": "yes", "seed": "0"}, "reporting_options": {"__current_case__": 0, "reporting_options_selector": "no"}, "scoring_options": {"__current_case__": 0, "ma": "2", "mp": "6,2", "np": "1", "rdg_read_extend": "3", "rdg_read_open": "5", "rfg_ref_extend": "3", "rfg_ref_open": "5", "scoring_options_selector": "yes"}}
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              library {"__current_case__": 1, "aligned_file": false, "input_1": {"values": [{"id": 1, "src": "hdca"}]}, "paired_options": {"I": "0", "X": "500", "__current_case__": 0, "dovetail": false, "fr_rf_ff": "--fr", "no_contain": false, "no_discordant": false, "no_mixed": false, "no_overlap": false, "paired_options_selector": "yes"}, "type": "paired_collection", "unaligned_file": true}
              reference_genome {"__current_case__": 1, "own_file": {"values": [{"id": 3, "src": "hda"}]}, "source": "history"}
              rg {"__current_case__": 3, "rg_selector": "do_not_set"}
              sam_options {"__current_case__": 1, "sam_options_selector": "no"}
              save_mapping_stats false
      • Step 16: toolshed.g2.bx.psu.edu/repos/galaxyp/regex_find_replace/regex1/1.0.3:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/galaxyp/regex_find_replace/503bcd6ebe4b/regex_find_replace/regex.py' --input '/tmp/tmpa0q9y2tx/files/4/d/6/dataset_4d69773e-dc80-4acb-9c96-88c61ce4d77c.dat' --output '/tmp/tmpa0q9y2tx/job_working_directory/000/12/outputs/dataset_abf889ea-a828-4b1f-a8a9-9593249bddb8.dat' --input_display_name 'Create text file' --pattern='EON' --replacement='>'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              checks [{"__index__": 0, "pattern": "EON", "replacement": ">"}]
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
      • Step 17: toolshed.g2.bx.psu.edu/repos/galaxyp/regex_find_replace/regex1/1.0.3:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/galaxyp/regex_find_replace/503bcd6ebe4b/regex_find_replace/regex.py' --input '/tmp/tmpa0q9y2tx/files/d/5/8/dataset_d584e290-ca04-4c42-9787-cc2119453c50.dat' --output '/tmp/tmpa0q9y2tx/job_working_directory/000/13/outputs/dataset_1c684c4a-e36f-4291-8264-006d511da935.dat' --input_display_name 'Create text file' --pattern='EON' --replacement='>'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              checks [{"__index__": 0, "pattern": "EON", "replacement": ">"}]
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
      • Step 18: toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.74+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpa0q9y2tx/files/9/5/3/dataset_9530dc79-ec79-49e2-83cd-1fa429d434e1.dat' 'Filter FASTQ on dataset 8' && mkdir -p '/tmp/tmpa0q9y2tx/job_working_directory/000/14/outputs/dataset_43dbebd7-3365-478a-87a9-d700c3b9a031_files' && fastqc --outdir '/tmp/tmpa0q9y2tx/job_working_directory/000/14/outputs/dataset_43dbebd7-3365-478a-87a9-d700c3b9a031_files'   --threads ${GALAXY_SLOTS:-2} --dir ${TEMP:-$_GALAXY_JOB_TMP_DIR} --quiet --extract  --kmers 7 -f 'fastq' 'Filter FASTQ on dataset 8'  && cp '/tmp/tmpa0q9y2tx/job_working_directory/000/14/outputs/dataset_43dbebd7-3365-478a-87a9-d700c3b9a031_files'/*/fastqc_data.txt output.txt && cp '/tmp/tmpa0q9y2tx/job_working_directory/000/14/outputs/dataset_43dbebd7-3365-478a-87a9-d700c3b9a031_files'/*\.html output.html

            Exit Code:

            • 0

            Standard Output:

            • null
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              adapters None
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              contaminants None
              dbkey "?"
              kmers "7"
              limits None
              min_length None
              nogroup false
      • Step 19: toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.4+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -o | grep -q pipefail && set -o pipefail; bowtie2-build --threads ${GALAXY_SLOTS:-4} '/tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat' genome && ln -s -f '/tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat' genome.fa &&   ln -f -s '/tmp/tmpa0q9y2tx/files/9/5/3/dataset_9530dc79-ec79-49e2-83cd-1fa429d434e1.dat' input_f.fastq &&   THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi &&   bowtie2  -p "$THREADS"  -x 'genome'   -U 'input_f.fastq'                -N 1 -L 22 -i 'S,1,1.15' --n-ceil 'L,0,0.15' --dpad 15 --gbar 4     --local --score-min 'G,20,8'        | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmpa0q9y2tx/job_working_directory/000/15/outputs/dataset_7663f9da-d3fb-48d3-82e4-833b22b79cc0.dat'

            Exit Code:

            • 0

            Standard Error:

            • Building a SMALL index
              Renaming genome.3.bt2.tmp to genome.3.bt2
              Renaming genome.4.bt2.tmp to genome.4.bt2
              Renaming genome.1.bt2.tmp to genome.1.bt2
              Renaming genome.2.bt2.tmp to genome.2.bt2
              Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
              Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
              8562 reads; of these:
                8562 (100.00%) were unpaired; of these:
                  8431 (98.47%) aligned 0 times
                  130 (1.52%) aligned exactly 1 time
                  1 (0.01%) aligned >1 times
              1.53% overall alignment rate
              

            Standard Output:

            • Settings:
                Output files: "genome.*.bt2"
                Line rate: 6 (line is 64 bytes)
                Lines per side: 1 (side is 64 bytes)
                Offset rate: 4 (one in 16)
                FTable chars: 10
                Strings: unpacked
                Max bucket size: default
                Max bucket size, sqrt multiplier: default
                Max bucket size, len divisor: 4
                Difference-cover sample period: 1024
                Endianness: little
                Actual local endianness: little
                Sanity checking: disabled
                Assertions: disabled
                Random seed: 0
                Sizeofs: void*:8, int:4, long:8, size_t:8
              Input files DNA, FASTA:
                /tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
              bmax according to bmaxDivN setting: 164
              Using parameters --bmax 123 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 123 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 7; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 1; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 93.1429 (target: 122)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (123) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 118 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 119 for bucket 1
              Getting block 2 of 7
                Reserving size (123) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 115 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 116 for bucket 2
              Getting block 3 of 7
                Reserving size (123) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 96 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 97 for bucket 3
              Getting block 4 of 7
                Reserving size (123) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 77 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 78 for bucket 4
              Getting block 5 of 7
                Reserving size (123) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 66 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 67 for bucket 5
              Getting block 6 of 7
                Reserving size (123) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 95 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 96 for bucket 6
              Getting block 7 of 7
                Reserving size (123) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 85 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 86 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 219
              fchr[G]: 358
              fchr[T]: 469
              fchr[$]: 658
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4194838 bytes to primary EBWT file: genome.1.bt2.tmp
              Wrote 172 bytes to secondary EBWT file: genome.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 658
                  bwtLen: 659
                  sz: 165
                  bwtSz: 165
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 42
                  offsSz: 168
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 4
                  numLines: 4
                  ebwtTotLen: 256
                  ebwtTotSz: 256
                  color: 0
                  reverse: 0
              Total time for call to driver() for forward index: 00:00:00
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
                Time to reverse reference sequence: 00:00:00
              bmax according to bmaxDivN setting: 164
              Using parameters --bmax 123 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 123 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 2, merged 8; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 0; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 0; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 93.1429 (target: 122)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (123) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 67 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 68 for bucket 1
              Getting block 2 of 7
                Reserving size (123) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 88 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 89 for bucket 2
              Getting block 3 of 7
                Reserving size (123) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 103 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 104 for bucket 3
              Getting block 4 of 7
                Reserving size (123) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 121 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 122 for bucket 4
              Getting block 5 of 7
                Reserving size (123) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 90 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 91 for bucket 5
              Getting block 6 of 7
                Reserving size (123) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 95 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 96 for bucket 6
              Getting block 7 of 7
                Reserving size (123) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 88 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 89 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 219
              fchr[G]: 358
              fchr[T]: 469
              fchr[$]: 658
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4194838 bytes to primary EBWT file: genome.rev.1.bt2.tmp
              Wrote 172 bytes to secondary EBWT file: genome.rev.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 658
                  bwtLen: 659
                  sz: 165
                  bwtSz: 165
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 42
                  offsSz: 168
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 4
                  numLines: 4
                  ebwtTotLen: 256
                  ebwtTotSz: 256
                  color: 0
                  reverse: 1
              Total time for backward call to driver() for mirror index: 00:00:00
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              analysis_type {"__current_case__": 1, "alignment_options": {"L": "22", "N": "1", "__current_case__": 0, "align_mode": {"__current_case__": 1, "align_mode_selector": "local", "score_min_loc": "G,20,8"}, "alignment_options_selector": "yes", "dpad": "15", "gbar": "4", "i": "S,1,1.15", "ignore_quals": false, "n_ceil": "L,0,0.15", "no_1mm_upfront": false, "nofw": false, "norc": false}, "analysis_type_selector": "full", "effort_options": {"__current_case__": 1, "effort_options_selector": "no"}, "input_options": {"__current_case__": 1, "input_options_selector": "no"}, "other_options": {"__current_case__": 1, "other_options_selector": "no"}, "reporting_options": {"__current_case__": 0, "reporting_options_selector": "no"}, "scoring_options": {"__current_case__": 1, "scoring_options_selector": "no"}}
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              library {"__current_case__": 0, "aligned_file": false, "input_1": {"values": [{"id": 13, "src": "hda"}]}, "type": "single", "unaligned_file": false}
              reference_genome {"__current_case__": 1, "own_file": {"values": [{"id": 3, "src": "hda"}]}, "source": "history"}
              rg {"__current_case__": 3, "rg_selector": "do_not_set"}
              sam_options {"__current_case__": 1, "sam_options_selector": "no"}
              save_mapping_stats false
      • Step 20: toolshed.g2.bx.psu.edu/repos/devteam/bowtie2/bowtie2/2.5.4+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -o | grep -q pipefail && set -o pipefail; bowtie2-build --threads ${GALAXY_SLOTS:-4} '/tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat' genome && ln -s -f '/tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat' genome.fa &&   ln -f -s '/tmp/tmpa0q9y2tx/files/3/0/9/dataset_3099eb14-6a80-442d-a22b-f388b3bf600a.dat' input_f.fastq &&  ln -f -s '/tmp/tmpa0q9y2tx/files/2/0/a/dataset_20a66c05-9546-43c9-963c-cff23d87c2a6.dat' input_r.fastq &&   THREADS=${GALAXY_SLOTS:-4} && if [ "$THREADS" -gt 1 ]; then (( THREADS-- )); fi &&   bowtie2  -p "$THREADS"  -x 'genome'   -1 'input_f.fastq' -2 'input_r.fastq' --un-conc 'unaligned_reads' -I 0 -X 500 --rf                   --skip 0 --qupto 100000000 --trim5 0 --trim3 0 --phred33    -N 0 -L 28 -i 'S,1,1.15' --n-ceil 'L,0,0.15' --dpad 15 --gbar 4     --local --score-min 'G,20,8'  --ma 2 --mp '6,2' --np 1 --rdg 5,3 --rfg 5,3   -D 15 -R 2  --seed 0     | samtools sort -l 0 -T "${TMPDIR:-.}" -O bam | samtools view --no-PG -O bam -@ ${GALAXY_SLOTS:-1} -o '/tmp/tmpa0q9y2tx/job_working_directory/000/16/outputs/dataset_3a4814cf-2e95-4018-9f55-f2ab52cff535.dat'

            Exit Code:

            • 0

            Standard Error:

            • Building a SMALL index
              Renaming genome.3.bt2.tmp to genome.3.bt2
              Renaming genome.4.bt2.tmp to genome.4.bt2
              Renaming genome.1.bt2.tmp to genome.1.bt2
              Renaming genome.2.bt2.tmp to genome.2.bt2
              Renaming genome.rev.1.bt2.tmp to genome.rev.1.bt2
              Renaming genome.rev.2.bt2.tmp to genome.rev.2.bt2
              8741 reads; of these:
                8741 (100.00%) were paired; of these:
                  8723 (99.79%) aligned concordantly 0 times
                  17 (0.19%) aligned concordantly exactly 1 time
                  1 (0.01%) aligned concordantly >1 times
                  ----
                  8723 pairs aligned concordantly 0 times; of these:
                    6 (0.07%) aligned discordantly 1 time
                  ----
                  8717 pairs aligned 0 times concordantly or discordantly; of these:
                    17434 mates make up the pairs; of these:
                      17432 (99.99%) aligned 0 times
                      2 (0.01%) aligned exactly 1 time
                      0 (0.00%) aligned >1 times
              0.29% overall alignment rate
              

            Standard Output:

            • Settings:
                Output files: "genome.*.bt2"
                Line rate: 6 (line is 64 bytes)
                Lines per side: 1 (side is 64 bytes)
                Offset rate: 4 (one in 16)
                FTable chars: 10
                Strings: unpacked
                Max bucket size: default
                Max bucket size, sqrt multiplier: default
                Max bucket size, len divisor: 4
                Difference-cover sample period: 1024
                Endianness: little
                Actual local endianness: little
                Sanity checking: disabled
                Assertions: disabled
                Random seed: 0
                Sizeofs: void*:8, int:4, long:8, size_t:8
              Input files DNA, FASTA:
                /tmp/tmpa0q9y2tx/files/1/8/4/dataset_18465be9-6602-4eda-bdad-e9ab59a336a0.dat
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
              bmax according to bmaxDivN setting: 164
              Using parameters --bmax 123 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 123 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 7; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 1; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 93.1429 (target: 122)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (123) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 118 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 119 for bucket 1
              Getting block 2 of 7
                Reserving size (123) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 115 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 116 for bucket 2
              Getting block 3 of 7
                Reserving size (123) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 96 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 97 for bucket 3
              Getting block 4 of 7
                Reserving size (123) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 77 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 78 for bucket 4
              Getting block 5 of 7
                Reserving size (123) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 66 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 67 for bucket 5
              Getting block 6 of 7
                Reserving size (123) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 95 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 96 for bucket 6
              Getting block 7 of 7
                Reserving size (123) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 85 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 86 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 219
              fchr[G]: 358
              fchr[T]: 469
              fchr[$]: 658
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4194838 bytes to primary EBWT file: genome.1.bt2.tmp
              Wrote 172 bytes to secondary EBWT file: genome.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 658
                  bwtLen: 659
                  sz: 165
                  bwtSz: 165
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 42
                  offsSz: 168
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 4
                  numLines: 4
                  ebwtTotLen: 256
                  ebwtTotSz: 256
                  color: 0
                  reverse: 0
              Total time for call to driver() for forward index: 00:00:00
              Reading reference sizes
                Time reading reference sizes: 00:00:00
              Calculating joined length
              Writing header
              Reserving space for joined string
              Joining reference sequences
                Time to join reference sequences: 00:00:00
                Time to reverse reference sequence: 00:00:00
              bmax according to bmaxDivN setting: 164
              Using parameters --bmax 123 --dcv 1024
                Doing ahead-of-time memory usage test
                Passed!  Constructing with these parameters: --bmax 123 --dcv 1024
              Constructing suffix-array element generator
              Building DifferenceCoverSample
                Building sPrime
                Building sPrimeOrder
                V-Sorting samples
                V-Sorting samples time: 00:00:00
                Allocating rank array
                Ranking v-sort output
                Ranking v-sort output time: 00:00:00
                Invoking Larsson-Sadakane on ranks
                Invoking Larsson-Sadakane on ranks time: 00:00:00
                Sanity-checking and returning
              Building samples
              Reserving space for 12 sample suffixes
              Generating random suffixes
              QSorting 12 sample offsets, eliminating duplicates
              QSorting sample offsets, eliminating duplicates time: 00:00:00
              Multikey QSorting 12 samples
                (Using difference cover)
                Multikey QSorting samples time: 00:00:00
              Calculating bucket sizes
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 2, merged 8; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 0; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Split 1, merged 0; iterating...
              Splitting and merging
                Splitting and merging time: 00:00:00
              Avg bucket size: 93.1429 (target: 122)
              Converting suffix-array elements to index image
              Allocating ftab, absorbFtab
              Entering Ebwt loop
              Getting block 1 of 7
                Reserving size (123) for bucket 1
                Calculating Z arrays for bucket 1
                Entering block accumulator loop for bucket 1:
                bucket 1: 10%
                bucket 1: 20%
                bucket 1: 30%
                bucket 1: 40%
                bucket 1: 50%
                bucket 1: 60%
                bucket 1: 70%
                bucket 1: 80%
                bucket 1: 90%
                bucket 1: 100%
                Sorting block of length 67 for bucket 1
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 68 for bucket 1
              Getting block 2 of 7
                Reserving size (123) for bucket 2
                Calculating Z arrays for bucket 2
                Entering block accumulator loop for bucket 2:
                bucket 2: 10%
                bucket 2: 20%
                bucket 2: 30%
                bucket 2: 40%
                bucket 2: 50%
                bucket 2: 60%
                bucket 2: 70%
                bucket 2: 80%
                bucket 2: 90%
                bucket 2: 100%
                Sorting block of length 88 for bucket 2
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 89 for bucket 2
              Getting block 3 of 7
                Reserving size (123) for bucket 3
                Calculating Z arrays for bucket 3
                Entering block accumulator loop for bucket 3:
                bucket 3: 10%
                bucket 3: 20%
                bucket 3: 30%
                bucket 3: 40%
                bucket 3: 50%
                bucket 3: 60%
                bucket 3: 70%
                bucket 3: 80%
                bucket 3: 90%
                bucket 3: 100%
                Sorting block of length 103 for bucket 3
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 104 for bucket 3
              Getting block 4 of 7
                Reserving size (123) for bucket 4
                Calculating Z arrays for bucket 4
                Entering block accumulator loop for bucket 4:
                bucket 4: 10%
                bucket 4: 20%
                bucket 4: 30%
                bucket 4: 40%
                bucket 4: 50%
                bucket 4: 60%
                bucket 4: 70%
                bucket 4: 80%
                bucket 4: 90%
                bucket 4: 100%
                Sorting block of length 121 for bucket 4
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 122 for bucket 4
              Getting block 5 of 7
                Reserving size (123) for bucket 5
                Calculating Z arrays for bucket 5
                Entering block accumulator loop for bucket 5:
                bucket 5: 10%
                bucket 5: 20%
                bucket 5: 30%
                bucket 5: 40%
                bucket 5: 50%
                bucket 5: 60%
                bucket 5: 70%
                bucket 5: 80%
                bucket 5: 90%
                bucket 5: 100%
                Sorting block of length 90 for bucket 5
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 91 for bucket 5
              Getting block 6 of 7
                Reserving size (123) for bucket 6
                Calculating Z arrays for bucket 6
                Entering block accumulator loop for bucket 6:
                bucket 6: 10%
                bucket 6: 20%
                bucket 6: 30%
                bucket 6: 40%
                bucket 6: 50%
                bucket 6: 60%
                bucket 6: 70%
                bucket 6: 80%
                bucket 6: 90%
                bucket 6: 100%
                Sorting block of length 95 for bucket 6
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 96 for bucket 6
              Getting block 7 of 7
                Reserving size (123) for bucket 7
                Calculating Z arrays for bucket 7
                Entering block accumulator loop for bucket 7:
                bucket 7: 10%
                bucket 7: 20%
                bucket 7: 30%
                bucket 7: 40%
                bucket 7: 50%
                bucket 7: 60%
                bucket 7: 70%
                bucket 7: 80%
                bucket 7: 90%
                bucket 7: 100%
                Sorting block of length 88 for bucket 7
                (Using difference cover)
                Sorting block time: 00:00:00
              Returning block of 89 for bucket 7
              Exited Ebwt loop
              fchr[A]: 0
              fchr[C]: 219
              fchr[G]: 358
              fchr[T]: 469
              fchr[$]: 658
              Exiting Ebwt::buildToDisk()
              Returning from initFromVector
              Wrote 4194838 bytes to primary EBWT file: genome.rev.1.bt2.tmp
              Wrote 172 bytes to secondary EBWT file: genome.rev.2.bt2.tmp
              Re-opening _in1 and _in2 as input streams
              Returning from Ebwt constructor
              Headers:
                  len: 658
                  bwtLen: 659
                  sz: 165
                  bwtSz: 165
                  lineRate: 6
                  offRate: 4
                  offMask: 0xfffffff0
                  ftabChars: 10
                  eftabLen: 20
                  eftabSz: 80
                  ftabLen: 1048577
                  ftabSz: 4194308
                  offsLen: 42
                  offsSz: 168
                  lineSz: 64
                  sideSz: 64
                  sideBwtSz: 48
                  sideBwtLen: 192
                  numSides: 4
                  numLines: 4
                  ebwtTotLen: 256
                  ebwtTotSz: 256
                  color: 0
                  reverse: 1
              Total time for backward call to driver() for mirror index: 00:00:00
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              analysis_type {"__current_case__": 1, "alignment_options": {"L": "28", "N": "0", "__current_case__": 0, "align_mode": {"__current_case__": 1, "align_mode_selector": "local", "score_min_loc": "G,20,8"}, "alignment_options_selector": "yes", "dpad": "15", "gbar": "4", "i": "S,1,1.15", "ignore_quals": false, "n_ceil": "L,0,0.15", "no_1mm_upfront": false, "nofw": false, "norc": false}, "analysis_type_selector": "full", "effort_options": {"D": "15", "R": "2", "__current_case__": 0, "effort_options_selector": "yes"}, "input_options": {"__current_case__": 0, "input_options_selector": "yes", "int_quals": false, "qupto": "100000000", "qv_encoding": "--phred33", "skip": "0", "solexa_quals": false, "trim3": "0", "trim5": "0"}, "other_options": {"__current_case__": 0, "non_deterministic": false, "other_options_selector": "yes", "seed": "0"}, "reporting_options": {"__current_case__": 0, "reporting_options_selector": "no"}, "scoring_options": {"__current_case__": 0, "ma": "2", "mp": "6,2", "np": "1", "rdg_read_extend": "3", "rdg_read_open": "5", "rfg_ref_extend": "3", "rfg_ref_open": "5", "scoring_options_selector": "yes"}}
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              library {"__current_case__": 1, "aligned_file": false, "input_1": {"values": [{"id": 2, "src": "hdca"}]}, "paired_options": {"I": "0", "X": "500", "__current_case__": 0, "dovetail": false, "fr_rf_ff": "--rf", "no_contain": false, "no_discordant": false, "no_mixed": false, "no_overlap": false, "paired_options_selector": "yes"}, "type": "paired_collection", "unaligned_file": true}
              reference_genome {"__current_case__": 1, "own_file": {"values": [{"id": 3, "src": "hda"}]}, "source": "history"}
              rg {"__current_case__": 3, "rg_selector": "do_not_set"}
              sam_options {"__current_case__": 1, "sam_options_selector": "no"}
              save_mapping_stats false
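A note on the Bowtie2 parameters above: in `local` mode, `score_min_loc: "G,20,8"` sets the minimum accepted alignment score as a function of read length, f(x) = 20 + 8·ln(x). A minimal sketch of that function (the read lengths below are illustrative, not taken from this run):

```python
import math

def bowtie2_min_score(read_len, const=20.0, coeff=8.0):
    """Minimum accepted local-alignment score for the score function
    'G,20,8' used above: f(x) = 20 + 8 * ln(x), x = read length."""
    return const + coeff * math.log(read_len)

# With the match bonus --ma 2 also set above, a perfect local
# alignment of an L-bp read scores 2 * L, so typical amplicon
# reads clear this threshold comfortably.
for read_len in (50, 150, 300):
    print(read_len, round(bowtie2_min_score(read_len), 1))
```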
      • Step 3: Filter FASTQ - Minimum size:

        • step_state: scheduled
      • Step 21: toolshed.g2.bx.psu.edu/repos/iuc/samtools_view/samtools_view/1.21+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- )) &&   addmemory=${GALAXY_MEMORY_MB_PER_SLOT:-768} && ((addmemory=addmemory*75/100)) &&        ln -s '/tmp/tmpa0q9y2tx/files/7/6/6/dataset_7663f9da-d3fb-48d3-82e4-833b22b79cc0.dat' infile && ln -s '/tmp/tmpa0q9y2tx/files/_metadata_files/a/d/d/metadata_add066c2-92b6-4ca8-890e-02763bf03fe1.dat' infile.bai &&               samtools view -@ $addthreads -b  -q 30 -f 0 -F 4 -G 0   -o outfile      infile

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              addref_cond {"__current_case__": 0, "addref_select": "no"}
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              mode {"__current_case__": 1, "filter_config": {"cigarcons": null, "cond_expr": {"__current_case__": 0, "select_expr": "no"}, "cond_region": {"__current_case__": 0, "select_region": "no"}, "cond_rg": {"__current_case__": 0, "select_rg": "no"}, "exclusive_filter": ["4"], "exclusive_filter_all": null, "inclusive_filter": null, "library": null, "qname_file": null, "quality": "30", "tag": null}, "output_options": {"__current_case__": 0, "adv_output": {"collapsecigar": false, "readtags": []}, "complementary_output": false, "output_format": {"__current_case__": 2, "oformat": "bam"}, "reads_report_type": "retained"}, "outtype": "selected_reads", "subsample_config": {"subsampling_mode": {"__current_case__": 0, "factor": "1.0", "seed": null, "select_subsample": "fraction"}}}
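The `samtools view -q 30 -F 4` call in this step keeps records with MAPQ ≥ 30 and drops anything with the unmapped bit (0x4) set. A sketch of that filter logic, with made-up (MAPQ, FLAG) pairs for illustration:

```python
FLAG_UNMAPPED = 0x4  # the bit excluded by '-F 4'

def keep(mapq, flag, min_mapq=30, exclude=FLAG_UNMAPPED):
    """Mimic 'samtools view -q 30 -F 4': keep a record only if its
    mapping quality reaches min_mapq and no excluded bit is set."""
    return mapq >= min_mapq and not (flag & exclude)

# Illustrative records: flag 99 = properly paired, mapped forward
# read; flag 4 = unmapped.
records = [(42, 99), (10, 99), (0, 4)]
print([keep(m, f) for m, f in records])  # -> [True, False, False]
```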
      • Step 22: toolshed.g2.bx.psu.edu/repos/fubar/jbrowse2/jbrowse2/3.6.5+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir -p '/tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/data';   cp '/tmp/tmpa0q9y2tx/job_working_directory/000/18/configs/tmpymcmg0bd' '/tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/galaxy.xml';  export JBROWSE2_SOURCE_DIR=$(dirname $(which jbrowse))/../opt/jbrowse2;  python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/fubar/jbrowse2/93fdd696c281/jbrowse2/jbrowse2.py' --jbrowse ${JBROWSE2_SOURCE_DIR}   --outdir '/tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files' '/tmp/tmpa0q9y2tx/job_working_directory/000/18/configs/tmpymcmg0bd';  cp '/tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/index.html' '/tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e.dat';

            Exit Code:

            • 0

            Standard Error:

            • DEBUG:jbrowse:Processing genome
              DEBUG:jbrowse:cd /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files && bgzip /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/data/astrotoma_agassizii_COI.fasta.fasta
              DEBUG:jbrowse:cd /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files && samtools faidx /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/data/astrotoma_agassizii_COI.fasta.fasta.gz
              DEBUG:jbrowse:cd /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files && jbrowse add-assembly --load inPlace --name astrotoma_agassizii_COI.fasta --type bgzipFasta --out /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files --skipCheck data/astrotoma_agassizii_COI.fasta.fasta.gz
              /tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/fubar/jbrowse2/93fdd696c281/jbrowse2/jbrowse2.py:1692: DeprecationWarning: Testing an element's truth value will always return True in future versions.  Use specific 'len(elem)' or 'elem is not None' test instead.
                item.tag: parse_style_conf(item) for item in (track.find("options/style") or [])
              /tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/fubar/jbrowse2/93fdd696c281/jbrowse2/jbrowse2.py:1696: DeprecationWarning: Testing an element's truth value will always return True in future versions.  Use specific 'len(elem)' or 'elem is not None' test instead.
                item.tag: parse_style_conf(item) for item in (track.find("options/style") or [])
              INFO:jbrowse:-----> Processing track Default / Samtools view on dataset 23: filtered alignments (bam, 1 files)
              DEBUG:jbrowse:cd /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files && cp /tmp/tmpa0q9y2tx/files/3/5/3/dataset_3532dc6e-c719-4bcd-a28d-664ec6648fd3.dat /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/data/1b2854c0f868ff4c0c3eb4b60f809316_0_0.bam
              DEBUG:jbrowse:cd /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files && cp /tmp/tmpa0q9y2tx/files/_metadata_files/e/c/b/metadata_ecbcf22c-d98e-41e6-b3aa-242944f8f3ee.dat /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/data/1b2854c0f868ff4c0c3eb4b60f809316_0_0.bam.bai
              DEBUG:jbrowse:cd /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files && jbrowse add-track --name Samtools view on dataset 23: filtered alignments --category Default --target /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files --trackId 1b2854c0f868ff4c0c3eb4b60f809316_0_0 --assemblyNames astrotoma_agassizii_COI.fasta --load inPlace --config {"displays": [{"type": "LinearAlignmentsDisplay", "displayId": "1b2854c0f868ff4c0c3eb4b60f809316_0_0_LinearAlignmentsDisplay"}], "metadata": {"dataset_id": "ec802ce356817933", "dataset_hid": "28", "dataset_size": "8.4 KB", "dataset_edam_format": "<a target=\"_blank\" href=\"http://edamontology.org/format_2572\">bam</a>", "dataset_file_ext": "bam", "history_id": "c1f821c882229aee", "history_user_email": "<a href=\"mailto:[email protected]\">[email protected]</a>", "history_user_id": "1", "history_display_name": "<a target=\"_blank\" href=\"http://localhost:8080/history/view/c1f821c882229aee\">CWL Target History</a>", "metadata_dbkey": "?", "metadata_columns": "12", "metadata_column_names": "[\"QNAME\", \"FLAG\", \"RNAME\", \"POS\", \"MAPQ\", \"CIGAR\", \"MRNM\", \"MPOS\", \"ISIZE\", \"SEQ\", \"QUAL\", \"OPT\"]", "metadata_bam_version": "1.5", "metadata_sort_order": "coordinate", "metadata_read_groups": "[]", "metadata_reference_names": "[\"KY986589.1\"]", "metadata_reference_lengths": "[658]", "metadata_metadata_incomplete": "False", "metadata_bam_index": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "tool_tool_id": "toolshed.g2.bx.psu.edu/repos/iuc/samtools_view/samtools_view/1.21+galaxy0", "tool_tool_version": "1.21+galaxy0", "tool_tool": "<a target=\"_blank\" href=\"http://localhost:8080/datasets/ec802ce356817933/show_params\">toolshed.g2.bx.psu.edu/repos/iuc/samtools_view/samtools_view/1.21+galaxy0</a>"}} data/1b2854c0f868ff4c0c3eb4b60f809316_0_0.bam
              

            Standard Output:

            • Added assembly "astrotoma_agassizii_COI.fasta" to /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/config.json
              Added track with name "Samtools view on dataset 23: filtered alignments" and trackId "1b2854c0f868ff4c0c3eb4b60f809316_0_0" to /tmp/tmpa0q9y2tx/job_working_directory/000/18/outputs/dataset_c45076b5-12d5-4658-bd32-396a84511d4e_files/config.json
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              action {"__current_case__": 0, "action_select": "create"}
              assemblies [{"__index__": 0, "cytobands": null, "defaultLocation": "", "ref_name_aliases": null, "reference_genome": {"__current_case__": 1, "genome": {"values": [{"id": 3, "src": "hda"}]}, "genome_type_select": "history"}, "track_groups": [{"__index__": 0, "category": "Default", "data_tracks": [{"__index__": 0, "data_format": {"__current_case__": 1, "annotation_cond": {"__current_case__": 0, "annotation": {"values": [{"id": 25, "src": "hda"}]}, "annotation_source": "history"}, "data_format_select": "pileup", "jbstyle": {"track_style": {"__current_case__": 0, "display": "LinearAlignmentsDisplay"}}, "metadata": {"galaxy_metadata": true, "metadata_bonus": null}, "track_visibility": "default_on"}}]}]}]
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              jbgen {"enableAnalytics": false, "font_size": "10", "primary_color": "#0d233f", "quaternary_color": "#ffb11d", "secondary_color": "#721e63", "tertiary_color": "#135560"}
      • Step 23: toolshed.g2.bx.psu.edu/repos/iuc/samtools_fastx/samtools_fastx/1.21+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- )) &&  samtools sort -@ $addthreads -n '/tmp/tmpa0q9y2tx/files/3/5/3/dataset_3532dc6e-c719-4bcd-a28d-664ec6648fd3.dat' -T "${TMPDIR:-.}" > input &&   samtools fasta     -f 0   -F 2304   -G 0  input  > output.fasta && ln -s output.fasta output

            Exit Code:

            • 0

            Standard Error:

            • [M::bam2fq_mainloop] discarded 0 singletons
              [M::bam2fq_mainloop] processed 116 reads
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              copy_arb_tags None
              copy_tags false
              dbkey "?"
              exclusive_filter ["256", "2048"]
              exclusive_filter_all None
              idxout_cond {"__current_case__": 0, "idxout_select": "no"}
              inclusive_filter None
              output_fmt_cond {"__current_case__": 1, "output_fmt_select": "fasta"}
              outputs "other"
              read_numbering ""
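The `-F 2304` in the `samtools fasta` command corresponds to the `exclusive_filter ["256", "2048"]` parameter: secondary (0x100) plus supplementary (0x800) alignments are skipped, so only primary alignment lines are converted to FASTA. A small check of that flag arithmetic:

```python
# SAM flag bits excluded by '-F 2304' in the samtools fasta call
SECONDARY = 0x100      # 256: secondary alignment
SUPPLEMENTARY = 0x800  # 2048: supplementary (chimeric) alignment

combined = SECONDARY | SUPPLEMENTARY
print(combined)  # -> 2304

def is_primary(flag):
    """True when neither excluded bit is set, i.e. the record is a
    primary alignment line and would be emitted as FASTA."""
    return not (flag & combined)

print(is_primary(0), is_primary(256), is_primary(2048))
```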
      • Step 24: toolshed.g2.bx.psu.edu/repos/devteam/samtools_idxstats/samtools_idxstats/2.0.7:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • addthreads=${GALAXY_SLOTS:-1} && (( addthreads-- )) &&   ln -s '/tmp/tmpa0q9y2tx/files/3/5/3/dataset_3532dc6e-c719-4bcd-a28d-664ec6648fd3.dat' infile && ln -s '/tmp/tmpa0q9y2tx/files/_metadata_files/e/c/b/metadata_ecbcf22c-d98e-41e6-b3aa-242944f8f3ee.dat' infile.bai &&  samtools idxstats -@ $addthreads infile  > '/tmp/tmpa0q9y2tx/job_working_directory/000/20/outputs/dataset_1cedb3b4-d2fa-400e-9bfb-10634a6924cf.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
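`samtools idxstats` writes one TSV row per reference (name, length, mapped reads, unmapped reads) plus a final `*` row; the workflow's metadata file takes the mapped-read count from here. A parsing sketch — the reference name matches this run, but the counts are invented for illustration:

```python
# One row per reference plus the '*' row for unplaced reads.
idxstats = "KY986589.1\t658\t116\t0\n*\t0\t0\t24\n"

mapped = unmapped = 0
for line in idxstats.splitlines():
    name, length, n_mapped, n_unmapped = line.split("\t")
    mapped += int(n_mapped)
    unmapped += int(n_unmapped)

print(mapped, unmapped)  # -> 116 24
```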
      • Step 25: toolshed.g2.bx.psu.edu/repos/iuc/samtools_depth/samtools_depth/1.21+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpa0q9y2tx/files/3/5/3/dataset_3532dc6e-c719-4bcd-a28d-664ec6648fd3.dat' '0' && ln -s '/tmp/tmpa0q9y2tx/files/_metadata_files/e/c/b/metadata_ecbcf22c-d98e-41e6-b3aa-242944f8f3ee.dat' '0.bai' &&   samtools depth  0   -g 0   -G 1796    > '/tmp/tmpa0q9y2tx/job_working_directory/000/21/outputs/dataset_b5f952d9-dd55-48dd-b5cf-fc1a3fbbeaa2.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              additional_options {"deletions": false, "required_flags": null, "single_read": false, "skipped_flags": ["4", "256", "512", "1024"]}
              all ""
              basequality None
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              cond_region {"__current_case__": 0, "select_region": "no"}
              dbkey "?"
              mapquality None
              maxdepth None
              minlength None
              output_options {"header": false}
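`samtools depth` emits one `ref <TAB> pos <TAB> depth` line per position (here with `-G 1796`, i.e. skipping reads flagged 4 + 256 + 512 + 1024: unmapped, secondary, QC-fail, duplicate). The workflow summarises this table into the average, minimum, and maximum coverage reported in the metadata file. A sketch of that summary over illustrative rows:

```python
# Illustrative depth table; real runs have one row per covered position.
depth_tsv = "KY986589.1\t1\t10\nKY986589.1\t2\t14\nKY986589.1\t3\t12\n"

depths = [int(row.split("\t")[2]) for row in depth_tsv.splitlines()]
avg_depth = sum(depths) / len(depths)
print(min(depths), max(depths), avg_depth)  # -> 10 14 12.0
```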
      • Step 26: toolshed.g2.bx.psu.edu/repos/iuc/megahit/megahit/1.2.9+galaxy2:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • if [[ -n "$GALAXY_MEMORY_MB" ]]; then MEMORY="-m $((GALAXY_MEMORY_MB * 1024))"; fi;  megahit --num-cpu-threads ${GALAXY_SLOTS:-4} -r '/tmp/tmpa0q9y2tx/files/1/b/e/dataset_1bed602d-bd1f-4b12-bb3a-37b8f804ecef.dat' --min-count '2' --k-list '21,29,39,59,79,99,119,141'  --bubble-level '2' --merge-level '20,0.95' --prune-level '2' --prune-depth '2' --disconnect-ratio '0.1' --low-local-ratio '0.2' --cleaning-rounds '5'   --min-contig-len '200' $MEMORY

            Exit Code:

            • 0

            Standard Error:

            • 2025-12-16 17:15:23 - MEGAHIT v1.2.9
              2025-12-16 17:15:23 - Using megahit_core with POPCNT and BMI2 support
              2025-12-16 17:15:23 - Convert reads to binary library
              2025-12-16 17:15:23 - b'INFO  sequence/io/sequence_lib.cpp  :   75 - Lib 0 (/tmp/tmpa0q9y2tx/files/1/b/e/dataset_1bed602d-bd1f-4b12-bb3a-37b8f804ecef.dat): se, 116 reads, 449 max length'
              2025-12-16 17:15:23 - b'INFO  utils/utils.h                 :  152 - Real: 0.0005\tuser: 0.0000\tsys: 0.0019\tmaxrss: 16732'
              2025-12-16 17:15:23 - Start assembly. Number of CPU threads 1 
              2025-12-16 17:15:23 - k list: 21,29,39,59,79,99,119,141 
              2025-12-16 17:15:23 - Memory used: 15095321395
              2025-12-16 17:15:23 - Extract solid (k+1)-mers for k = 21 
              2025-12-16 17:15:23 - Build graph for k = 21 
              2025-12-16 17:15:24 - Assemble contigs from SdBG for k = 21
              2025-12-16 17:15:24 - Local assembly for k = 21
              2025-12-16 17:15:24 - Extract iterative edges from k = 21 to 29 
              2025-12-16 17:15:24 - Build graph for k = 29 
              2025-12-16 17:15:24 - Assemble contigs from SdBG for k = 29
              2025-12-16 17:15:24 - Local assembly for k = 29
              2025-12-16 17:15:24 - Extract iterative edges from k = 29 to 39 
              2025-12-16 17:15:24 - Build graph for k = 39 
              2025-12-16 17:15:24 - Assemble contigs from SdBG for k = 39
              2025-12-16 17:15:24 - Local assembly for k = 39
              2025-12-16 17:15:24 - Extract iterative edges from k = 39 to 59 
              2025-12-16 17:15:24 - Build graph for k = 59 
              2025-12-16 17:15:25 - Assemble contigs from SdBG for k = 59
              2025-12-16 17:15:25 - Local assembly for k = 59
              2025-12-16 17:15:25 - Extract iterative edges from k = 59 to 79 
              2025-12-16 17:15:25 - Build graph for k = 79 
              2025-12-16 17:15:25 - Assemble contigs from SdBG for k = 79
              2025-12-16 17:15:25 - Local assembly for k = 79
              2025-12-16 17:15:25 - Extract iterative edges from k = 79 to 99 
              2025-12-16 17:15:25 - Build graph for k = 99 
              2025-12-16 17:15:25 - Assemble contigs from SdBG for k = 99
              2025-12-16 17:15:25 - Local assembly for k = 99
              2025-12-16 17:15:25 - Extract iterative edges from k = 99 to 119 
              2025-12-16 17:15:25 - Build graph for k = 119 
              2025-12-16 17:15:25 - Assemble contigs from SdBG for k = 119
              2025-12-16 17:15:26 - Local assembly for k = 119
              2025-12-16 17:15:26 - Extract iterative edges from k = 119 to 141 
              2025-12-16 17:15:26 - Build graph for k = 141 
              2025-12-16 17:15:26 - Assemble contigs from SdBG for k = 141
              2025-12-16 17:15:26 - Merging to output final contigs 
              2025-12-16 17:15:26 - 1 contigs, total 669 bp, min 669 bp, max 669 bp, avg 669 bp, N50 669 bp
              2025-12-16 17:15:26 - ALL DONE. Time elapsed: 2.665113 seconds 
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "fasta"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              advanced_section {"bubble_level": "2", "cleaning_rounds": "5", "disconnect_ratio": "0.1", "kmin1pass": false, "low_local_ratio": "0.2", "merge_level": "20,0.95", "nolocal": false, "nomercy": false, "prune_depth": "2", "prune_level": "2"}
              basic_section {"k_mer": {"__current_case__": 0, "k_list": "21,29,39,59,79,99,119,141", "k_mer_method": "klist_method"}, "min_count": "2"}
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_option {"__current_case__": 0, "choice": "single", "single_files": {"values": [{"id": 27, "src": "hda"}]}}
              output_section {"log_file": false, "min_contig_len": "200", "show_intermediate_contigs": false}
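MEGAHIT's final report line above summarises contig lengths (here a single 669 bp contig). The same per-contig lengths can be recovered from any FASTA with a short awk pass; the file and sequences below are synthetic, not the test data:

```shell
# Print the length of each contig in a FASTA, one per line
# (the numbers MEGAHIT aggregates into its min/max/avg/N50 line).
printf '>k141_0\nACGTACGTAC\n>k141_1\nACGT\n' > contigs.fa
awk '/^>/{if(len)print len; len=0; next}{len+=length($0)}END{if(len)print len}' contigs.fa
# prints 10 and 4, one per line
```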
      • Step 27: toolshed.g2.bx.psu.edu/repos/bgruening/add_line_to_file/add_line_to_file/0.1.0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • echo 'rucio' >> '/tmp/tmpa0q9y2tx/job_working_directory/000/23/outputs/dataset_85e9ae2b-4e54-46df-bbe7-25b234421b21.dat' && cat '/tmp/tmpa0q9y2tx/files/1/c/e/dataset_1cedb3b4-d2fa-400e-9bfb-10634a6924cf.dat' >> '/tmp/tmpa0q9y2tx/job_working_directory/000/23/outputs/dataset_85e9ae2b-4e54-46df-bbe7-25b234421b21.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              options "header"
              text_input "rucio"
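Step 27 prepends a literal text line ("rucio" in this test) as a header above a tabular dataset, exactly as the `echo … && cat …` command line shows. A minimal coreutils sketch of the same pattern, with hypothetical file names and header text:

```shell
# Prepend a header line to a tabular file, as in the
# add_line_to_file step: echo the header, then append the body.
printf 'mapped_reads\n' > with_header.tsv   # hypothetical header text
printf '8562\n' > body.tsv                  # hypothetical one-row body
cat body.tsv >> with_header.tsv
# with_header.tsv now holds the header followed by the original rows
```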
      • Step 28: toolshed.g2.bx.psu.edu/repos/iuc/datamash_ops/datamash_ops/1.9+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • datamash  --header-out       min 3 max 3 mean 3 < /tmp/tmpa0q9y2tx/files/b/5/f/dataset_b5f952d9-dd55-48dd-b5cf-fc1a3fbbeaa2.dat > '/tmp/tmpa0q9y2tx/job_working_directory/000/24/outputs/dataset_c6180f2e-1f24-44e1-a877-e72843a8c7db.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              grouping None
              header_in false
              header_out true
              ignore_case false
              narm false
              need_sort false
              operations [{"__index__": 0, "op_column": "3", "op_name": "min"}, {"__index__": 1, "op_column": "3", "op_name": "max"}, {"__index__": 2, "op_column": "3", "op_name": "mean"}]
              print_full_line false
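Step 28 runs datamash to reduce column 3 (per-position depth) to its min, max, and mean. Where datamash is not installed, the same summary can be sketched with awk; the input rows below are synthetic, not the test dataset:

```shell
# min/max/mean of column 3, mirroring `datamash min 3 max 3 mean 3`
printf 'ref\t1\t10\nref\t2\t30\nref\t3\t20\n' > depth.tsv
awk 'NR==1{min=$3; max=$3}
     {sum+=$3; if($3<min)min=$3; if($3>max)max=$3}
     END{printf "%d\t%d\t%.2f\n", min, max, sum/NR}' depth.tsv
# -> 10    30    20.00
```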
      • Step 29: toolshed.g2.bx.psu.edu/repos/lparsons/cutadapt/cutadapt/5.1+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -f -s '/tmp/tmpa0q9y2tx/files/f/6/f/dataset_f6f113c0-faff-4237-b9fc-7babeb323e0f.dat' 'Assembly with MEGAHIT on dataset 30.fa' &&  cutadapt  -j=${GALAXY_SLOTS:-4}   -a file:'/tmp/tmpa0q9y2tx/files/1/c/6/dataset_1c684c4a-e36f-4291-8264-006d511da935.dat'   -g file:'/tmp/tmpa0q9y2tx/files/a/b/f/dataset_abf889ea-a828-4b1f-a8a9-9593249bddb8.dat'    --error-rate=0.2 --times=1 --overlap=20    --action=trim         --minimum-length=1      -o 'out1.fa'  'Assembly with MEGAHIT on dataset 30.fa'

            Exit Code:

            • 0

            Standard Output:

            • This is cutadapt 5.1 with Python 3.12.10
              Command line parameters: -j=1 -a file:/tmp/tmpa0q9y2tx/files/1/c/6/dataset_1c684c4a-e36f-4291-8264-006d511da935.dat -g file:/tmp/tmpa0q9y2tx/files/a/b/f/dataset_abf889ea-a828-4b1f-a8a9-9593249bddb8.dat --error-rate=0.2 --times=1 --overlap=20 --action=trim --minimum-length=1 -o out1.fa Assembly with MEGAHIT on dataset 30.fa
              Processing single-end reads on 1 core ...
              
              === Summary ===
              
              Total reads processed:                       1
              Reads with adapters:                         0 (0.0%)
              
              == Read fate breakdown ==
              Reads that were too short:                   0 (0.0%)
              Reads written (passing filters):             1 (100.0%)
              
              Total basepairs processed:           669 bp
              Total written (filtered):            669 bp (100.0%)
              
              === Adapter Reverse ===
              
              Sequence: AGTGAGTAAACTTCAGGGTGTCCRAARAATCA; Type: regular 3'; Length: 32; Trimmed: 0 times
              
              === Adapter Forward ===
              
              Sequence: AGTGAGTTTCAACAAAACAYAAGGNCATNGG; Type: regular 5'; Length: 31; Trimmed: 0 times
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              adapter_options {"action": "trim", "error_rate": "0.2", "match_read_wildcards": false, "no_indels": false, "no_match_adapter_wildcards": true, "overlap": "20", "revcomp": false, "times": "1"}
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              filter_options {"discard_casava": false, "discard_trimmed": false, "discard_untrimmed": false, "max_average_error_rate": null, "max_expected_errors": null, "max_n": null, "maximum_length": null, "maximum_length2": null, "minimum_length": "1", "minimum_length2": null, "pair_filter": "any"}
              library {"__current_case__": 0, "input_1": {"values": [{"id": 30, "src": "hda"}]}, "r1": {"adapters": [{"__index__": 0, "adapter_source": {"__current_case__": 2, "adapter_file": {"values": [{"id": 18, "src": "hda"}]}, "adapter_source_list": "file"}, "single_noindels": false}], "anywhere_adapters": [], "front_adapters": [{"__index__": 0, "adapter_source": {"__current_case__": 2, "adapter_file": {"values": [{"id": 17, "src": "hda"}]}, "adapter_source_list": "file"}, "single_noindels": false}]}, "type": "single"}
              other_trimming_options {"cut": "0", "cut2": "0", "nextseq_trim": "0", "poly_a": false, "quality_cutoff": "0", "quality_cutoff2": "", "shorten_options": {"__current_case__": 1, "shorten_values": "False"}, "shorten_options_r2": {"__current_case__": 1, "shorten_values_r2": "False"}, "trim_n": false}
              output_selector None
              read_mod_options {"length_tag": null, "rename": null, "strip_suffix": null, "zero_cap": false}
      • Step 30: Show beginning1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • head -n 2 '/tmp/tmpa0q9y2tx/files/8/5/e/dataset_85e9ae2b-4e54-46df-bbe7-25b234421b21.dat' > '/tmp/tmpa0q9y2tx/job_working_directory/000/26/outputs/dataset_b03f7a5e-de2a-4e0f-8448-0f171d399ddb.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              header true
              lineNum "1"
      • Step 4: Filter FASTQ - Maximum size:

        • step_state: scheduled
      • Step 31: toolshed.g2.bx.psu.edu/repos/iuc/seqkit_stats/seqkit_stats/2.10.1+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpa0q9y2tx/files/3/8/b/dataset_38bdf06f-9b3c-4a12-b8f4-660375d56f0b.dat' 'Cutadapt on dataset 19_ 20_ and 33_ Read 1 Output' &&  seqkit stats 'Cutadapt on dataset 19_ 20_ and 33_ Read 1 Output'    --tabular > '/tmp/tmpa0q9y2tx/job_working_directory/000/27/outputs/dataset_6a397858-fedb-43c7-bad1-43fb7c20aeed.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              all false
              basename false
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              skip_err false
              tabular true
      • Step 32: Cut1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • perl '/tmp/tmpa0q9y2tx/galaxy-dev/tools/filters/cutWrapper.pl' '/tmp/tmpa0q9y2tx/files/b/0/3/dataset_b03f7a5e-de2a-4e0f-8448-0f171d399ddb.dat' 'c3' T '/tmp/tmpa0q9y2tx/job_working_directory/000/28/outputs/dataset_0438bd0c-ea67-4937-af30-feea61fcf8b5.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              columnList "c3"
              dbkey "?"
              delimiter "T"
      • Step 33: Show tail1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • set -eo pipefail; ( head -n 1 '/tmp/tmpa0q9y2tx/files/6/a/3/dataset_6a397858-fedb-43c7-bad1-43fb7c20aeed.dat' && tail -n +2 '/tmp/tmpa0q9y2tx/files/6/a/3/dataset_6a397858-fedb-43c7-bad1-43fb7c20aeed.dat' | tail -n 1 ) > '/tmp/tmpa0q9y2tx/job_working_directory/000/29/outputs/dataset_491cb49f-bf58-420e-a429-56f8212636ad.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              header true
              lineNum "1"
      • Step 34: Cut1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • perl '/tmp/tmpa0q9y2tx/galaxy-dev/tools/filters/cutWrapper.pl' '/tmp/tmpa0q9y2tx/files/4/9/1/dataset_491cb49f-bf58-420e-a429-56f8212636ad.dat' 'c8' T '/tmp/tmpa0q9y2tx/job_working_directory/000/30/outputs/dataset_88d7bef4-bea7-4561-9bef-9b6108c61f8a.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              columnList "c8"
              dbkey "?"
              delimiter "T"
      • Step 35: Paste1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • paste '/tmp/tmpa0q9y2tx/files/8/8/d/dataset_88d7bef4-bea7-4561-9bef-9b6108c61f8a.dat' '/tmp/tmpa0q9y2tx/files/c/6/1/dataset_c6180f2e-1f24-44e1-a877-e72843a8c7db.dat' > '/tmp/tmpa0q9y2tx/job_working_directory/000/31/outputs/dataset_d4c75af3-22a8-4bd3-b14c-6a90f3e21b35.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              delimiter "T"
      • Step 36: Paste1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • paste '/tmp/tmpa0q9y2tx/files/0/4/3/dataset_0438bd0c-ea67-4937-af30-feea61fcf8b5.dat' '/tmp/tmpa0q9y2tx/files/d/4/c/dataset_d4c75af3-22a8-4bd3-b14c-6a90f3e21b35.dat' > '/tmp/tmpa0q9y2tx/job_working_directory/000/32/outputs/dataset_893d2474-7586-4ae0-9f26-468396f05959.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              delimiter "T"
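Steps 32 through 36 assemble the one-row metadata table: Cut1 extracts the mapped-read count (c3) from one summary and the consensus length (c8) from the seqkit stats line, then two Paste1 steps glue those single columns onto the datamash depth summary. A coreutils sketch of the final paste, with synthetic single-row inputs standing in for the cut columns:

```shell
# Assemble one metadata row from per-tool summaries, as in steps 32-36.
printf '8562\n'          > mapped.tsv   # column cut from the mapping summary
printf '669\n'           > length.tsv   # column cut from seqkit stats
printf '10\t30\t20.00\n' > depth.tsv    # datamash min/max/mean output
paste mapped.tsv length.tsv depth.tsv > metadata_row.tsv
# -> 8562    669    10    30    20.00
```

The column order here (mapped reads, consensus length, then depth statistics) matches the `1=mapped_reads 2=consensus_length 3=min_depth …` mapping applied by the rename step that follows.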
      • Step 37: toolshed.g2.bx.psu.edu/repos/recetox/table_pandas_rename_column/table_pandas_rename_column/2.2.3+galaxy0:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • python3 '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/recetox/table_pandas_rename_column/3f54cd56a65e/table_pandas_rename_column/table_pandas_rename_column.py' --input_dataset '/tmp/tmpa0q9y2tx/files/8/9/3/dataset_893d2474-7586-4ae0-9f26-468396f05959.dat' 'tabular' --rename 1=mapped_reads 2=consensus_length 3=min_depth 4=max_depth 5=mean_depth --output_dataset '/tmp/tmpa0q9y2tx/job_working_directory/000/33/outputs/dataset_f5ff3eed-6fb5-4bfd-9ee9-74cca15e4269.dat' 'tabular'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              columns_selection [{"__index__": 0, "column": "1", "new_name": "mapped_reads"}, {"__index__": 1, "column": "2", "new_name": "consensus_length"}, {"__index__": 2, "column": "3", "new_name": "min_depth"}, {"__index__": 3, "column": "4", "new_name": "max_depth"}, {"__index__": 4, "column": "5", "new_name": "mean_depth"}]
              dbkey "?"
      • Step 5: Reference sequence:

        • step_state: scheduled
      • Step 6: Forward primer:

        • step_state: scheduled
      • Step 7: Reverse primer:

        • step_state: scheduled
      • Step 8: toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.74+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpa0q9y2tx/files/4/a/8/dataset_4a899d09-4768-4202-9869-e6dfb1a74628.dat' 'Banque-02_S2_L001_R1_001_fastq' && mkdir -p '/tmp/tmpa0q9y2tx/job_working_directory/000/4/outputs/dataset_8514c286-4e45-47d6-9787-45c8b9260e89_files' && fastqc --outdir '/tmp/tmpa0q9y2tx/job_working_directory/000/4/outputs/dataset_8514c286-4e45-47d6-9787-45c8b9260e89_files'   --threads ${GALAXY_SLOTS:-2} --dir ${TEMP:-$_GALAXY_JOB_TMP_DIR} --quiet --extract  --kmers 7 -f 'fastq' 'Banque-02_S2_L001_R1_001_fastq'  && cp '/tmp/tmpa0q9y2tx/job_working_directory/000/4/outputs/dataset_8514c286-4e45-47d6-9787-45c8b9260e89_files'/*/fastqc_data.txt output.txt && cp '/tmp/tmpa0q9y2tx/job_working_directory/000/4/outputs/dataset_8514c286-4e45-47d6-9787-45c8b9260e89_files'/*\.html output.html

            Exit Code:

            • 0

            Standard Output:

            • null
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              adapters None
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              contaminants None
              dbkey "?"
              kmers "7"
              limits None
              min_length None
              nogroup false
      • Step 9: toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.74+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • ln -s '/tmp/tmpa0q9y2tx/files/d/3/d/dataset_d3d260e7-8cd9-481c-bfc2-7811867e7278.dat' 'Banque-02_S2_L001_R2_001_fastq' && mkdir -p '/tmp/tmpa0q9y2tx/job_working_directory/000/5/outputs/dataset_9f3298d9-a805-4996-a791-d37d2c4d0142_files' && fastqc --outdir '/tmp/tmpa0q9y2tx/job_working_directory/000/5/outputs/dataset_9f3298d9-a805-4996-a791-d37d2c4d0142_files'   --threads ${GALAXY_SLOTS:-2} --dir ${TEMP:-$_GALAXY_JOB_TMP_DIR} --quiet --extract  --kmers 7 -f 'fastq' 'Banque-02_S2_L001_R2_001_fastq'  && cp '/tmp/tmpa0q9y2tx/job_working_directory/000/5/outputs/dataset_9f3298d9-a805-4996-a791-d37d2c4d0142_files'/*/fastqc_data.txt output.txt && cp '/tmp/tmpa0q9y2tx/job_working_directory/000/5/outputs/dataset_9f3298d9-a805-4996-a791-d37d2c4d0142_files'/*\.html output.html

            Exit Code:

            • 0

            Standard Output:

            • null
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              adapters None
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              contaminants None
              dbkey "?"
              kmers "7"
              limits None
              min_length None
              nogroup false
      • Step 10: toolshed.g2.bx.psu.edu/repos/iuc/pear/iuc_pear/0.9.6.4:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • pear -f '/tmp/tmpa0q9y2tx/files/4/a/8/dataset_4a899d09-4768-4202-9869-e6dfb1a74628.dat' -r '/tmp/tmpa0q9y2tx/files/d/3/d/dataset_d3d260e7-8cd9-481c-bfc2-7811867e7278.dat' --phred-base 33  --output pear --p-value 0.01 --min-overlap 10 --min-asm-length 50 --min-trim-length 1 --quality-theshold 0 --max-uncalled-base 1.0 --test-method 1 --threads "${GALAXY_SLOTS:-8}" --score-method 2 --cap 40

            Exit Code:

            • 0

            Standard Output:

            •  ____  _____    _    ____ 
              |  _ \| ____|  / \  |  _ \
              | |_) |  _|   / _ \ | |_) |
              |  __/| |___ / ___ \|  _ <
              |_|   |_____/_/   \_\_| \_\
              
              PEAR v0.9.6 [January 15, 2015]
              
              Citation - PEAR: a fast and accurate Illumina Paired-End reAd mergeR
              Zhang et al (2014) Bioinformatics 30(5): 614-620 | doi:10.1093/bioinformatics/btt593
              
              Forward reads file.................: /tmp/tmpa0q9y2tx/files/4/a/8/dataset_4a899d09-4768-4202-9869-e6dfb1a74628.dat
              Reverse reads file.................: /tmp/tmpa0q9y2tx/files/d/3/d/dataset_d3d260e7-8cd9-481c-bfc2-7811867e7278.dat
              PHRED..............................: 33
              Using empirical frequencies........: YES
              Statistical method.................: OES
              Maximum assembly length............: 999999
              Minimum assembly length............: 50
              p-value............................: 0.010000
              Quality score threshold (trimming).: 0
              Minimum read size after trimming...: 1
              Maximal ratio of uncalled bases....: 1.000000
              Minimum overlap....................: 10
              Scoring method.....................: Scaled score
              Threads............................: 1
              
              Allocating memory..................: 200,000,000 bytes
              Computing empirical frequencies....: DONE
                A: 0.308304
                C: 0.192901
                G: 0.198973
                T: 0.299822
                0 uncalled bases
              Assemblying reads: 0%
              Assemblying reads: 100%
              
              Assembled reads ...................: 8,562 / 8,839 (96.866%)
              Discarded reads ...................: 0 / 8,839 (0.000%)
              Not assembled reads ...............: 277 / 8,839 (3.134%)
              Assembled reads file...............: pear.assembled.fastq
              Discarded reads file...............: pear.discarded.fastq
              Unassembled forward reads file.....: pear.unassembled.forward.fastq
              Unassembled reverse reads file.....: pear.unassembled.reverse.fastq
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "73018df2daa211f0a3dd000d3a5ce991"
              cap "40"
              chromInfo "/tmp/tmpa0q9y2tx/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              empirical_freqs false
              library {"__current_case__": 0, "forward": {"values": [{"id": 1, "src": "hda"}]}, "reverse": {"values": [{"id": 2, "src": "hda"}]}, "type": "paired"}
              max_assembly_length "0"
              max_uncalled_base "1.0"
              min_assembly_length "50"
              min_overlap "10"
              min_trim_length "1"
              nbase false
              outputs "assembled"
              pvalue "0.01"
              quality_threshold "0"
              score_method "2"
              test_method "1"
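PEAR's summary above reports 8,562 of 8,839 read pairs assembled (96.866%) and 277 not assembled (3.134%). Those percentages can be re-derived directly from the raw counts in the log:

```shell
# Re-derive PEAR's assembly-rate percentages from its reported counts.
awk 'BEGIN{
  assembled=8562; total=8839; not_assembled=277
  printf "assembled: %.3f%%\n", 100*assembled/total        # -> 96.866%
  printf "not assembled: %.3f%%\n", 100*not_assembled/total # -> 3.134%
}'
```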
    • Other invocation details
      • history_id

        • c1f821c882229aee
      • history_state

        • ok
      • invocation_id

        • c1f821c882229aee
      • invocation_state

        • scheduled
      • workflow_id

        • c1f821c882229aee
