Refactor nf-tests #161
Conversation
As far as I understand, stub tests are useful for mocking specific modules if they take too long to run. Here, however, we test both the real module and the stubbed version (see the sketch after these points), which I think is overkill.
Testing `--output-chunk-reads` can be done in pixelator instead
These tests are almost identical to the pipeline-level tests.
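For reference, a module-level stub test in nf-test looks roughly like the sketch below (the module name and inputs are made up); the only difference from the regular test is the `-stub` option, which makes Nextflow run the process's `stub:` block instead of the real command:

```groovy
nextflow_process {

    name "Test Process EXAMPLE_TOOL"   // hypothetical module
    script "../main.nf"
    process "EXAMPLE_TOOL"

    test("example_tool - stub") {

        // run the `stub:` block instead of the real command
        options "-stub"

        when {
            process {
                """
                input[0] = [ [ id: 'test' ], file('test_R1.fastq.gz') ]
                """
            }
        }

        then {
            assertAll(
                { assert process.success },
                { assert snapshot(process.out).match() }
            )
        }
    }
}
```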
johandahlberg left a comment:
This looks good to me. However, I don't know if there are any nf-core standards that require us to have separate stub tests and/or subworkflow tests. Perhaps we should check on that on Slack, so we don't end up having to add them back when we do a release?
While I fully agree that a speedup is needed, I think we should check that we are not removing useful tests.

On the module level we follow the nf-core/modules test requirements, which require stub tests to be present. Stubs are useful for quickly trying out workflow logic without actually running the tools. The module-level stub tests are fast anyway; it is mainly the startup/teardown logic you pay for in each test.

I also would not remove the subworkflow tests, even though they indeed largely overlap with the full pipeline tests. Being able to run the subworkflows independently through the subworkflow unit tests is very handy during development. On the pipeline level there are indeed multiple tests, because the output structure looks quite different when using …

I think a big win would be to specialize the test data for each module test. Right now we use a rather big subset (300k reads) that can run through the whole pipeline. By specializing the subset for just that step, the processing time can be cut by quite a bit. I also think further subsampling of the minimal full-run sample can help a lot here.

Instead of removing tests that are useful only in selected development scenarios, we can also use tags to exclude those from running on each PR.
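For example, a subworkflow test could be tagged roughly like this (the subworkflow name and tag names are just illustrative):

```groovy
nextflow_workflow {

    name "Test Subworkflow EXAMPLE_SUBWORKFLOW"   // hypothetical subworkflow
    script "../main.nf"
    workflow "EXAMPLE_SUBWORKFLOW"

    // suite-level tags; a run can then be restricted to matching tests
    tag "subworkflow"
    tag "dev_only"

    test("runs on the minimal subset") {
        when {
            workflow {
                """
                input[0] = Channel.fromPath('minimal_subset.fastq.gz')
                """
            }
        }
        then {
            assert workflow.success
        }
    }
}
```

A CI job (or a local run) can then pick up only matching tests with `nf-test test --tag <name>`.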
To me the main issue is that the extensive snapshots can now require a lot of …
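For context, the assertion part of each such test looks roughly like this (a sketch; the names are illustrative), and every channel captured this way ends up in the `.nf.test.snap` file:

```groovy
test("pipeline - small input") {
    then {
        assertAll(
            { assert workflow.success },
            // all captured channels are written to the snapshot file and must be
            // regenerated (e.g. with `nf-test test --update-snapshot`) whenever
            // the module/subworkflow outputs change
            { assert snapshot(workflow.out).match() }
        )
    }
}
```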
Wrapping up what we discussed today and what needs to be done:
I'm gonna close this PR and make a new one with the changes above. Thanks for the good discussions today!
The nf-tests take a really long time to complete (ca. 40 min locally). This is too long to comfortably iterate on and develop new features.
This PR does the following:
PR checklist
- [ ] Make sure your code lints (`nf-core pipelines lint`).
- [ ] Ensure the test suite passes (`nextflow run . -profile test,docker --outdir <OUTDIR>`).
- [ ] Check for unexpected warnings in debug mode (`nextflow run . -profile debug,test,docker --outdir <OUTDIR>`).
- [ ] `CHANGELOG.md` is updated.