[TASK] Create regression test for main processing/analysis workflow #100

@FlicAnderson

Description

Brief task description: - What needs to be done for this task/feature

  • create a new branch from main v1
  • describe/document the order in which the main scripts need to be run, so I can identify where testing needs to happen and what the inputs/outputs are
  • implement (if not already present) a 'run on a minimum set of repos' option (e.g. run on a supplied repo-list file) in all relevant scripts/functions
  • create a 'test list' file of repo names and add it to the tests/testdata/ folder; potentially copy the commits/issues/etc. data files for those repos into the testdata folder as well.
  • add a test script for the workflow
  • set up the test script to read in the test list of repo names and run on those repos, generating output files
  • compare the output files to the expected files in the test script
  • once the test results match, document this test in the README or other docs.

Does it relate to any other work / issues? - Yes: #97, #96 (I need a regression test to ensure I'm correctly implementing new features and refactoring)

Why is it needed? - Justification details for why this will be time well-spent and how it progresses my project work.
The codebase is pretty complex, and it's tricky to ensure my refactoring hasn't broken anything... I need a simple-ish and quick way of checking that.

How should it be done? - Give a rough idea here, follow up with more details in further tickets as needed.
See points above.

Is your feature request related to a problem? Please describe.
Currently I can't easily make sure that the workflow is ACTUALLY working (or even giving consistent results between changes).
I also don't want to have to re-gather all the data files and run across the whole set of datafiles.
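One way to avoid running across the whole set of data files is the repo-list option mentioned in the task list. A minimal sketch, assuming an argparse-based CLI; the `--repo-list` flag name and both helper functions are illustrative, not the project's actual interface:

```python
# Sketch of a '--repo-list' option so scripts can run on a small supplied
# set of repos instead of the full dataset. Names here are assumptions.
import argparse
from pathlib import Path


def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Run the processing/analysis workflow on selected repos"
    )
    parser.add_argument(
        "--repo-list",
        type=Path,
        default=None,
        help="file with one repo name per line; omit to run on all repos",
    )
    return parser.parse_args(argv)


def select_repos(args, all_repos):
    """Restrict to the repos named in --repo-list, if one was given."""
    if args.repo_list is None:
        return all_repos
    wanted = set(args.repo_list.read_text().split())
    return [r for r in all_repos if r in wanted]
```

The same `--repo-list` file could then double as the 'test list' in tests/testdata/, so the test exercises exactly the code path users would run.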

Describe alternatives you've considered
(none filled in)

How will I know when it's done? - When should this task/feature be considered 'finished'?
When I can run a pytest file to test the 'all the things' processing and analysis workflows, and the test passes when comparing v1.0 results against this new branch's results.

Any other details of relevance? - Extra info, relevant links, prioritisation discussion etc.

Metadata

Assignees

Labels

documentation: Improvements or additions to documentation

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests