
Testing across a range of config options #16

@liz-is

Description


It would be good to have a thorough set of end-to-end tests that use different config parameters, to make it easier to spot accidental breaking changes and/or weird interactions between options. See tests/_explanation_test.py for an example that uses snapshot testing to ensure code changes don't introduce unintended changes in the output, and that also checks the full workflow runs with the given set of config parameters.
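For reference, here is a minimal sketch of that snapshot-testing pattern, assuming the syrupy pytest plugin (which provides a `snapshot` fixture); `run_workflow` and its config dict are hypothetical stand-ins for the real entry point exercised in tests/_explanation_test.py:

```python
# Minimal snapshot-testing sketch; assumes the syrupy pytest plugin is installed.
# `run_workflow` and its config dict are hypothetical placeholders for the
# project's real entry point.
def test_explanation_output_unchanged(snapshot):
    result = run_workflow({"model_format": "onnx"})  # hypothetical entry point
    # The first run records the output to a snapshot file; later runs fail
    # if the output drifts from what was recorded.
    assert result == snapshot
```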

What parameters and parameter values should we focus on for this? Some initial thoughts (a parametrised test sketch follows the list):

  • different model input approaches: onnx, pytorch
  • different explanation strategies: global, spatial
  • check that the --analyse option works for both onnx and pytorch (requires a bugfix; see "Analyse broken for onnx format" #1)
  • check that all plot types work (though it may be difficult to ensure that results are consistent)
  • different distribution types for splitting boxes - at the moment probably only uniform works
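One way to cover these combinations would be a parametrised end-to-end test that runs the full workflow for each config and compares the result to a snapshot. A rough sketch, again assuming syrupy and a hypothetical `run_workflow` entry point and config keys:

```python
import itertools

import pytest

# Option values taken from the list above; the config keys and `run_workflow`
# are hypothetical placeholders for the project's real API.
MODEL_FORMATS = ["onnx", "pytorch"]
STRATEGIES = ["global", "spatial"]


@pytest.mark.parametrize(
    "model_format, strategy",
    list(itertools.product(MODEL_FORMATS, STRATEGIES)),
)
def test_workflow_end_to_end(model_format, strategy, snapshot, tmp_path):
    """Run the full workflow for one config combination and snapshot the output."""
    config = {
        "model_format": model_format,
        "explanation_strategy": strategy,
        "output_dir": str(tmp_path),  # pytest's built-in temporary directory
    }
    result = run_workflow(config)  # hypothetical entry point
    # Fails if a code change alters the output for this combination.
    assert result == snapshot
```

This keeps each combination as a separate test case, so a failure points directly at the config values that broke.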
