It would be good to have a thorough set of end-to-end tests that use different config parameters, to make it easier to spot accidental breaking changes and/or weird interactions between options. See tests/_explanation_test.py for an example that uses snapshot testing to ensure code changes don't introduce unintended changes in the output, as well as checking that the full workflow runs with the given set of config parameters.
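As a rough illustration of that pattern, an end-to-end snapshot test could look something like the sketch below. Note that `run_workflow`, the config keys, and the syrupy-style `snapshot` fixture are placeholders/assumptions, not the actual API of this repo:

```python
# Rough sketch only: `run_workflow` and the config keys are placeholders,
# not the actual API of this repository.


def test_end_to_end_default_config(snapshot, tmp_path):
    config = {
        "model_format": "onnx",   # assumed option name
        "strategy": "global",     # assumed option name
        "output_dir": tmp_path,
    }
    # Run the full workflow, then compare a serialized form of the output
    # against a stored snapshot (e.g. syrupy's `snapshot` fixture), so any
    # unintended change in the result fails the test.
    result = run_workflow(config)  # placeholder for the package entry point
    assert result.to_dict() == snapshot
```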
What parameters and parameter values should we focus on for this? Some initial thoughts (a rough parametrization sketch follows the list):
- different model input approaches: onnx, pytorch
- different explanation strategies: global, spatial
- check that the `--analyse` option works for both onnx and pytorch (requires a bugfix, see Analyse broken for onnx format #1)
- check that all plot types work (but it may be difficult to ensure that results are consistent)
- different distribution types for splitting boxes - at the moment probably only uniform works
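One way to cover this matrix without duplicating test code would be to parametrize a single end-to-end test over the options. The option names and values below are assumptions based on the list above, standing in for whatever the config actually exposes:

```python
# Sketch only: option names/values and `run_workflow` are assumptions
# based on the list above, not the repository's real config schema.
import pytest


@pytest.mark.parametrize("model_format", ["onnx", "pytorch"])
@pytest.mark.parametrize("strategy", ["global", "spatial"])
@pytest.mark.parametrize("distribution", ["uniform"])  # extend once other distributions work
def test_end_to_end_matrix(model_format, strategy, distribution, snapshot, tmp_path):
    # Stacked parametrize decorators give the full cross-product of options,
    # so each combination gets its own snapshot and its own test result.
    config = {
        "model_format": model_format,
        "strategy": strategy,
        "distribution": distribution,
        "output_dir": tmp_path,
    }
    result = run_workflow(config)  # placeholder for the package entry point
    assert result.to_dict() == snapshot
```

Each parameter combination would then be tracked by its own snapshot, which should make interactions between options (e.g. spatial explanations with an onnx model) show up as individual test failures rather than being hidden inside one big test.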