Currently, we're checking that the examples pass and that the images and segmentation look appropriate visually. We'll likely still change some parameters over time, so I wouldn't expect to fully reproduce the same results (e.g. the segmentation parameters aren't optimized for all examples yet, so it makes sense to change them).
It would nevertheless make sense to have a script we can run for some examples (e.g. just example 01 to start with) that checks some of the content of the results.
Ideas for what to check:
- Is the OME-Zarr structure still the same (how do we serialize the structure?)
- Do we get the same number of segmented objects (unique labels, or the number of rows in the measurement table)?
- Either compare full measurement tables or some summary statistics (for example 01 it's feasible to just check against the expected table; that wouldn't make sense for larger examples)
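A minimal sketch of what such checks could look like, assuming plain NumPy and a directory-backed OME-Zarr store (all function names, the table layout, and the tolerance are hypothetical, not part of any existing script):

```python
import json
import os

import numpy as np


def serialize_structure(zarr_path):
    """Serialize the on-disk OME-Zarr hierarchy as a sorted JSON list of
    relative paths, so the expected structure can be stored and diffed."""
    paths = []
    for root, dirs, files in os.walk(zarr_path):
        for name in dirs + files:
            rel = os.path.relpath(os.path.join(root, name), zarr_path)
            paths.append(rel.replace(os.sep, "/"))
    return json.dumps(sorted(paths))


def check_label_count(label_image, expected_count):
    """Compare the number of segmented objects (unique non-zero labels)."""
    n_labels = len(np.unique(label_image[label_image > 0]))
    return n_labels == expected_count


def check_summary_stats(measurements, expected_stats, rtol=1e-3):
    """Compare per-column mean/std of a measurement table against stored
    expectations, within a relative tolerance (parameters may still
    change slightly over time)."""
    for column, (exp_mean, exp_std) in expected_stats.items():
        values = np.asarray(measurements[column], dtype=float)
        if not np.isclose(values.mean(), exp_mean, rtol=rtol):
            return False
        if not np.isclose(values.std(), exp_std, rtol=rtol):
            return False
    return True
```

For a small example like 01, the full table could instead be compared directly (e.g. with `np.allclose` on the numeric columns) rather than via summary statistics.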
These would then be a bit like integration tests, but we probably don't want to run them at every commit. They could still be valuable to run, e.g., before new releases.
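One common way to keep such tests out of the regular run is a pytest marker that is only selected explicitly, e.g. before a release. A sketch (the marker name and test body are hypothetical):

```python
import pytest


@pytest.mark.regression
def test_example_01_regression():
    # Placeholder assertion; a real check would run example 01 and
    # compare its outputs against the stored expectations.
    assert True
```

Running `pytest -m regression` would then select only these checks, while the default `pytest` invocation skips them (after registering the marker and a corresponding `-m "not regression"` default in the pytest configuration).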