
Add logic to validate the outcome of the examples #37

@jluethi

Currently, we're checking that the examples run and that the images and segmentations look appropriate visually. We'll likely still change some parameters over time, so I wouldn't expect to fully reproduce the same results (e.g. the segmentation parameters aren't optimized for all examples yet, and it makes sense to change them).

It would nevertheless make sense to have a script we can run for some examples (e.g. just example 01 to start with) that checks some of the content of the results.

Ideas for what to check (sketched in code below):

  • Is the OME-Zarr structure still the same? (How do we serialize the structure?)
  • Do we get the same number of segmented objects (unique labels, or rows in the measurement table)?
  • Either compare the full measurement tables or some summary statistics (for 01 it's feasible to check against the expected table directly; that wouldn't make sense for larger examples).
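
A minimal sketch of what such a script could look like, assuming Python with zarr, dask and anndata available. The internal paths (`labels/nuclei/0`, `tables/nuclei`), the reference snapshot locations and the expected object count are all hypothetical placeholders, not the actual layout of example 01:

```python
import json
from pathlib import Path

import anndata as ad
import dask.array as da
import numpy as np
import zarr

# Hypothetical locations: adjust to wherever example 01 writes its output
# and wherever the reference snapshots would be committed.
ZARR_URL = "output/example_01.zarr"
REFERENCE_DIR = Path("tests/reference/example_01")


def serialize_structure(group: zarr.Group, prefix: str = "") -> dict:
    """Walk the OME-Zarr hierarchy and record each array's path, shape and
    dtype -- one possible answer to 'how do we serialize the structure':
    this dict can be stored as JSON and diffed against future runs."""
    structure = {}
    for name, arr in group.arrays():
        structure[prefix + name] = {"shape": list(arr.shape), "dtype": str(arr.dtype)}
    for name, subgroup in group.groups():
        structure.update(serialize_structure(subgroup, f"{prefix}{name}/"))
    return structure


def check_structure() -> None:
    group = zarr.open_group(ZARR_URL, mode="r")
    expected = json.loads((REFERENCE_DIR / "structure.json").read_text())
    assert serialize_structure(group) == expected, "OME-Zarr structure changed"


def check_object_count(expected_count: int) -> None:
    # Count unique labels in the segmentation, excluding background (0).
    # Computing the full label image is fine for a small example like 01,
    # but would be too slow for larger ones.
    labels = da.from_zarr(ZARR_URL, component="labels/nuclei/0").compute()
    n_objects = len(np.unique(labels)) - 1
    assert n_objects == expected_count, f"expected {expected_count} objects, found {n_objects}"


def check_measurements() -> None:
    # For example 01 the table is small enough to compare in full; larger
    # examples could compare per-column summary statistics instead.
    adata = ad.read_zarr(f"{ZARR_URL}/tables/nuclei")
    expected = ad.read_zarr(REFERENCE_DIR / "measurements.zarr")
    assert adata.n_obs == expected.n_obs, "number of measured objects changed"
    np.testing.assert_allclose(adata.X, expected.X, rtol=1e-5)


if __name__ == "__main__":
    check_structure()
    check_object_count(expected_count=42)  # placeholder count
    check_measurements()
    print("All validation checks passed")
```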

These would then be a bit like integration tests, but we probably don't want to run them at every commit. It could still be valuable to run them, e.g., before new releases.
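
One lightweight way to get that "not at every commit" behavior would be to put the checks behind a pytest marker that's skipped unless explicitly requested. This is a standard pytest pattern, sketched below; the `validation` marker and `--run-validation` flag are made-up names for this example:

```python
# conftest.py -- a sketch, assuming the checks are wrapped as pytest tests.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--run-validation",
        action="store_true",
        default=False,
        help="run the example-output validation checks",
    )


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "validation: slow end-to-end checks of example outputs"
    )


def pytest_collection_modifyitems(config, items):
    # Skip validation tests unless --run-validation is given, so a plain
    # `pytest` run (e.g. per commit) never executes them.
    if config.getoption("--run-validation"):
        return
    skip = pytest.mark.skip(reason="needs --run-validation")
    for item in items:
        if "validation" in item.keywords:
            item.add_marker(skip)
```

A test decorated with `@pytest.mark.validation` then only runs when someone invokes `pytest --run-validation`, which could become part of a release checklist.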
