Run and report full benchmarks on sensor/formula releases #3

Description

@Inkedstinct

Current benchmarks aim at evaluating the stability of the tools.

In the future, a proposed automation could be:

  • At first, run a given set of benchmarks depending on the release type
    • For example: full set on a major release, mandatory set on a minor release, no set on a fix release
    • Goal: automatic reporting to identify issues, non-blocking at first until the reporting is satisfying
  • Then, integrate this run into the release CI as a blocking job, where the success criterion is "no significant degradation of $METRICS" (a sketch of such a gate follows this list)
    • Goal: make stability (and any further recorded metrics) part of the elements checked by the CI
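
A minimal sketch of what such a CI gate could look like, assuming a Python helper script: the `run-benchmarks` command, the metric names, the degradation thresholds and the baseline-file format are all placeholders, not the project's actual tooling.

```python
"""Hypothetical benchmark gate for releases.

Assumptions (not the project's real API): a `run-benchmarks` CLI that prints
the recorded metrics as JSON, a baseline JSON file with the same metrics,
and the mapping of release types to benchmark sets described in this issue.
"""
import json
import subprocess
import sys

# Which benchmark set to run for each release type (mapping proposed above).
BENCHMARK_SETS = {
    "major": "full",       # full set on major releases
    "minor": "mandatory",  # mandatory set on minor releases
    "fix": None,           # no benchmarks on fix releases
}

# Maximum accepted relative degradation per metric (placeholder values).
MAX_DEGRADATION = {
    "stability": 0.05,
}


def run_benchmarks(benchmark_set: str) -> dict:
    """Run the given benchmark set and return {metric: value}."""
    out = subprocess.run(
        ["run-benchmarks", "--set", benchmark_set, "--json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)


def check_degradation(baseline: dict, current: dict) -> list[str]:
    """Return one message per metric that degraded beyond its threshold."""
    failures = []
    for metric, limit in MAX_DEGRADATION.items():
        if metric not in baseline or metric not in current:
            continue
        # Relative drop with respect to the baseline (higher value = better).
        degradation = (baseline[metric] - current[metric]) / baseline[metric]
        if degradation > limit:
            failures.append(f"{metric}: degraded by {degradation:.1%} (limit {limit:.0%})")
    return failures


def main(release_type: str, baseline_file: str) -> int:
    benchmark_set = BENCHMARK_SETS.get(release_type)
    if benchmark_set is None:
        print(f"No benchmarks required for a '{release_type}' release.")
        return 0
    current = run_benchmarks(benchmark_set)
    with open(baseline_file) as f:
        baseline = json.load(f)
    failures = check_degradation(baseline, current)
    for failure in failures:
        print(f"FAIL {failure}")
    # Non-blocking at first: switch to `return 1 if failures else 0`
    # once the automatic reporting is considered satisfying.
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```

The release CI would call this script with the release type and the previous release's metrics file; while reporting is still being tuned it only prints the degradations, and flipping the final return value turns it into the blocking job described above.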
