Description
What's needed?
We need to make sure that the examples in examples/ and in docstrings really do work. For now only examples/ is being linted, which is a start, but there could still be issues with them when run.
Proposed solution
We probably need a script to extract the docstring examples and put them in a shape where they can be run (see the sketch after this list):
- We need to be able to specify which pip dependencies need to be installed (by default only the current package should be installed)
- We need to wrap examples in a (probably async) main function.
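A minimal sketch of what the extraction step could look like, assuming docstring examples live in fenced python code blocks; the paths, module layout and helper names here are illustrative assumptions, not existing tooling:

````python
# Sketch: extract fenced python blocks from docstrings and wrap them so they
# can be run as standalone scripts. All paths and names are illustrative.
import ast
import re
from pathlib import Path

CODE_BLOCK_RE = re.compile(r"```python\n(.*?)```", re.DOTALL)


def extract_docstring_examples(source: str) -> list[str]:
    """Return the code of every fenced python block found in any docstring."""
    examples: list[str] = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Module, ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            docstring = ast.get_docstring(node)
            if docstring:
                examples += [match.group(1) for match in CODE_BLOCK_RE.finditer(docstring)]
    return examples


def wrap_in_async_main(example: str) -> str:
    """Indent the example into an async main() so `await` can be used directly."""
    body = "\n".join("    " + line for line in example.splitlines())
    return f"import asyncio\n\n\nasync def main() -> None:\n{body}\n\n\nasyncio.run(main())\n"


if __name__ == "__main__":
    out_dir = Path("generated_examples")
    out_dir.mkdir(exist_ok=True)
    for path in Path("src").rglob("*.py"):
        for i, example in enumerate(extract_docstring_examples(path.read_text())):
            (out_dir / f"{path.stem}_{i}.py").write_text(wrap_in_async_main(example))
````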
We probably also need a script to control how the examples are run (a sketch follows after this list):
- To speed things up, examples could be run in parallel, as most of the time they'll probably be waiting for data from the microgrid API.
- We should have a timer watching for the example to finish, and kill it if it times out (and treat this as a failure)
- We probably need some other basic validation technique. Maybe assert that the script produced at least X lines of output?
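A possible shape for such a runner, assuming the examples are plain Python scripts in examples/; the timeout and the minimum-output-lines check are placeholder values to be tuned:

```python
# Sketch: run example scripts concurrently, each with a timeout and a minimal
# output check. The limits below are illustrative placeholders.
import asyncio
import sys
from pathlib import Path

TIMEOUT_SECONDS = 120      # kill an example that runs longer than this
MIN_OUTPUT_LINES = 3       # basic sanity check: did it print anything useful?


async def run_example(path: Path) -> bool:
    proc = await asyncio.create_subprocess_exec(
        sys.executable, str(path),
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    try:
        stdout, _ = await asyncio.wait_for(proc.communicate(), TIMEOUT_SECONDS)
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
        print(f"TIMEOUT {path}")
        return False
    lines = stdout.decode().splitlines()
    ok = proc.returncode == 0 and len(lines) >= MIN_OUTPUT_LINES
    print(f"{'OK' if ok else 'FAIL'} {path} ({len(lines)} lines of output)")
    return ok


async def main() -> None:
    results = await asyncio.gather(*(run_example(p) for p in Path("examples").glob("*.py")))
    sys.exit(0 if all(results) else 1)


if __name__ == "__main__":
    asyncio.run(main())
```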
Alternatively, we can adapt the examples to run as pytest tests; then at least the parallelization and timeout parts are solved (via pytest-xdist and pytest-timeout), as shown in the sketch below.
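If we go the pytest route, each example could be wrapped roughly like this, with pytest-xdist (`pytest -n auto`) giving parallel runs and pytest-timeout (`--timeout=120`) the per-test limit; the output-lines threshold is again just a placeholder:

```python
# Sketch: wrap each example script as a parametrized pytest test.
# Run with e.g. `pytest -n auto --timeout=120` so pytest-xdist parallelizes
# the tests and pytest-timeout kills the ones that hang.
import subprocess
import sys
from pathlib import Path

import pytest

EXAMPLES = sorted(Path("examples").glob("*.py"))


@pytest.mark.parametrize("example", EXAMPLES, ids=lambda p: p.name)
def test_example_runs(example: Path) -> None:
    result = subprocess.run(
        [sys.executable, str(example)],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stdout + result.stderr
    # Basic sanity check: the example should have produced some output.
    assert len(result.stdout.splitlines()) >= 3
```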
We probably need to adapt the examples:
- Some examples run forever, so they should be adapted to only run for a fixed amount of time (maybe 1 minute?)
- We probably want to keep running forever as an option, as it might be useful when running the examples manually (a sketch of both modes follows after this list)
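One way the examples themselves could support both modes is a simple command-line option, where the default keeps today's run-forever behaviour; the option name and the loop body below are illustrative:

```python
# Sketch: give an example a --run-for option while keeping run-forever as the
# default for manual use. The loop is a stand-in for the example's real work.
import argparse
import asyncio


async def example_loop() -> None:
    while True:
        # Placeholder for the real work, e.g. waiting for microgrid data.
        await asyncio.sleep(1)
        print("received sample")


async def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--run-for", type=float, default=0.0,
        help="Seconds to run before exiting (0 = run forever, for manual use).",
    )
    args = parser.parse_args()
    if args.run_for > 0:
        try:
            await asyncio.wait_for(example_loop(), timeout=args.run_for)
        except asyncio.TimeoutError:
            pass  # Reaching the time limit is the expected, successful outcome.
    else:
        await example_loop()


if __name__ == "__main__":
    asyncio.run(main())
```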
We need a workflow to run this:
- Run the examples only for push workflow events: since these examples depend on the microgrid sandbox, we shouldn't run them for every PR, as that would probably be too flaky and too slow. If running for every push is still too flaky, we can run them nightly. The most important thing is that we are sure they can run before a release.
- Configure the workflow to also time out after a reasonable amount of time, based on how long we expect each example to take.
- Maybe we can make a generic workflow that takes the example to run as an input, and spawn a job for each example from another orchestrating workflow (a rough sketch follows below).
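A rough sketch of the generic-workflow idea, as a reusable workflow that gets the example path from an orchestrating workflow; all names, versions and timeouts here are assumptions:

```yaml
# Sketch: reusable workflow that runs a single example; the orchestrating
# workflow passes in which example to run. Names and timeouts are illustrative.
name: Run example

on:
  workflow_call:
    inputs:
      example:
        description: "Path of the example script to run"
        required: true
        type: string

jobs:
  run-example:
    runs-on: ubuntu-latest
    timeout-minutes: 10   # fail instead of hanging if the example never finishes
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e .
      - run: python ${{ inputs.example }}
```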
Use cases
No response
Alternatives and workarounds
No response
Additional context
No response