HTTP REST API for Protoplaster server mode #9
Conversation
The branch was force-pushed from f8f1659 to eece06e.
One way that may make this a non-issue is to let our config files consist of other config files. That way, if we want to run one or two tests out of a test suite, we just make a config file for those one or two tests, and we don't duplicate configuration.
I can see how this would be useful for debugging (editing a specific parameter on a webpage, for example). However, I don't think we need this feature for production-level testing, since in those tests we're going to want to be very explicit about the test configs we're running.
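A minimal sketch of how such composition could work on the server side, assuming a hypothetical top-level includes: list naming other config files (the key name, merge order, and helpers here are illustrative, not part of the proposal):

import yaml
from pathlib import Path

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge two dicts; 'override' wins on conflicts."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

def load_config(path: Path, config_dir: Path) -> dict:
    """Load a YAML config and merge in any configs listed under 'includes:'."""
    with open(path) as f:
        config = yaml.safe_load(f) or {}
    merged: dict = {}
    # Included files are merged first, so the including file takes precedence.
    for name in config.pop("includes", []):
        merged = deep_merge(merged, load_config(config_dir / name, config_dir))
    return deep_merge(merged, config)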
@test_runs_blueprint.route("/api/v1/test-runs/<int:identifier>/report")
def fetch_test_run_report(identifier: int):
Our test outputs are going to be more than a boolean pass/fail. We should be prepared to associate each test with its own dataset and pull that dataset when we read out the test report.
Something folks are concerned about is building boards and running tests at scale. There might be a situation where our performance is slowly degrading, so we don't just want a pass/fail test report from each board. An example of this would be eyescan test data for something like a JESD link.
One of the challenging parts of this is that we aren't fully aware of all the tests we're going to need to run at a production level right now. This information is likely going to reveal itself once we finish testing multiple boards. But we want the hooks in place for it.
We assume no proper database will be used; everything should operate on the files available on the device.
The comment above may be related to this concern. We'll have more than just pass/fail coming out of these tests.
This should not be an issue: we could let the tests generate arbitrary files, which could be listed via the API or simply bundled in a single archive and made available for download once the test is complete. Would that be a viable solution?
Yes, I think the test summary CSV should point to the collateral for each test.
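A sketch of what the archive variant discussed above could look like, reusing the blueprint name from the snippet earlier in this thread; the output directory layout, route path, and helpers are assumptions, not the actual implementation:

import io
import zipfile
from pathlib import Path

from flask import abort, send_file

# Hypothetical location where each run writes its collateral
# (summary CSV, eyescan dumps, logs, ...).
OUTPUT_ROOT = Path("/var/lib/protoplaster/test-runs")

# Assumes the test_runs_blueprint from the snippet above is in scope.
@test_runs_blueprint.route("/api/v1/test-runs/<int:identifier>/artifacts")
def fetch_test_run_artifacts(identifier: int):
    run_dir = OUTPUT_ROOT / str(identifier)
    if not run_dir.is_dir():
        abort(404)
    # Bundle every file produced by the run into a single in-memory zip.
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for artifact in run_dir.rglob("*"):
            if artifact.is_file():
                archive.write(artifact, arcname=str(artifact.relative_to(run_dir)))
    buffer.seek(0)
    return send_file(
        buffer,
        mimetype="application/zip",
        as_attachment=True,
        download_name=f"test-run-{identifier}-artifacts.zip",
    )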
| "run_id": 2, | ||
| "config_name": "config2.yaml" | ||
| "status": "running", | ||
| "created": "Mon, 25 Aug 2025 16:58:35 +0200", |
Date is good metadata, but there may be other metadata associated with a test that we'll want to record. The first thing that comes to mind is the Git SHA of the bitstream. A separate Git SHA associated with the current OS version is another one.
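For illustration only, the test-run object above could grow a metadata block along these lines (the field names and placeholder values are hypothetical):

# Hypothetical shape of a test-run entry with extra metadata attached.
test_run = {
    "run_id": 2,
    "config_name": "config2.yaml",
    "status": "running",
    "created": "Mon, 25 Aug 2025 16:58:35 +0200",
    "metadata": {
        "bitstream_sha": "<git sha of the bitstream>",  # placeholder
        "os_sha": "<git sha of the OS/BSP version>",    # placeholder
    },
}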
Thanks for the input @bkueffle - we came up with some changes to how the test configs could work to better fit your use case:
So as an example we could have 3 yamls:

base.yml:
base:
  tests:
    simple_test:
      i2c:
        - bus: 0
          devices:
            - name: "Sensor name"
              address: 0x1C
      gpio:
        - number: 20
          value: 1
  metadata:
    bitstream_sha:
      - name: "bitstream-sha"
      - run: "cat /dev/bitstream-sha"
    bsp_sha:
      - name: "bsp-sha"
      - run: "cat /etc/bsp-sha"

camera.yml:
base:
  tests:
    camera_test:
      - device: "/dev/video0"
        camera_name: "vivid"
        driver_name: "vivid"

test-suites.yml:
includes:
  - base.yml
  - camera.yml
base:
  test-suites:
    simple-test-suite:
      tests: simple_test
      metadata: bitstream_sha bsp_sha
    extended-test-suite:
      tests: simple_test camera_test
      metadata: bitstream_sha bsp_sha

The API would be changed so that triggering a test would require selecting the config where the test suites are defined and providing the name of the desired test suite.
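Under that scheme, triggering a run could look roughly like this (the host, port, and request field names below are assumptions, since the proposal does not pin them down):

import requests

BASE_URL = "http://localhost:5000/api/v1"  # assumed server address

# Select the config that defines the test suites and name the desired suite.
response = requests.post(
    f"{BASE_URL}/test-runs",
    json={
        "config_name": "test-suites.yml",
        "test_suite": "extended-test-suite",  # hypothetical field name
    },
)
response.raise_for_status()
print(response.json())  # e.g. the new run's identifier and initial state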
The branch was force-pushed from eece06e to 9432e6d.
The API has been implemented and merged outside of this PR; closing.
This PR contains a proposal of an HTTP REST API for remote control of test execution. The goal is to enable configuration upload, test triggering, progress monitoring, and retrieval of test reports.
API Overview
Configuration
GET /api/v1/configs - list available configuration files
POST /api/v1/configs - upload new/overwrite existing configuration file
GET /api/v1/configs/(string: config_name) - fetch information about a single configuration file
DELETE /api/v1/configs/(string: config_name) - remove a configuration file

Test runs
POST /api/v1/test-runs - trigger a new test run based on a selected configuration (can be optionally modified using the overrides field)
GET /api/v1/test-runs - list all test runs
GET /api/v1/test-runs/(int: identifier) - fetch information about a single test run
GET /api/v1/test-runs/(int: identifier)/report - download the test report from a completed test run

Test run states
Test runs progress through a set of states:
pending - accepted but not started
running - currently executing
finished - completed successfully
failed - error during execution
aborted - stopped by user or system
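As a rough end-to-end sketch of how a client could drive these endpoints (the host, port, upload mechanism, and payload field names are assumptions for illustration):

import time
import requests

BASE_URL = "http://localhost:5000/api/v1"  # assumed server address

# 1. Upload a configuration file (multipart upload is an assumption).
with open("config.yaml", "rb") as f:
    requests.post(f"{BASE_URL}/configs", files={"file": f}).raise_for_status()

# 2. Trigger a test run based on that configuration.
run = requests.post(
    f"{BASE_URL}/test-runs", json={"config_name": "config.yaml"}
).json()
run_id = run["run_id"]

# 3. Poll until the run leaves the pending/running states.
while True:
    status = requests.get(f"{BASE_URL}/test-runs/{run_id}").json()["status"]
    if status not in ("pending", "running"):
        break
    time.sleep(5)

# 4. Download the report once the run has finished
#    (the discussion above suggests a CSV summary).
if status == "finished":
    report = requests.get(f"{BASE_URL}/test-runs/{run_id}/report")
    with open(f"report-{run_id}.csv", "wb") as f:
        f.write(report.content)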
Documentation
The API is described in detail in docstrings for each endpoint; documentation generated from the docstrings is available at: https://antmicro.github.io/protoplaster-docs-preview/api.html# (note that this is a temporary URL used for this PR only; it will be merged with the main documentation once this PR is merged).
Discussion points