
Conversation

@msobkowski
Member

This PR contains a proposal for an HTTP REST API for remote control of test execution. The goal is to enable configuration upload, test triggering, progress monitoring, and retrieval of test reports.

API Overview

  • Configuration

    • GET /api/v1/configs - list available configuration files
    • POST /api/v1/configs - upload a new configuration file or overwrite an existing one
    • GET /api/v1/configs/(string: config_name) - fetch information about a single configuration file
    • DELETE /api/v1/configs/(string: config_name) - remove a configuration file
  • Test runs

    • POST /api/v1/test-runs - trigger a new test run based on a selected configuration (optionally modified via the overrides field; see the sketch after this list)
    • GET /api/v1/test-runs - list all test runs
    • GET /api/v1/test-runs/(int: identifier) - fetch information about a single test run
    • GET /api/v1/test-runs/(int: identifier)/report - download the test report from a completed test run
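
For illustration, a minimal client-side sketch of how these endpoints could be used from Python with requests. Only the paths and the existence of an overrides field come from the proposal; the multipart upload, the config_name field name, the exact overrides payload shape, the run_id/status fields in responses, and the server address are assumptions.

import time
import requests

BASE = "http://localhost:8080/api/v1"

# Upload (or overwrite) a configuration file (upload mechanism is assumed).
with open("config.yaml", "rb") as f:
    requests.post(f"{BASE}/configs", files={"file": ("config.yaml", f)})

# Trigger a test run based on the uploaded config, overriding one field.
run = requests.post(
    f"{BASE}/test-runs",
    json={
        "config_name": "config.yaml",
        "overrides": {"gpio": [{"number": 20, "value": 0}]},
    },
).json()

# Poll the run until it leaves the pending/running states.
while True:
    status = requests.get(f"{BASE}/test-runs/{run['run_id']}").json()["status"]
    if status not in ("pending", "running"):
        break
    time.sleep(1)

# Download the report from the completed run (file name is arbitrary).
report = requests.get(f"{BASE}/test-runs/{run['run_id']}/report")
with open("report", "wb") as f:
    f.write(report.content)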

Test run states

Test runs progress through a set of states:

  • pending - accepted but not started
  • running - currently executing
  • finished - completed successfully
  • failed - error during execution
  • aborted - stopped by user or system
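
For reference, a minimal sketch of the state machine implied by this list; the exact transition set (e.g. whether a pending run can be aborted) is an assumption, not part of the proposal.

# Assumed transitions: pending runs start or get aborted; running runs
# finish, fail, or get aborted; the remaining states are terminal.
ALLOWED_TRANSITIONS = {
    "pending": {"running", "aborted"},
    "running": {"finished", "failed", "aborted"},
    "finished": set(),
    "failed": set(),
    "aborted": set(),
}

def advance(current: str, new: str) -> str:
    """Validate and apply a state change for a test run."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new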

Documentation

The API is described in detail in the docstrings for each endpoint. Documentation generated from the docstrings is available at: https://antmicro.github.io/protoplaster-docs-preview/api.html# (note that this is a temporary URL used for this PR only; it will be merged with the main documentation once this PR is merged).

Discussion points

  • The current approach to config files is kept with this API proposal, meaning that a configuration file stores a number of tests (i.e. a single config is closer to a full test suite). The API currently allows starting full "suites" only (either with the stock config or with a modified one). Question: should there be a way of triggering a single test from the selected configuration?
  • The proposed overrides mechanism allows the user to override any part of the configuration file. This means a similar mechanism could be used to trigger arbitrary test variations without any config file on the device - would this be useful? If so, should both config-based and arbitrary runs be available, or should we prefer one over the other?
  • We assume no proper database will be used; everything should operate on the files available on the device.

msobkowski force-pushed the 82348-remote-api-proposal branch 2 times, most recently from f8f1659 to eece06e on September 3, 2025 17:36

bkueffle commented Sep 8, 2025

The current approach to config files is kept with this API proposal, meaning that a configuration file stores a number of tests (i.e. a single config is closer to a full test suite). The API currently allows starting full "suites" only (either with the stock config or with a modified one). Question: should there be a way of triggering a single test from the selected configuration?

One way that may make this a non-issue is to let our config files consist of other config files. That way, if we want to run one or two tests out of a test suite, we just make a config file for those tests, and we don't duplicate configuration:

simple_smoke_test: test_0, test_1
complex_long_test: simple_smoke_test, test_2, test_3, test_4

bkueffle commented Sep 8, 2025

The proposed overrides mechanism allows the user to override any part of the configuration file. This means a similar mechanism could be used to trigger arbitrary test variations without any config file on the device - would this be useful? If so, should both config-based and arbitrary runs be available, or should we prefer one over the other?

I can see how this would be useful for debugging (editing a specific parameter on a webpage, for example). However, I don't think we need this feature for production-level testing, since in those tests we'll want to be very explicit about the configs we're running.



@test_runs_blueprint.route("/api/v1/test-runs/<int:identifier>/report")
def fetch_test_run_report(identifier: int):

Our test outputs are going to be more than a boolean pass/fail. We should be prepared to associate each test with its own dataset and pull that dataset when we read out the test report.

Something folks are concerned about is building boards and running tests at scale. There might be a situation where our performance is slowly degrading, so we don't just want a pass/fail test report from each board. An example of this would be eyescan data for something like a JESD link.

One of the challenging parts of this is that we aren't fully aware of all the tests we're going to need to run at a production level right now. This information is likely going to reveal itself once we finish testing multiple boards. But we want the hooks in place for it.


We assume no proper database will be used; everything should operate on the files available on the device.

The comment above may be related to this concern. We'll have more than just pass/fail coming out of these tests.

msobkowski (Member Author):

This should not be an issue: we could let the tests generate arbitrary files, which could be listed via the API or simply bundled into a single archive and made available for download once the test is complete. Would that be a viable solution?


Yes, I think the test summary CSV should point to the collateral for each test.
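
For illustration, a hedged sketch of what such a summary CSV could look like, generated with Python's csv module; the column names and collateral paths are hypothetical, not part of the proposal.

import csv

# Each row pairs a test's pass/fail result with a pointer to its dataset,
# so richer collateral (e.g. eyescan data) stays discoverable from the summary.
with open("run-2-summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["test_name", "result", "collateral"])
    writer.writerow(["simple_test", "pass", "artifacts/run-2/simple_test/"])
    writer.writerow(["jesd_link_test", "fail", "artifacts/run-2/jesd_link_eyescan.csv"])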

"run_id": 2,
"config_name": "config2.yaml"
"status": "running",
"created": "Mon, 25 Aug 2025 16:58:35 +0200",

Date is good metadata, but there may be other metadata associated with a test that we'll want to record. The first thing that comes to mind is the Git SHA of the bitstream. A separate Git SHA associated with the current OS version is another one.


msobkowski commented Sep 10, 2025

Thanks for the input @bkueffle - we came up with some changes to how the test configs could work to better fit your use case:

  • test configs could have the ability to include other configs (as you proposed)
  • besides tests themselves, configs could be extended to also define commands which would generate metadata for a test run
  • a "test suite" element could be introduced to the config, and instead of running every test in a specified config, a specific test suite would need to be selected

As an example, we could have three YAML files:
base.yml:

base:
  tests:
    simple_test:
      i2c:
      - bus: 0
        devices:
        - name: "Sensor name"
          address: 0x1C
      gpio:
      - number: 20
        value: 1
  metadata:
    bitstream_sha:
      name: "bitstream-sha"
      run: "cat /dev/bitstream-sha"
    bsp_sha:
      name: "bsp-sha"
      run: "cat /etc/bsp-sha"

camera.yml:

base:
  tests:
    camera_test:
    - device: "/dev/video0"
      camera_name: "vivid"
      driver_name: "vivid"

test-suites.yml:

includes:
  - base.yml
  - camera.yml

base:
  test-suites:
    simple-test-suite:
      tests: simple_test
      metadata: bitstream_sha bsp_sha
    extended-test-suite:
      tests: simple_test camera_test
      metadata: bitstream_sha bsp_sha

The API would be changed so that triggering a test run requires selecting the config where the test suites are defined and providing the name of the desired test suite, as sketched below.
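
For illustration, a request under the revised scheme might look like this; the test_suite field name and the server address are assumptions, not part of the proposal.

import requests

# Hypothetical request body: "test_suite" selects a suite defined in the
# given config instead of running every test the config contains.
requests.post(
    "http://localhost:8080/api/v1/test-runs",
    json={"config_name": "test-suites.yml", "test_suite": "extended-test-suite"},
)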

msobkowski force-pushed the 82348-remote-api-proposal branch from eece06e to 9432e6d on September 11, 2025 10:16
@tgorochowik (Member)

The API has been implemented and merged outside of this PR, closing.

msobkowski deleted the 82348-remote-api-proposal branch on October 23, 2025 13:12