How to collect / save Test results from testcases #1386
Closed · MaximilianSoerenPollak started this conversation in Operational Community · Replies: 2 comments
-
What about collecting test descriptions? How do we document those, for example for Rust tests?
-
Since nothing more came of this, neither in discussion nor in private messages, I would close this. => Decision: We will use XML to read out test attributes & results.
-
Currently the question is how to save test results from test runs so that they can be parsed locally, enabling a Sphinx extension to link and check attributes.
The best suggestion so far is JUnit XML, as this is supported (as far as my knowledge goes) by the test frameworks of all three required languages (Python, Rust, C++).
Reasons for XML:
- It is supported out of the box by the default test frameworks of all three languages, so we only need one parser.
- It is only ever read by machines; nobody has to write or read it by hand.
Reasons for another format:
- XML is hard for humans to read; YAML was suggested as a more readable alternative, but pytest at least cannot emit it without extra plugins.
Note:
This is NOT the final human-readable report. Whether such a report is even needed, and if so in what capacity, has not been agreed yet, at least to my knowledge.
This is only the output of the tests, which will be read by a Sphinx extension (or whatever else we come up with) that then enables linking to requirements and checking & setting of attributes in requirements.
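To make the intended flow concrete, here is a minimal sketch (an assumption, not an agreed design) of what a parser on top of such a report could do. The XML shape follows the usual JUnit layout of a `testsuite` containing `testcase` elements; the `requirement` property name and all sample values are made up for illustration:

```python
# Minimal sketch (assumption, not the agreed implementation) of how a
# Sphinx extension could read test attributes out of a JUnit XML report.
import xml.etree.ElementTree as ET

# Illustrative JUnit XML as most frameworks emit it: one <testsuite>
# containing <testcase> elements; properties are attached per test case.
SAMPLE_REPORT = """\
<testsuite name="pytest" tests="2" failures="0" skipped="1">
  <testcase classname="test_json" name="test_roundtrip" time="0.01">
    <properties>
      <property name="requirement" value="SCORE_REQ-1234"/>
    </properties>
  </testcase>
  <testcase classname="test_json" name="test_large_input" time="0.00">
    <properties>
      <property name="requirement" value="SCORE_REQ-5678"/>
    </properties>
    <skipped message="needs fixture data"/>
  </testcase>
</testsuite>
"""

def collect_results(xml_text: str) -> list[dict]:
    """Extract name, outcome, and properties for every test case."""
    results = []
    for case in ET.fromstring(xml_text).iter("testcase"):
        outcome = "passed"
        if case.find("failure") is not None or case.find("error") is not None:
            outcome = "failed"
        elif case.find("skipped") is not None:
            outcome = "skipped"
        props = {p.get("name"): p.get("value") for p in case.iter("property")}
        results.append({"name": case.get("name"),
                        "outcome": outcome,
                        "properties": props})
    return results

for r in collect_results(SAMPLE_REPORT):
    print(r["name"], r["outcome"], r["properties"])
```

Note that this sketch keeps the properties nested under each `<testcase>`, which is what would later allow attributes to be read even for skipped tests, a point raised in the Slack thread below.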
More information
eclipse-score/docs-as-code#113 => Implementation details / breakdown
eclipse-score/process_description#36 (comment) => What is actually needed from the tests.
Reposted discussion from the Slack channel:
Nils Eberhardt

Hi everyone,
We are working on applying the Trustable Software Framework (TSF) to the nlohmann_json library, which shall be integrated into the baselibs module. We would like to ask what the best option is for including the TSF deliverables in the documentation.
At the core of the TSF is the linking of expectations to test evidence. In S-Core, we have found that tests should be linked to requirements using RecordProperty, but we have not yet found out whether a report is also compiled and where it might be available.
The trudag tool, developed by the TSF authors, can generate an intuitive report linking test cases to expectations (see the attached screenshot from here). We could easily convert trudag's output into the reStructuredText files used in S-Core documentation.
We think that it might be most effective to use the trudag tool to generate the report and annotate the tests with RecordProperty to align with S-Core processes.
We look forward to discussing the best approach.
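For the pytest side, the counterpart of GoogleTest's RecordProperty is the built-in record_property fixture. A hypothetical annotated test might look like this; the property name "requirement" and the requirement ID are illustrative, not an agreed convention:

```python
# Hypothetical example of annotating a pytest test so the link shows up
# in the JUnit XML report (run with: pytest --junitxml=report.xml).
def test_parse_empty_object(record_property):
    # Each record_property call becomes a <property name="..." value="..."/>
    # entry under this test case in the report.
    record_property("requirement", "SCORE_REQ-1234")
    assert True  # placeholder for the real assertion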
10 replies
Maximilian Soeren Pollak
Today at 15:37
So far, the only thing agreed (from the tooling team side) is that these things need to be in the XML.
We have changed how this works in recent meetings, so the current process does not reflect this yet. And we haven't implemented the XML parser & test linking yet.
However, from our side it would be amazing if you could find a way to link the properties to the test name inside the XML (via an additional argument or some other mechanism).
The main benefit is that we can then see the properties even when the test was skipped, which we can't if they are just ordinary run-time properties.
If there are any questions regarding this, let me know; glad to have a call too.
Regarding 'gathering' these XML files & creating a report, that has not really been discussed yet, at least as far as I'm aware.
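A hypothetical report fragment illustrating the desired shape: because the properties sit on the test case itself rather than being recorded at run time, they are still present when the test is skipped.

```xml
<!-- Hypothetical fragment, not an agreed format: properties are nested
     under the test case, so they survive even when the test is skipped. -->
<testcase classname="test_json" name="test_large_input">
  <properties>
    <property name="requirement" value="SCORE_REQ-5678"/>
  </properties>
  <skipped message="not run on this target"/>
</testcase>
```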
Paul Sherwood
Today at 15:42
What is the reasoning for XML?
15:43
(it just makes content hard to read, imo :grin:)
Maximilian Soeren Pollak
Today at 15:59
The XML is just the format most test frameworks can output. It will not be the format we put in front of humans to read, just a 'universal' format that our parser will read and transform into things we can link to the requirements or put into a report.
15:59
It's mainly the 'simplest' format we have found so far that is supported by default, so we don't need to write three different parsers / extensions.
16:04
If you have another way, we are all ears. Now would be the time to come up with one, as we haven't started on the implementation yet.
Paul Sherwood
1 hour ago
I just try to steer people towards YAML (and away from XML)
Maximilian Soeren Pollak
39 minutes ago
You will never write it yourself, though.
Do all of the testing frameworks support YAML?
I know pytest doesn't, so we would then have to write an XML => YAML converter and then parse that, which seems like pointless extra work.
Maximilian Soeren Pollak
34 minutes ago
I understand not wanting to write XML or work with it manually; there I'm with you that YAML would be nicer.
But without extra plugins (that we didn't write / don't control), pytest at least can only output to STDOUT or JUnit XML.
And at least the last time I checked, JUnit XML was also supported as an output format by the C++ and Rust frameworks.
So it would be a common denominator.
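For reference, a hedged summary of the out-of-the-box options (the pytest and GoogleTest invocations below are built in; the Rust situation depends on third-party tooling):

```sh
# Python: pytest has built-in JUnit XML output
pytest --junitxml=report.xml

# C++: GoogleTest has built-in XML output
./my_test_binary --gtest_output=xml:report.xml

# Rust: plain `cargo test` has no built-in JUnit output; third-party
# runners such as cargo-nextest can produce it (configured in the
# nextest profile rather than via a command-line flag).
```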