2 changes: 1 addition & 1 deletion doc/develop/test/figures/twister_test_project.svg
65 changes: 33 additions & 32 deletions doc/develop/test/twister.rst
@@ -219,57 +219,60 @@ Tests

Tests are detected by the presence of a ``testcase.yaml`` or a ``sample.yaml``
files in the application's project directory. This test application
configuration file may contain one or more entries in the tests section each
identifying a test scenario.
configuration file may contain one or more entries in the ``tests:`` section each
identifying a Test Scenario.

.. _twister_test_project_diagram:

.. figure:: figures/twister_test_project.svg
:alt: Twister and a Test applications' project.
:alt: Twister and a Test application project.
:figclass: align-center

Twister and a Test applications' project.
Twister and a Test application project.


Test application configurations are written using the YAML syntax and share the
same structure as samples.

A test scenario is a set of conditions or variables, defined in test scenario
entry, under which a set of test suites will be executed. Can be used
interchangeably with test scenario entry.
A Test Scenario is a set of conditions and variables defined in a Test Scenario
entry, under which a set of Test Suites will be built and executed.

A test suite is a collection of test cases that are intended to be used to test
a software program to ensure it meets certain requirements. The test cases in a
test suite are often related or meant to be executed together.
A Test Suite is a collection of Test Cases which are intended to be used to test
a software program to ensure it meets certain requirements. The Test Cases in a
Test Suite are either related or meant to be executed together.

The name of each test scenario needs to be unique in the context of the overall
The name of each Test Scenario needs to be unique in the context of the overall
test application and has to follow basic rules:

#. The format of the test scenario identifier shall be a string without any spaces or
#. The format of the Test Scenario identifier shall be a string without any spaces or
special characters (allowed characters: alphanumeric and [\_=]) consisting
of multiple sections delimited with a dot (.).
of multiple sections delimited with a dot (``.``).

#. Each test scenario identifier shall start with a section followed by a
subsection separated by a dot. For example, a test scenario that covers
semaphores in the kernel shall start with ``kernel.semaphore``.
#. Each Test Scenario identifier shall start with a section name followed by
subsection names delimited with a dot (``.``). For example, a Test Scenario
that covers semaphores in the kernel shall start with ``kernel.semaphore``.

#. All test scenario identifiers within a ``testcase.yaml`` file need to be unique. For
example a ``testcase.yaml`` file covering semaphores in the kernel can have:
#. All Test Scenario identifiers within a ``testcase.yaml`` file need to be unique.
For example a ``testcase.yaml`` file covering semaphores in the kernel can have:

* ``kernel.semaphore``: For general semaphore tests
* ``kernel.semaphore.stress``: Stress testing semaphores in the kernel.

#. Depending on the nature of the test, an identifier can consist of at least
two sections:
#. The full canonical name of a Test Suite is:
``<Test Application Project path>/<Test Scenario identifier>``

* Ztest tests: The individual test cases in the ztest testsuite will be
concatenated by dot (``.``) to the identifier in the ``testcase.yaml`` file
generating unique identifiers for every test case in the suite.
#. Depending on the Test Suite implementation, its Test Case identifiers consist
of **at least three sections** delimited with a dot (``.``):

* Standalone tests and samples: This type of test should at least have 3
sections concatnated by dot (``.``) in the test scenario identifier in the
``testcase.yaml`` (or ``sample.yaml``) file.
The last section of the name shall signify the test case itself.
* **Ztest tests**:
a Test Scenario identifier from the corresponding ``testcase.yaml`` file,
a Ztest suite name, and a Ztest test name:
``<Test Scenario identifier>.<Ztest suite name>.<Ztest test name>``

* **Standalone tests and samples**:
a Test Scenario identifier from the corresponding ``testcase.yaml`` (or
``sample.yaml``) file where the last section signifies the standalone
Test Case name, for example: ``debug.coredump.logging_backend`` (see the
sketch after this list).
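The composition rules above can be illustrated with a short, self-contained
Python sketch (a hypothetical helper, not part of Twister; the identifiers are
taken from the examples in this document):

import re

# Rule 1: only alphanumeric characters plus '_' and '=' are allowed in a
# section; sections are delimited with a dot ('.'), and rule 2 requires
# at least two of them (section and subsection).
SCENARIO_ID_RE = re.compile(r"^[A-Za-z0-9_=]+(\.[A-Za-z0-9_=]+)+$")

def is_valid_scenario_id(scenario_id: str) -> bool:
    return bool(SCENARIO_ID_RE.match(scenario_id))

def ztest_case_name(scenario_id: str, ztest_suite: str, ztest_test: str) -> str:
    # Ztest Test Case: '<Test Scenario identifier>.<Ztest suite name>.<Ztest test name>'
    return f"{scenario_id}.{ztest_suite}.{ztest_test}"

assert is_valid_scenario_id("kernel.semaphore.stress")
assert not is_valid_scenario_id("kernel semaphore")  # spaces are not allowed
assert ztest_case_name("kernel.fifo", "fifo_api_1cpu", "fifo_loop") \
    == "kernel.fifo.fifo_api_1cpu.fifo_loop"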


The following is an example test configuration with a few options that are
@@ -312,12 +315,10 @@ related to the sample and what is being demonstrated:
tags: tests
min_ram: 16
The full canonical name for each test scenario is:``<path to test application>/<test scenario identifier>``

A test scenario entry is a a block or entry starting with test scenario
identifier in the YAML files.
A Test Scenario entry in the ``tests:`` YAML dictionary has its Test Scenario
identifier as a key.
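For instance, a minimal sketch (assuming PyYAML and a ``testcase.yaml`` shaped
like the example above) that lists the Test Scenario identifiers of a test
application:

import yaml  # PyYAML

with open("testcase.yaml") as f:
    config = yaml.safe_load(f)

# The keys of the 'tests:' dictionary are the Test Scenario identifiers,
# e.g. 'kernel.semaphore' and 'kernel.semaphore.stress'.
for scenario_id, scenario_entry in config["tests"].items():
    print(scenario_id, scenario_entry)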

Each test scenario entry in the test application configuration can define the
Each Test Scenario entry in the Test Application configuration can define the
following key/value pairs:

.. _test_config_args:
24 changes: 15 additions & 9 deletions scripts/pylib/twister/twisterlib/environment.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python3
# vim: set syntax=python ts=4 :
#
# Copyright (c) 2018 Intel Corporation
# Copyright (c) 2018-2024 Intel Corporation
# Copyright 2022 NXP
# Copyright (c) 2024 Arm Limited (or its affiliates). All rights reserved.
#
@@ -149,7 +149,8 @@ def add_parse_arguments(parser = None):
    test_plan_report_xor.add_argument("--list-tests", action="store_true",
        help="""List of all sub-test functions recursively found in
        all --testsuite-root arguments. Note different sub-tests can share
        the same section name and come from different directories.
        the same test scenario identifier (section.subsection)
        and come from different directories.
        The output is flattened and reports --sub-test names only,
        not their directories. For instance net.socket.getaddrinfo_ok
        and net.socket.fd_set belong to different directories.
@@ -239,17 +240,22 @@ def add_parse_arguments(parser = None):

    test_xor_subtest.add_argument(
        "-s", "--test", "--scenario", action="append", type = norm_path,
        help="Run only the specified testsuite scenario. These are named by "
             "<path/relative/to/Zephyr/base/section.name.in.testcase.yaml>")
        help="""Run only the specified test suite scenario. These are named by
        'path/relative/to/Zephyr/base/section.subsection_in_testcase_yaml',
        or just the 'section.subsection' identifier. With the '--testsuite-root'
        option the scenario will be found faster.
        """)

    test_xor_subtest.add_argument(
        "--sub-test", action="append",
        help="""Recursively find sub-test functions and run the entire
        test section where they were found, including all sibling test
        help="""Recursively find sub-test functions (test cases) and run the entire
        test scenario (section.subsection) where they were found, including all sibling test
        functions. Sub-tests are named by:
        section.name.in.testcase.yaml.function_name_without_test_prefix
        Example: In kernel.fifo.fifo_loop: 'kernel.fifo' is a section name
        and 'fifo_loop' is a name of a function found in main.c without test prefix.
        'section.subsection_in_testcase_yaml.ztest_suite.ztest_without_test_prefix'.
        Example_1: 'kernel.fifo.fifo_api_1cpu.fifo_loop' where 'kernel.fifo' is a test scenario
        name (section.subsection) and 'fifo_api_1cpu.fifo_loop' is
        a Ztest 'suite_name.test_name' identifier.
        Example_2: 'debug.coredump.logging_backend' is a standalone test scenario name.
        """)

    parser.add_argument(
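Given the naming scheme above, a ``--sub-test`` name can be split back into its
Test Scenario and Ztest parts. A sketch, assuming the trailing Ztest suite and
test names contain no dots themselves (a hypothetical helper, not part of
Twister):

def split_sub_test(name: str) -> tuple[str, str, str]:
    # 'section.subsection[...].ztest_suite.ztest_test' -> three parts
    scenario_id, ztest_suite, ztest_test = name.rsplit(".", 2)
    return scenario_id, ztest_suite, ztest_test

print(split_sub_test("kernel.fifo.fifo_api_1cpu.fifo_loop"))
# ('kernel.fifo', 'fifo_api_1cpu', 'fifo_loop')

Note that this does not apply to standalone names such as
``debug.coredump.logging_backend``, where the last section is the Test Case
name itself.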
159 changes: 127 additions & 32 deletions scripts/pylib/twister/twisterlib/harness.py
@@ -31,7 +31,6 @@
_WINDOWS = platform.system() == 'Windows'


result_re = re.compile(r".*(PASS|FAIL|SKIP) - (test_)?(\S*) in (\d*[.,]?\d*) seconds")
class Harness:
    GCOV_START = "GCOV_COVERAGE_DUMP_START"
    GCOV_END = "GCOV_COVERAGE_DUMP_END"
@@ -59,12 +58,19 @@ def __init__(self):
        self.ztest = False
        self.detected_suite_names = []
        self.run_id = None
        self.started_suites = {}
        self.started_cases = {}
        self.matched_run_id = False
        self.run_id_exists = False
        self.instance: TestInstance | None = None
        self.testcase_output = ""
        self._match = False


    @property
    def trace(self) -> bool:
        return self.instance.handler.options.verbose > 2

    @property
    def status(self) -> TwisterStatus:
        return self._status
@@ -710,42 +716,124 @@ def _check_result(self, line):

class Test(Harness):
    __test__ = False  # for pytest to skip this class when it collects tests
    RUN_PASSED = "PROJECT EXECUTION SUCCESSFUL"
    RUN_FAILED = "PROJECT EXECUTION FAILED"
    test_suite_start_pattern = r"Running TESTSUITE (?P<suite_name>.*)"
    ZTEST_START_PATTERN = r"START - (test_)?([a-zA-Z0-9_-]+)"

    def handle(self, line):
        test_suite_match = re.search(self.test_suite_start_pattern, line)
        if test_suite_match:
            suite_name = test_suite_match.group("suite_name")
    test_suite_start_pattern = re.compile(r"Running TESTSUITE (?P<suite_name>\S*)")
    test_suite_end_pattern = re.compile(r"TESTSUITE (?P<suite_name>\S*)\s+(?P<suite_status>succeeded|failed)")
    test_case_start_pattern = re.compile(r"START - (test_)?([a-zA-Z0-9_-]+)")
    test_case_end_pattern = re.compile(r".*(PASS|FAIL|SKIP) - (test_)?(\S*) in (\d*[.,]?\d*) seconds")
    test_suite_summary_pattern = re.compile(r"SUITE (?P<suite_status>\S*) - .* \[(?P<suite_name>\S*)\]: .* duration = (\d*[.,]?\d*) seconds")
    test_case_summary_pattern = re.compile(r" - (PASS|FAIL|SKIP) - \[([^\.]*).(test_)?(\S*)\] duration = (\d*[.,]?\d*) seconds")
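Each compiled pattern corresponds to one phase of the Ztest console output. A
standalone sketch with illustrative log lines (assumed shapes, not captured
output) showing what the capture groups yield:

import re

test_case_start_pattern = re.compile(r"START - (test_)?([a-zA-Z0-9_-]+)")
test_case_end_pattern = re.compile(r".*(PASS|FAIL|SKIP) - (test_)?(\S*) in (\d*[.,]?\d*) seconds")

m = test_case_start_pattern.search("START - test_fifo_loop")
print(m.group(2))  # 'fifo_loop' -- the optional 'test_' prefix is stripped

m = test_case_end_pattern.match("PASS - test_fifo_loop in 0.010 seconds")
print(m.group(1), m.group(3), m.group(4))  # PASS fifo_loop 0.010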


    def get_testcase(self, tc_name, phase, ts_name=None):
        """ Search for a Ztest case among those detected in the test image binary,
        expecting the same test names as already known from the ELF.
        Track suites and cases unexpectedly found in the log.
        """
        ts_names = self.started_suites.keys()
        if ts_name:
            if ts_name not in self.instance.testsuite.ztest_suite_names:
                logger.warning(f"On {phase}: unexpected Ztest suite '{ts_name}' "
                               f"not present among: {self.instance.testsuite.ztest_suite_names}")
            if ts_name not in self.detected_suite_names:
                if self.trace:
                    logger.debug(f"On {phase}: detected new Ztest suite '{ts_name}'")
                self.detected_suite_names.append(ts_name)
            ts_names = [ ts_name ] if ts_name in ts_names else []

        # First, try to match the test case ID to the first running Ztest suite with this test name.
        for ts_name_ in ts_names:
            if self.started_suites[ts_name_]['count'] < (0 if phase == 'TS_SUM' else 1):
                continue
            tc_fq_id = "{}.{}.{}".format(self.id, ts_name_, tc_name)
            if tc := self.instance.get_case_by_name(tc_fq_id):
                if self.trace:
                    logger.debug(f"On {phase}: Ztest case '{tc_name}' matched to '{tc_fq_id}'")
                return tc
        logger.debug(f"On {phase}: Ztest case '{tc_name}' is not known in {self.started_suites} running suite(s).")
        tc_id = "{}.{}".format(self.id, tc_name)
        return self.instance.get_case_or_create(tc_id)

    def start_suite(self, suite_name):
        if suite_name not in self.detected_suite_names:
            self.detected_suite_names.append(suite_name)
        if suite_name not in self.instance.testsuite.ztest_suite_names:
            logger.warning(f"Unexpected Ztest suite '{suite_name}'")
        if suite_name in self.started_suites:
            if self.started_suites[suite_name]['count'] > 0:
                logger.warning(f"Already STARTED '{suite_name}':{self.started_suites[suite_name]}")
            elif self.trace:
                logger.debug(f"START suite '{suite_name}'")
            self.started_suites[suite_name]['count'] += 1
            self.started_suites[suite_name]['repeat'] += 1
        else:
            self.started_suites[suite_name] = { 'count': 1, 'repeat': 0 }

    def end_suite(self, suite_name, phase='', suite_status=None):
        if suite_name in self.started_suites:
            if phase == 'TS_SUM' and self.started_suites[suite_name]['count'] == 0:
                return
            if self.started_suites[suite_name]['count'] < 1:
                logger.error(f"Already ENDED {phase} suite '{suite_name}':{self.started_suites[suite_name]}")
            elif self.trace:
                logger.debug(f"END {phase} suite '{suite_name}':{self.started_suites[suite_name]}")
            self.started_suites[suite_name]['count'] -= 1
        elif suite_status == 'SKIP':
            self.start_suite(suite_name)  # register skipped suites at their summary end
            self.started_suites[suite_name]['count'] -= 1
        else:
            logger.warning(f"END {phase} suite '{suite_name}' without START detected")

        testcase_match = re.search(self.ZTEST_START_PATTERN, line)
        if testcase_match:
            name = "{}.{}".format(self.id, testcase_match.group(2))
            tc = self.instance.get_case_or_create(name)
    def start_case(self, tc_name):
        if tc_name in self.started_cases:
            if self.started_cases[tc_name]['count'] > 0:
                logger.warning(f"Already STARTED '{tc_name}':{self.started_cases[tc_name]}")
            self.started_cases[tc_name]['count'] += 1
        else:
            self.started_cases[tc_name] = { 'count': 1 }

    def end_case(self, tc_name, phase=''):
        if tc_name in self.started_cases:
            if phase == 'TS_SUM' and self.started_cases[tc_name]['count'] == 0:
                return
            if self.started_cases[tc_name]['count'] < 1:
                logger.error(f"Already ENDED {phase} case '{tc_name}':{self.started_cases[tc_name]}")
            elif self.trace:
                logger.debug(f"END {phase} case '{tc_name}':{self.started_cases[tc_name]}")
            self.started_cases[tc_name]['count'] -= 1
        elif phase != 'TS_SUM':
            logger.warning(f"END {phase} case '{tc_name}' without START detected")


    def handle(self, line):
        testcase_match = None
        if self._match:
            self.testcase_output += line + "\n"

        if test_suite_start_match := re.search(self.test_suite_start_pattern, line):
            self.start_suite(test_suite_start_match.group("suite_name"))
        elif test_suite_end_match := re.search(self.test_suite_end_pattern, line):
            suite_name = test_suite_end_match.group("suite_name")
            self.end_suite(suite_name, 'TS_END')
        elif testcase_match := re.search(self.test_case_start_pattern, line):
            tc_name = testcase_match.group(2)
            tc = self.get_testcase(tc_name, 'TC_START')
            self.start_case(tc.name)
            # Mark the test as started; if something happens here (for example,
            # a timeout), it is mostly due to this test, which should then be
            # marked as failed and not blocked (not run).
            tc.status = TwisterStatus.STARTED

        if testcase_match or self._match:
            self.testcase_output += line + "\n"
            self._match = True

        result_match = result_re.match(line)
        if not self._match:
            self.testcase_output += line + "\n"
            self._match = True
        # some testcases are skipped based on predicates and do not show up
        # during test execution, however they are listed in the summary. Parse
        # the summary for status and use that status instead.

        summary_re = re.compile(r"- (PASS|FAIL|SKIP) - \[([^\.]*).(test_)?(\S*)\] duration = (\d*[.,]?\d*) seconds")
        summary_match = summary_re.match(line)

        if result_match:
        elif result_match := self.test_case_end_pattern.match(line):
            matched_status = result_match.group(1)
            name = "{}.{}".format(self.id, result_match.group(3))
            tc = self.instance.get_case_or_create(name)
            tc_name = result_match.group(3)
            tc = self.get_testcase(tc_name, 'TC_END')
            self.end_case(tc.name)
            tc.status = TwisterStatus[matched_status]
            if tc.status == TwisterStatus.SKIP:
                tc.reason = "ztest skip"
@@ -755,15 +843,22 @@ def handle(self, line):
            self.testcase_output = ""
            self._match = False
            self.ztest = True
        elif summary_match:
            matched_status = summary_match.group(1)
            self.detected_suite_names.append(summary_match.group(2))
            name = "{}.{}".format(self.id, summary_match.group(4))
            tc = self.instance.get_case_or_create(name)
        elif test_suite_summary_match := self.test_suite_summary_pattern.match(line):
            suite_name = test_suite_summary_match.group("suite_name")
            suite_status = test_suite_summary_match.group("suite_status")
            self._match = False
            self.ztest = True
            self.end_suite(suite_name, 'TS_SUM', suite_status=suite_status)
        elif test_case_summary_match := self.test_case_summary_pattern.match(line):
            matched_status = test_case_summary_match.group(1)
            suite_name = test_case_summary_match.group(2)
            tc_name = test_case_summary_match.group(4)
            tc = self.get_testcase(tc_name, 'TS_SUM', suite_name)
            self.end_case(tc.name, 'TS_SUM')
            tc.status = TwisterStatus[matched_status]
            if tc.status == TwisterStatus.SKIP:
                tc.reason = "ztest skip"
            tc.duration = float(summary_match.group(5))
            tc.duration = float(test_case_summary_match.group(5))
            if tc.status == TwisterStatus.FAIL:
                tc.output = self.testcase_output
            self.testcase_output = ""