diff --git a/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/README.md b/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/README.md
new file mode 100644
index 00000000..b92c6235
--- /dev/null
+++ b/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/README.md
@@ -0,0 +1,94 @@
+# Kernel Selftest Runner
+
+This directory contains the automation scripts for running **Linux Kernel Selftests** on Qualcomm-based platforms (e.g., RB3GEN2) using the standardized `qcom-linux-testkit` framework.
+
+## Overview
+
+The selftest runner (`run.sh`) is designed to:
+- Automatically detect and run enabled kernel selftests using a whitelist file (`enabled_tests.list`)
+- Ensure all required dependencies and directories are present
+- Log detailed test results for CI/CD integration
+- Support both standalone execution and integration with LAVA/automated test infrastructure
+
+## Files
+
+- **run.sh**: Main test runner script. Executes all tests listed in `enabled_tests.list` and produces the `.res` and `.tests` result files.
+- **enabled_tests.list**: Whitelist of test suites or binaries to run (one per line). Supports comments and blank lines.
+- **Kernel_Selftests.res**: Overall PASS/FAIL/SKIP verdict, used by CI pipelines for quick parsing.
+- **Kernel_Selftests.tests**: Per-test result file, capturing PASS/FAIL/SKIP and the run time for each test or subtest.
+
+## Prerequisites
+
+- The `/kselftest` directory must be present on the device/target. This is where the selftest binaries are located (usually deployed by the build or CI).
+- All test scripts must have the executable bit set (`chmod +x run.sh`); this is enforced by GitHub Actions.
+- `enabled_tests.list` must exist and contain at least one non-comment, non-blank entry.
+
+## Script Flow
+
+1. **Environment Setup**:
+   Dynamically locates and sources `init_env` and `functestlib.sh` to ensure robust, path-independent execution.
+
+2. **Dependency Checks**:
+   - Checks for required commands (e.g., `find`)
+   - Ensures the `/kselftest` directory exists
+   - Validates that `enabled_tests.list` is present and usable
+
+3. **Test Discovery & Execution**:
+   - Parses `enabled_tests.list`, ignoring comments and blank lines
+   - Supports whole-suite entries as well as individual tests, written as `suite:test` or `suite/test` (e.g., `timers:nanosleep`)
+   - Executes the listed test binary, or `run_test.sh` plus every test binary in a suite for whole-suite entries
+   - Logs individual and summary results
+
+4. **Result Logging**:
+   - Writes per-test results to `Kernel_Selftests.tests`
+   - Writes the overall PASS/FAIL verdict to `Kernel_Selftests.res`
+
+5. **CI Compatibility**:
+   - Designed for direct invocation by LAVA or any CI/CD harness
+   - Logs a meaningful SKIP and exits early if prerequisites such as `/kselftest` or the whitelist are missing
+
+## enabled_tests.list Format
+
+```text
+# This is a comment
+timers
+timers:nanosleep
+net:netfilter/nft_nat.sh
+# Add more suites or suite:test entries as needed
+```
+
+- Each non-comment, non-blank line names either a whole suite directory under `/kselftest` (e.g., `timers`) or a single test within a suite, written as `suite:test` or `suite/test`.
+
+## Example Usage
+
+```sh
+sh run.sh
+```
+
+or from a higher-level testkit runner:
+
+```sh
+./run-test.sh Kernel_Selftests
+```
+
+## Troubleshooting
+
+- **Missing executable bit**:
+  If you see permission errors, ensure all scripts are `chmod +x`.
+
+- **Missing `/kselftest` directory**:
+  Ensure the selftests are built and deployed to the target system.
+
+- **Missing or empty `enabled_tests.list`**:
+  Add test entries as needed. The runner reports SKIP if this file is absent and FAIL if it contains no runnable entries.
+
+## Contribution Guidelines
+
+- Update `enabled_tests.list` as test coverage expands.
+- Follow the coding and structure conventions enforced by CI (see `CONTRIBUTING.md`).
+- All changes should pass permission checks and shellcheck lints in CI.
+
+## License
+
+SPDX-License-Identifier: BSD-3-Clause-Clear
+Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
diff --git a/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/enabled_tests.list b/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/enabled_tests.list
new file mode 100755
index 00000000..aa4f43e8
--- /dev/null
+++ b/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/enabled_tests.list
@@ -0,0 +1,88 @@
+acct:acct_syscall
+arm64:btitest
+arm64:check_prctl
+arm64:fake_sigreturn_bad_magic
+arm64:fake_sigreturn_bad_size
+arm64:fake_sigreturn_bad_size_for_magic0
+arm64:fake_sigreturn_duplicated_fpsimd
+arm64:fake_sigreturn_misaligned_sp
+arm64:fake_sigreturn_missing_fpsimd
+arm64:fpmr_siginfo
+arm64:fp-ptrace
+arm64:hwcap
+arm64:mangle_pstate_invalid_compat_toggle
+arm64:mangle_pstate_invalid_daif_bits
+arm64:mangle_pstate_invalid_mode_el1h
+arm64:mangle_pstate_invalid_mode_el1t
+arm64:mangle_pstate_invalid_mode_el2h
+arm64:mangle_pstate_invalid_mode_el2t
+arm64:mangle_pstate_invalid_mode_el3h
+arm64:mangle_pstate_invalid_mode_el3t
+arm64:nobtitest
+arm64:pac
+arm64:poe_siginfo
+arm64:ptrace
+arm64:sme_trap_za
+arm64:syscall-abi
+arm64:tags_test
+arm64:tpidr2
+arm64:tpidr2_siginfo
+arm64:vec-syscfg
+arm64:za-fork
+breakpoints:breakpoint_test_arm64
+cachestat:test_cachestat
+cgroup:test_core
+cgroup:test_cpuset
+cgroup:test_freezer
+cgroup:test_kill
+clone3:clone3
+clone3:clone3_clear_sighand
+clone3:clone3_set_tid
+clone3:close_range_test
+damon:sysfs.sh
+damon:sysfs_update_removed_scheme_dir.sh
+net:forwarding/router_mpath_nh_res.sh
+net:forwarding/router_mpath_nh.sh
+net:hsr/hsr_ping.sh
+net:hsr/hsr_redbox.sh
+net:netfilter/bridge_brouter.sh
+net:netfilter/br_netfilter_queue.sh
+net:netfilter/br_netfilter.sh
+net:netfilter/conntrack_icmp_related.sh
+net:netfilter/conntrack_sctp_collision.sh
+net:netfilter/conntrack_tcp_unreplied.sh
+net:netfilter/conntrack_vrf.sh
+net:netfilter/nf_conntrack_packetdrill.sh
+net:netfilter/nft_conntrack_helper.sh
+net:netfilter/nft_fib.sh
+net:netfilter/nft_nat.sh
+net:netfilter/nft_nat_zones.sh
+net:netfilter/nft_queue.sh
+net:netfilter/nft_tproxy_tcp.sh
+net:netfilter/nft_tproxy_udp.sh
+net:netfilter/nft_zones_many.sh
+pidfd:pidfd_fdinfo_test
+pidfd:pidfd_file_handle_test
+pidfd:pidfd_getfd_test
+pidfd:pidfd_open_test
+pidfd:pidfd_poll_test
+pidfd:pidfd_setns_test
+pidfd:pidfd_test
+pidfd:pidfd_wait
+proc:fd-001-lookup
+proc:fd-002-posix-eq
+proc:fd-003-kthread
+proc:proc-2-is-kthread
+proc:proc-loadavg-001
+proc:proc-self-isnt-kthread
+proc:proc-self-map-files-001
+proc:proc-self-map-files-002
+proc:proc-self-syscall
+proc:proc-self-wchan
+proc:proc-subset-pid
+proc:proc-tid0
+proc:proc-uptime-001
+proc:proc-uptime-002
+timers:inconsistency-check
+timers:nanosleep
+timers:nsleep-lat
diff --git a/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/run.sh b/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/run.sh
new file mode 100755
index 00000000..688a4648
--- /dev/null
+++ b/Runner/suites/Kernel/FunctionalArea/baseport/Kernel_Selftests/run.sh
@@ -0,0 +1,243 @@
+#!/bin/sh
+
+# Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+# SPDX-License-Identifier: BSD-3-Clause-Clear
+
+# Robustly find and source init_env
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+INIT_ENV=""
+SEARCH="$SCRIPT_DIR"
+while [ "$SEARCH" != "/" ]; do
+    if [ -f "$SEARCH/init_env" ]; then
+        INIT_ENV="$SEARCH/init_env"
+        break
+    fi
+    SEARCH=$(dirname "$SEARCH")
+done
+if [ -z "$INIT_ENV" ]; then
+    echo "[ERROR] Could not find init_env (starting at $SCRIPT_DIR)" >&2
+    exit 1
+fi
+if [ -z "$__INIT_ENV_LOADED" ]; then
+    # shellcheck disable=SC1090
+    . "$INIT_ENV"
+fi
+# shellcheck disable=SC1090,SC1091
+. "$TOOLS/functestlib.sh"
+
+# --- Helper: Run command with timeout, POSIX fallback ---
+run_with_timeout() {
+    secs="$1"
+    shift
+    if command -v timeout >/dev/null 2>&1; then
+        timeout "$secs" "$@"
+        return $?
+    fi
+    "$@" &
+    pid=$!
+    (
+        sleep "$secs"
+        kill -9 "$pid" 2>/dev/null
+    ) > /dev/null 2>&1 &
+    killer=$!
+    wait "$pid" 2>/dev/null
+    status=$?
+    kill -9 "$killer" 2>/dev/null
+    return $status
+}
+
+ensure_log_dir() {
+    log_path="$1"
+    dir_path=$(dirname "$log_path")
+    [ -d "$dir_path" ] || mkdir -p "$dir_path"
+}
+
+format_time() {
+    secs="$1"
+    if [ "$secs" -ge 60 ]; then
+        mins=$((secs / 60))
+        rem=$((secs % 60))
+        printf "%dm %02ds" "$mins" "$rem"
+    else
+        printf "%ds" "$secs"
+    fi
+}
+
+TESTNAME="Kernel_Selftests"
+test_path=$(find_test_case_by_name "$TESTNAME")
+cd "$test_path" || exit 1
+
+res_file="./$TESTNAME.res"
+per_test_file="./$TESTNAME.tests"
+whitelist="./enabled_tests.list"
+selftest_dir="/kselftest"
+arch="$(uname -m)"
+skip_arch="x86 powerpc mips sparc"
+
+pass=0
+fail=0
+skip=0
+
+rm -f "$res_file" "$per_test_file" ./*.log
+
+log_info "--------------------------------------------------------"
+log_info "Starting $TESTNAME..."
+
+check_dependencies find
+
+if [ ! -d "$selftest_dir" ]; then
+    log_skip "$TESTNAME: $selftest_dir not found"
+    echo "$TESTNAME SKIP" > "$res_file"
+    exit 0
+fi
+
+if [ ! -f "$whitelist" ]; then
+    log_skip "$TESTNAME: whitelist $whitelist not found"
+    echo "$TESTNAME SKIP" > "$res_file"
+    exit 0
+fi
+
+run_suite_tests() {
+    suite="$1"
+    dir="$selftest_dir/$suite"
+    ran_any_local=0
+
+    if [ -x "$dir/run_test.sh" ]; then
+        log_info "Running $suite/run_test.sh"
+        logfile="$suite/run_test.sh.log"
+        ensure_log_dir "$logfile"
+        start=$(date +%s)
+        run_with_timeout 300 "$dir/run_test.sh" > "$logfile" 2>&1
+        status=$?
+        end=$(date +%s)
+        elapsed=$((end - start))
+        duration_text=$(format_time "$elapsed")
+        if [ "$status" -eq 0 ]; then
+            log_pass "PASS $suite/run_test.sh ($duration_text)"
+            echo "PASS $suite/run_test.sh $duration_text" >> "$per_test_file"
+            pass=$((pass + 1))
+        else
+            log_fail "FAIL $suite/run_test.sh ($duration_text)"
+            echo "FAIL $suite/run_test.sh $duration_text" >> "$per_test_file"
+            fail=$((fail + 1))
+        fi
+        ran_any_local=1
+    fi
+
+    test_bins=$(find "$dir" -type f -name '*test' -executable 2>/dev/null)
+    for bin in $test_bins; do
+        binname="${bin#"$selftest_dir"/}"
+        case "$binname" in */run_test.sh) continue ;; esac
+
+        log_info "Running $binname"
+        logfile="${binname}.log"
+        ensure_log_dir "$logfile"
+        start=$(date +%s)
+        run_with_timeout 300 "$bin" > "$logfile" 2>&1
+        status=$?
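+        # status is the binary's exit code; non-zero also covers tests killed by the 300 s timeout in run_with_timeout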
+        end=$(date +%s)
+        elapsed=$((end - start))
+        duration_text=$(format_time "$elapsed")
+        if [ "$status" -eq 0 ]; then
+            log_pass "PASS $binname ($duration_text)"
+            echo "PASS $binname $duration_text" >> "$per_test_file"
+            pass=$((pass + 1))
+        else
+            log_fail "FAIL $binname ($duration_text)"
+            echo "FAIL $binname $duration_text" >> "$per_test_file"
+            fail=$((fail + 1))
+        fi
+        ran_any_local=1
+    done
+
+    if [ "$ran_any_local" -eq 0 ]; then
+        log_skip "$suite: no test binaries"
+        echo "SKIP $suite (no test binaries)" >> "$per_test_file"
+        skip=$((skip + 1))
+    fi
+}
+
+while IFS= read -r test || [ -n "$test" ]; do
+    case "$test" in
+        ''|\#*) continue ;;
+    esac
+
+    for a in $skip_arch; do
+        if [ "$test" = "$a" ]; then
+            log_skip "$test skipped on $arch"
+            echo "SKIP $test (unsupported arch)" >> "$per_test_file"
+            skip=$((skip + 1))
+            continue 2
+        fi
+    done
+
+    if echo "$test" | grep -q ':'; then
+        suite=$(echo "$test" | cut -d: -f1)
+        testbin=$(echo "$test" | cut -d: -f2)
+        bin_path="$selftest_dir/$suite/$testbin"
+    elif echo "$test" | grep -q '/'; then
+        suite=$(echo "$test" | cut -d/ -f1)
+        testbin=$(echo "$test" | cut -d/ -f2-)
+        bin_path="$selftest_dir/$suite/$testbin"
+    else
+        suite="$test"
+        testbin=""
+        bin_path=""
+    fi
+
+    if [ -n "$testbin" ]; then
+        if [ ! -d "$selftest_dir/$suite" ]; then
+            log_skip "$suite not found"
+            echo "SKIP $suite (directory not found)" >> "$per_test_file"
+            skip=$((skip + 1))
+            continue
+        fi
+        if [ -x "$bin_path" ]; then
+            log_info "Running $suite/$testbin"
+            logfile="$suite/$testbin.log"
+            ensure_log_dir "$logfile"
+            start=$(date +%s)
+            run_with_timeout 300 "$bin_path" > "$logfile" 2>&1
+            status=$?
+            end=$(date +%s)
+            elapsed=$((end - start))
+            duration_text=$(format_time "$elapsed")
+            if [ "$status" -eq 0 ]; then
+                log_pass "PASS $suite/$testbin ($duration_text)"
+                echo "PASS $suite/$testbin $duration_text" >> "$per_test_file"
+                pass=$((pass + 1))
+            else
+                log_fail "FAIL $suite/$testbin ($duration_text)"
+                echo "FAIL $suite/$testbin $duration_text" >> "$per_test_file"
+                fail=$((fail + 1))
+            fi
+        else
+            log_skip "SKIP $suite/$testbin (not found or not executable)"
+            echo "SKIP $suite/$testbin (not found or not executable)" >> "$per_test_file"
+            skip=$((skip + 1))
+        fi
+        continue
+    fi
+
+    if [ -d "$selftest_dir/$suite" ]; then
+        run_suite_tests "$suite"
+    else
+        log_skip "$suite not found"
+        echo "SKIP $suite (not found)" >> "$per_test_file"
+        skip=$((skip + 1))
+    fi
+
+done < "$whitelist"
+
+log_info "Per-test results written to $per_test_file"
+log_info "------------------- Completed $TESTNAME Testcase -----------"
+
+if [ "$fail" -eq 0 ] && [ "$pass" -gt 0 ]; then
+    echo "$TESTNAME PASS" > "$res_file"
+    log_pass "$TESTNAME: all tests passed"
+    exit 0
+else
+    echo "$TESTNAME FAIL" > "$res_file"
+    log_fail "$TESTNAME: one or more tests failed"
+    exit 1
+fi
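
For reference, a minimal sketch of how a CI step might consume the artifacts produced by `run.sh` (the result file names and line formats come from the script above; the wrapper itself and where it runs are assumptions, not part of this change):

```sh
#!/bin/sh
# Hypothetical CI wrapper: run the suite and summarize its results.
# Assumes it is executed from the deployed Kernel_Selftests directory on the target.
sh run.sh
status=$?

# Overall verdict written by run.sh: "Kernel_Selftests PASS|FAIL|SKIP"
[ -f Kernel_Selftests.res ] && cat Kernel_Selftests.res

# Per-test lines look like "PASS timers/nanosleep 3s" or "FAIL proc/proc-uptime-001 12s"
if [ -f Kernel_Selftests.tests ]; then
    echo "Failed tests:"
    grep '^FAIL' Kernel_Selftests.tests || echo "  none"
fi

exit "$status"
```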