
[CI] Setup generate_report to describe ninja failures #152621


Conversation

boomanaiden154 (Contributor)

This patch makes it so that generate_report will add information about
failed build actions to the summary report. This makes it significantly
easier to find compilation failures, especially given we run ninja with
-k 0.

This patch only does the integration into generate_report (along with
testing). Actual utilization in the script is split into a separate
patch to try and keep things clean.
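
For context, here is a minimal sketch of the kind of log this parsing handles and how a failure gets extracted, based on the tests in the diff below; the file names and error lines are illustrative, not from a real build:

```python
# Minimal sketch, not part of this patch: assumes it is run from the .ci
# directory so that generate_test_report_lib is importable. The log lines
# follow the shape used by the new tests (a FAILED: entry between ninja's
# [N/M] progress lines).
import generate_test_report_lib

ninja_log = [
    "[1/3] foo.o",
    "[2/3] bar.o",
    "FAILED: bar.o",
    "clang++ -c bar.cpp",
    "bar.cpp:1:1: error: expected unqualified-id",
    "[3/3] baz.o",
]

# Based on the tests, each failure comes back as an (action, message) pair,
# where the message runs from the FAILED: line up to the next progress line.
for action, message in generate_test_report_lib.find_failure_in_ninja_logs([ninja_log]):
    print(action)
    print(message)
```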

boomanaiden154 added a commit to boomanaiden154/llvm-project that referenced this pull request Aug 8, 2025

@DavidSpickett (Collaborator) left a comment

I have trouble keeping all the combinations of build and test state in my head, but I think you've got good test coverage anyway and we're bound to find corner cases in practice.

So this LGTM.

Thanks for working on this, it's so much neater than I expected when you said you would be parsing logs.

@DavidSpickett (Collaborator)

> especially given we run ninja with -k 0.

To explain to future readers / current reviewers, add: "-k 0 builds as much as possible until everything has completed or failed, rather than stopping after the first command fails". At least that's my understanding of it.

This trips me up in llvm-test-suite logs too. Clang will crash and print part of the error report, then, in the long time it takes to generate the backtrace, other logs get printed. Then again, it's nice for the build to go as far as possible so you can fix multiple things each time.

boomanaiden154 added a commit to boomanaiden154/llvm-project that referenced this pull request Aug 8, 2025

@llvmbot (Member) commented Aug 8, 2025

@llvm/pr-subscribers-github-workflow

Author: Aiden Grossman (boomanaiden154)

Changes

This patch makes it so that generate_report will add information about
failed build actions to the summary report. This makes it significantly
easier to find compilation failures, especially given we run ninja with
-k 0.

This patch only does the integration into generate_report (along with
testing). Actual utilization in the script is split into a separate
patch to try and keep things clean.

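As a rough usage sketch (mirroring the new tests in the diff; the title, return code, and log lines are illustrative), the ninja logs are passed to generate_report as a new argument after the JUnit objects:

```python
# Sketch only, assuming generate_test_report_lib is importable (e.g. run from
# the .ci directory). A failing return code with no JUnit results should now
# produce a report that names the failed build action instead of only
# pointing at the raw build log.
import generate_test_report_lib

report = generate_test_report_lib.generate_report(
    "Example Build",  # report title
    1,                # build return code
    [],               # parsed JUnit objects (none were produced here)
    [                 # ninja logs: a list of logs, each a list of lines
        [
            "[1/2] foo.o",
            "FAILED: foo.o",
            "clang++ -c foo.cpp",
            "foo.cpp:1:1: error: expected unqualified-id",
            "[2/2] bar.o",
        ]
    ],
)
print(report)
```

Per the expected outputs in the tests, each extracted failure is rendered as a collapsible `<details>` block in the summary.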

Full diff: https://github.com/llvm/llvm-project/pull/152621.diff

3 Files Affected:

  • (modified) .ci/generate_test_report_lib.py (+71-20)
  • (modified) .ci/generate_test_report_lib_test.py (+202-5)
  • (modified) .github/workflows/premerge.yaml (+9)
diff --git a/.ci/generate_test_report_lib.py b/.ci/generate_test_report_lib.py
index df95db6a1d6b0..75e3ca0e7d32d 100644
--- a/.ci/generate_test_report_lib.py
+++ b/.ci/generate_test_report_lib.py
@@ -73,6 +73,25 @@ def find_failure_in_ninja_logs(ninja_logs: list[list[str]]) -> list[tuple[str, s
     return failures
 
 
+def _format_ninja_failures(ninja_failures: list[tuple[str, str]]) -> list[str]:
+    """Formats ninja failures into summary views for the report."""
+    output = []
+    for build_failure in ninja_failures:
+        failed_action, failure_message = build_failure
+        output.extend(
+            [
+                "<details>",
+                f"<summary>{failed_action}</summary>",
+                "",
+                "```",
+                failure_message,
+                "```",
+                "</details>",
+            ]
+        )
+    return output
+
+
 # Set size_limit to limit the byte size of the report. The default is 1MB as this
 # is the most that can be put into an annotation. If the generated report exceeds
 # this limit and failures are listed, it will be generated again without failures
@@ -83,6 +102,7 @@ def generate_report(
     title,
     return_code,
     junit_objects,
+    ninja_logs: list[list[str]],
     size_limit=1024 * 1024,
     list_failures=True,
 ):
@@ -120,15 +140,34 @@ def generate_report(
                 ]
             )
         else:
-            report.extend(
-                [
-                    "The build failed before running any tests.",
-                    "",
-                    SEE_BUILD_FILE_STR,
-                    "",
-                    UNRELATED_FAILURES_STR,
-                ]
-            )
+            ninja_failures = find_failure_in_ninja_logs(ninja_logs)
+            if not ninja_failures:
+                report.extend(
+                    [
+                        "The build failed before running any tests. Detailed "
+                        "information about the build failure could not be "
+                        "automatically obtained.",
+                        "",
+                        SEE_BUILD_FILE_STR,
+                        "",
+                        UNRELATED_FAILURES_STR,
+                    ]
+                )
+            else:
+                report.extend(
+                    [
+                        "The build failed before running any tests. Click on a "
+                        "failure below to see the details.",
+                        "",
+                    ]
+                )
+                report.extend(_format_ninja_failures(ninja_failures))
+                report.extend(
+                    [
+                        "",
+                        UNRELATED_FAILURES_STR,
+                    ]
+                )
         return "\n".join(report)
 
     tests_passed = tests_run - tests_skipped - tests_failed
@@ -173,14 +212,28 @@ def plural(num_tests):
     elif return_code != 0:
         # No tests failed but the build was in a failed state. Bring this to the user's
         # attention.
-        report.extend(
-            [
-                "",
-                "All tests passed but another part of the build **failed**.",
-                "",
-                SEE_BUILD_FILE_STR,
-            ]
-        )
+        ninja_failures = find_failure_in_ninja_logs(ninja_logs)
+        if not ninja_failures:
+            report.extend(
+                [
+                    "",
+                    "All tests passed but another part of the build **failed**. "
+                    "Information about the build failure could not be automatically "
+                    "obtained.",
+                    "",
+                    SEE_BUILD_FILE_STR,
+                ]
+            )
+        else:
+            report.extend(
+                [
+                    "",
+                    "All tests passed but another part of the build **failed**. Click on "
+                    "a failure below to see the details.",
+                    "",
+                ]
+            )
+            report.extend(_format_ninja_failures(ninja_failures))
 
     if failures or return_code != 0:
         report.extend(["", UNRELATED_FAILURES_STR])
@@ -200,7 +253,5 @@ def plural(num_tests):
 
 def generate_report_from_files(title, return_code, junit_files):
     return generate_report(
-        title,
-        return_code,
-        [JUnitXml.fromfile(p) for p in junit_files],
+        title, return_code, [JUnitXml.fromfile(p) for p in junit_files], []
     )
diff --git a/.ci/generate_test_report_lib_test.py b/.ci/generate_test_report_lib_test.py
index 41f3eae591e19..389d781042e23 100644
--- a/.ci/generate_test_report_lib_test.py
+++ b/.ci/generate_test_report_lib_test.py
@@ -126,7 +126,7 @@ def test_ninja_log_multiple_failures(self):
 
     def test_title_only(self):
         self.assertEqual(
-            generate_test_report_lib.generate_report("Foo", 0, []),
+            generate_test_report_lib.generate_report("Foo", 0, [], []),
             dedent(
                 """\
                 # Foo
@@ -137,12 +137,12 @@ def test_title_only(self):
 
     def test_title_only_failure(self):
         self.assertEqual(
-            generate_test_report_lib.generate_report("Foo", 1, []),
+            generate_test_report_lib.generate_report("Foo", 1, [], []),
             dedent(
                 """\
             # Foo
 
-            The build failed before running any tests.
+            The build failed before running any tests. Detailed information about the build failure could not be automatically obtained.
 
             Download the build's log file to see the details.
 
@@ -150,6 +150,45 @@ def test_title_only_failure(self):
             ),
         )
 
+    def test_title_only_failure_ninja_log(self):
+        self.assertEqual(
+            generate_test_report_lib.generate_report(
+                "Foo",
+                1,
+                [],
+                [
+                    [
+                        "[1/5] test/1.stamp",
+                        "[2/5] test/2.stamp",
+                        "[3/5] test/3.stamp",
+                        "[4/5] test/4.stamp",
+                        "FAILED: test/4.stamp",
+                        "touch test/4.stamp",
+                        "Wow! Risk!",
+                        "[5/5] test/5.stamp",
+                    ]
+                ],
+            ),
+            dedent(
+                """\
+            # Foo
+
+            The build failed before running any tests. Click on a failure below to see the details.
+
+            <details>
+            <summary>test/4.stamp</summary>
+
+            ```
+            FAILED: test/4.stamp
+            touch test/4.stamp
+            Wow! Risk!
+            ```
+            </details>
+            
+            If these failures are unrelated to your changes (for example tests are broken or flaky at HEAD), please open an issue at https://github.com/llvm/llvm-project/issues and add the `infrastructure` label."""
+            ),
+        )
+
     def test_no_tests_in_testsuite(self):
         self.assertEqual(
             generate_test_report_lib.generate_report(
@@ -167,12 +206,13 @@ def test_no_tests_in_testsuite(self):
                         )
                     )
                 ],
+                [],
             ),
             dedent(
                 """\
                 # Foo
 
-                The build failed before running any tests.
+                The build failed before running any tests. Detailed information about the build failure could not be automatically obtained.
 
                 Download the build's log file to see the details.
 
@@ -198,6 +238,7 @@ def test_no_failures(self):
                         )
                     )
                 ],
+                [],
             ),
             (
                 dedent(
@@ -227,6 +268,7 @@ def test_no_failures_build_failed(self):
                         )
                     )
                 ],
+                [],
             ),
             (
                 dedent(
@@ -235,7 +277,7 @@ def test_no_failures_build_failed(self):
 
               * 1 test passed
 
-              All tests passed but another part of the build **failed**.
+              All tests passed but another part of the build **failed**. Information about the build failure could not be automatically obtained.
 
               Download the build's log file to see the details.
               
@@ -244,6 +286,155 @@ def test_no_failures_build_failed(self):
             ),
         )
 
+    def test_no_failures_build_failed_ninja_log(self):
+        self.assertEqual(
+            generate_test_report_lib.generate_report(
+                "Foo",
+                1,
+                [
+                    junit_from_xml(
+                        dedent(
+                            """\
+          <?xml version="1.0" encoding="UTF-8"?>
+          <testsuites time="0.00">
+          <testsuite name="Passed" tests="1" failures="0" skipped="0" time="0.00">
+          <testcase classname="Bar/test_1" name="test_1" time="0.00"/>
+          </testsuite>
+          </testsuites>"""
+                        )
+                    )
+                ],
+                [
+                    [
+                        "[1/5] test/1.stamp",
+                        "[2/5] test/2.stamp",
+                        "[3/5] test/3.stamp",
+                        "[4/5] test/4.stamp",
+                        "FAILED: test/4.stamp",
+                        "touch test/4.stamp",
+                        "Wow! Close To You!",
+                        "[5/5] test/5.stamp",
+                    ]
+                ],
+            ),
+            (
+                dedent(
+                    """\
+                    # Foo
+
+                    * 1 test passed
+
+                    All tests passed but another part of the build **failed**. Click on a failure below to see the details.
+
+                    <details>
+                    <summary>test/4.stamp</summary>
+
+                    ```
+                    FAILED: test/4.stamp
+                    touch test/4.stamp
+                    Wow! Close To You!
+                    ```
+                    </details>
+
+                    If these failures are unrelated to your changes (for example tests are broken or flaky at HEAD), please open an issue at https://github.com/llvm/llvm-project/issues and add the `infrastructure` label."""
+                )
+            ),
+        )
+
+    def test_no_failures_multiple_build_failed_ninja_log(self):
+        test = generate_test_report_lib.generate_report(
+            "Foo",
+            1,
+            [
+                junit_from_xml(
+                    dedent(
+                        """\
+          <?xml version="1.0" encoding="UTF-8"?>
+          <testsuites time="0.00">
+          <testsuite name="Passed" tests="1" failures="0" skipped="0" time="0.00">
+          <testcase classname="Bar/test_1" name="test_1" time="0.00"/>
+          </testsuite>
+          </testsuites>"""
+                    )
+                )
+            ],
+            [
+                [
+                    "[1/5] test/1.stamp",
+                    "[2/5] test/2.stamp",
+                    "FAILED: touch test/2.stamp",
+                    "Wow! Be Kind!",
+                    "[3/5] test/3.stamp",
+                    "[4/5] test/4.stamp",
+                    "FAILED: touch test/4.stamp",
+                    "Wow! I Dare You!",
+                    "[5/5] test/5.stamp",
+                ]
+            ],
+        )
+        print(test)
+        self.assertEqual(
+            generate_test_report_lib.generate_report(
+                "Foo",
+                1,
+                [
+                    junit_from_xml(
+                        dedent(
+                            """\
+          <?xml version="1.0" encoding="UTF-8"?>
+          <testsuites time="0.00">
+          <testsuite name="Passed" tests="1" failures="0" skipped="0" time="0.00">
+          <testcase classname="Bar/test_1" name="test_1" time="0.00"/>
+          </testsuite>
+          </testsuites>"""
+                        )
+                    )
+                ],
+                [
+                    [
+                        "[1/5] test/1.stamp",
+                        "[2/5] test/2.stamp",
+                        "FAILED: touch test/2.stamp",
+                        "Wow! Be Kind!",
+                        "[3/5] test/3.stamp",
+                        "[4/5] test/4.stamp",
+                        "FAILED: touch test/4.stamp",
+                        "Wow! I Dare You!",
+                        "[5/5] test/5.stamp",
+                    ]
+                ],
+            ),
+            (
+                dedent(
+                    """\
+                    # Foo
+
+                    * 1 test passed
+
+                    All tests passed but another part of the build **failed**. Click on a failure below to see the details.
+
+                    <details>
+                    <summary>test/2.stamp</summary>
+
+                    ```
+                    FAILED: touch test/2.stamp
+                    Wow! Be Kind!
+                    ```
+                    </details>
+                    <details>
+                    <summary>test/4.stamp</summary>
+
+                    ```
+                    FAILED: touch test/4.stamp
+                    Wow! I Dare You!
+                    ```
+                    </details>
+
+                    If these failures are unrelated to your changes (for example tests are broken or flaky at HEAD), please open an issue at https://github.com/llvm/llvm-project/issues and add the `infrastructure` label."""
+                )
+            ),
+        )
+
     def test_report_single_file_single_testsuite(self):
         self.assertEqual(
             generate_test_report_lib.generate_report(
@@ -271,6 +462,7 @@ def test_report_single_file_single_testsuite(self):
                         )
                     )
                 ],
+                [],
             ),
             (
                 dedent(
@@ -366,6 +558,7 @@ def test_report_single_file_multiple_testsuites(self):
                         )
                     )
                 ],
+                [],
             ),
             self.MULTI_SUITE_OUTPUT,
         )
@@ -407,6 +600,7 @@ def test_report_multiple_files_multiple_testsuites(self):
                         )
                     ),
                 ],
+                [],
             ),
             self.MULTI_SUITE_OUTPUT,
         )
@@ -431,6 +625,7 @@ def test_report_dont_list_failures(self):
                         )
                     )
                 ],
+                [],
                 list_failures=False,
             ),
             (
@@ -467,6 +662,7 @@ def test_report_dont_list_failures_link_to_log(self):
                         )
                     )
                 ],
+                [],
                 list_failures=False,
             ),
             (
@@ -506,6 +702,7 @@ def test_report_size_limit(self):
                         )
                     )
                 ],
+                [],
                 size_limit=512,
             ),
             (
diff --git a/.github/workflows/premerge.yaml b/.github/workflows/premerge.yaml
index d0518fa6879e2..9dbb2dfe66480 100644
--- a/.github/workflows/premerge.yaml
+++ b/.github/workflows/premerge.yaml
@@ -70,6 +70,12 @@ jobs:
           export SCCACHE_IDLE_TIMEOUT=0
           sccache --start-server
 
+          export projects_to_build=polly
+          export project_check_targets=check-polly
+          export runtimes_to_build=""
+          export runtimes_check_targets=""
+          export runtimes_check_targets_needs_reconfig=""
+
           ./.ci/monolithic-linux.sh "${projects_to_build}" "${project_check_targets}" "${runtimes_to_build}" "${runtimes_check_targets}" "${runtimes_check_targets_needs_reconfig}" "${enable_cir}"
       - name: Upload Artifacts
         if: '!cancelled()'
@@ -106,6 +112,9 @@ jobs:
           echo "Building projects: ${projects_to_build}"
           echo "Running project checks targets: ${project_check_targets}"
 
+          export projects_to_build=polly
+          export project_check_targets=check-polly
+
           echo "windows-projects=${projects_to_build}" >> $GITHUB_OUTPUT
           echo "windows-check-targets=${project_check_targets}" >> $GITHUB_OUTPUT
       - name: Build and Test

boomanaiden154 changed the base branch from users/boomanaiden154/main.ci-setup-generate_report-to-describe-ninja-failures to main August 8, 2025 16:43
boomanaiden154 merged commit 869bce2 into main Aug 8, 2025
8 of 10 checks passed
boomanaiden154 deleted the users/boomanaiden154/ci-setup-generate_report-to-describe-ninja-failures branch August 8, 2025 16:44
llvm-sync bot pushed a commit to arm/arm-toolchain that referenced this pull request Aug 8, 2025
This patch makes it so that generate_report will add information about
failed build actions to the summary report. This makes it significantly
easier to find compilation failures, especially given we run ninja with
-k 0.

This patch only does the integration into generate_report (along with
testing). Actual utilization in the script is split into a separate
patch to try and keep things clean.

Reviewers: dschuff, cmtice, DavidSpickett, Keenuts, lnihlen, gburgessiv

Reviewed By: cmtice, DavidSpickett

Pull Request: llvm/llvm-project#152621