
Conversation

@tstellar tstellar commented Dec 8, 2024

This fixes the llvm-support build that generates the profile data, and wraps the whole cmake --build command with perf instead of wrapping each individual clang invocation. This limits the number of profile files generated and reduces the time spent running perf2bolt.

This fixes the llvm-support build that generates the profile data. However, I'm wondering if maybe we should disable llvm-support and only run hello-world with -DCLANG_BOLT=perf. The BOLT optimizations with perf only give about a 3% performance increase (although maybe with hw counters this would be better), and it takes a very long time to convert all the perf profiles to the fdata format.
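To make the conversion overhead concrete, here is a minimal sketch (the file names, loop, and binary path are placeholders, not taken from the patch; perf2bolt is the BOLT tool referred to above):

```python
# Hypothetical sketch: one perf.data file per compiled source file, each
# requiring its own perf2bolt run. This is the per-profile overhead
# described above. All names and paths are placeholders.
profiles = [f"perf.data.{i}" for i in range(3)]  # in practice ~150 files
for prof in profiles:
    # perf2bolt -p <profile> -o <output.fdata> <binary-being-optimized>
    print(f"perf2bolt -p {prof} -o {prof}.fdata bin/clang")
```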
@tstellar tstellar requested a review from aaupov December 8, 2024 06:08
@llvmbot llvmbot added the clang Clang issues not falling into any other category label Dec 8, 2024
llvmbot commented Dec 8, 2024

@llvm/pr-subscribers-clang

Author: Tom Stellard (tstellar)

Changes

This fixes the llvm-support build that generates the profile data. However, I'm wondering if maybe we should disable llvm-support and only run hello-world with -DCLANG_BOLT=perf. The BOLT optimizations with perf only give about a 3% performance increase (although maybe with hw counters this would be better), and it takes a very long time to convert all the perf profiles to the fdata format.


Full diff: https://github.com/llvm/llvm-project/pull/119117.diff

2 Files Affected:

  • (modified) clang/utils/perf-training/bolt.lit.cfg (+18-4)
  • (modified) clang/utils/perf-training/llvm-support/build.test (+2-2)
diff --git a/clang/utils/perf-training/bolt.lit.cfg b/clang/utils/perf-training/bolt.lit.cfg
index 1d0cf9a8a17a8e..7687a5a5cd2e68 100644
--- a/clang/utils/perf-training/bolt.lit.cfg
+++ b/clang/utils/perf-training/bolt.lit.cfg
@@ -8,21 +8,32 @@ import subprocess
 
 clang_bolt_mode = config.clang_bolt_mode.lower()
 clang_binary = "clang"
-perf_wrapper = f"{config.python_exe} {config.perf_helper_dir}/perf-helper.py perf "
+perf_wrapper = f"{config.python_exe} {config.perf_helper_dir}/perf-helper.py perf"
 
 if clang_bolt_mode == "instrument":
     perf_wrapper = ""
     clang_binary = config.clang_bolt_name
 elif clang_bolt_mode == "lbr":
-    perf_wrapper += " --lbr -- "
+    perf_wrapper += " --lbr --"
 elif clang_bolt_mode == "perf":
-    perf_wrapper += " -- "
+    perf_wrapper += " --"
 else:
     assert 0, "Unsupported CLANG_BOLT_MODE variable"
 
-config.clang = perf_wrapper + os.path.realpath(
-    lit.util.which(clang_binary, config.clang_tools_dir)
-).replace("\\", "/")
+clang_nowrapper = os.path.realpath(
+    lit.util.which(clang_binary, config.clang_tools_dir)
+).replace("\\", "/")
+config.clang = f'{perf_wrapper} {clang_nowrapper}'
+
+# We need to limit the number of build jobs with perf in order to avoid this
+# error:
+#
+# | Permission error mapping pages.
+# | Consider increasing /proc/sys/kernel/perf_event_mlock_kb,
+# | or try again with a smaller value of -m/--mmap_pages.
+ninja_args = ""
+if clang_bolt_mode != "instrument":
+    ninja_args = "-j1"
 
 config.name = "Clang Perf Training"
 config.suffixes = [
@@ -52,3 +63,6 @@ config.substitutions.append(("%test_root", config.test_exec_root))
 config.substitutions.append(('%cmake_generator', config.cmake_generator))
 config.substitutions.append(('%cmake', config.cmake_exe))
 config.substitutions.append(('%llvm_src_dir', config.llvm_src_dir))
+config.substitutions.append(('%perf_cmake_compiler_launcher', perf_wrapper.replace(' ', ';')))
+config.substitutions.append(('%nowrapper_clang', clang_nowrapper))
+config.substitutions.append(('%ninja_args', ninja_args))
diff --git a/clang/utils/perf-training/llvm-support/build.test b/clang/utils/perf-training/llvm-support/build.test
index f29a594c846869..1f4d76502a3757 100644
--- a/clang/utils/perf-training/llvm-support/build.test
+++ b/clang/utils/perf-training/llvm-support/build.test
@@ -1,2 +1,2 @@
-RUN: %cmake -G %cmake_generator -B %t -S %llvm_src_dir -DCMAKE_C_COMPILER=%clang -DCMAKE_CXX_COMPILER=%clang -DCMAKE_CXX_FLAGS="--driver-mode=g++" -DCMAKE_BUILD_TYPE=Release
-RUN: %cmake --build %t -v --target LLVMSupport
+RUN: %cmake -G %cmake_generator -B %t -S %llvm_src_dir -DCMAKE_CXX_COMPILER_LAUNCHER="%perf_cmake_compiler_launcher" -DCMAKE_C_COMPILER="%nowrapper_clang" -DCMAKE_CXX_COMPILER="%nowrapper_clang" -DCMAKE_CXX_FLAGS="--driver-mode=g++" -DCMAKE_BUILD_TYPE=Release
+RUN: %cmake --build %t %ninja_args -v --target LLVMSupport

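One non-obvious detail in the change above: CMAKE_CXX_COMPILER_LAUNCHER is a CMake list, and CMake lists are semicolon-separated, which is why the lit config turns the space-separated wrapper command into a semicolon-separated one. A minimal sketch with placeholder paths:

```python
# Placeholder values standing in for config.python_exe and
# config.perf_helper_dir in the lit config.
python_exe = "python3"
perf_helper_dir = "clang/utils/perf-training"
perf_wrapper = f"{python_exe} {perf_helper_dir}/perf-helper.py perf --"

# CMake treats ';' as a list separator, so each token becomes one
# element of the launcher command prepended to every compile.
launcher = perf_wrapper.replace(" ", ";")
print(launcher)
```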

@aaupov aaupov left a comment


LG

aaupov commented Dec 9, 2024

However, I'm wondering if maybe we should disable llvm-support and only run hello-world with -DCLANG_BOLT=perf.

Existing perf training is inadequate for collecting a sampled profile – we simply don't get enough samples, and no-LBR mode further drops the performance. If we wanted to pursue perf sampling further, we'd need to extend perf training with either building LLVM subtargets or llvm-test-suite.

The bolt optimizations with perf only give about a 3% performance increase (although maybe with hw counters this would be better)

Yes, no-LBR mode has very limited benefit due to missing edge counts.

and it takes a very long time to convert all the perf profiles to the fdata format.

As discussed on Discord, we may be able to reduce the time by dropping profile-format=yaml which is not required in this case. By the way, although I understand the overhead adds up, we shouldn't be converting many profiles: I guess just two - one for cxx/helloworld and another for llvm-support.

@tstellar

@aaupov When we build llvm-support there is one perf.data file generated for each cpp file compiled, so we end up with about 150 files. Is there some way to merge those together before running perf2bolt?

aaupov commented Dec 10, 2024

@aaupov When we build llvm-support there is one perf.data file generated for each cpp file compiled, so we end up with about 150 files. Is there some way to merge those together before running perf2bolt?

I see. The best way would be to run perf once so that all clang invocations are under it. If llvm-support is configured as cmake rule for llvm external project, perf wrapper could be set as CMAKE_COMPILER_LAUNCHER.

@tstellar

@aaupov When we build llvm-support there is one perf.data file generated for each cpp file compiled, so we end up with about 150 files. Is there some way to merge those together before running perf2bolt?

I see. The best way would be to run perf once so that all clang invocations are under it. If llvm-support is configured as cmake rule for llvm external project, perf wrapper could be set as CMAKE_COMPILER_LAUNCHER.

Even with using CMAKE_CXX_COMPILER_LAUNCHER, we still get one invocation of perf for each invocation of clang. I just tested out %perf_wrapper %cmake --build %t -v --target LLVMSupport and that seems to work.
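The contrast between the two wrapping strategies can be sketched as follows (the wrapper string, compiler arguments, and build directory are placeholders; the whole-build form mirrors the %perf_wrapper %cmake --build command quoted above):

```python
# Placeholder wrapper and build commands for illustration.
perf = "python3 perf-helper.py perf --"
build = "cmake --build /tmp/build -v --target LLVMSupport"

# Per-invocation wrapping (compiler launcher): perf runs once per
# compiler process, so a build of LLVMSupport leaves roughly one
# perf.data per source file.
per_tu = f"{perf} clang++ -c foo.cpp"

# Whole-build wrapping: perf wraps the single cmake process and all of
# its children, producing one perf.data for the entire build.
whole_build = f"{perf} {build}"

print(per_tu)
print(whole_build)
```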

@tstellar tstellar merged commit ea6e135 into llvm:main Dec 12, 2024
8 checks passed
llvm-ci commented Dec 13, 2024

LLVM Buildbot has detected a new failure on builder llvm-clang-aarch64-darwin running on doug-worker-4 while building clang at step 6 "test-build-unified-tree-check-all".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/190/builds/11269

Here is the relevant piece of the build log for reference:
Step 6 (test-build-unified-tree-check-all) failure: test (failure)
...
PASS: lld :: MachO/arm64-objc-stubs-fix.s (79799 of 80341)
PASS: lld :: MachO/arm64-objc-stubs.s (79800 of 80341)
PASS: lld :: MachO/arm64-reloc-got-load.s (79801 of 80341)
PASS: lld :: MachO/arm64-reloc-pointer-to-got.s (79802 of 80341)
PASS: lld :: MachO/arm64-reloc-tlv-load.s (79803 of 80341)
PASS: lld :: MachO/arm64-relocs.s (79804 of 80341)
PASS: lld :: MachO/arm64-stubs.s (79805 of 80341)
PASS: lit :: shtest-define.py (79806 of 80341)
PASS: lld :: ELF/aarch64-thunk-reuse2.s (79807 of 80341)
UNRESOLVED: lld :: MachO/arm64-thunk-visibility.s (79808 of 80341)
******************** TEST 'lld :: MachO/arm64-thunk-visibility.s' FAILED ********************
Exception during script execution:
OSError: [Errno 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/worker.py", line 76, in _execute_test_handle_errors
    result = test.config.test_format.execute(test, lit_config)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/formats/shtest.py", line 29, in execute
    return lit.TestRunner.executeShTest(
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 2326, in executeShTest
    return _runShTest(test, litConfig, useExternalSh, script, tmpBase)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 2270, in _runShTest
    res = runOnce(execdir)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 2244, in runOnce
    res = executeScript(test, litConfig, tmpBase, scriptCopy, execdir)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 1277, in executeScript
    f.close()
OSError: [Errno 28] No space left on device


********************
UNRESOLVED: lld :: MachO/arm64-thunks.s (79809 of 80341)
******************** TEST 'lld :: MachO/arm64-thunks.s' FAILED ********************
Exception during script execution:
Traceback (most recent call last):
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/worker.py", line 76, in _execute_test_handle_errors
    result = test.config.test_format.execute(test, lit_config)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/formats/shtest.py", line 29, in execute
    return lit.TestRunner.executeShTest(
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 2326, in executeShTest
    return _runShTest(test, litConfig, useExternalSh, script, tmpBase)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 2270, in _runShTest
    res = runOnce(execdir)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 2244, in runOnce
    res = executeScript(test, litConfig, tmpBase, scriptCopy, execdir)
  File "/Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/llvm/utils/lit/lit/TestRunner.py", line 1210, in executeScript
    f = open(script, mode, **open_kwargs)
