[DFAJumpThreading] Enable DFAJumpThread by default. #157646
Conversation
Co-Authored-By: YinZd <[email protected]>
Co-Authored-By: ict-ql <[email protected]>
Co-Authored-By: Lin Wang <[email protected]>
|
See previous discussion on #83033. |
|
This pass currently does not have a maintainer. We should have one if we enable it by default. @XChy What is your opinion on the current quality of this pass? @buggfg Beyond running SPEC, have you performed any due diligence to ensure this pass is ready for default enablement? Have you performed any targeted fuzzing for this enablement? I'll provide new compile-time numbers. |
|
The current implementation is conservative and optimizes only one switch. After numerous fixes, this pass has become more stable and safe. For me, it's acceptable to enable this pass now if there is no severe compile-time regression. And enabling it will uncover more hidden problems. |
|
Generally looks okay, with some outliers that need to be investigated (libclamav_nsis_LZMADecode.c +95%, NativeInlineSiteSymbol.cpp +33%). These might be fine assuming the increase is due to unavoidable second order effects from the transformation being performed. |
Hi, I’m really glad to hear that both reviewers support enabling DFAJumpThreading by default. I haven’t had the chance to do any additional research beyond the SPEC2017 benchmarks yet. However, I believe that enabling this optimization by default is essential for the embedded domain, as it will help increase LLVM's impact and uncover other potential issues. :) |
|
@nikic, I can be a candidate for maintainer, as I am familiar with the codebase of this pass and actively involved in it. But it would be better if there were someone else to co-maintain. I'm concerned that only one person will be working on it, actually. @dybv-sc @UsmanNadeem, do you volunteer too? |
Will look into these cases. |
|
I believe #161632 has resolved most outliers. The bottleneck mostly lies in the later optimizations in the pipeline after duplicating blocks, instead of lying in DFAJumpThreading itself. |
|
After working on reducing compile time, we have better numbers: https://llvm-compile-time-tracker.com/compare.php?from=ed113e7904943565b4cd05588f6b639e40187510&to=2b901ca2e1b77d2b7a31cbcb57a921aa662341f9&stat=instructions:u. |
Yeah, those results look great. I also started a compile-time run on llvm-opt-benchmark with these results: dtcxzyw/llvm-opt-benchmark#2906 (comment) If you have time, looking at the big outlier there (openssl/smime.ll with +175%) would be good. |
|
@zyw-bot csmith-fuzz |
#162447 resolves it partially, but the main cause remains: the number of phi nodes increases massively, and SLPVectorizer therefore slows down. I haven't come up with a solution on the DFAJumpThreading side yet. Maybe we can redesign the SSA updating method there. |
|
@XChy Can you share what the before/after IR for that case looks like? |
@nikic Sure, see also https://gist.github.com/XChy/9fd567c44be7f5097b7edb973ef236f8. |
|
The IR file may be too big to read. To illustrate it more clearly, we can imagine a threadable path |
|
Would it make sense to limit the number of phi nodes? Looking at this example, I'm also somewhat surprised that we perform the optimization in a case where the result still has multiple large dispatch switches -- is that expected/profitable? |
Yes, but the problem is that it's hard to predict the exact number of inserted phi nodes before threading. Coarsely, we can set a threshold for the number of unduplicated successors.
The multiple large dispatch switches are not the targets to thread over; it's just like a sub-switch under the switch. It's semantically correct to duplicate such switches. For the small sub-switch, it's as profitable as common branches. But for such a big sub-switch, I am not quite sure without testing on real benchmarks. If the threading path is cold, I guess it's unprofitable due to the increase in code size (compares and jump tables?). |
|
It occurs to me that the cost models of some targets, like X86/AArch64, assume the codesize cost of branches to be the basic cost. Thus, the duplication cost of switches is not correctly estimated. We can adjust the threading cost here. |
|
Looks like we have resolved most outliers. What do you think about the current status of this pass? @nikic @UsmanNadeem |
nikic left a comment:
From https://llvm.org/docs/DeveloperPolicy.html#adding-or-enabling-a-new-llvm-pass:
Maintenance: The pass (and any analyses it depends on) must have at least one maintainer.
The pass has two maintainers: https://github.com/llvm/llvm-project/blob/main/llvm/Maintainers.md#dfajumpthreading
Usefulness: There should be evidence that the pass improves performance (or whatever metric it optimizes for) on real-world workloads. Improvements seen only on synthetic benchmarks may be insufficient.
There are improvements in benchmarks as indicated in the PR description. I also know that it helps zlib-rs performance. Generally I think it's pretty clear that this will provide real-world performance benefits for state machine type code. Which will only affect a small subset of code, but...
Compile-Time: The pass should not have a large impact on compile-time, where the evaluation of what “large” means is up to reviewer discretion, and may differ based on the value the pass provides. In any case, it is expected that a concerted effort has been made to mitigate the compile-time impact, both for the average case, and for pathological cases.
...the latest compile-time numbers indicate that the pass is essentially free.
On llvm-opt-benchmark we do see some regressions (up to 20%) on specific files, but these are all cases where the transform triggers. Regressions mostly seem to be due to second order effects, where further compilation slows down after the transform. It's likely that this can be further mitigated, but I think the current state is good enough.
Correctness: The pass should have no known correctness issues (except global correctness issues that affect all of LLVM). If an old pass is being enabled (rather than implementing a new one incrementally), additional due diligence is required. The pass should be fully reviewed to ensure that it still complies with current quality standards. Fuzzing with disabled profitability checks may help gain additional confidence in the implementation.
There are no open DFAJumpThreading issues that affect correctness, only one missed optimization issue (#70767).
I've not reviewed this code myself, but I believe @XChy has been looking at it a lot. There is probably more we could do to gain confidence in the correctness of the pass, but I think at this point I'm fine with flipping the switch.
LGTM
|
One thing that I am wondering about is the position of the pass. It's currently in the function simplification pipeline, but I'm not sure whether this is the right position. It might make more sense to perform it as part of the optimization rather than the simplification pipeline. But I think this doesn't need to block the default enablement; we can always adjust the position later. |
|
@nikic Thanks for your help in pushing this optimization forward! I have opened a meta issue #165984 to track the problems to be solved at its initial stage. I am happy to see the enablement of this pass. @UsmanNadeem, just to double-check, do you think it's prepared to enable DFAJumpThreading at this stage? |
Sorry for the delayed replies; I was busy with the LLVM dev conference and travel. Yes, it looks good to me. All the issues I know of have been resolved, and any refinements can be done later. I think some downstream users have already been using this pass (judging from the occasional bug/perf issue reports); wider usage after default enablement might uncover more opportunities.
XChy left a comment:
LGTM, cheers.
|
We have started seeing a test failure in one of our builders. We confirmed that reverting this patch resolves the issue. For context, the failure only reproduces in builders configured to run a two-stage build with PGO enabled. I'm working on gathering more detailed information, but I wanted to give you a quick heads-up. |
This reverts commit 0ba7bfc.
…." (#167352) Reverts llvm/llvm-project#157646, DFAJumpThread is causing miscompiles when building Clang with PGO, see llvm/llvm-project#166868 for details.


We recommend setting `dfa-jump-thread` to be enabled by default. It's a mature optimization that has been supported since GCC 9.1.0. At the `-O2` opt level, both the GCC and ICX compilers have this optimization enabled by default. Once it's enabled, we saw a 13% performance improvement in the CoreMark benchmark on the X86 platform (Intel i9-11900K Rocket Lake), and even a 15% increase on the KunMingHu FPGA. Additionally, we verified the correctness of this pass using SPEC 2017.
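For reference, a hedged sketch of how the pass can be exercised manually (flag and pass names as they exist in upstream LLVM at the time of writing; file names are placeholders):

```shell
# Run the pass directly on an IR file with the new pass manager.
opt -passes=dfa-jump-threading input.ll -S -o threaded.ll

# Toggle the pipeline-level switch through clang (prior to default enablement).
clang -O2 -mllvm -enable-dfa-jump-thread=true foo.c -o foo
```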