Conversation
@buggfg buggfg commented Sep 9, 2025

We recommend enabling dfa-jump-thread by default. It is a mature optimization that GCC has supported since 9.1.0, and at -O2 both the GCC and ICX compilers enable it by default.

Once enabled, we saw a 13% performance improvement on the CoreMark benchmark on x86 (Intel i9-11900K, Rocket Lake), and a 15% improvement on the KunMingHu FPGA. We also verified the correctness of this pass with SPEC 2017.

Co-Authored-By: YinZd <[email protected]>
Co-Authored-By: ict-ql <[email protected]>
Co-Authored-By: Lin Wang <[email protected]>
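For context, here is a minimal sketch (my own illustration, not code from this PR or from CoreMark) of the switch-in-a-loop state-machine pattern this pass targets: when every case assigns the state variable a compile-time-known constant, the pass can thread each case directly to its successor block instead of re-dispatching through the switch on every iteration.

```c
#include <stddef.h>

/* A tiny DFA-style loop: `state`'s next value is a constant along every
 * path, so DFA jump threading can jump case-to-case directly instead of
 * going back through the switch dispatch each iteration. */
int run_dfa(const int *in, size_t n) {
    int state = 0, acc = 0;
    for (size_t i = 0; i < n; i++) {
        switch (state) {
        case 0:
            /* Next state is one of two constants, selected by the input. */
            state = in[i] ? 1 : 2;
            break;
        case 1:            /* previous element was nonzero */
            acc++;
            state = 0;
            break;
        case 2:            /* previous element was zero */
            state = 0;
            break;
        }
    }
    return acc;
}
```

With threading, the `case 1` and `case 2` blocks branch straight back to `case 0`, removing the loop-carried dispatch that dominates interpreter-style hot loops such as CoreMark's state-transition kernel.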

github-actions bot commented Sep 9, 2025

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository, in which case you can instead tag reviewers by name in a comment using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "pinging" the PR, i.e. adding a comment saying "Ping". The common courtesy ping rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@efriedma-quic
Collaborator

See previous discussion on #83033.

@nikic nikic requested a review from XChy September 10, 2025 08:37
@nikic
Contributor

nikic commented Sep 10, 2025

This pass currently does not have a maintainer. We should have one if we enable it by default.

@XChy What is your opinion on the current quality of this pass?

@buggfg Beyond running SPEC, have you performed any due diligence to ensure this pass is ready for default enablement? For example, have you done any targeted fuzzing?

I'll provide new compile-time numbers.

@XChy
Member

XChy commented Sep 10, 2025

The current implementation is conservative and optimizes only one switch. After numerous fixes, this pass has become more stable and safe. For me, it's acceptable to enable this pass now if there is no severe compile-time regression. And enabling it will uncover more hidden problems.
My main concern is that compile-time may increase massively if we optimize multiple switches in the future, though we don't allow it currently.
Anyway, I am not against enabling DFAJumpThreading by default.
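As a concrete illustration of the "only one switch" limitation mentioned above (my own sketch, not code from the pass or its tests): a loop driving two independent state machines contains two switches that each fit the DFA pattern, but per the comment the pass would currently thread at most one of them, leaving the other's dispatch in place. Correctness is unaffected either way.

```c
#include <stddef.h>

/* Two independent state machines in one loop. Both switches follow the
 * constant-next-state DFA pattern, but (per the discussion above) the
 * pass currently threads at most one of them; the other keeps its
 * switch dispatch. */
int run_two(size_t n) {
    int s1 = 0, s2 = 0, acc = 0;
    for (size_t i = 0; i < n; i++) {
        switch (s1) {               /* machine 1: toggles each iteration */
        case 0: s1 = 1; break;
        case 1: s1 = 0; acc++; break;
        }
        switch (s2) {               /* machine 2: period-3 counter */
        case 0: s2 = 1; break;
        case 1: s2 = 2; break;
        case 2: s2 = 0; acc += 10; break;
        }
    }
    return acc;
}
```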

@nikic
Contributor

nikic commented Sep 10, 2025

Compile-time: https://llvm-compile-time-tracker.com/compare.php?from=d685508bec02a676383b284d268fe8a2e4cbf7f3&to=ba220372f5c68b12c15ce2efbccf4edc9ea300f1&stat=instructions:u

Generally looks okay, with some outliers that need to be investigated (libclamav_nsis_LZMADecode.c +95%, NativeInlineSiteSymbol.cpp +33%). These might be fine if the increase is due to unavoidable second-order effects of the transformation being performed.

@buggfg
Author

buggfg commented Sep 11, 2025


Hi, I’m really glad to hear that both reviewers support enabling DFAJumpThreading by default. I haven’t had the chance to do any additional research beyond the SPEC2017 benchmarks yet. However, I believe that enabling this optimization by default is essential for the embedded domain, as it will help increase LLVM's impact and uncover other potential issues. :)

@XChy
Member

XChy commented Sep 27, 2025

@nikic, I can be a candidate for maintainer, as I am familiar with this pass's codebase and actively involved in it. It would be better to have a co-maintainer, though; I'm concerned about being the only person working on it. @dybv-sc @UsmanNadeem, would you volunteer too?

@UsmanNadeem
Contributor

@XChy Yes, I can volunteer to be a co-maintainer as well.

We are also interested in seeing DFA jump threading enabled by default, and I think some others have shown interest as well. See my previous patch: #83033

@XChy
Member

XChy commented Oct 1, 2025

Generally looks okay, with some outliers that need to be investigated (libclamav_nsis_LZMADecode.c +95%, NativeInlineSiteSymbol.cpp +33%)

Will look into these cases.

@XChy
Member

XChy commented Oct 1, 2025

I profiled libclamav_nsis_LZMADecode with valgrind. I found that the SLPVectorizer becomes slower after DFAJumpThreading. Specifically, the time proportion of SLPVectorizer::BoUpSLP::buildTreeRec increases from 10% to 37%.

Before enabling DFAJumpThreading:
[callgrind profile screenshot]

After:
[callgrind profile screenshot]

@XChy
Member

XChy commented Oct 3, 2025

I believe #161632 has resolved most of the outliers. The bottleneck lies mostly in later passes in the pipeline that run over the duplicated blocks, rather than in DFAJumpThreading itself.

@XChy XChy requested a review from UsmanNadeem October 7, 2025 15:20
@XChy
Member

XChy commented Oct 7, 2025

After some effort to reduce compile time, we get better numbers: https://llvm-compile-time-tracker.com/compare.php?from=ed113e7904943565b4cd05588f6b639e40187510&to=2b901ca2e1b77d2b7a31cbcb57a921aa662341f9&stat=instructions:u.
@nikic, are you satisfied with the current compile time?

@nikic
Contributor

nikic commented Oct 7, 2025


Yeah, those results look great.

I also started a compile-time run on llvm-opt-benchmark, with these results: dtcxzyw/llvm-opt-benchmark#2906 (comment). If you have time, it would be good to look at the big outlier there (openssl/smime.ll at +175%).

@nikic
Contributor

nikic commented Oct 7, 2025

@zyw-bot csmith-fuzz

@XChy
Member

XChy commented Oct 8, 2025

looking at the big outlier there (openssl/smime.ll with +175%) would be good.

#162447 partially resolves it, but the main cause remains: the number of phi nodes increases massively, which slows down SLPVectorizer. I haven't come up with a solution on the DFAJumpThreading side yet; maybe we can redesign the SSA updating there.
