
Add optional GCC static analyzer support to CMake and Conan builds #817

Merged
pnoltes merged 7 commits into apache:master from moelksasbyahmed:master
Feb 7, 2026

Conversation

@moelksasbyahmed
Contributor

This PR adds optional support for GCC’s static analyzer (-fanalyzer) to the Celix build system.

What’s included

  • Introduces a new CMake option: ENABLE_GCC_ANALYZER (OFF by default).

  • When enabled and using GCC, -fanalyzer is added to CMAKE_C_FLAGS and CMAKE_CXX_FLAGS.

  • Extends conanfile.py with a corresponding option (enable_gcc_analyzer) so the analyzer can also be enabled in Conan-based builds.
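As a rough sketch of what the CMake side of such an option can look like (the actual code in this PR may differ in details):

```cmake
# Sketch only; mirrors the behavior described above, not necessarily the exact PR code.
option(ENABLE_GCC_ANALYZER "Enable the GCC static analyzer (-fanalyzer)" OFF)

if (ENABLE_GCC_ANALYZER)
    if (CMAKE_C_COMPILER_ID STREQUAL "GNU")
        set(CMAKE_C_FLAGS "-fanalyzer ${CMAKE_C_FLAGS}")
        set(CMAKE_CXX_FLAGS "-fanalyzer ${CMAKE_CXX_FLAGS}")
    else ()
        message(WARNING "ENABLE_GCC_ANALYZER is only supported with GCC")
    endif ()
endif ()
```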

What’s not included

  • No analyzer warnings are fixed in this PR.

  • The analyzer is not enabled in CI.

  • No behavior or runtime changes.

Motivation

GCC’s static analyzer has been improving significantly in recent releases and can help detect issues such as possible null dereferences and resource leaks.
This PR lays the groundwork for experimenting with the analyzer locally and for potential future CI integration.

Analyzer warnings can be addressed incrementally in follow-up PRs once there is agreement on scope and expectations.

Contributor

@pnoltes pnoltes left a comment


Thanks for the PR :)

Some small remarks and a question.

Is it also possible to enable one of the CI builds (ubuntu, gcc) with the analyzer option?
And then suppress the warnings we have not yet fixed (-Wno-analyzer-double-fclose, etc.)?

This can also help in the follow-up task (enabling the analyzer warnings, flag by flag).
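A hedged sketch of that suggestion (the concrete -Wno-analyzer-* list would depend on the warnings actually present in the code base; the two below are real GCC analyzer options used purely as examples):

```cmake
# Sketch: enable the analyzer but suppress, for now, warnings not yet fixed.
# One flag per line, so each can be removed individually in follow-up PRs.
add_compile_options(
    -fanalyzer
    -Wno-analyzer-double-fclose
    -Wno-analyzer-malloc-leak
)
```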

set(CMAKE_C_FLAGS "-fanalyzer ${CMAKE_C_FLAGS}")
set(CMAKE_CXX_FLAGS "-fanalyzer ${CMAKE_CXX_FLAGS}")
else()
message(WARNING "ENABLE_GCC_ANALYZER is only supported with GCC compiler 10.0.0 OR higher.")
Contributor


Suggested change
message(WARNING "ENABLE_GCC_ANALYZER is only supported with GCC compiler 10.0.0 OR higher.")
message(WARNING "ENABLE_GCC_ANALYZER is only supported with GCC")

I would not mention the version, because that is not what is tested in the if.
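If a minimum GCC version should actually be enforced, the condition could test it explicitly; a sketch using standard CMake variables (-fanalyzer was introduced in GCC 10):

```cmake
# Sketch: only add -fanalyzer when the compiler is GCC 10 or newer.
if (CMAKE_C_COMPILER_ID STREQUAL "GNU" AND
    CMAKE_C_COMPILER_VERSION VERSION_GREATER_EQUAL "10.0.0")
    add_compile_options(-fanalyzer)
else ()
    message(WARNING "ENABLE_GCC_ANALYZER requires GCC 10 or higher")
endif ()
```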

@moelksasbyahmed
Contributor Author

Hi @pnoltes, thanks for the review. I can surely enable the analyzer in the Ubuntu GCC CI build and suppress the warnings, but I have a question. Reviewing the docs of -fanalyzer (see Options That Control Static Analysis): how do you suggest adding these flags to the build in an efficient way, to make it easy to suppress every flag one after the other? My approach is 07c3f50; if there is a better and more efficient approach, I will be happy to learn and apply it.

@moelksasbyahmed moelksasbyahmed force-pushed the master branch 2 times, most recently from 02744ae to 2f9845d on February 3, 2026 15:30
@pnoltes
Contributor

pnoltes commented Feb 3, 2026

to make it easy to suppress every flag one after the other?

I think this approach is good. I like the usage of 'add_compile_options' and the split-up of suppressing one warning per line.

@moelksasbyahmed
Contributor Author

The CI builds Ubuntu / linux-build-apt debug and RelWithDebInfo are failing with error code 143. I'm really wondering why; I think it is due to the analyzer taking a lot of time while building in CI. Should I remove it from the CI build? Or, if I'm wrong, what could be the mistake I made in the CI workflow that caused this?

@pnoltes
Contributor

pnoltes commented Feb 4, 2026

The CI builds Ubuntu / linux-build-apt debug and RelWithDebInfo are failing with error code 143. I'm really wondering why; I think it is due to the analyzer taking a lot of time while building in CI. Should I remove it from the CI build? Or, if I'm wrong, what could be the mistake I made in the CI workflow that caused this?

I am not sure, I was also looking at this. It should not be a time limit; this is set to 120 minutes.

Seeing actions/runner-images#6680 (comment) I expect it is maybe due to reaching the memory resource limit. Could you try using ninja -j8 or ninja -j4 instead of ninja?

@moelksasbyahmed
Contributor Author

Hi @pnoltes, I tried ninja -j2: RelWithDebInfo worked but debug didn't. Should I use ninja -j1, even though it will make the build very slow, or should I just remove the analyzer from the CI workflow?

@pnoltes
Contributor

pnoltes commented Feb 5, 2026

Hi @pnoltes, I tried ninja -j2: RelWithDebInfo worked but debug didn't. Should I use ninja -j1, even though it will make the build very slow, or should I just remove the analyzer from the CI workflow?

I see that the debug one ran over the configured 120-minute limit.

I think we can test a bit more and then discuss whether we want the gcc analyzer in our builds and, if so, how. Could you provide an update with -j4 to check if that helps (a bit faster, but hopefully not parallel enough to eat too much memory)?

I had a quick look and there are also ASF Infra-hosted runners (https://cwiki.apache.org/confluence/display/INFRA/ASF+Infra+provided+self-hosted+runners), so this could be another route. Not yet sure if this is (memory-wise) a step up from the GitHub-hosted runners.

Maybe already good to minimise the usage of the gcc static analyzer to a single build? I think, not sure, that there is no benefit to run the gcc static analyzer in the debug build (compared to the RelWithDebInfo build) and seeing the latest build I expect the analyzer performs better with a RelWithDebInfo build.

@PengZheng
Contributor

PengZheng commented Feb 5, 2026

Maybe already good to minimise the usage of the gcc static analyzer to a single build? I think, not sure, that there is no benefit to run the gcc static analyzer in the debug build (compared to the RelWithDebInfo build) and seeing the latest build I expect the analyzer performs better with a RelWithDebInfo build.

+1

Moreover, it does not make sense to enable the analyzer for tests.
Note that half of our code base is test code, and compilation (and static analysis) of C++ code is very slow.

WDYT @pnoltes

  • added the -wno for the shift registers
  • Fix: suppress static analyzer warning in tlsf.c
  • added the suppresser for shift count reister
  • added the suppresser for shift count register
  • added -j8 for ninja build
  • testing ninja -j2
  • testing ninja -j4 static analyzer only on RelWithDebInfo only
  • testing ninja -j8 static analyzer only on RelWithDebInfo only
  • testing ninja static analyzer only on RelWithDebInfo only
  • testing ninja -j8 static analyzer only on RelWithDebInfo only
  • enabling the gcc static analyzer on RelWithDebInfo
  • enabline gcc static analyzer on RelWithDebInfo build only

@moelksasbyahmed
Contributor Author

I tried to build with ninja -j4 and it worked, and I tried with -j8 and it worked too.
I limited the analyzer to the RelWithDebInfo build only by adding -DENABLE_GCC_ANALYZER=${{matrix.type == 'RelWithDebInfo' && 'ON' || 'OFF'}} to ubuntu.yml. I tested the build with Actions on my forked GitHub repo and it works.

@moelksasbyahmed
Contributor Author

Well, the CI workflow RelWithDebInfo ran fine on my forked repo, but here it failed in websockets. I searched but couldn't figure out the problem. Any idea?

@PengZheng
Contributor

PengZheng commented Feb 6, 2026

Well, the CI workflow RelWithDebInfo ran fine on my forked repo, but here it failed in websockets. I searched but couldn't figure out the problem. Any idea?

HttpInfoTestSuite.http_admin_info_test is coded to use port 45112, which may be used by someone else.
Nothing to worry about.

@pnoltes
Contributor

pnoltes commented Feb 6, 2026

Maybe already good to minimise the usage of the gcc static analyzer to a single build? I think, not sure, that there is no benefit to run the gcc static analyzer in the debug build (compared to the RelWithDebInfo build) and seeing the latest build I expect the analyzer performs better with a RelWithDebInfo build.

+1

Moreover, it does not make sense to enable the analyzer for tests. Note that half of our code base is test code, and compilation (and static analysis) of C++ code is very slow.

WDYT @pnoltes

I’m actually in favor of including the test code. While we could create a specific CI pipeline with tests disabled, the goal of enabling the GCC static analyzer is to catch and fix issues as we write new code. Since development happens with tests enabled, the test code should be compatible with the analyzer as well.

In my view, we should either accept a slower build (limiting parallelism to, e.g., -j4) or postpone the analyzer's introduction. I personally prefer the slower build. We can look into ASF-hosted runners later to optimize.

Another option is running the analyzer only on PRs, but I’m not a fan of that approach as it loses immediate feedback during development.

-DCMAKE_BUILD_TYPE=${{ matrix.type }}
-DENABLE_CCACHE=ON
-DCMAKE_POLICY_VERSION_MINIMUM=3.5
-DENABLE_GCC_ANALYZER=${{matrix.type == 'RelWithDebInfo' && 'ON' || 'OFF'}}
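For context, a minimal sketch of how that expression can sit in a matrix-based GitHub Actions workflow (the job and step names here are assumptions for illustration, not the actual ubuntu.yml contents):

```yaml
# Sketch: the `cond && 'ON' || 'OFF'` idiom is the GitHub Actions
# expression equivalent of a ternary, so the analyzer is only
# enabled for the RelWithDebInfo entry of the build matrix.
jobs:
  linux-build-apt:            # job name assumed for illustration
    strategy:
      matrix:
        type: [Debug, RelWithDebInfo]
    steps:
      - name: Configure
        run: |
          cmake -G Ninja \
            -DCMAKE_BUILD_TYPE=${{ matrix.type }} \
            -DENABLE_GCC_ANALYZER=${{ matrix.type == 'RelWithDebInfo' && 'ON' || 'OFF' }} \
            ..
```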
Contributor


👍 , Nice clean way to do this

Contributor

@pnoltes pnoltes left a comment


Looks good. IMO using -j8 for gcc static analyzer and only using it on RelWithDebInfo is for now the correct approach.

@moelksasbyahmed
Contributor Author

moelksasbyahmed commented Feb 6, 2026

In my view, we should either accept a slower build (limiting parallelism to, e.g., -j4) or postpone the analyzer's introduction. I personally prefer the slower build. We can look into ASF-hosted runners later to optimize.

Well, I tried to slow it down by enabling -j2 and ran the workflow on my forked repo; the debug build reached the memory limit and exited with code 143 (see the forked repo workflow). So I guess the valid options are making the build very slow with ninja -j1, or making a specific CI pipeline with tests disabled. But I agree with you that the build should be compatible with the tests as well, so I guess the only valid option is making the build slow with ninja -j1; it can be optimized later. I will search for valid solutions.

@pnoltes
Contributor

pnoltes commented Feb 6, 2026

In my view, we should either accept a slower build (limiting parallelism to, e.g., -j4) or postpone the analyzer's introduction. I personally prefer the slower build. We can look into ASF-hosted runners later to optimize.

Well, I tried to slow it down by enabling -j2 and ran the workflow on my forked repo; the debug build reached the memory limit and exited with code 143 (see the forked repo workflow). So I guess the valid options are making the build very slow with ninja -j1, or making a specific CI pipeline with tests disabled. But I agree with you that the build should be compatible with the tests as well, so I guess the only valid option is making the build slow with ninja -j1; it can be optimized later. I will search for valid solutions.

Maybe I am missing something, but the current build uses the gcc static analyzer with RelWithDebInfo and ninja -j8: https://github.com/apache/celix/actions/runs/21716569744/job/62701713167?pr=817#step:6:23

So looks good to me. I want to wait till we have 2 approvals and then merge the PR.

Contributor

@PengZheng PengZheng left a comment


Nice addition! LGTM

@pnoltes
Contributor

pnoltes commented Feb 7, 2026

@moelksasbyahmed Thanks for the contribution :)

@pnoltes pnoltes merged commit 78dadfa into apache:master Feb 7, 2026
23 of 24 checks passed
@moelksasbyahmed
Contributor Author

@pnoltes @PengZheng thanks for giving me the opportunity to contribute to Celix and for answering my questions.

@pnoltes pnoltes linked an issue Feb 10, 2026 that may be closed by this pull request


Successfully merging this pull request may close these issues.

Utilize latest GCC/clang's capability in CI

3 participants