Add support for c++20 #486
base: main
Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

/ok to test 4a4a2d4

/ok to test 5f8449b
📝 Walkthrough

Upgrades C++ and CUDA language standards from 17 to 20 across multiple CMake configuration files, globally disables C++20 module scanning, and updates copyright year headers to 2024-2026. No build logic or dependency changes.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks: ✅ 3 passed
/ok to test 864fd2c

/ok to test 526aed5

/ok to test 88272d5

/ok to test a8292d0
Actionable comments posted: 0
🧹 Nitpick comments (1)
cpp/CMakeLists.txt (1)
427-432: Consider removing redundant per-target `CXX_SCAN_FOR_MODULES` settings.

Since `CMAKE_CXX_SCAN_FOR_MODULES` is already set globally to `OFF` at lines 32-33, the per-target `CXX_SCAN_FOR_MODULES OFF` settings for `cuopt_cli`, `solve_MIP`, and `solve_LP` may be redundant. The global setting should apply to all targets unless explicitly overridden. However, if explicit per-target settings are intentional for clarity or to guard against future changes to the global default, this is acceptable.
♻️ Optional simplification
If the global setting is sufficient, you could remove the per-target `CXX_SCAN_FOR_MODULES OFF` lines:

```diff
 set_target_properties(cuopt_cli
   PROPERTIES
   CXX_STANDARD 20
   CXX_STANDARD_REQUIRED ON
-  CXX_SCAN_FOR_MODULES OFF
 )
```

Apply similar changes to the `solve_MIP` and `solve_LP` targets.

Also applies to: 475-480, 507-514
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- .github/workflows/build.yaml
- .github/workflows/pr.yaml
- .github/workflows/test.yaml
- .github/workflows/trigger-breaking-change-alert.yaml
- conda/environments/all_cuda-131_arch-aarch64.yaml
- conda/environments/all_cuda-131_arch-x86_64.yaml
- cpp/CMakeLists.txt
- cpp/libmps_parser/CMakeLists.txt
- cpp/libmps_parser/tests/CMakeLists.txt
- cpp/src/utilities/driver_helpers.cuh
- dependencies.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- cpp/libmps_parser/CMakeLists.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{cu,cuh}
📄 CodeRabbit inference engine (.github/.coderabbit_review_guide.md)
**/*.{cu,cuh}: Every CUDA kernel launch and memory operation must have error checking with CUDA_CHECK or equivalent verification
Avoid reinventing functionality already available in Thrust, CCCL, or RMM libraries; prefer standard library utilities over custom implementations
Files:
cpp/src/utilities/driver_helpers.cuh
**/*.{cu,cuh,cpp,hpp,h}
📄 CodeRabbit inference engine (.github/.coderabbit_review_guide.md)
**/*.{cu,cuh,cpp,hpp,h}: Track GPU device memory allocations and deallocations to prevent memory leaks; ensure cudaMalloc/cudaFree balance and cleanup of streams/events
Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results
Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks
Validate correct initialization of variable bounds, constraint coefficients, and algorithm state before solving; ensure reset when transitioning between algorithm phases (presolve, simplex, diving, crossover)
Ensure variables and constraints are accessed from the correct problem context (original vs presolve vs folded vs postsolve); verify index mapping consistency across problem transformations
For concurrent CUDA operations (barriers, async operations), explicitly create and manage dedicated streams instead of reusing the default stream; document stream lifecycle
Eliminate unnecessary host-device synchronization (cudaDeviceSynchronize) in hot paths that blocks GPU pipeline; use streams and events for async execution
Assess algorithmic complexity for large-scale problems (millions of variables/constraints); ensure O(n log n) or better complexity, not O(n²) or worse
Verify correct problem size checks before expensive GPU/CPU operations; prevent resource exhaustion on oversized problems
Identify assertions with overly strict numerical tolerances that fail on legitimate degenerate/edge cases (near-zero pivots, singular matrices, empty problems)
Ensure race conditions are absent in multi-GPU code and multi-threaded server implementations; verify proper synchronization of shared state
Refactor code duplication in solver components (3+ occurrences) into shared utilities; for GPU kernels, use templated device functions to avoid duplication
Check that hard-coded GPU de...
Files:
cpp/src/utilities/driver_helpers.cuh
🧠 Learnings (14)
📓 Common learnings
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Check that hard-coded GPU device IDs and resource limits are made configurable; abstract multi-backend support for different CUDA versions
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Check that hard-coded GPU device IDs and resource limits are made configurable; abstract multi-backend support for different CUDA versions
Applied to files:
cpp/src/utilities/driver_helpers.cuh
cpp/CMakeLists.txt
dependencies.yaml
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Eliminate unnecessary host-device synchronization (cudaDeviceSynchronize) in hot paths that blocks GPU pipeline; use streams and events for async execution
Applied to files:
cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Verify error propagation from CUDA to user-facing APIs is complete; ensure CUDA errors are caught and mapped to meaningful user error codes
Applied to files:
cpp/src/utilities/driver_helpers.cuh
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Track GPU device memory allocations and deallocations to prevent memory leaks; ensure cudaMalloc/cudaFree balance and cleanup of streams/events
Applied to files:
cpp/src/utilities/driver_helpers.cuh
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Refactor code duplication in solver components (3+ occurrences) into shared utilities; for GPU kernels, use templated device functions to avoid duplication
Applied to files:
cpp/src/utilities/driver_helpers.cuh
cpp/CMakeLists.txt
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cpp,hpp,h} : Avoid inappropriate use of exceptions in performance-critical GPU operation paths; prefer error codes or CUDA error checking for latency-sensitive code
Applied to files:
cpp/src/utilities/driver_helpers.cuh
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*test*.{cpp,cu,py} : Ensure test isolation: prevent GPU state, cached memory, and global variables from leaking between test cases; verify each test independently initializes its environment
Applied to files:
cpp/src/utilities/driver_helpers.cuh
.github/workflows/test.yaml
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Ensure race conditions are absent in multi-GPU code and multi-threaded server implementations; verify proper synchronization of shared state
Applied to files:
cpp/src/utilities/driver_helpers.cuh
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.cu : Verify race conditions and correctness of GPU kernel shared memory, atomics, and warp-level operations
Applied to files:
cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Verify correct problem size checks before expensive GPU/CPU operations; prevent resource exhaustion on oversized problems
Applied to files:
cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : For concurrent CUDA operations (barriers, async operations), explicitly create and manage dedicated streams instead of reusing the default stream; document stream lifecycle
Applied to files:
cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh} : Avoid reinventing functionality already available in Thrust, CCCL, or RMM libraries; prefer standard library utilities over custom implementations
Applied to files:
cpp/CMakeLists.txt
cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*benchmark*.{cpp,cu,py} : Include performance benchmarks and regression detection for GPU operations; verify near real-time performance on million-variable problems
Applied to files:
cpp/CMakeLists.txt
cpp/libmps_parser/tests/CMakeLists.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.12, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, arm64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.12, arm64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.10, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, arm64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.10, arm64, rockylinux8
- GitHub Check: wheel-build-cuopt-sh-client / 13.1.0, 3.10, amd64, rockylinux8
🔇 Additional comments (16)
conda/environments/all_cuda-131_arch-x86_64.yaml (1)
20-20: LGTM - consistent with aarch64 environment.

The CUDA 13.1 version update mirrors the changes in the aarch64 environment file, maintaining consistency across architectures.
Also applies to: 78-78
.github/workflows/trigger-breaking-change-alert.yaml (2)
1-1: Copyright year update is appropriate.
18-18: The pinning to `@cuda-13.1.0` is consistently applied across all workflow references.

This is an intentional strategy across the repository: all 37 reusable workflow references in the codebase (across trigger-breaking-change-alert.yaml, test.yaml, pr.yaml, and build.yaml) use the same `@cuda-13.1.0` pin. No `@main` references remain. While this provides stability, the team should be aware that manual version bumps will be required when updating to newer CUDA versions.
cpp/libmps_parser/tests/CMakeLists.txt (2)
2-2: Copyright year update is appropriate.
16-16: C++20 upgrade correctly applied with module scanning disabled.

The C++20 standard upgrade and `CXX_SCAN_FOR_MODULES OFF` setting are properly implemented and consistent across the project. The global setting is defined in both `cpp/CMakeLists.txt` and `cpp/libmps_parser/CMakeLists.txt` with clear documentation ("Disable C++20 module scanning"), and is consistently applied to all targets including these test targets. Disabling module scanning is the standard approach for C++20 projects to avoid experimental CMake features and prevent race conditions in parallel builds.

.github/workflows/build.yaml (2)
1-1: Copyright year update is appropriate.
47-47: Workflow pinning applied consistently across all jobs.

All 14 reusable workflow references have been updated from `@main` to `@cuda-13.1.0`, maintaining consistency throughout the build workflow. This aligns with the same pinning strategy applied in trigger-breaking-change-alert.yaml.

Note: The maintenance implications of this pinning strategy were flagged in the review of trigger-breaking-change-alert.yaml.

Also applies to: 57-57, 67-67, 75-75, 90-90, 101-101, 114-114, 125-125, 137-137, 147-147, 162-162, 173-173, 187-187, 203-203
conda/environments/all_cuda-131_arch-aarch64.yaml (1)
20-20: CUDA 13.1 version bump is correct.

The CUDA version update and corresponding environment name change are consistent. The source file `dependencies.yaml` contains the CUDA 13.1 configuration (lines 10, 676, 678), confirming this auto-generated file reflects the proper updates. CUDA Toolkit 13.1 supports C++20 with appropriate host compiler versions (GCC ≥ 10.x, Clang ≥ 11.x, etc.), ensuring compatibility if C++20 is adopted in future CMakeLists updates.

Also applies to: 78-78
.github/workflows/test.yaml (1)
1-1: LGTM: Workflow references consistently updated to CUDA 13.1.0.

The workflow now references the `cuda-13.1.0` branch of shared workflows across all test jobs, aligning with the PR's objective to upgrade CUDA support. The copyright year update is also appropriate.

Also applies to: 31-31, 40-40, 50-50, 59-59, 68-68
cpp/src/utilities/driver_helpers.cuh (2)
18-18: Good defensive initialization.

Initializing `func` to `nullptr` prevents undefined behavior if the function returns early.
21-29: The approach is correct and follows NVIDIA's supported pattern for CUDA 13.x ABI compatibility.

The hardcoded version `13000` is appropriate here because `cudaGetDriverEntryPointByVersion` is specifically designed to request a particular ABI version of a symbol at runtime. CUDA 13.x guarantees minor-version ABI stability within the 13.* family, meaning symbols available in CUDA 13.0 remain available (and compatible) across 13.1, 13.2, and future 13.x releases. The error handling via the `driver_status` check already covers cases where a symbol is unavailable. Requesting v13000 ensures the binary works when built against CUDA 13.1+ but run on CUDA 13.0+, which is exactly the intended behavior described in the comment.

Likely an incorrect or invalid review comment.
dependencies.yaml (1)
10-10: LGTM: CUDA 13.1 correctly added to dependency matrix.

The addition of CUDA 13.1 follows the existing pattern and is consistent with other CUDA version entries in the matrix.
Also applies to: 675-678
.github/workflows/pr.yaml (2)
1-1: LGTM: Workflow references consistently updated to CUDA 13.1.0.

All reusable workflow references have been updated to use the `cuda-13.1.0` branch, aligning with the PR's CUDA upgrade objective.

Also applies to: 37-37, 111-111, 176-176, 182-182, 190-190, 199-199, 207-207, 217-217, 229-229, 241-241, 252-252, 262-262, 271-271, 283-283, 296-296
38-107: Well-designed lean-ci gating mechanism.

The lean-ci feature provides a good developer experience by allowing faster test iterations while preventing incomplete CI runs from being merged. The implementation correctly:
- Detects the lean-ci label on PRs
- Reduces the test matrix when lean-ci is enabled
- Blocks merges when lean-ci is present
cpp/CMakeLists.txt (2)
32-33: Good practice: Global module scanning disabled.

Disabling C++20 module scanning globally is appropriate since the codebase doesn't use modules, and this can improve CMake configuration performance.
222-227: LGTM: cuopt target upgraded to C++20/CUDA 20.

The upgrade from C++17/CUDA 17 to C++20/CUDA 20 is correctly implemented with required standards and module scanning disabled.
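For readers unfamiliar with these target properties, the per-target pattern the review describes looks roughly like the sketch below. The target name and property values mirror what the review comments report; this is an illustrative fragment, not code verified against the repository.

```cmake
# Sketch of the per-target C++20/CUDA 20 configuration described above.
set_target_properties(cuopt
  PROPERTIES
    CXX_STANDARD 20            # compile host code as C++20
    CXX_STANDARD_REQUIRED ON   # fail at configure time if the compiler lacks C++20
    CUDA_STANDARD 20           # request C++20 for CUDA sources as well
    CUDA_STANDARD_REQUIRED ON
    CXX_SCAN_FOR_MODULES OFF   # skip C++20 module dependency scanning for this target
)
```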
aliceb-nv
left a comment
Approving engine
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @cpp/CMakeLists.txt:
- Around line 32-34: The CMake cache variable CMAKE_CXX_SCAN_FOR_MODULES is
being set without clarifying whether you intend to override an existing user
cache value; change the directive to explicitly express intent: if you always
want to disable module scanning, replace the current set(...) with
set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning"
FORCE), otherwise make it a user-default by guarding with if(NOT DEFINED
CMAKE_CXX_SCAN_FOR_MODULES) before calling set(CMAKE_CXX_SCAN_FOR_MODULES OFF
CACHE BOOL "Disable C++20 module scanning") so users can opt in.
🧹 Nitpick comments (1)
cpp/CMakeLists.txt (1)
451-459: Good: `cuopt_cli` explicitly sets C++20 and disables module scanning. Consider centralizing repeated target properties.

You now repeat the same `set_target_properties(... CXX_STANDARD 20 ... CXX_SCAN_FOR_MODULES OFF)` pattern in multiple places; a small helper function or macro would reduce drift.

Example refactor (helper function)

```diff
+function(cuopt_apply_cxx20 TARGET_NAME)
+  set_target_properties(${TARGET_NAME} PROPERTIES
+    CXX_STANDARD 20
+    CXX_STANDARD_REQUIRED ON
+    CXX_SCAN_FOR_MODULES OFF
+  )
+endfunction()
+
 if(NOT BUILD_LP_ONLY)
   add_executable(cuopt_cli cuopt_cli.cpp)
-  set_target_properties(cuopt_cli
-    PROPERTIES
-    CXX_STANDARD 20
-    CXX_STANDARD_REQUIRED ON
-    CXX_SCAN_FOR_MODULES OFF
-  )
+  cuopt_apply_cxx20(cuopt_cli)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cpp/CMakeLists.txt
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Check that hard-coded GPU device IDs and resource limits are made configurable; abstract multi-backend support for different CUDA versions
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Check that hard-coded GPU device IDs and resource limits are made configurable; abstract multi-backend support for different CUDA versions
Applied to files:
cpp/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh} : Avoid reinventing functionality already available in Thrust, CCCL, or RMM libraries; prefer standard library utilities over custom implementations
Applied to files:
cpp/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Refactor code duplication in solver components (3+ occurrences) into shared utilities; for GPU kernels, use templated device functions to avoid duplication
Applied to files:
cpp/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Verify error propagation from CUDA to user-facing APIs is complete; ensure CUDA errors are caught and mapped to meaningful user error codes
Applied to files:
cpp/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*benchmark*.{cpp,cu,py} : Include performance benchmarks and regression detection for GPU operations; verify near real-time performance on million-variable problems
Applied to files:
cpp/CMakeLists.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, arm64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, arm64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.12, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.10, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, amd64, rockylinux8
- GitHub Check: wheel-build-cuopt-sh-client / 13.1.0, 3.10, amd64, rockylinux8
- GitHub Check: checks / check-style
🔇 Additional comments (3)
cpp/CMakeLists.txt (3)
500-507: Good: `solve_MIP` correctly uses C++20; no mixed-standard builds detected.

All benchmark targets (`solve_MIP`, `solve_LP`) consistently use `CXX_STANDARD 20`. No legacy C++17/CUDA 17 pinning remains elsewhere in the build configuration.
532-541: C++20 standard setting is safe for this benchmark code.
`run_pdlp.cu` is a host-side benchmark utility with no device code, constexpr, concepts, or requires clauses that could trigger the known NVCC C++20 issues. The code uses straightforward argument parsing and solver APIs without problematic patterns. Setting both `CXX_STANDARD 20` and `CUDA_STANDARD 20` is appropriate.
244-255: Document required host compiler versions for C++20 builds (GCC 10+, Clang 11+, or MSVC 2022).

The C++20/CUDA 20 configuration is well-supported: CUDA 12.0+ and 13.0+ both support `CUDA_STANDARD 20`, and your CMake version (3.30.4) exceeds the 3.18+ practical minimum. CI already runs with CUDA 13.1.0, and the codebase compiles successfully with these settings (confirmed in RELEASE-NOTES.md: "Fix compile error when using clang with C++20").

However, the README documents CUDA 12.0+/13.0+ but does not explicitly list host compiler version requirements. Add minimum versions (GCC 10+, Clang 11+, MSVC 2022) to the system requirements section to prevent build failures from incompatible host compilers.
```cmake
# Disable C++20 module scanning as the codebase doesn't use modules
set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")
```
Be explicit about whether you want to override user cache values for module scanning.
set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL ...) won’t change an existing cache value, so module scanning could remain enabled for targets that don’t explicitly set CXX_SCAN_FOR_MODULES OFF. If the intent is “always disable”, consider FORCE; if the intent is “default off, but user can opt in”, consider guarding with if(NOT DEFINED ...) to make that intent clear.
Proposed tweak (pick one)

```diff
-# Disable C++20 module scanning as the codebase doesn't use modules
-set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")
+# Disable C++20 module scanning as the codebase doesn't use modules
+# Option A: default-off (user can override via -D...)
+if(NOT DEFINED CMAKE_CXX_SCAN_FOR_MODULES)
+  set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")
+endif()
+
+# Option B: always-off (override any existing cache)
+# set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning" FORCE)
```

📝 Committable suggestion
+# set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning" FORCE)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```cmake
# Disable C++20 module scanning as the codebase doesn't use modules
# Option A: default-off (user can override via -D...)
if(NOT DEFINED CMAKE_CXX_SCAN_FOR_MODULES)
  set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")
endif()

# Option B: always-off (override any existing cache)
# set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning" FORCE)
```
🤖 Prompt for AI Agents
In @cpp/CMakeLists.txt around lines 32 - 34, The CMake cache variable
CMAKE_CXX_SCAN_FOR_MODULES is being set without clarifying whether you intend to
override an existing user cache value; change the directive to explicitly
express intent: if you always want to disable module scanning, replace the
current set(...) with set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable
C++20 module scanning" FORCE), otherwise make it a user-default by guarding with
if(NOT DEFINED CMAKE_CXX_SCAN_FOR_MODULES) before calling
set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")
so users can opt in.
Description
This PR switches from C++17 support to C++20
Checklist
Summary by CodeRabbit