
Conversation

Collaborator

@rgsl888prabhu rgsl888prabhu commented Oct 13, 2025

Description

This PR switches the C++ standard used by the build from C++17 to C++20.

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

Summary by CodeRabbit

  • Chores
    • Updated C++ and CUDA compilation standards to version 20 across the build system.
    • Disabled C++20 module scanning in build configuration.
    • Updated copyright year ranges.


@rgsl888prabhu rgsl888prabhu added this to the 25.12 milestone Oct 13, 2025
@rgsl888prabhu rgsl888prabhu self-assigned this Oct 13, 2025
@rgsl888prabhu rgsl888prabhu added non-breaking Introduces a non-breaking change improvement Improves an existing functionality labels Oct 13, 2025
copy-pr-bot bot commented Oct 13, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@rgsl888prabhu
Collaborator Author

/ok to test 4a4a2d4

@rgsl888prabhu rgsl888prabhu changed the base branch from branch-25.10 to branch-25.12 October 22, 2025 15:18
@rgsl888prabhu
Collaborator Author

/ok to test 5f8449b

coderabbitai bot commented Oct 22, 2025

📝 Walkthrough

Walkthrough

Upgrades C++ and CUDA language standards from 17 to 20 across multiple CMake configuration files, globally disables C++20 module scanning, and updates copyright year headers to 2024-2026. No build logic or dependency changes.

Changes

  • Main build targets (cpp/CMakeLists.txt): Upgrades cuopt target CXX_STANDARD and CUDA_STANDARD from 17 to 20; adds CXX_SCAN_FOR_MODULES OFF globally and to cuopt, cuopt_cli, solve_MIP, and solve_LP targets; sets CXX_STANDARD_REQUIRED ON for CLI and solver targets.
  • Parser library (cpp/libmps_parser/CMakeLists.txt): Upgrades mps_parser target CXX_STANDARD from 17 to 20; adds global CMAKE_CXX_SCAN_FOR_MODULES OFF setting; disables CXX_SCAN_FOR_MODULES in target properties; updates copyright year.
  • Test and example configurations (cpp/libmps_parser/tests/CMakeLists.txt, cpp/tests/examples/routing/CMakeLists.txt): Upgrades test target CXX_STANDARD from 17 to 20 with CXX_SCAN_FOR_MODULES OFF; upgrades routing example CUDA_STANDARD from 17 to 20; updates copyright years.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'Add support for c++20' clearly and concisely summarizes the main objective of the pull request, which is to upgrade the project from C++17 to C++20 across multiple CMake files and targets.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.




@rgsl888prabhu rgsl888prabhu changed the base branch from branch-25.12 to main October 22, 2025 15:19
@rgsl888prabhu
Collaborator Author

/ok to test 864fd2c

@aliceb-nv
Contributor

/ok to test 526aed5

@rgsl888prabhu
Collaborator Author

/ok to test 88272d5

@rgsl888prabhu rgsl888prabhu changed the base branch from main to release/25.12 November 17, 2025 21:36
@rgsl888prabhu rgsl888prabhu modified the milestones: 25.12, 26, 26.02 Nov 20, 2025
@rgsl888prabhu rgsl888prabhu changed the base branch from release/25.12 to main January 5, 2026 21:03
@rgsl888prabhu
Collaborator Author

/ok to test a8292d0

@rgsl888prabhu rgsl888prabhu marked this pull request as ready for review January 7, 2026 20:00
@rgsl888prabhu rgsl888prabhu requested a review from a team as a code owner January 7, 2026 20:00
@rgsl888prabhu rgsl888prabhu requested a review from a team as a code owner January 7, 2026 20:00
@rgsl888prabhu rgsl888prabhu requested a review from a team as a code owner January 9, 2026 16:45

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
cpp/CMakeLists.txt (1)

427-432: Consider removing redundant per-target CXX_SCAN_FOR_MODULES settings.

Since CMAKE_CXX_SCAN_FOR_MODULES is already set globally to OFF at lines 32-33, the per-target CXX_SCAN_FOR_MODULES OFF settings for cuopt_cli, solve_MIP, and solve_LP may be redundant. The global setting should apply to all targets unless explicitly overridden.

However, if explicit per-target settings are intentional for clarity or to guard against future changes to the global default, this is acceptable.

♻️ Optional simplification

If the global setting is sufficient, you could remove the per-target CXX_SCAN_FOR_MODULES OFF lines:

 set_target_properties(cuopt_cli
   PROPERTIES
   CXX_STANDARD 20
   CXX_STANDARD_REQUIRED ON
-  CXX_SCAN_FOR_MODULES OFF
 )

Apply similar changes to solve_MIP and solve_LP targets.

Also applies to: 475-480, 507-514
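
One way to check whether the per-target settings are in fact redundant is to query the resulting property after target creation. This is a hypothetical diagnostic snippet, not part of the PR:

```cmake
# Hypothetical diagnostic: confirm the global default propagated to a target.
# CMAKE_CXX_SCAN_FOR_MODULES initializes each target's CXX_SCAN_FOR_MODULES
# property at creation time, so targets created after the global set() should
# already report OFF even without the per-target lines.
get_target_property(scan_modules cuopt_cli CXX_SCAN_FOR_MODULES)
message(STATUS "cuopt_cli CXX_SCAN_FOR_MODULES = ${scan_modules}")
```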

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f4352e2 and 823d84f.

📒 Files selected for processing (11)
  • .github/workflows/build.yaml
  • .github/workflows/pr.yaml
  • .github/workflows/test.yaml
  • .github/workflows/trigger-breaking-change-alert.yaml
  • conda/environments/all_cuda-131_arch-aarch64.yaml
  • conda/environments/all_cuda-131_arch-x86_64.yaml
  • cpp/CMakeLists.txt
  • cpp/libmps_parser/CMakeLists.txt
  • cpp/libmps_parser/tests/CMakeLists.txt
  • cpp/src/utilities/driver_helpers.cuh
  • dependencies.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/libmps_parser/CMakeLists.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{cu,cuh}

📄 CodeRabbit inference engine (.github/.coderabbit_review_guide.md)

**/*.{cu,cuh}: Every CUDA kernel launch and memory operation must have error checking with CUDA_CHECK or equivalent verification
Avoid reinventing functionality already available in Thrust, CCCL, or RMM libraries; prefer standard library utilities over custom implementations

Files:

  • cpp/src/utilities/driver_helpers.cuh
**/*.{cu,cuh,cpp,hpp,h}

📄 CodeRabbit inference engine (.github/.coderabbit_review_guide.md)

**/*.{cu,cuh,cpp,hpp,h}: Track GPU device memory allocations and deallocations to prevent memory leaks; ensure cudaMalloc/cudaFree balance and cleanup of streams/events
Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results
Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks
Validate correct initialization of variable bounds, constraint coefficients, and algorithm state before solving; ensure reset when transitioning between algorithm phases (presolve, simplex, diving, crossover)
Ensure variables and constraints are accessed from the correct problem context (original vs presolve vs folded vs postsolve); verify index mapping consistency across problem transformations
For concurrent CUDA operations (barriers, async operations), explicitly create and manage dedicated streams instead of reusing the default stream; document stream lifecycle
Eliminate unnecessary host-device synchronization (cudaDeviceSynchronize) in hot paths that blocks GPU pipeline; use streams and events for async execution
Assess algorithmic complexity for large-scale problems (millions of variables/constraints); ensure O(n log n) or better complexity, not O(n²) or worse
Verify correct problem size checks before expensive GPU/CPU operations; prevent resource exhaustion on oversized problems
Identify assertions with overly strict numerical tolerances that fail on legitimate degenerate/edge cases (near-zero pivots, singular matrices, empty problems)
Ensure race conditions are absent in multi-GPU code and multi-threaded server implementations; verify proper synchronization of shared state
Refactor code duplication in solver components (3+ occurrences) into shared utilities; for GPU kernels, use templated device functions to avoid duplication
Check that hard-coded GPU de...

Files:

  • cpp/src/utilities/driver_helpers.cuh
🧠 Learnings (14)
📓 Common learnings
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Check that hard-coded GPU device IDs and resource limits are made configurable; abstract multi-backend support for different CUDA versions
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Check that hard-coded GPU device IDs and resource limits are made configurable; abstract multi-backend support for different CUDA versions

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
  • cpp/CMakeLists.txt
  • dependencies.yaml
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Eliminate unnecessary host-device synchronization (cudaDeviceSynchronize) in hot paths that blocks GPU pipeline; use streams and events for async execution

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Verify error propagation from CUDA to user-facing APIs is complete; ensure CUDA errors are caught and mapped to meaningful user error codes

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Track GPU device memory allocations and deallocations to prevent memory leaks; ensure cudaMalloc/cudaFree balance and cleanup of streams/events

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Refactor code duplication in solver components (3+ occurrences) into shared utilities; for GPU kernels, use templated device functions to avoid duplication

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
  • cpp/CMakeLists.txt
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cpp,hpp,h} : Avoid inappropriate use of exceptions in performance-critical GPU operation paths; prefer error codes or CUDA error checking for latency-sensitive code

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*test*.{cpp,cu,py} : Ensure test isolation: prevent GPU state, cached memory, and global variables from leaking between test cases; verify each test independently initializes its environment

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
  • .github/workflows/test.yaml
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Ensure race conditions are absent in multi-GPU code and multi-threaded server implementations; verify proper synchronization of shared state

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.cu : Verify race conditions and correctness of GPU kernel shared memory, atomics, and warp-level operations

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : Verify correct problem size checks before expensive GPU/CPU operations; prevent resource exhaustion on oversized problems

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh,cpp,hpp,h} : For concurrent CUDA operations (barriers, async operations), explicitly create and manage dedicated streams instead of reusing the default stream; document stream lifecycle

Applied to files:

  • cpp/src/utilities/driver_helpers.cuh
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*.{cu,cuh} : Avoid reinventing functionality already available in Thrust, CCCL, or RMM libraries; prefer standard library utilities over custom implementations

Applied to files:

  • cpp/CMakeLists.txt
  • cpp/libmps_parser/tests/CMakeLists.txt
📚 Learning: 2025-11-25T10:20:49.822Z
Learnt from: CR
Repo: NVIDIA/cuopt PR: 0
File: .github/.coderabbit_review_guide.md:0-0
Timestamp: 2025-11-25T10:20:49.822Z
Learning: Applies to **/*benchmark*.{cpp,cu,py} : Include performance benchmarks and regression detection for GPU operations; verify near real-time performance on million-variable problems

Applied to files:

  • cpp/CMakeLists.txt
  • cpp/libmps_parser/tests/CMakeLists.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.12, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, arm64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.12, arm64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.10, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, arm64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.10, arm64, rockylinux8
  • GitHub Check: wheel-build-cuopt-sh-client / 13.1.0, 3.10, amd64, rockylinux8
🔇 Additional comments (16)
conda/environments/all_cuda-131_arch-x86_64.yaml (1)

20-20: LGTM - consistent with aarch64 environment.

The CUDA 13.1 version update mirrors the changes in the aarch64 environment file, maintaining consistency across architectures.

Also applies to: 78-78

.github/workflows/trigger-breaking-change-alert.yaml (2)

1-1: Copyright year update is appropriate.


18-18: The pinning to @cuda-13.1.0 is consistently applied across all workflow references.

This is an intentional strategy across the repository: all 37 reusable workflow references in the codebase (across trigger-breaking-change-alert.yaml, test.yaml, pr.yaml, and build.yaml) use the same @cuda-13.1.0 pin. No @main references remain. While this provides stability, the team should be aware that manual version bumps will be required when updating to newer CUDA versions.

cpp/libmps_parser/tests/CMakeLists.txt (2)

2-2: Copyright year update is appropriate.


16-16: C++20 upgrade correctly applied with module scanning disabled.

The C++20 standard upgrade and CXX_SCAN_FOR_MODULES OFF setting are properly implemented and consistent across the project. The global setting is defined in both cpp/CMakeLists.txt and cpp/libmps_parser/CMakeLists.txt with clear documentation ("Disable C++20 module scanning"), and is consistently applied to all targets including these test targets. Disabling module scanning is the standard approach for C++20 projects to avoid experimental CMake features and prevent race conditions in parallel builds.

.github/workflows/build.yaml (2)

1-1: Copyright year update is appropriate.


47-47: Workflow pinning applied consistently across all jobs.

All 14 reusable workflow references have been updated from @main to @cuda-13.1.0, maintaining consistency throughout the build workflow. This aligns with the same pinning strategy applied in trigger-breaking-change-alert.yaml.

Note: The maintenance implications of this pinning strategy were flagged in the review of trigger-breaking-change-alert.yaml.

Also applies to: 57-57, 67-67, 75-75, 90-90, 101-101, 114-114, 125-125, 137-137, 147-147, 162-162, 173-173, 187-187, 203-203

conda/environments/all_cuda-131_arch-aarch64.yaml (1)

20-20: CUDA 13.1 version bump is correct.

The CUDA version update and corresponding environment name change are consistent. The source file dependencies.yaml contains the CUDA 13.1 configuration (lines 10, 676, 678), confirming this auto-generated file reflects the proper updates. CUDA Toolkit 13.1 supports C++20 with appropriate host compiler versions (GCC ≥ 10.x, Clang ≥ 11.x, etc.), ensuring compatibility if C++20 is adopted in future CMakeLists updates.

Also applies to: 78-78

.github/workflows/test.yaml (1)

1-1: LGTM: Workflow references consistently updated to CUDA 13.1.0.

The workflow now references the cuda-13.1.0 branch of shared workflows across all test jobs, aligning with the PR's objective to upgrade CUDA support. The copyright year update is also appropriate.

Also applies to: 31-31, 40-40, 50-50, 59-59, 68-68

cpp/src/utilities/driver_helpers.cuh (2)

18-18: Good defensive initialization.

Initializing func to nullptr prevents reading an indeterminate pointer if the function returns early.


21-29: The approach is correct and follows NVIDIA's supported pattern for CUDA 13.x ABI compatibility.

The hardcoded version 13000 is appropriate here because cudaGetDriverEntryPointByVersion is specifically designed to request a particular ABI version of a symbol at runtime. CUDA 13.x guarantees minor-version ABI stability within the 13.* family, meaning symbols available in CUDA 13.0 remain available (and compatible) across 13.1, 13.2, and future 13.x releases. The error handling via driver_status check already covers cases where a symbol is unavailable. Requesting v13000 ensures the binary works when built against CUDA 13.1+ but run on CUDA 13.0+, which is exactly the intended behavior described in the comment.

Likely an incorrect or invalid review comment.

dependencies.yaml (1)

10-10: LGTM: CUDA 13.1 correctly added to dependency matrix.

The addition of CUDA 13.1 follows the existing pattern and is consistent with other CUDA version entries in the matrix.

Also applies to: 675-678

.github/workflows/pr.yaml (2)

1-1: LGTM: Workflow references consistently updated to CUDA 13.1.0.

All reusable workflow references have been updated to use the cuda-13.1.0 branch, aligning with the PR's CUDA upgrade objective.

Also applies to: 37-37, 111-111, 176-176, 182-182, 190-190, 199-199, 207-207, 217-217, 229-229, 241-241, 252-252, 262-262, 271-271, 283-283, 296-296


38-107: Well-designed lean-ci gating mechanism.

The lean-ci feature provides a good developer experience by allowing faster test iterations while preventing incomplete CI runs from being merged. The implementation correctly:

  • Detects the lean-ci label on PRs
  • Reduces the test matrix when lean-ci is enabled
  • Blocks merges when lean-ci is present
cpp/CMakeLists.txt (2)

32-33: Good practice: Global module scanning disabled.

Disabling C++20 module scanning globally is appropriate since the codebase doesn't use modules, and this can improve CMake configuration performance.


222-227: LGTM: cuopt target upgraded to C++20/CUDA 20.

The upgrade from C++17/CUDA 17 to C++20/CUDA 20 is correctly implemented with required standards and module scanning disabled.

Contributor

@aliceb-nv aliceb-nv left a comment


Approving engine

@rgsl888prabhu rgsl888prabhu changed the base branch from main to release/25.12 January 9, 2026 17:11
@rgsl888prabhu rgsl888prabhu changed the base branch from release/25.12 to main January 9, 2026 17:12

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @cpp/CMakeLists.txt:
- Around line 32-34: The CMake cache variable CMAKE_CXX_SCAN_FOR_MODULES is
being set without clarifying whether you intend to override an existing user
cache value; change the directive to explicitly express intent: if you always
want to disable module scanning, replace the current set(...) with
set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning"
FORCE), otherwise make it a user-default by guarding with if(NOT DEFINED
CMAKE_CXX_SCAN_FOR_MODULES) before calling set(CMAKE_CXX_SCAN_FOR_MODULES OFF
CACHE BOOL "Disable C++20 module scanning") so users can opt in.
🧹 Nitpick comments (1)
cpp/CMakeLists.txt (1)

451-459: Good: cuopt_cli explicitly sets C++20 and disables module scanning. Consider centralizing repeated target properties.

You now repeat the same set_target_properties(... CXX_STANDARD 20 ... CXX_SCAN_FOR_MODULES OFF) pattern in multiple places; a small helper function or macro would reduce drift.

Example refactor (helper function)
+function(cuopt_apply_cxx20 TARGET_NAME)
+  set_target_properties(${TARGET_NAME} PROPERTIES
+    CXX_STANDARD 20
+    CXX_STANDARD_REQUIRED ON
+    CXX_SCAN_FOR_MODULES OFF
+  )
+endfunction()
+
 if(NOT BUILD_LP_ONLY)
 add_executable(cuopt_cli cuopt_cli.cpp)
 
-set_target_properties(cuopt_cli
-  PROPERTIES
-  CXX_STANDARD 20
-  CXX_STANDARD_REQUIRED ON
-  CXX_SCAN_FOR_MODULES OFF
-)
+cuopt_apply_cxx20(cuopt_cli)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 823d84f and 41416ce.

📒 Files selected for processing (1)
  • cpp/CMakeLists.txt
🧰 Additional context used
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, arm64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, arm64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.11, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.12, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.10, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-mps-parser / 13.1.0, 3.13, amd64, rockylinux8
  • GitHub Check: wheel-build-cuopt-sh-client / 13.1.0, 3.10, amd64, rockylinux8
  • GitHub Check: checks / check-style
🔇 Additional comments (3)
cpp/CMakeLists.txt (3)

500-507: Good: solve_MIP correctly uses C++20; no mixed-standard builds detected. All benchmark targets (solve_MIP, solve_LP) consistently use CXX_STANDARD 20. No legacy C++17/CUDA17 pinning remains elsewhere in the build configuration.


532-541: C++20 standard setting is safe for this benchmark code.

run_pdlp.cu is a host-side benchmark utility with no device code, constexpr, concepts, or requires clauses that could trigger the known NVCC C++20 issues. The code uses straightforward argument parsing and solver APIs without problematic patterns. Setting both CXX_STANDARD 20 and CUDA_STANDARD 20 is appropriate.


244-255: Document required host compiler versions for C++20 builds (GCC 10+, Clang 11+, or MSVC 2022).

The C++20/CUDA 20 configuration is well-supported—CUDA 12.0+ and 13.0+ both support CUDA_STANDARD 20, and your CMake version (3.30.4) exceeds the 3.18+ practical minimum. CI already runs with CUDA 13.1.0, and the codebase compiles successfully with these settings (confirmed in RELEASE-NOTES.md: "Fix compile error when using clang with C++20").

However, the README documents CUDA 12.0+/13.0+ but does not explicitly list host compiler version requirements. Add minimum versions (GCC 10+, Clang 11+, MSVC 2022) to the system requirements section to prevent build failures from incompatible host compilers.
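
Beyond README documentation, the minimum host compiler versions could also be enforced at configure time. This is a hypothetical addition, not part of this PR; the version floors are the ones assumed in the comment above (GCC 10+, Clang 11+):

```cmake
# Hypothetical configure-time guard for the C++20 host compiler requirements
# mentioned above; the minimum versions are assumptions, not project policy.
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 10)
  message(FATAL_ERROR "C++20 build requires GCC 10 or newer (found ${CMAKE_CXX_COMPILER_VERSION})")
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 11)
  message(FATAL_ERROR "C++20 build requires Clang 11 or newer (found ${CMAKE_CXX_COMPILER_VERSION})")
endif()
```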

Comment on lines +32 to +34
# Disable C++20 module scanning as the codebase doesn't use modules
set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")



⚠️ Potential issue | 🟡 Minor

Be explicit about whether you want to override user cache values for module scanning.

set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL ...) won’t change an existing cache value, so module scanning could remain enabled for targets that don’t explicitly set CXX_SCAN_FOR_MODULES OFF. If the intent is “always disable”, consider FORCE; if the intent is “default off, but user can opt in”, consider guarding with if(NOT DEFINED ...) to make that intent clear.

Proposed tweak (pick one)
-# Disable C++20 module scanning as the codebase doesn't use modules
-set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")
+# Disable C++20 module scanning as the codebase doesn't use modules
+# Option A: default-off (user can override via -D...)
+if(NOT DEFINED CMAKE_CXX_SCAN_FOR_MODULES)
+  set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning")
+endif()
+
+# Option B: always-off (override any existing cache)
+# set(CMAKE_CXX_SCAN_FOR_MODULES OFF CACHE BOOL "Disable C++20 module scanning" FORCE)


Labels

improvement Improves an existing functionality non-breaking Introduces a non-breaking change


2 participants