# Change solution object to return host memory instead of gpu memory from libcuopt for lp and milp #576
**AGENTS.md** (new file, +156 lines)

# AGENTS.md - AI Coding Agent Guidelines for cuOpt
> This file provides essential context for AI coding assistants (Codex, Cursor, GitHub Copilot, etc.) working with the NVIDIA cuOpt codebase.

> **For setup, building, testing, and contribution guidelines, see [CONTRIBUTING.md](../CONTRIBUTING.md).**

---
## Project Overview

**cuOpt** is NVIDIA's GPU-accelerated optimization engine for:
- **Mixed Integer Linear Programming (MILP)**
- **Linear Programming (LP)**
- **Quadratic Programming (QP)**
- **Vehicle Routing Problems (VRP)** including TSP and PDP

### Architecture

```
cuopt/
├── cpp/                   # Core C++ engine (libcuopt, libmps_parser)
│   ├── include/cuopt/     # Public C/C++ headers
│   ├── src/               # Implementation (CUDA kernels, algorithms)
│   └── tests/             # C++ unit tests (gtest)
├── python/
│   ├── cuopt/             # Python bindings and routing API
│   ├── cuopt_server/      # REST API server
│   ├── cuopt_self_hosted/ # Self-hosted deployment utilities
│   └── libcuopt/          # Python wrapper for C library
├── ci/                    # CI/CD scripts and Docker configurations
├── conda/                 # Conda recipes and environment files
├── docs/                  # Documentation source
├── datasets/              # Test datasets for LP, MIP, routing
└── notebooks/             # Example Jupyter notebooks
```
### Supported APIs

| API Type | LP | MILP | QP | Routing |
|----------|:--:|:----:|:--:|:-------:|
| C API    | ✓  | ✓    | ✓  | ✗       |
| C++ API  | ✓  | ✓    | ✓  | ✓       |
| Python   | ✓  | ✓    | ✓  | ✓       |
| Server   | ✓  | ✓    | ✗  | ✓       |

---
## Coding Style and Conventions

### C++ Naming Conventions

- **Base style**: `snake_case` for all names (except test cases: PascalCase)
- **Prefixes/Suffixes**:
  - `d_` → device data variables (e.g., `d_locations_`)
  - `h_` → host data variables (e.g., `h_data_`)
  - `_t` → template type parameters (e.g., `i_t`, `value_t`)
  - `_` → private member variables (e.g., `n_locations_`)

```cpp
// Example naming pattern
template <typename i_t>
class locations_t {
 private:
  i_t n_locations_{};
  i_t* d_locations_{};  // device pointer
  i_t* h_locations_{};  // host pointer
};
```
### File Extensions

| Extension | Usage |
|-----------|-------|
| `.hpp` | C++ headers |
| `.cpp` | C++ source |
| `.cu`  | CUDA C++ source (nvcc required) |
| `.cuh` | CUDA headers with device code |
### Include Order

1. Local headers
2. RAPIDS headers
3. Related libraries
4. Dependencies
5. STL
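A sketch of that ordering in a source file (the header names below are illustrative placeholders, not actual cuOpt paths):

```cpp
// 1. Local headers
#include "cython_solve.hpp"        // hypothetical project-local header
// 2. RAPIDS headers
#include <raft/core/handle.hpp>
#include <rmm/device_uvector.hpp>
// 3. Related libraries / 4. Other dependencies
#include <cuda_runtime_api.h>
// 5. STL last
#include <memory>
#include <vector>
```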
### Python Style

- Follow PEP 8
- Use type hints where applicable
- Tests use the `pytest` framework
### Formatting

- **C++**: Enforced by `clang-format` (config: `cpp/.clang-format`)
- **Python**: Enforced via pre-commit hooks
- See [CONTRIBUTING.md](../CONTRIBUTING.md) for pre-commit setup

---
## Error Handling Patterns

### Runtime Assertions

```cpp
// Use CUOPT_EXPECTS for runtime checks
CUOPT_EXPECTS(lhs.type() == rhs.type(), "Column type mismatch");

// Use CUOPT_FAIL for unreachable code paths
CUOPT_FAIL("This code path should not be reached.");
```
### CUDA Error Checking

```cpp
// Always wrap CUDA calls
RAFT_CUDA_TRY(cudaMemcpy(dst, src, num_bytes, cudaMemcpyDefault));
```

---
## Memory Management Guidelines

- **Never use raw `new`/`delete`** - Use RMM allocators
- **Prefer `rmm::device_uvector<T>`** for device memory
- **All operations should be stream-ordered** - Accept `cuda_stream_view`
- **Views (`*_view` suffix) are non-owning** - Don't manage their lifetime
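A minimal sketch of these rules working together, assuming the RAPIDS `rmm`/`raft` APIs (exact header paths vary between versions, and this requires a GPU, so it is untested here):

```cpp
#include <raft/util/cudart_utils.hpp>  // RAFT_CUDA_TRY; path may differ by RAFT version
#include <rmm/cuda_stream_view.hpp>
#include <rmm/device_uvector.hpp>

#include <cuda_runtime_api.h>

#include <vector>

// Stream-ordered round trip: RMM owns the device memory, the caller's
// stream orders every operation, and no raw new/delete appears.
std::vector<double> round_trip(const std::vector<double>& h_in,
                               rmm::cuda_stream_view stream)
{
  rmm::device_uvector<double> d_vals(h_in.size(), stream);  // freed automatically
  RAFT_CUDA_TRY(cudaMemcpyAsync(d_vals.data(), h_in.data(),
                                h_in.size() * sizeof(double),
                                cudaMemcpyHostToDevice, stream.value()));
  // ... launch kernels on `stream` here ...
  std::vector<double> h_out(h_in.size());
  RAFT_CUDA_TRY(cudaMemcpyAsync(h_out.data(), d_vals.data(),
                                h_in.size() * sizeof(double),
                                cudaMemcpyDeviceToHost, stream.value()));
  stream.synchronize();  // make the D2H copy visible before returning
  return h_out;
}
```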
---
## Key Files Reference

| Purpose | Location |
|---------|----------|
| Main build script | `build.sh` |
| Dependencies | `dependencies.yaml` |
| C++ formatting | `cpp/.clang-format` |
| Conda environments | `conda/environments/` |
| Test data download | `datasets/get_test_data.sh` |
| CI configuration | `ci/` |
| Version info | `VERSION` |

---
## Common Pitfalls

| Problem | Solution |
|---------|----------|
| Cython changes not reflected | Rerun: `./build.sh cuopt` |
| Missing `nvcc` | Set `$CUDACXX` or add CUDA to `$PATH` |
| CUDA out of memory | Reduce problem size or use streaming |
| Slow debug library loading | Device symbols cause delay; use selectively |

---

*For detailed setup, build instructions, testing workflows, debugging, and contribution guidelines, see [CONTRIBUTING.md](../CONTRIBUTING.md).*
**CODE_OF_CONDUCT.md** (new file, +1 line)

This project has adopted the [Contributor Covenant Code of Conduct](https://docs.rapids.ai/resources/conduct/).
**SECURITY.md** (new file, +15 lines)

Security
---------
NVIDIA is dedicated to the security and trust of our software products and services, including all source code repositories managed through our organization.

If you need to report a security issue, please use the appropriate contact points outlined below. Please do not report security vulnerabilities through GitHub/GitLab.

Reporting Potential Security Vulnerability in NVIDIA cuOpt
----------------------------------------------------------
To report a potential security vulnerability in NVIDIA cuOpt:

- Web: [Security Vulnerability Submission Form](https://www.nvidia.com/object/submit-security-vulnerability.html)
- E-Mail: [[email protected]](mailto:[email protected])
- We encourage you to use the following PGP key for secure email communication: [NVIDIA public PGP Key for communication](https://www.nvidia.com/en-us/security/pgp-key)
- Please include the following information:
  - Product/Driver name and version/branch that contains the vulnerability
**cpp/src/linear_programming/utilities/cython_solve.cu** (modified)

```diff
@@ -1,6 +1,6 @@
 /* clang-format off */
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2023-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2023-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: Apache-2.0
  */
 /* clang-format on */
```
@@ -142,28 +142,21 @@ linear_programming_ret_t call_solve_lp( | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| const bool use_pdlp_solver_mode = true; | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| auto solution = cuopt::linear_programming::solve_lp( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| op_problem, solver_settings, problem_checking, use_pdlp_solver_mode, is_batch_mode); | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| // Convert device vectors to host vectors for LP solution | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| linear_programming_ret_t lp_ret{ | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>(solution.get_primal_solution().release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>(solution.get_dual_solution().release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>(solution.get_reduced_cost().release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().current_primal_solution_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().current_dual_solution_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().initial_primal_average_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().initial_dual_average_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().current_ATY_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().sum_primal_solutions_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().sum_dual_solutions_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().last_restart_duality_gap_primal_solution_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| std::make_unique<rmm::device_buffer>( | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| solution.get_pdlp_warm_start_data().last_restart_duality_gap_dual_solution_.release()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_primal_solution()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_dual_solution()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_reduced_cost()), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().current_primal_solution_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().current_dual_solution_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().initial_primal_average_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().initial_dual_average_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().current_ATY_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().sum_primal_solutions_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().sum_dual_solutions_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().last_restart_duality_gap_primal_solution_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| cuopt::host_copy(solution.get_pdlp_warm_start_data().last_restart_duality_gap_dual_solution_), | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
**Review comment (lines +148 to +159):**

🧩 Analysis chain: scripts run to verify the `host_copy` behavior.

```bash
#!/bin/bash
# Find host_copy implementation
find . -name "*.h" -o -name "*.hpp" -o -name "*.cu" -o -name "*.cpp" | head -20
echo "---"
# Search for host_copy definition
rg -nP 'inline.*host_copy|^[[:space:]]*host_copy.*{' --type=cpp --type=h -A 10 | head -50

# Get full context of the file around lines 148-159
wc -l cpp/src/linear_programming/utilities/cython_solve.cu
# Show context around lines 148-159
sed -n '130,175p' cpp/src/linear_programming/utilities/cython_solve.cu

# Search for host_copy in include files
find cpp/include -name "*.hpp" -o -name "*.h" | xargs grep -l "host_copy"

# Check for CUDA error checking patterns used in the file
rg -n 'CUDA_CHECK|RAFT_CUDA_TRY|cudaGetLastError|RAFT_CHECK_CUDA|host_copy' cpp/src/linear_programming/utilities/cython_solve.cu | head -30

# Get full host_copy implementation
cat -n cpp/src/utilities/copy_helpers.hpp | head -200
# Check if host_copy uses error checking internally or is wrapped with error checks
rg -n 'host_copy' cpp/src/utilities/copy_helpers.hpp -B 5 -A 15 | head -100
# Look for the default host_copy overload (non-bool version)
rg -n 'template.*host_copy|^[[:space:]]*host_copy.*\{' cpp/src/utilities/copy_helpers.hpp -A 15

# Check if there's error handling around the solve call
sed -n '140,180p' cpp/src/linear_programming/utilities/cython_solve.cu

# Check for async/batched copy patterns in similar code
rg -n 'cudaMemcpyAsync|rmm::device_uvector|thrust::copy' cpp/src/linear_programming/utilities/ --type=cu --type=cpp -C 3 | head -60

# Check if raft::copy has error checking
rg -n 'raft::copy' cpp/include/cuopt/ --type=cpp --type=h | head -5
# Check if raft library provides CUDA error checking
find . -type f -name "*.hpp" -o -name "*.h" | xargs grep -l "raft::copy" | head -3

# Check if there are any CUDA_CHECK or error handlers wrapping these calls
sed -n '130,185p' cpp/src/linear_programming/utilities/cython_solve.cu | grep -i 'cuda\|check\|try\|error'

# Verify if this is end-of-solve (not hot path) - check what comes after line 180
sed -n '160,180p' cpp/src/linear_programming/utilities/cython_solve.cu
# Verify the exact context - is this function expected to return immediately after these copies?
sed -n '175,200p' cpp/src/linear_programming/utilities/cython_solve.cu
```

**Consider batching device-to-host transfers to reduce redundant synchronizations.** Each `host_copy` call synchronizes the stream, so the twelve consecutive calls above pay for twelve separate synchronizations:

```cpp
// Instead of 12 separate host_copy calls with 12 syncs each,
// batch the raft::copy calls first, then synchronize once
```
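A rough sketch of that batched suggestion (assuming `raft::copy` and the surrounding solver variables; GPU-dependent, untested here, and not part of the PR):

```cpp
// Stage every device-to-host copy on the stream without syncing in between.
auto& ws = solution.get_pdlp_warm_start_data();
std::vector<double> h_current_primal(ws.current_primal_solution_.size());
std::vector<double> h_current_dual(ws.current_dual_solution_.size());

raft::copy(h_current_primal.data(), ws.current_primal_solution_.data(),
           ws.current_primal_solution_.size(), stream);
raft::copy(h_current_dual.data(), ws.current_dual_solution_.data(),
           ws.current_dual_solution_.size(), stream);
// ... stage the remaining warm-start vectors the same way ...

stream.synchronize();  // one synchronization covers all the transfers
```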
```diff
     solution.get_pdlp_warm_start_data().initial_primal_weight_,
```

**Review comment (lines +151 to 160):**

**Skip warm-start host copies when we're in batch mode.** With this change, every PDLP warm-start vector is eagerly copied to the host even when `is_batch_mode` is true and the warm-start data goes unused. Wrap these initializers so the copies only occur when they are actually needed:

```diff
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().current_primal_solution_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().current_dual_solution_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().initial_primal_average_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().initial_dual_average_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().current_ATY_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().sum_primal_solutions_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().sum_dual_solutions_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().last_restart_duality_gap_primal_solution_),
-    cuopt::host_copy(solution.get_pdlp_warm_start_data().last_restart_duality_gap_dual_solution_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().current_primal_solution_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().current_dual_solution_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().initial_primal_average_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().initial_dual_average_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().current_ATY_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().sum_primal_solutions_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().sum_dual_solutions_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().last_restart_duality_gap_primal_solution_),
+    is_batch_mode ? std::vector<double>{}
+                  : cuopt::host_copy(solution.get_pdlp_warm_start_data().last_restart_duality_gap_dual_solution_),
```

(or compute the vectors above the initializer and reuse them). That preserves existing semantics while avoiding unnecessary transfers in the batch path.
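The conditional-initializer idea can be exercised in isolation with plain `std::vector` stand-ins (toy code, not the real cuOpt types; `host_copy` here is a hypothetical local helper, not `cuopt::host_copy`):

```cpp
#include <vector>

// Toy stand-in for cuopt::host_copy: in real code this is a D2H transfer.
inline std::vector<double> host_copy(const std::vector<double>& device_data)
{
  return device_data;
}

struct lp_ret_toy {
  std::vector<double> primal;             // always needed by the caller
  std::vector<double> warm_start_primal;  // only consumed outside batch mode
};

inline lp_ret_toy make_ret(const std::vector<double>& d_primal,
                           const std::vector<double>& d_warm_primal,
                           bool is_batch_mode)
{
  return lp_ret_toy{
    host_copy(d_primal),
    // Skip the warm-start transfer entirely in batch mode:
    is_batch_mode ? std::vector<double>{} : host_copy(d_warm_primal),
  };
}
```

With `is_batch_mode == true` the warm-start member stays empty and the copy never happens, while the non-batch path is unchanged.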
```diff
     solution.get_pdlp_warm_start_data().initial_step_size_,
     solution.get_pdlp_warm_start_data().total_pdlp_iterations_,
@@ -205,7 +198,9 @@ mip_ret_t call_solve_mip(
     error_type_t::ValidationError,
     "MIP solve cannot be called on an LP problem!");
   auto solution = cuopt::linear_programming::solve_mip(op_problem, solver_settings);
-  mip_ret_t mip_ret{std::make_unique<rmm::device_buffer>(solution.get_solution().release()),
+
+  // Convert device vector to host vector for MILP solution
+  mip_ret_t mip_ret{cuopt::host_copy(solution.get_solution()),
                     solution.get_termination_status(),
                     solution.get_error_status().get_error_type(),
                     solution.get_error_status().what(),
```
**Review comment:**

**Fix markdown list indentation.** Line 15 has incorrect indentation for a nested list item: markdown expects 2 spaces of indentation for sub-items, not 4.

🪛 markdownlint-cli2 (0.18.1): `15-15: Unordered list indentation. Expected: 2; Actual: 4 (MD007, ul-indent)`