A Julia implementation of the Halpern Peaceman-Rachford (HPR) method for solving linear programming (LP) problems on the GPU.
📖 Documentation:
- Stable - Latest released version (v0.1.3)
- Dev - Latest from main branch (recommended for development)
Before using HPR-LP, make sure the following dependencies are installed:
- Julia (recommended version: 1.10.4)
- CUDA (required for GPU acceleration; install the appropriate version for your GPU and Julia, >= 12.4 recommended)
- Required Julia packages
To install the required Julia packages and build the HPR-LP environment, run:

```bash
julia --project -e 'using Pkg; Pkg.instantiate()'
```

To verify that CUDA is properly installed and working with Julia, run:
```julia
using CUDA
CUDA.versioninfo()
```
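Optionally, you can go one step further with a quick functional check; this sketch uses only standard CUDA.jl calls and is independent of HPR-LP:

```julia
using CUDA

# true when a usable GPU, driver, and toolkit are all available
@assert CUDA.functional()

# quick smoke test: run a small computation on the device
a = CUDA.rand(Float32, 1024)
println(sum(a))  # should print a finite number without erroring
```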
Before running the scripts, please modify `run_single_file.jl` or `run_dataset.jl` in the scripts directory to specify the data path and result path according to your setup.
To test the script on a single instance (an .mps file):

```bash
julia --project scripts/run_single_file.jl
```

To process all .mps files in a directory:

```bash
julia --project scripts/run_dataset.jl
```

This example demonstrates how to construct an LP model using the JuMP modeling language in Julia and export it to MPS format for use with the HPR-LP solver.
```bash
julia --project demo/demo_JuMP.jl
```

The script does the following (a minimal sketch of the JuMP-to-MPS step appears after this list):
- Builds a linear programming (LP) model.
- Saves the model as an MPS file.
- Uses HPR-LP to solve the LP instance.
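For orientation, here is such a sketch. It uses only standard JuMP API (Model, @variable, @constraint, @objective, write_to_file); the model itself is made up for illustration and is not the actual contents of demo/demo_JuMP.jl:

```julia
using JuMP

# Build a tiny LP:  min  -x1 - 2*x2
#                   s.t. x1 + x2 <= 10,  0 <= x1, x2 <= 8
model = Model()
@variable(model, 0 <= x1 <= 8)
@variable(model, 0 <= x2 <= 8)
@constraint(model, x1 + x2 <= 10)
@objective(model, Min, -x1 - 2x2)

# Export to MPS; JuMP infers the format from the file extension.
write_to_file(model, "demo_model.mps")
```

The resulting demo_model.mps file can then be passed to scripts/run_single_file.jl.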
Remark: If the model may be infeasible or unbounded, you can use HiGHS to check it:

```julia
using JuMP, HiGHS

## read a model from file (or create it in other ways)
mps_file_path = "xxx" # your file path
model = read_from_file(mps_file_path)

## set HiGHS as the optimizer
set_optimizer(model, HiGHS.Optimizer)

## solve it and inspect the termination status
optimize!(model)
println(termination_status(model))
```

This example demonstrates how to construct and solve a linear programming problem directly in Julia, without relying on JuMP.
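The problem data for such a direct interface is the standard LP triple (A, b, c) together with variable bounds. Below is a minimal sketch of assembling that data; the solver call itself is omitted, since the exact entry point and argument order are defined in demo/demo_Abc.jl:

```julia
using SparseArrays

# LP in the form:  min c'x  s.t.  A*x (=, <=, or >=) b,  l <= x <= u
A = sparse([1.0 1.0; 1.0 -1.0])  # constraint matrix (sparse is preferred for large LPs)
b = [10.0, 2.0]                  # right-hand side
c = [-1.0, -2.0]                 # objective coefficients
l = [0.0, 0.0]                   # lower bounds on x
u = [8.0, 8.0]                   # upper bounds on x

# Pass A, b, c, l, u to HPR-LP's direct interface;
# see demo/demo_Abc.jl for the exact function name and signature.
```

To run the actual demo: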
```bash
julia --project demo/demo_Abc.jl
```

You may notice that solving a single instance, or the first instance in a dataset, appears slow. This is due to Julia's Just-In-Time (JIT) compilation, which compiles code on first execution.
💡 Tip for Better Performance:
To reduce repeated compilation overhead, it is recommended to run scripts from an IDE such as VS Code, or from the Julia REPL in a terminal. First, start Julia with the project environment activated:

```bash
julia --project
```

Then, at the Julia REPL, run demo/demo_Abc.jl (or other scripts):

```julia
include("demo/demo_Abc.jl")
```

CAUTION:
If you encounter the error message:
```
Error: Error during loading of extension AtomixCUDAExt of Atomix, use Base.retry_load_extensions() to retry.
```

Don't panic: this is usually a transient issue. Simply wait a few moments; the extension typically loads successfully on its own.
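If it does not resolve on its own, you can trigger the retry manually from the REPL using the function named in the message itself (Base.retry_load_extensions is part of Julia):

```julia
Base.retry_load_extensions()
```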
Below is a list of the parameters in HPR-LP along with their default values and usage:
| Parameter | Default Value | Description |
|---|---|---|
| warm_up | true | Determines if a warm-up phase is performed before the main execution. |
| time_limit | 3600 | Maximum allowed runtime (seconds) for the algorithm. |
| stoptol | 1e-4 | Stopping tolerance for convergence checks. |
| device_number | 0 | GPU device number (only relevant if use_gpu is true). |
| max_iter | typemax(Int32) | Maximum number of iterations allowed. |
| check_iter | 150 | Check residuals every check_iter iterations. |
| use_Ruiz_scaling | true | Whether to apply Ruiz scaling. |
| use_Pock_Chambolle_scaling | true | Whether to use the Pock-Chambolle scaling. |
| use_bc_scaling | true | Whether to use the scaling for b and c. |
| use_gpu | true | Whether to run on the GPU. |
| print_frequency | -1 (auto) | Print the log every print_frequency iterations. |
| verbose | true | Whether to print solver output. Set to false for silent mode. |
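For example, assuming the package exposes these parameters as fields of a mutable struct created by a constructor such as HPRLP.HPRLP_parameters() (the module and constructor names here are assumptions; the field names match the table above, and the demo scripts show the exact API), a typical configuration might look like:

```julia
using HPRLP  # module name assumed from the package name

params = HPRLP.HPRLP_parameters()  # hypothetical constructor; see the demos for the real one
params.stoptol = 1e-6              # tighter tolerance than the 1e-4 default
params.time_limit = 600            # stop after 10 minutes
params.device_number = 0           # run on the first GPU
params.verbose = false             # silent mode
```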
After solving an instance, you can access the result variables as shown below:
```julia
# Example from /demo/demo_Abc.jl
println("Objective value: ", result.primal_obj)
println("x1 = ", result.x[1])
println("x2 = ", result.x[2])
```

The result object exposes the following fields:

| Category | Variable | Description |
|---|---|---|
| Iteration Counts | iter | Total number of iterations performed by the algorithm. |
| | iter_4 | Number of iterations required to achieve an accuracy of 1e-4. |
| | iter_6 | Number of iterations required to achieve an accuracy of 1e-6. |
| | iter_8 | Number of iterations required to achieve an accuracy of 1e-8. |
| Time Metrics | time | Total time in seconds taken by the algorithm. |
| | time_4 | Time in seconds taken to achieve an accuracy of 1e-4. |
| | time_6 | Time in seconds taken to achieve an accuracy of 1e-6. |
| | time_8 | Time in seconds taken to achieve an accuracy of 1e-8. |
| | power_time | Time in seconds used by the power method. |
| Objective Values | primal_obj | The primal objective value obtained. |
| | gap | The gap between the primal and dual objective values. |
| Residuals | residuals | Relative residuals of the primal feasibility, dual feasibility, and duality gap. |
| Algorithm Status | status | The final status of the algorithm: OPTIMAL (optimal solution found), MAX_ITER (max iterations reached), or TIME_LIMIT (time limit reached). |
| Solution Vectors | x | The final solution vector x. |
| | y | The final solution vector y. |
| | z | The final solution vector z. |
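Putting these fields together, a post-solve summary might look like this (result is the object returned by the solve call, as in demo/demo_Abc.jl; only field names listed above are used):

```julia
println("status     = ", result.status)   # OPTIMAL, MAX_ITER, or TIME_LIMIT
println("iterations = ", result.iter)
println("total time = ", result.time, " s")
println("primal obj = ", result.primal_obj)
println("gap        = ", result.gap)
println("residuals  = ", result.residuals)
```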
Kaihuang Chen, Defeng Sun, Yancheng Yuan, Guojun Zhang, and Xinyuan Zhao, "HPR-LP: An implementation of an HPR method for solving linear programming", Mathematical Programming Computation 17 (2025), https://doi.org/10.1007/s12532-025-00292-0; preprint: arXiv:2408.12179 (August 2024).
Related implementations:
- C implementation on GPUs: https://github.com/PolyU-IOR/HPR-LP-C
- Julia implementation on CPUs and GPUs: https://github.com/PolyU-IOR/HPR-LP
- Python implementation on GPUs: https://github.com/PolyU-IOR/HPR-LP-Python
- MATLAB implementation on CPUs: