
HPR-LP: A GPU Solver for Linear Programming


A Julia implementation of the Halpern Peaceman-Rachford (HPR) method for solving linear programming (LP) problems on the GPU.

📖 Documentation:

  • Stable - Latest released version (v0.1.3)
  • Dev - Latest from main branch (recommended for development)

LP Problem Formulation

$$ \begin{array}{ll} \underset{x \in \mathbb{R}^n}{\min} \quad & \langle c, x \rangle \\ \text{s.t.} \quad & L \leq A x \leq U, \\ & l \leq x \leq u . \end{array} $$
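
For instance, a small problem of this form (an illustrative example, not taken from the solver's test set) is

$$ \begin{array}{ll} \underset{x \in \mathbb{R}^2}{\min} \quad & x_1 + 2x_2 \\ \text{s.t.} \quad & 1 \leq x_1 + x_2 \leq 3, \\ & 0 \leq x_1, x_2 \leq 2, \end{array} $$

with $c = (1, 2)^\top$, $A = \begin{pmatrix} 1 & 1 \end{pmatrix}$, $L = 1$, $U = 3$, $l = (0, 0)^\top$, and $u = (2, 2)^\top$.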


Getting Started

Prerequisites

Before using HPR-LP, make sure the following dependencies are installed:

  • Julia (Recommended version: 1.10.4)
  • CUDA (required for GPU acceleration; install a version compatible with your GPU and Julia; CUDA >= 12.4 is recommended)
  • Required Julia packages

To install the required Julia packages and build the HPR-LP environment, run:

julia --project -e 'using Pkg; Pkg.instantiate()'

To verify that CUDA is properly installed and working with Julia, run:

using CUDA
CUDA.versioninfo()

Usage 1: Test Instances in MPS Format

Setting Data and Result Paths

Before running the scripts, please modify run_single_file.jl or run_dataset.jl in the scripts directory to specify the data path and result path according to your setup.
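
For illustration, the edits typically amount to something like the following (the variable names here are hypothetical; use the ones actually present in the scripts):

data_path   = "/path/to/your/mps/files"    # where the .mps instances live (hypothetical name)
result_path = "/path/to/your/results"      # where result files are written (hypothetical name)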

Running a Single Instance

To test the script on a single instance (.mps file):

julia --project scripts/run_single_file.jl

Running All Instances in a Directory

To process all .mps files in a directory:

julia --project scripts/run_dataset.jl

Usage 2: Define Your LP Model in Julia Scripts

Example 1: Build and Export an LP Model Using JuMP

This example demonstrates how to construct an LP model using the JuMP modeling language in Julia and export it to MPS format for use with the HPR-LP solver.

julia --project demo/demo_JuMP.jl

The script:

  • Builds a linear programming (LP) model.
  • Saves the model as an MPS file.
  • Uses HPR-LP to solve the LP instance.
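
As a sketch of this workflow (a tiny model invented for illustration, not the one built in demo/demo_JuMP.jl):

using JuMP

# A tiny LP in the HPR-LP form: min c'x  s.t.  L <= Ax <= U,  l <= x <= u.
model = Model()
@variable(model, 0 <= x[1:2] <= 2)            # box constraints l <= x <= u
@constraint(model, 1 <= x[1] + x[2] <= 3)     # ranged row L <= Ax <= U
@objective(model, Min, x[1] + 2x[2])

# Export to MPS so it can be passed to HPR-LP
write_to_file(model, "tiny_lp.mps")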

Remark: If you suspect the model may be infeasible or unbounded, you can use HiGHS to check it:

using JuMP, HiGHS
# Read a model from file (or create it in another way)
mps_file_path = "xxx" # your file path
model = read_from_file(mps_file_path)
# Set HiGHS as the optimizer
set_optimizer(model, HiGHS.Optimizer)
# Solve it
optimize!(model)

Example 2: Define an LP Instance Directly in Julia

This example demonstrates how to construct and solve a linear programming problem directly in Julia without relying on JuMP.

julia --project demo/demo_Abc.jl
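
For orientation, a script of this kind typically assembles the problem data and hands it to the solver. The sketch below is hypothetical: the entry-point name `HPRLP.run_lp` is invented, and the real function signature is the one used in demo/demo_Abc.jl.

using SparseArrays

# Hypothetical sketch only: the real entry point is in demo/demo_Abc.jl
# and may differ from the invented `HPRLP.run_lp` below.
c = [1.0, 2.0]                  # objective coefficients
A = sparse([1.0 1.0])           # constraint matrix (1 x 2)
L, U = [1.0], [3.0]             # row bounds:      L <= A*x <= U
l, u = zeros(2), fill(2.0, 2)   # variable bounds: l <= x <= u

# result = HPRLP.run_lp(A, c, L, U, l, u)   # hypothetical call; see the demo for the real API
# println("Objective value: ", result.primal_obj)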

Note on First-Time Execution Performance

You may notice that solving a single instance — or the first instance in a dataset — appears slow. This is due to Julia’s Just-In-Time (JIT) compilation, which compiles code on first execution.

💡 Tip for Better Performance:
To avoid paying the compilation cost on every run, keep one Julia session alive: run scripts from an IDE such as VS Code, or from the Julia REPL in a terminal, so that subsequent runs reuse already-compiled code.

Start Julia REPL with the project environment:

julia --project

Then, at the Julia REPL, run demo/demo_Abc.jl (or other scripts):

include("demo/demo_Abc.jl")

CAUTION:
If you encounter the error message:
Error: Error during loading of extension AtomixCUDAExt of Atomix, use Base.retry_load_extensions() to retry.

Don’t panic — this is usually a transient issue. Simply wait a few moments; the extension typically loads successfully on its own.
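
If it persists, the message itself names the remedy; at the Julia REPL, run:

Base.retry_load_extensions()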


Parameters

Below is a list of the parameters in HPR-LP along with their default values and usage:

| Parameter | Default Value | Description |
| --- | --- | --- |
| warm_up | true | Whether a warm-up phase is performed before the main execution. |
| time_limit | 3600 | Maximum allowed runtime (seconds) for the algorithm. |
| stoptol | 1e-4 | Stopping tolerance for convergence checks. |
| device_number | 0 | GPU device number (only relevant if use_gpu is true). |
| max_iter | typemax(Int32) | Maximum number of iterations allowed. |
| check_iter | 150 | Residuals are checked every check_iter iterations. |
| use_Ruiz_scaling | true | Whether to apply Ruiz scaling. |
| use_Pock_Chambolle_scaling | true | Whether to use the Pock-Chambolle scaling. |
| use_bc_scaling | true | Whether to use the scaling for b and c. |
| use_gpu | true | Whether to use the GPU or not. |
| print_frequency | -1 (auto) | Print the log every print_frequency iterations. |
| verbose | true | Whether to print solver output. Set to false for silent mode. |
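
As a hedged sketch of overriding a few defaults (the parameter type and constructor name below are assumptions, not taken from the package; check the documentation or source for the real names):

# Hypothetical sketch: the constructor name `HPRLP.Parameters` is assumed.
params = HPRLP.Parameters()     # assumed to hold the defaults listed above
params.stoptol    = 1e-6        # tighten the stopping tolerance
params.time_limit = 600         # allow at most 10 minutes
params.verbose    = false       # silent mode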

Result Explanation

After solving an instance, you can access the result variables as shown below:

# Example from /demo/demo_Abc.jl
println("Objective value: ", result.primal_obj)
println("x1 = ", result.x[1])
println("x2 = ", result.x[2])
| Category | Variable | Description |
| --- | --- | --- |
| Iteration Counts | iter | Total number of iterations performed by the algorithm. |
| | iter_4 | Number of iterations required to achieve an accuracy of 1e-4. |
| | iter_6 | Number of iterations required to achieve an accuracy of 1e-6. |
| | iter_8 | Number of iterations required to achieve an accuracy of 1e-8. |
| Time Metrics | time | Total time in seconds taken by the algorithm. |
| | time_4 | Time in seconds taken to achieve an accuracy of 1e-4. |
| | time_6 | Time in seconds taken to achieve an accuracy of 1e-6. |
| | time_8 | Time in seconds taken to achieve an accuracy of 1e-8. |
| | power_time | Time in seconds used by the power method. |
| Objective Values | primal_obj | The primal objective value obtained. |
| | gap | The gap between the primal and dual objective values. |
| Residuals | residuals | Relative residuals of the primal feasibility, dual feasibility, and duality gap. |
| Algorithm Status | status | The final status of the algorithm: OPTIMAL (found optimal solution), MAX_ITER (max iterations reached), or TIME_LIMIT (time limit reached). |
| Solution Vectors | x | The final solution vector x. |
| | y | The final solution vector y. |
| | z | The final solution vector z. |
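
Beyond the demo snippet above, the other fields from the table can be inspected the same way (field names as listed; the exact representation of status, e.g. string vs. symbol, may differ):

# Inspect the result object returned by the solver
println("status     = ", result.status)     # OPTIMAL, MAX_ITER, or TIME_LIMIT
println("iterations = ", result.iter)
println("time (s)   = ", result.time)
println("objective  = ", result.primal_obj)
println("gap        = ", result.gap)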

Reference

Kaihuang Chen, Defeng Sun, Yancheng Yuan, Guojun Zhang, and Xinyuan Zhao, “HPR-LP: An implementation of an HPR method for solving linear programming”, Mathematical Programming Computation 17 (2025). arXiv:2408.12179. https://doi.org/10.1007/s12532-025-00292-0


Available HPR-LP Implementations

HPR-LP is available in several languages:

  • C (GPU): https://github.com/PolyU-IOR/HPR-LP-C
  • Julia (CPU and GPU): https://github.com/PolyU-IOR/HPR-LP (this repository)
  • Python (GPU): https://github.com/PolyU-IOR/HPR-LP-Python
  • MATLAB (CPU): https://github.com/PolyU-IOR/HPR-LP-MATLAB