# Copyright 2017, Iain Dunning, Joey Huchette, Miles Lubin, and contributors #src
# This Source Code Form is subject to the terms of the Mozilla Public License #src
# v.2.0. If a copy of the MPL was not distributed with this file, You can #src
# obtain one at https://mozilla.org/MPL/2.0/. #src

# # Chordal decomposition

# The purpose of this tutorial is to show how to use [MathOptChordalDecomposition.jl](@ref)
# to improve the performance of models with PSD constraints.

# ## Required packages

# This tutorial uses the following packages:

using JuMP
import MathOptChordalDecomposition
import SCS
import SparseArrays

# ## Background

# Chordal decomposition is a technique for decomposing a large PSD constraint
# into a set of smaller PSD constraints and some linear equality constraints.

# If the original PSD constraint is sparse, the decomposed problem can be faster
# to solve than the original.

# For more information on chordal decomposition, watch Michael Garstka's talk at
# [JuMP-dev 2019](https://www.youtube.com/watch?v=H4Q0ZXDqB70).
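# To build some intuition, here is a small self-contained sketch of the key
# idea (illustrative only; this is not the exact algorithm that solvers
# implement). If the sparsity graph of a symmetric matrix is chordal, a single
# large PSD constraint can be replaced by one smaller PSD block per maximal
# clique of the graph, with the blocks linked through their overlapping
# entries. For a 3-by-3 tridiagonal matrix, the cliques are `{1, 2}` and
# `{2, 3}`:

import LinearAlgebra
M = [2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0]
## One smaller PSD block per clique. The overlapping diagonal entry `M[2, 2]`
## is split between the two blocks:
X_12, X_23 = [2.0 1.0; 1.0 1.0], [1.0 1.0; 1.0 2.0]
## Adding the blocks back into their clique positions recovers `M`:
B = zeros(3, 3)
B[1:2, 1:2] += X_12
B[2:3, 2:3] += X_23
@assert B == M
## Because each clique block is PSD, `M` is PSD as well. The smallest
## eigenvalue of each block is strictly positive:
minimum(LinearAlgebra.eigvals(X_12)), minimum(LinearAlgebra.eigvals(X_23))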
| 30 | + |
| 31 | +# Some solvers, such as [Clarabel.jl](https://github.com/oxfordcontrol/Clarabel.jl) |
| 32 | +# and [COSMO.jl](https://github.com/oxfordcontrol/COSMO.jl) implement chordal |
| 33 | +# decomposition internally. Others, such as [SCS.jl](@ref) do not implement |
| 34 | +# chordal decomposition. |
| 35 | + |
| 36 | +# The Julia package [MathOptChordalDecomposition.jl](@ref) is a MathOptInterface |
| 37 | +# layer that implements chordal decomposition of sparse semidefinite constraints. |
| 38 | +# It can be used to wrap any solver which supports PSD constraints and does not |
| 39 | +# implement chordal decomposition internally. |
| 40 | + |
# ## JuMP model

# To demonstrate the benefits of chordal decomposition, we use the `mcp124-1`
# model from [SDPLIB](http://euler.nmt.edu/~brian/sdplib/sdplib.html).

model = read_from_file(joinpath(@__DIR__, "mcp124-1.dat-s"))

# This model has 124 decision variables and one PSD constraint. This PSD
# constraint is sparse, which means that many elements of the matrix are zero.

# To view the matrix, use [`all_constraints`](@ref) to get a list of the
# constraints, then use [`constraint_object`](@ref) to get the function and set
# form of the constraint:

S = MOI.PositiveSemidefiniteConeTriangle
constraints = all_constraints(model, Vector{AffExpr}, S)
con = constraint_object(constraints[1]);
con.set

#-

con.func

# The constraint function is given in vectorized form. Use [`reshape_vector`](@ref)
# to convert it into a matrix:

F = reshape_vector(con.func, SymmetricMatrixShape(con.set.side_dimension))

# The `F` matrix is dense, but many of its elements are zero. Use `SparseArrays.sparse`
# to convert it into a sparse matrix:

A = SparseArrays.sparse(F)

# The sparse matrix has 422 non-zeros, which is a density of 2.7%:

SparseArrays.nnz(A) / size(A, 1)^2

# ## Solution speed
# [SCS.jl](@ref) is a first-order solver that does not exploit the sparsity of
# PSD constraints. Let's solve the model and see how long it takes:

set_optimizer(model, SCS.Optimizer)
@time optimize!(model)

# In comparison, if we wrap `SCS.Optimizer` in `MathOptChordalDecomposition.Optimizer`,
# the problem takes less than one second to solve:

set_optimizer(model, () -> MathOptChordalDecomposition.Optimizer(SCS.Optimizer))
@time optimize!(model)
# The difference in performance is due to the chordal decomposition. The
# decomposed problem introduces new variables (there are now 1,155 variables
# instead of 124) and constraints (there are now 115 PSD constraints instead of
# one), but each PSD constraint is smaller than the original.

decom = unsafe_backend(model)

# With a bit of effort, we can count the number of PSD constraints of each
# size:

count_by_size = Dict{Int,Int}()
for ci in MOI.get(decom, MOI.ListOfConstraintIndices{MOI.VectorOfVariables,S}())
    set = MOI.get(decom, MOI.ConstraintSet(), ci)
    n = set.side_dimension
    count_by_size[n] = get(count_by_size, n, 0) + 1
end
count_by_size

# The largest PSD constraint is now of size 10, which is much smaller than the
# original 124-by-124 matrix.
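# As a final sanity check, we can query the decomposed model for its total
# number of variables. This assumes that the wrapped solve above has already
# run, so that `decom` is the `MathOptChordalDecomposition.Optimizer`, and that
# the wrapper supports the standard `MOI.NumberOfVariables` attribute. The
# result should match the 1,155 variables mentioned above:

MOI.get(decom, MOI.NumberOfVariables())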