I observe a significant slowdown with the current version of NLPModels. An MWE:
using QPSReader
using QuadraticModels
using NLPModels
src = fetch_mm()
qpdat = readqps(joinpath(src, "UBH1.SIF"))
qp = QuadraticModel(qpdat)
m = NLPModels.get_ncon(qp)
x = NLPModels.get_x0(qp)
c = zeros(m)
# Evaluate in a few milliseconds
@time NLPModels.cons_lin!(qp, x, c)
# Does not complete in a reasonable amount of time
@time NLPModels.cons!(qp, x, c)

This second MWE gives the behavior I expect:
using QPSReader
using QuadraticModels
using NLPModels
function NLPModels.cons!(qp::QuadraticModel{T}, x::Vector{T}, c::Vector{T}) where T
return NLPModels.cons_lin!(qp, x, c)
end
src = fetch_mm()
qpdat = readqps(joinpath(src, "UBH1.SIF"))
qp = QuadraticModel(qpdat)
m = NLPModels.get_ncon(qp)
x = NLPModels.get_x0(qp)
c = zeros(m)
# Evaluates in a few milliseconds
@time NLPModels.cons_lin!(qp, x, c)
# Also evaluates in a few milliseconds
@time NLPModels.cons!(qp, x, c)

I am opening the issue here because I think the culprit is the following dispatch in NLPModels:
https://github.com/JuliaSmoothOptimizers/NLPModels.jl/blob/main/src/nlp/api.jl#L61
It returns a view, and because that view is non-contiguous, Julia then dispatches to generic_matvecmul!. This causes a significant slowdown, since the efficient sparse matrix-vector product implementation cannot be used.
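The effect can be reproduced outside NLPModels. The sketch below (my own illustration, not the NLPModels code) builds a destination view indexed by an integer vector; such a view is an AbstractVector but not a StridedVector, so mul! with a SparseMatrixCSC may fall back to the generic scalar-indexing method instead of the sparse kernel:

```julia
using SparseArrays, LinearAlgebra

n = 2_000
A = sprand(n, n, 1e-2)
x = rand(n)
c = zeros(n)

# A view indexed by an integer vector is not a StridedVector, so the
# sparse mul! method (which requires a strided destination) does not
# apply and the generic fallback, which indexes A[i, k] element by
# element, is used instead.
cv = view(c, collect(1:n))

@time mul!(c, A, x)    # contiguous destination: sparse kernel
@time mul!(cv, A, x)   # non-strided destination: generic fallback
```

On my understanding, scalar getindex on a SparseMatrixCSC is a binary search per access, which is what makes the fallback so much slower than the column-oriented sparse kernel.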
What would be a valid solution? Should users always redefine cons! manually, or should the issue be fixed in NLPModels?
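One direction a fix could take (a hypothetical sketch of the general pattern, not a proposal for NLPModels' actual internals; the name scatter_mul! is mine) is to evaluate into a contiguous buffer and then copy into the target slots, so the product itself always sees a plain Vector:

```julia
using SparseArrays, LinearAlgebra

# Buffer-and-scatter pattern: instead of mul!(view(c, idx), A, x)
# with a non-strided view as destination, compute into a contiguous
# buffer (hitting the sparse fast path) and copy the result once.
function scatter_mul!(c, idx, A, x, buf)
    mul!(buf, A, x)   # buf is a plain Vector: sparse kernel applies
    c[idx] .= buf     # single indexed copy into the target slots
    return c
end
```

This trades one extra copy and an m-element buffer for keeping the sparse kernel, which should be a clear win whenever the product dominates.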