
Batch API #540

Open

amontoison wants to merge 32 commits into main from am/batch_api

Conversation

@amontoison
Member

klamike mentioned this pull request Feb 4, 2026
@github-actions
Contributor

github-actions bot commented Feb 4, 2026

| Package name | latest | stable |
| --- | --- | --- |
| ADNLPModels | | |
| AdaptiveRegularization | | |
| AmplNLReader | | |
| BundleAdjustmentModels | | |
| CUTEst | | |
| CaNNOLeS | | |
| DCISolver | | |
| FletcherPenaltySolver | | |
| FluxNLPModels | | |
| JSOSolvers | | |
| JSOSuite | | |
| LLSModels | | |
| ManualNLPModels | | |
| NLPModelsIpopt | | |
| NLPModelsJuMP | | |
| NLPModelsKnitro | | |
| NLPModelsModifiers | | |
| NLPModelsTest | | |
| NLSProblems | | |
| PDENLPModels | | |
| PartiallySeparableNLPModels | | |
| PartiallySeparableSolvers | | |
| Percival | | |
| QuadraticModels | | |
| RegularizedOptimization | | |
| RegularizedProblems | | |
| SolverBenchmark | | |
| SolverTest | | |
| SolverTools | | |

@amontoison
Member Author

Michael, I finished what I wanted.
You can do a pass on the tests when you have time.

@klamike
Collaborator

klamike commented Feb 5, 2026

@amontoison this is probably why the VI is not in the regular meta: klamike@c54ded6; we can't infer it.

@klamike
Collaborator

klamike commented Feb 5, 2026

A more realistic example: a batched QuadraticModel where we vary only the RHS: JuliaSmoothOptimizers/QuadraticModels.jl@main...klamike:QuadraticModels.jl:mk/rhsbatch
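To make the idea concrete, here is a minimal self-contained sketch of a batched QP that shares Q, c, and A across the batch and varies only the constraint right-hand side; the type and function names below are illustrative and are not the code in the linked branch.

```julia
using LinearAlgebra

# Hypothetical container: one shared quadratic objective and constraint matrix,
# with column j of B holding the right-hand side of batch element j.
struct BatchRHSQP{T}
    Q::Matrix{T}   # shared objective Hessian
    c::Vector{T}   # shared linear objective term
    A::Matrix{T}   # shared constraint matrix
    B::Matrix{T}   # B[:, j] is the RHS of batch element j
end

nbatch(qp::BatchRHSQP) = size(qp.B, 2)

# The objective is identical for every batch element: 1/2 xᵀ Q x + cᵀ x.
obj(qp::BatchRHSQP, x::AbstractVector) = dot(x, qp.Q, x) / 2 + dot(qp.c, x)

# Constraint residual of batch element j at x: A x - b_j.
cons_at(qp::BatchRHSQP, x::AbstractVector, j::Int) = qp.A * x - view(qp.B, :, j)
```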

@amontoison
Member Author

Amazing Michael!

@amontoison
Member Author

Should we hardcode VI = Vector{Float64} like in the non-batch case?
Is this API what we need in MadIPM.jl, or should we adjust a few things?

@klamike
Collaborator

klamike commented Feb 5, 2026

Should we hardcode VI = Vector{Float64} like in the non-batch case?

I think so, yes, to be consistent with the regular API. Both can be updated at the same time later (maybe only in 0.22)

Is this API what we need in MadIPM.jl, or should we adjust a few things?

We probably want to define some more meta functions like get_nvar, get_x0, etc. I'll try to integrate the batch API into the MadIPM UniformBatch over the next few days and get back to you.
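For illustration, a hedged sketch of what such accessors could look like; the names mirror the regular NLPModels accessors, but the batch meta fields assumed here (nvar, x0, nbatch) are not necessarily this PR's final layout.

```julia
# Hedged sketch: accessors over an assumed batch meta, mirroring the
# `nlp.meta.nvar` pattern used by the regular NLPModels API.
get_nvar(nlp)   = nlp.meta.nvar     # number of variables per batch element
get_x0(nlp)     = nlp.meta.x0       # starting point(s)
get_nbatch(nlp) = nlp.meta.nbatch   # number of batch elements
```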

@klamike
Collaborator

klamike commented Feb 8, 2026

@amontoison what do you think about having an API for updating the nbatch? And maybe an optional get_nlpmodel_at_index returning an AbstractNLPModel?

@amontoison
Member Author

@klamike I don't understand what you mean by updating the nbatch.
Do you want to dynamically increase or reduce the batch size?
For get_nlpmodel_at_index, you assume that the models are independent, but with ExaModels.jl all models are compiled into one expression and we can't split them.

@klamike
Collaborator

klamike commented Feb 8, 2026

Yes, I meant updating the nbatch throughout the solve. At the NLPModels level it would just change the size of the buffers in the out-of-place API. The motivation is to have some way of skipping the evaluation of batch elements that have already converged, for batch models where that can be done efficiently. Models with non-independent elements can of course override it to error. Similar story for get_nlpmodel_at_index!.
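A hedged sketch of that interface as described above; these function names come from the discussion and are not (yet) part of NLPModels.jl.

```julia
"""
    set_nbatch!(nlp, nbatch)

Resize the active batch; at the NLPModels level this would only change the size
of the buffers returned by the out-of-place API.  Models with non-independent
batch elements can keep this error fallback.
"""
set_nbatch!(nlp, nbatch::Int) =
    error("set_nbatch! is not supported by $(typeof(nlp))")

"""
    get_nlpmodel_at_index(nlp, i)

Return batch element `i` as a standalone `AbstractNLPModel`, for batch models
whose elements are independent.
"""
get_nlpmodel_at_index(nlp, i::Int) =
    error("get_nlpmodel_at_index is not supported by $(typeof(nlp))")
```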

In the ExaModels case, it of course depends on how you do the batching. When I added parameters to ExaModels, I specifically made the lower-level functions all take the parameter vector as an input, to make it possible to implement the batching the way I did in BatchNLPKernels. It is based on a single ExaModel and does the same number of kernel launches for the batch evaluation as ExaModels would for a regular model, just with 2D grids. Since the base parametric ExaModel is built "unbatched", it is trivial to implement set_nbatch and get_nlpmodel_at_index!.
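Conceptually, the batched evaluation then looks like the CPU sketch below; this is not BatchNLPKernels.jl code, just an illustration of the "single parametric model, parameters passed explicitly" design.

```julia
# Batched objective evaluation built on a base objective that takes the
# parameter vector explicitly.  On GPU this loop becomes a single 2D kernel
# launch; restricting to the active batch elements is then just a column slice.
function batch_obj!(fvals::AbstractVector, base_obj, X::AbstractMatrix, Θ::AbstractMatrix)
    @assert size(X, 2) == size(Θ, 2) == length(fvals)
    for j in axes(X, 2)
        fvals[j] = base_obj(view(X, :, j), view(Θ, :, j))
    end
    return fvals
end
```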

@klamike
Collaborator

klamike commented Feb 8, 2026

Actually, set_nbatch should really be set_active_batch_idx, which would itself update the nbatch... I need to think on it some more; it feels more like a solver concern than an NLPModels concern. It might make more sense to implement it on some DynamicBatchNLPModel wrapper that lives in e.g. MadNLP. At least in the meantime, we can just force full-batch all the time here.

I do still think get_nlpmodel_at_index! would be useful. It would simplify the incremental implementation of batch versions of existing solvers.
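For concreteness, a hedged sketch of that wrapper idea; the type and function names are hypothetical and live in neither NLPModels.jl nor MadNLP.

```julia
# Wrapper that tracks which batch elements are still active; the solver shrinks
# `active_idx` as elements converge, which in turn defines the current nbatch.
mutable struct DynamicBatchNLPModel{M}
    inner::M                 # any batch model
    active_idx::Vector{Int}  # indices of the batch elements still being solved
end

nbatch(m::DynamicBatchNLPModel) = length(m.active_idx)

function set_active_batch_idx!(m::DynamicBatchNLPModel, idx::AbstractVector{<:Integer})
    m.active_idx = collect(Int, idx)
    return m
end
```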

@amontoison
Member Author

@klamike Feel free to add what you need to the API in NLPModels.jl with this PR.

