IPOPT defaults to limited-memory Hessian approximation when using Dymos? #1172
FluffyCodeMonster started this conversation in General
Replies: 1 comment
-
Currently OpenMDAO computes the Jacobian but not the Hessian. Supporting Hessians would be a possible extension, but it would also require users to define second derivatives for their components, which would be fairly daunting. With the JAX integration we've done, and other AD capabilities available like OpenMDAO.jl and CSDL, we might have a path to implementing second derivatives at the component level. I don't know when full support for second derivatives in OpenMDAO might happen. Until then, we can't provide a Hessian to IPOPT and we're stuck with the limited-memory approximation.
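To give a sense of what exact-Hessian mode would require from the model, here is a minimal sketch of the second-derivative information involved, computed by central finite differences on a toy objective. This is purely illustrative: `fd_hessian` is not an OpenMDAO or IPOPT API, just the kind of matrix IPOPT's limited-memory mode is approximating internally.

```python
import numpy as np

def fd_hessian(f, x, eps=1e-5):
    """Central-difference Hessian of a scalar function f at x.

    Illustrates the second-derivative matrix that IPOPT's exact-Hessian
    mode expects; limited-memory mode builds a low-rank approximation
    of this from gradient differences instead.
    """
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4.0 * eps**2)
    return H

# Toy quadratic objective: f(x) = x0^2 + 3*x0*x1 + 2*x1^2,
# whose exact Hessian is [[2, 3], [3, 4]] everywhere.
f = lambda x: x[0]**2 + 3.0 * x[0] * x[1] + 2.0 * x[1]**2
H = fd_hessian(f, np.array([1.0, -2.0]))
```

Finite differencing like this is far too expensive and noisy for real problems, which is why component-level AD (e.g. via JAX) is the interesting path for second derivatives.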
-
I've noticed something interesting and wanted to share it here (it is probably known, but could be helpful for my problem too).
I've been getting frequent 'error in step computation' terminations when optimising my trajectories with IPOPT, using Dymos as the transcription software. I calculate my gradients for Dymos using JAX, and, suspecting that I might be doing something wrong, I decided to write my own simple Hermite-Simpson collocation scheme in CasADi (which has built-in automatic differentiation for providing gradients to the optimiser) to check that there was nothing wrong with my dynamics model. While playing around with settings, I found that CasADi (which worked reliably, though obviously with a much less capable collocation transcription) never gave the step computation error until I enabled IPOPT's limited-memory Hessian approximation setting. This makes sense, I suppose: if the optimiser is approximating the Hessian, any discrepancies from the true values would make it harder to progress through the optimisation space (although my understanding could be wrong).
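For reference, the per-interval defect I'm enforcing follows the standard Hermite-Simpson formulation. A minimal NumPy sketch (not my actual CasADi code) of that defect, checked on dynamics where Simpson quadrature is exact:

```python
import numpy as np

def hermite_simpson_defect(x0, x1, f, t0, t1):
    """Standard Hermite-Simpson collocation defect for one interval.

    f(t, x) returns the state derivative. The defect is ~0 when
    (x0, x1) lie on a trajectory that Simpson quadrature integrates
    exactly (polynomial dynamics up to cubic in t).
    """
    h = t1 - t0
    f0, f1 = f(t0, x0), f(t1, x1)
    # Hermite interpolant of the state, evaluated at the midpoint
    x_mid = 0.5 * (x0 + x1) + (h / 8.0) * (f0 - f1)
    f_mid = f(0.5 * (t0 + t1), x_mid)
    # Simpson-rule quadrature defect
    return x1 - x0 - (h / 6.0) * (f0 + 4.0 * f_mid + f1)

# xdot = t^2  =>  x(t) = t^3 / 3; Simpson is exact for cubics,
# so the defect on the exact trajectory should vanish.
f = lambda t, x: t**2
t0, t1 = 0.0, 1.0
d = hermite_simpson_defect(t0**3 / 3.0, t1**3 / 3.0, f, t0, t1)
```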
I found that IPOPT was using limited-memory Hessian approximation mode by default when using Dymos, and when I tried to disable it through the driver options in Dymos, it overruled me and continued to use this mode. Switching the setting off in the ipopt.opt file resulted in IPOPT exiting without beginning the optimisation, with an error message about a piece of required code which wasn't implemented (I can try to replicate this if required).
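For anyone else trying this, the IPOPT option I was toggling is `hessian_approximation`, which accepts `exact` or `limited-memory` and can be set in an `ipopt.opt` file in the working directory. Setting it to `exact` is what produced the not-implemented exit for me, presumably because the NLP interface doesn't supply second derivatives:

```
hessian_approximation exact
```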
I don't really understand what's going on. Presumably Dymos must be providing these Hessian values, since providing derivatives is one of the main purposes of specifying the gradients? Then again, the Hessian contains second derivatives, so maybe those aren't provided. If the Hessian approximation isn't suitable for my problem, does that in some way suggest that my problem (dynamics) is poorly scaled or ill-conditioned? I'm fairly new to optimisation theory, so I apologise if this is a basic question...