Description
It seems that some time-optimal parametrization algorithms can achieve solve times of about 3 ms.
However, up to commit 0d1b933 this optimization takes at best a bit under 35 ms (one example: 18 iterations, Total CPU secs in IPOPT (w/o function evaluations) = 0.027, Total CPU secs in NLP function evaluations = 0.001, with 19 evaluations of the Jacobian and gradient). The mean time to compute the acceleration constraints is +3.1255556e-03 ms for the value and +5.7921429e-03 ms for the Jacobian, which together is more or less 0.8% of a millisecond. In what task does the NLP invest its time?
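One way to answer this is to let Ipopt print its own timing breakdown. Below is a minimal sketch, assuming the problem is solved through ifopt's IpoptSolver as in the snippets later in this issue; "print_timing_statistics" is a standard Ipopt string option, while the function and variable names here are only illustrative.

```cpp
#include <ifopt/ipopt_solver.h>
#include <ifopt/problem.h>

// Illustrative: `nlp` stands for the already-assembled time-optimal
// parametrization problem; only the solver configuration is shown.
void solve_with_timing_breakdown(ifopt::Problem &nlp) {
  ifopt::IpoptSolver ipopt;

  // Ask Ipopt for a detailed report of where CPU time is spent
  // (function evaluations, linear system factorization/solves, step computation, ...).
  ipopt.SetOption("print_timing_statistics", "yes");

  ipopt.Solve(nlp);
}
```

The printed report breaks the overall time down into individual tasks, which should show where the "CPU secs in IPOPT (w/o function evaluations)" figure actually goes.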
- Set the "fast_step_computation" Ipopt option to "yes". This makes the algorithm assume that the linear system solved to obtain the search direction is solved sufficiently well; in that case, no residuals are computed to verify the solution, and the computation of the search direction is a little faster. The default value for this string option is "no". Possible values: yes, no. A sketch of setting this option is shown below.
- Fix the compiler flags in optstop and gsplines:

```cmake
set(CMAKE_BUILD_TYPE Release)
set(CMAKE_CXX_FLAGS_RELEASE "-O3 -DNDEBUG -funroll-loops -mfpmath=sse -fopenmp")
# -march=native produces a segmentation fault (why?)
```

- Fix the bug on time_cost:
```cpp
// The Jacobian of the time cost is constant: presumably the cost is just the
// execution-time variable, so its derivative w.r.t. that variable is 1 and
// w.r.t. the other variable is 0.
void TimeCost::FillJacobianBlock(std::string _var_set, Jacobian &_jac) const {
  _jac.coeffRef(0, 0) = 1.0;
  _jac.coeffRef(0, 1) = 0.0;
}
```
- It seems that ipopt.SetOption("obj_scaling_factor", 0.8); makes the problem faster (see the sketch below).
- Test other linear solvers besides MUMPS.
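A minimal sketch combining these two ideas, assuming the same ifopt IpoptSolver interface as above. Which linear solvers are actually available ("mumps", "ma27", "ma57", "pardiso", ...) depends on how Ipopt was built, so the value below is only an example:

```cpp
#include <ifopt/ipopt_solver.h>

// Illustrative: rescale the objective and swap the linear solver.
void configure_experimental_options(ifopt::IpoptSolver &ipopt) {
  // Scaling the objective can change the convergence behaviour;
  // 0.8 is the value mentioned above.
  ipopt.SetOption("obj_scaling_factor", 0.8);

  // "mumps" is the default in many Ipopt builds; HSL solvers such as
  // "ma27" or "ma57" are often faster but must be compiled in.
  ipopt.SetOption("linear_solver", "ma57");
}
```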