doc/2-PROXQP_API/2-ProxQP_api.md
5 additions & 7 deletions
@@ -415,7 +415,7 @@ In this table, you have the three columns from left to right: the name of the se
| nb_power_iteration | 1000 | Number of power iterations used by default for estimating the lowest eigenvalue of H.
| power_iteration_accuracy | 1.E-6 | Accuracy target of the power iteration algorithm for estimating the lowest eigenvalue of H.
| primal_infeasibility_solving | False | If set to true, the solver solves the closest primal feasible problem when primal infeasibility is detected.
- | find_minimal_H_eigenvalue | NoRegularization | Option for estimating the minimal eigenvalue of H and regularizing default_rho as default_rho = rho_regularization_scaling * abs(default_H_eigenvalue_estimate). This option can be used for solving nonconvex QPs.
+ | estimate_method_option | NoRegularization | Option for estimating the minimal eigenvalue of H and regularizing default_rho as default_rho = rho_regularization_scaling * abs(default_H_eigenvalue_estimate). This option can be used for solving nonconvex QPs.
| default_H_eigenvalue_estimate | 0. | Default estimate of the minimal eigenvalue of H.
| rho_regularization_scaling | 1.5 | Scaling for regularizing default_rho according to the minimal eigenvalue of H.
@@ -436,15 +436,13 @@ If set to this option, the solver will start with no initial guess, which means
\subsection OverviewEstimatingHminimalEigenValue The different options for estimating the minimal eigenvalue of H
- The solver has four options for estimating the minimal eigenvalue of H within the struct HessianCostRegularization:
- * NoRegularization: set by default, it means the solver does not try to estimate it,
- * Manual: the user can provide an estimate of it through the init method,
+ The solver environment provides a standalone function for estimating the minimal eigenvalue of a dense or sparse symmetric matrix, named "estimate_minimal_eigen_value_of_symmetric_matrix". In the sparse case, it uses a power iteration algorithm (with two options: the maximal number of iterations and the accuracy target of the estimate). In the dense case, two options are available within the struct EigenValueEstimateMethodOption:
* PowerIteration: a power iteration algorithm is used for estimating the minimal eigenvalue of H,
- * EigenRegularization: in case the dense backend is used, the solver makes use of an Eigen method for estimating it.
+ * ExactMethod: in this case, an exact method from Eigen's eigensolver is used to provide the estimate.
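The sparse-backend routine described above is a power iteration; the minimal eigenvalue can be reached by running power iteration on a shifted matrix. The sketch below is a NumPy illustration of that idea under stated assumptions, not ProxQP's implementation (only the option names `nb_power_iteration` and `power_iteration_accuracy` come from the settings table above):

```python
import numpy as np

def dominant_eigenvalue(M, nb_power_iteration=1000, power_iteration_accuracy=1e-6, seed=0):
    """Power iteration: eigenvalue of symmetric M with the largest magnitude."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(nb_power_iteration):
        w = M @ v
        norm_w = np.linalg.norm(w)
        if norm_w == 0.0:  # v lies in the kernel of M
            return 0.0
        v = w / norm_w
        lam_new = v @ M @ v  # Rayleigh quotient
        if abs(lam_new - lam) < power_iteration_accuracy:
            break
        lam = lam_new
    return lam_new

def estimate_minimal_eigenvalue(H, nb_power_iteration=1000, power_iteration_accuracy=1e-6):
    """Shifted power iteration: with s = |dominant eigenvalue of H| (an upper
    bound on the spectrum), the dominant eigenvalue of H - s*I is
    lambda_min(H) - s, so adding s back recovers lambda_min(H)."""
    s = abs(dominant_eigenvalue(H, nb_power_iteration, power_iteration_accuracy))
    shifted = H - s * np.eye(H.shape[0])
    return dominant_eigenvalue(shifted, nb_power_iteration, power_iteration_accuracy) + s

H = np.array([[1.0, 2.0],
              [2.0, 1.0]])  # eigenvalues: 3 and -1, so this quadratic cost is nonconvex
print(estimate_minimal_eigenvalue(H))  # close to -1
```

The two-pass shift is one standard way to steer power iteration toward the smallest eigenvalue without factorizing H.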
- This option is particularly useful when solving QPs with nonconvex quadratics. Indeed, if default_rho is set to a value strictly larger than the absolute value of the minimal eigenvalue of H, then ProxQP is guaranteed to find a local minimum of the problem, since it relies on a Proximal Method of Multipliers (see for example this [work](https://arxiv.org/pdf/2010.02653.pdf), which provides a convergence proof of this property).
+ Estimating the minimal eigenvalue is particularly useful for solving QPs with nonconvex quadratics. Indeed, if default_rho is set to a value strictly larger than the absolute value of the minimal eigenvalue of H, then ProxQP is guaranteed to find a local minimum of the problem, since it relies on a Proximal Method of Multipliers (see for example this [work](https://arxiv.org/pdf/2010.02653.pdf), which provides a convergence proof of this property).
- More precisely, when HessianCostRegularization is set to a value different from NoRegularization, ProxQP first estimates the minimal eigenvalue of H and then updates default_rho following the rule default_rho = rho_regularization_scaling * abs(default_H_eigenvalue_estimate), which guarantees, for an appropriate scaling, that the proximal step size is larger than the absolute value of the minimal eigenvalue of H. We provide below examples in C++ and Python for using this feature appropriately with the dense backend (it is similar with the sparse one).
+ More precisely, the ProxQP API enables the user to provide, through the init or update methods, an estimate of the minimal eigenvalue of H (i.e., manual_minimal_H_eigenvalue). If the value is not empty, the primal proximal step size rho is updated according to rho = rho + abs(manual_minimal_H_eigenvalue). This guarantees that the proximal step size is larger than the absolute value of the minimal eigenvalue of H, and hence that the method converges towards a local minimum of the QP. We provide below examples in C++ and Python for using this feature appropriately with the dense backend (it is similar with the sparse one).
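The update rule quoted in the text can be checked numerically. The sketch below uses plain NumPy, with the exact eigenvalue standing in for the user-supplied estimate (the name `manual_minimal_H_eigenvalue` is taken from the text; the default value of rho is an assumption for illustration):

```python
import numpy as np

# Nonconvex quadratic cost: H has a negative eigenvalue.
H = np.array([[1.0, 2.0],
              [2.0, 1.0]])  # eigenvalues: 3 and -1

# eigvalsh returns eigenvalues of a symmetric matrix in ascending order,
# so the first entry is the minimal eigenvalue.
manual_minimal_H_eigenvalue = float(np.linalg.eigvalsh(H)[0])  # -1.0

rho = 1e-6  # small default proximal step size (illustrative value)
rho = rho + abs(manual_minimal_H_eigenvalue)  # rule quoted in the text

# With this rho, the regularized Hessian H + rho*I is positive definite,
# so each proximal subproblem of the Proximal Method of Multipliers is convex.
assert np.all(np.linalg.eigvalsh(H + rho * np.eye(2)) > 0.0)
```

This is exactly why the step size must exceed the absolute value of the minimal eigenvalue: it convexifies each subproblem without changing the set of stationary points.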
doc/3-ProxQP_solve.md
1 addition & 11 deletions
@@ -161,14 +161,4 @@ Note that if some elements of your QP model are not defined (for example a QP wi
</tr>
</table>
- Finally, note that you can also use ProxQP for solving QPs with nonconvex quadratics. To do so, you first need to estimate the smallest eigenvalue of the quadratic cost H. ProxQP has four internal options for estimating it, selected through the setting HessianCostRegularization:
- * NoRegularization: set by default, it means the solver does not try to estimate it,
- * Manual: the user can provide an estimate of it through the init method,
- * PowerIteration: a power iteration algorithm is used for estimating the minimal eigenvalue of H,
- * EigenRegularization: in case the dense backend is used, the solver makes use of an Eigen method for estimating it.
-
- This option is particularly useful when solving QPs with nonconvex quadratics. Indeed, if default_rho is set to a value strictly larger than the absolute value of the minimal eigenvalue of H, then ProxQP is guaranteed to find a local minimum of the problem, since it relies on a Proximal Method of Multipliers (see for example this [work](https://arxiv.org/pdf/2010.02653.pdf), which provides a convergence proof of this property).
-
- More precisely, when HessianCostRegularization is set to a value different from NoRegularization, ProxQP first estimates the minimal eigenvalue of H and then updates default_rho following the rule default_rho = rho_regularization_scaling * abs(default_H_eigenvalue_estimate), which guarantees, for an appropriate scaling, that the proximal step size is larger than the absolute value of the minimal eigenvalue of H. We provide below examples in C++ and Python for using this feature appropriately with the dense backend (it is similar with the sparse one).
-
- The solve function enables using this option directly by passing the parameters "HessianCostRegularization" and "rho_regularization_scaling" accordingly. You can find more details in [ProxQP API with examples](2-ProxQP_api.md) about the other settings that can be used for related parameters (e.g., for using the PowerIteration algorithm with other options).
+ Finally, note that you can also use ProxQP for solving QPs with nonconvex quadratics. To do so, you just need to provide to the solve function an estimate of the smallest eigenvalue of the quadratic cost H. The solver environment provides a standalone function for estimating the minimal eigenvalue of a dense or sparse symmetric matrix, named "estimate_minimal_eigen_value_of_symmetric_matrix". You can find more details in [ProxQP API with examples](2-ProxQP_api.md) about the other settings that can be used for related parameters (e.g., for using a power iteration algorithm).
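In the dense case, the "exact" flavour of such an estimate amounts to reading the smallest eigenvalue off a symmetric eigendecomposition. The helper name above is ProxQP's; the snippet below is only a NumPy illustration of what the returned estimate looks like:

```python
import numpy as np

# Symmetric matrix with spectrum {-sqrt(2), 0, sqrt(2)}.
H = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# eigvalsh is specialized for symmetric matrices and returns eigenvalues
# in ascending order, so the first entry is the minimal eigenvalue.
min_eig = float(np.linalg.eigvalsh(H)[0])
print(min_eig)  # about -1.4142 (= -sqrt(2)): this quadratic cost is nonconvex
```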