# [Numerical considerations](@id numerics)

Optimization solvers cannot be expected to find the exact solution of
a problem, since it may not be possible to represent that solution
using floating-point arithmetic. However, solvers will typically run
faster and find more accurate solutions if the problem has good
numerical properties. Ideally the optimal values of all primal
variables (and dual variables when relevant) will be of order
unity. This typically occurs if all objective and constraint matrix
coefficients, as well as finite variable and constraint bounds, are of
order unity. Whilst this ideal cannot be achieved in every model,
there are many pitfalls to avoid. For an insight into why a model may
have bad numerical properties and how to avoid them, users are
recommended to study this [JuMP
tutorial](https://jump.dev/JuMP.jl/stable/tutorials/getting_started/tolerances/). Improving
the numerical properties of a model will typically lead to it being
solved faster and more accurately and reliably, so the investment
should pay off!

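Often, data of order unity can be achieved simply by choosing
sensible units. As a minimal illustration, with entirely hypothetical
data, the following Python sketch shows how re-expressing a budget
constraint in millions of dollars, rather than dollars, brings its
coefficients close to order unity without changing the underlying
model.

```python
# Hypothetical budget constraint, written in dollars:
#     150_000 * x_trucks + 95_000 * x_vans <= 2_500_000
# Re-expressing money in millions of dollars divides every monetary
# coefficient by 1e6, so the data become of order unity.
money_scale = 1e6

coeffs_dollars = {"x_trucks": 150_000.0, "x_vans": 95_000.0}
rhs_dollars = 2_500_000.0

coeffs_millions = {name: value / money_scale for name, value in coeffs_dollars.items()}
rhs_millions = rhs_dollars / money_scale

print(coeffs_millions)  # {'x_trucks': 0.15, 'x_vans': 0.095}
print(rhs_millions)     # 2.5
```
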
Internally, the HiGHS continuous optimization solvers scale the
constraint matrix to improve the numerical properties of the problem,
but [feasibility and optimality tolerances](@ref kkt) are determined
with respect to the original, unscaled problem. However, faced with a
model with bad numerical properties, there is only so much that HiGHS
can do to solve it efficiently and accurately.

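After a solve, the extent to which the tolerances are satisfied on
the original problem can be inspected. The following is a minimal
sketch assuming the highspy bindings, with `model.mps` as a
placeholder for a user-supplied model file and assuming the
infeasibility fields of the information structure are available via
`getInfo()`.

```python
# Minimal sketch (assuming the highspy bindings): solve a model and
# report how far the solution is from satisfying the tolerances on
# the original, unscaled problem. "model.mps" is a placeholder.
import highspy

h = highspy.Highs()
h.readModel("model.mps")
h.run()

info = h.getInfo()
print("max primal infeasibility:", info.max_primal_infeasibility)
print("max dual infeasibility:  ", info.max_dual_infeasibility)
```
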
If the optimal values of many variables in a model are very large,
this can correspond to very large values of the objective
coefficients and finite variable and constraint bounds. Since most of
the HiGHS solvers terminate according to small absolute [feasibility
tolerances](@ref kkt), large objective coefficients and bounds force
the solvers to achieve an accuracy that may be unrealistic in the
context of the model: with data of order 1e9 and an absolute
tolerance of 1e-7, the relative accuracy demanded is of order 1e-16,
at the limit of double-precision arithmetic. As well as having an
impact on efficiency, the solver may ultimately be unable to achieve
the required accuracy and fail. Objective coefficients and bounds
that are less than the feasibility and optimality tolerances can also
be problematic, although this is less common and less serious.

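One pragmatic response, sketched below on the assumption that the
highspy bindings are used and with purely illustrative values, is to
relax the absolute tolerances to a level that is realistic for the
magnitude of the data. Rescaling the model itself, or using the
scaling options described below, is generally preferable.

```python
# Sketch (assuming the highspy bindings): relax the absolute
# feasibility tolerances for a model whose data are of order 1e9.
# The values 1e-5 are purely illustrative.
import highspy

h = highspy.Highs()
h.setOptionValue("primal_feasibility_tolerance", 1e-5)
h.setOptionValue("dual_feasibility_tolerance", 1e-5)
# ... load the model and call h.run() as usual.
```
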
HiGHS offers a facility that enables users to assess the consequences
of better problem scaling in cases where some objective coefficients
or bounds are large, or where all objective coefficients or bounds
are small. By setting the options [user\_objective\_scale](@ref
option-user-objective-scale) and/or [user\_bound\_scale](@ref
option-user-bound-scale), HiGHS will solve the given model with
uniform scaling of the objective coefficients or bounds. Note that
these options define the exponent in power-of-two scaling factors, so
model accuracy is not compromised. After solving the problem,
feasibility and optimality will be assessed for the original model,
and a warning is given if the tolerances are not satisfied. Note that
bounds on discrete variables cannot be scaled uniformly in this way;
the equivalent effect is achieved implicitly by scaling their cost
and matrix coefficients. Also, when the bounds on variables in a
quadratic programming problem are scaled up (down), the values in the
Hessian matrix must be scaled down (up) so that the overall scaling
of the objective is uniform.

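For example, the following sketch (assuming the highspy bindings) sets
these options before solving. The options are integer exponents of
power-of-two scaling factors; the exponents and file name used here
are purely illustrative.

```python
# Sketch (assuming the highspy bindings): solve a model with
# user-defined power-of-two scaling of the objective and the bounds.
# The exponents and file name are purely illustrative.
import highspy

h = highspy.Highs()
h.setOptionValue("user_objective_scale", -6)
h.setOptionValue("user_bound_scale", -3)
h.readModel("model.mps")  # placeholder for a user-supplied model
h.run()

# Feasibility and optimality are assessed for the original model, so
# HiGHS warns if the tolerances are violated after unscaling.
print(h.getModelStatus())
```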