## The Solver Seems to Violate Constraints During the Optimization, Causing `DomainError`s, What Can I Do About That?
During the optimization, optimizers use slack variables to relax the solution to the constraints. Because of this,
there is no guarantee that for an arbitrary optimizer the steps will all satisfy the constraints during the
optimization. In many cases, this can cause one's objective function code to throw a `DomainError` if it is evaluated
outside of its acceptable zone. For example, `log(-1)` gives:

```julia
julia> log(-1)
ERROR: DomainError with -1.0:
log will only return a complex result if called with a complex argument. Try log(Complex(x)).
```

To handle this, one should not assume that the variables will always satisfy the constraints on each step. There
are three general ways to handle this better:

 1. Use [NaNMath.jl](https://github.com/JuliaMath/NaNMath.jl)
 2. Process variables before domain-restricted calls
 3. Use a domain transformation
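
As a sketch of option 1 (assuming the NaNMath.jl package is installed), NaNMath.jl provides replacements for functions like `log` that return `NaN` instead of throwing, so an optimizer can treat an out-of-domain step as infeasible rather than crashing:

```julia
using NaNMath

# Base.log throws a DomainError on negative reals:
# log(-1.0)  # ERROR: DomainError with -1.0

# NaNMath.log returns NaN instead, letting the optimizer
# reject the trial point and continue:
y = NaNMath.log(-1.0)
isnan(y)  # true

# For valid inputs it agrees with Base.log:
NaNMath.log(2.0)
```
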
Alternatively, one can pre-process the values directly. For example, `log(abs(x))` is guaranteed to work. If one does
this, there are two things to make note of. One is that the solution will not be transformed, and thus the transformation
should be applied on `sol.u` as well. For example, the solution could find an optimum at `x = -2`, and one should manually
change this to `x = 2` if the `abs` version is used within the objective function. Note that many functions for this will
introduce a discontinuity in the derivative, which can affect the optimization process.
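
For instance (using a hypothetical one-dimensional objective, not one from these docs), wrapping the argument in `abs` keeps `log` safe, but the objective becomes even in `x`, so the raw optimizer result must be mapped back through the same transformation:

```julia
# Hypothetical objective: abs shields log from negative trial points,
# so it is defined for all x ≠ 0.
f(x) = log(abs(x))

# The objective is now symmetric, so the optimizer may land on either sign:
f(-2.0) == f(2.0)  # true

# If the optimizer returns a negative solution (e.g. sol.u = -2),
# apply the same transformation manually to recover the intended value:
x_raw = -2.0
x_solution = abs(x_raw)  # 2.0
```
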
Finally and relatedly, one can write the optimization with domain transformations in order to allow the optimization to
take place in the full real set. For example, instead of optimizing `x in [0,Inf]`, one can optimize `exp(x) in [0,Inf]`
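
A minimal sketch of this idea (the objective `g` here is hypothetical): rather than constraining `x >= 0`, optimize an unconstrained variable `y` and substitute `x = exp(y)`, which is strictly positive for every real `y`:

```julia
# Hypothetical objective defined only for x > 0:
g(x) = log(x)^2 + x

# Unconstrained reformulation over all real y:
h(y) = g(exp(y))

# Any real trial point is now safe to evaluate:
h(-10.0)  # no DomainError, since exp(-10.0) > 0

# Recover the model variable from an optimizer result y_star:
y_star = 0.0
x_star = exp(y_star)  # 1.0
```
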
and symbolic simplification passes, it does include the ability to specialize the solution process. For example,
it can treat linear optimization problems, quadratic optimization problems, convex optimization problems, etc.
in specific ways that are more efficient than a general nonlinear interface. For more information on the types of
special solves that are allowed with JuMP, see [this page](https://jump.dev/JuMP.jl/stable/installation/#Supported-solvers).
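
As an illustration of such a specialized solve (the choice of the HiGHS solver here is an assumption for the example, not prescribed by the text), a linear program written in JuMP is recognized as affine and dispatched to an LP solver rather than a general nonlinear routine:

```julia
using JuMP, HiGHS

# A tiny linear program: JuMP detects the affine structure
# and passes it to the LP solver directly.
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@constraint(model, x >= 3)
@objective(model, Min, 2x + 1)
optimize!(model)
value(x)  # ≈ 3.0
```
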