The simple or conventional radius update scheme. This scheme is chosen by default and
follows the conventional approach to updating the trust region radius: if the trial
step is accepted, it increases the radius by a fixed factor (bounded by a maximum
radius), and if the trial step is rejected, it shrinks the radius by a fixed factor.
"""
Simple
"""
29
30
RadiusUpdateSchemes.NLsolve
30
31
31
-
The same updating scheme as in NLsolve's (https://github.com/JuliaNLSolvers/NLsolve.jl) trust region dogleg implementation.
32
+
The same updating scheme as in NLsolve's (https://github.com/JuliaNLSolvers/NLsolve.jl)
33
+
trust region dogleg implementation.
32
34
"""
33
35
NLsolve
34
36
"""
43
45
RadiusUpdateSchemes.Hei
44
46
45
-
This scheme is proposed by [Hei, L.] (https://www.jstor.org/stable/43693061). The trust region radius
46
-
depends on the size (norm) of the current step size. The hypothesis is to let the radius converge to zero
47
-
as the iterations progress, which is more reliable and robust for ill-conditioned as well as degenerate
48
-
problems.
47
+
This scheme is proposed by Hei, L. [1]. The trust region radius depends on the size
48
+
(norm) of the current step size. The hypothesis is to let the radius converge to zero as
49
+
the iterations progress, which is more reliable and robust for ill-conditioned as well
50
+
as degenerate problems.
51
+
52
+
[1] Hei, Long. "A self-adaptive trust region algorithm." Journal of Computational
53
+
Mathematics (2003): 229-236.
49
54
"""
50
55
Hei
51
56
52
57
"""
53
58
RadiusUpdateSchemes.Yuan
54
59
55
-
This scheme is proposed by [Yuan, Y.] (https://www.researchgate.net/publication/249011466_A_new_trust_region_algorithm_with_trust_region_radius_converging_to_zero).
56
-
Similar to Hei's scheme, the trust region is updated in a way so that it converges to zero, however here,
57
-
the radius depends on the size (norm) of the current gradient of the objective (merit) function. The hypothesis
58
-
is that the step size is bounded by the gradient size, so it makes sense to let the radius depend on the gradient.
60
+
This scheme is proposed by Yuan, Y [1]. Similar to Hei's scheme, the trust region is
61
+
updated in a way so that it converges to zero, however here, the radius depends on the
62
+
size (norm) of the current gradient of the objective (merit) function. The hypothesis is
63
+
that the step size is bounded by the gradient size, so it makes sense to let the radius
64
+
depend on the gradient.
65
+
66
+
[1] Fan, Jinyan, Jianyu Pan, and Hongyan Song. "A retrospective trust region algorithm
67
+
with trust region converging to zero." Journal of Computational Mathematics 34.4 (2016):
68
+
421-436.
59
69
"""
60
70
Yuan
61
71
62
72
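The idea shared by the Hei and Yuan schemes is to tie the radius to a quantity that vanishes at a solution, so the radius itself converges to zero. A hedged sketch of that coupling, in Python for illustration (the proportionality constants are placeholders, not the formulas from the cited papers):

```python
import math

def norm(v):
    """Euclidean norm of a vector given as an iterable of floats."""
    return math.sqrt(sum(x * x for x in v))

# Illustrative only: couple the radius to a vanishing quantity.
# The scale factor c is a placeholder, not a published constant.

def hei_like_radius(step, c=1.0):
    # Hei-style idea: radius tied to the norm of the current step.
    return c * norm(step)

def yuan_like_radius(grad, c=1.0):
    # Yuan-style idea: radius tied to the norm of the current
    # gradient of the merit function.
    return c * norm(grad)

# As iterates approach a solution, both the step and the gradient
# shrink, so either radius converges to zero.
```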
"""
63
73
RadiusUpdateSchemes.Bastin
64
74
65
-
This scheme is proposed by [Bastin, et al.] (https://www.researchgate.net/publication/225100660_A_retrospective_trust-region_method_for_unconstrained_optimization).
66
-
The scheme is called a retrospective update scheme as it uses the model function at the current
67
-
iteration to compute the ratio of the actual reduction and the predicted reduction in the previous
68
-
trial step, and use this ratio to update the trust region radius. The hypothesis is to exploit the information
69
-
made available during the optimization process in order to vary the accuracy of the objective function computation.
75
+
This scheme is proposed by Bastin, et al. [1]. The scheme is called a retrospective
76
+
update scheme as it uses the model function at the current iteration to compute the
77
+
ratio of the actual reduction and the predicted reduction in the previous trial step,
78
+
and use this ratio to update the trust region radius. The hypothesis is to exploit the
79
+
information made available during the optimization process in order to vary the accuracy
80
+
of the objective function computation.
81
+
82
+
[1] Bastin, Fabian, et al. "A retrospective trust-region method for unconstrained
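The retrospective ratio described above can be sketched as follows. This is a hedged one-dimensional illustration in Python: the model is a generic quadratic along the previous step, and the threshold and factors are assumptions, not the constants from Bastin et al.:

```python
# Hedged sketch of a retrospective trust region ratio: the model
# built at the CURRENT iterate is evaluated on the PREVIOUS step.
# Threshold and factors are illustrative, not the paper's values.

def retrospective_ratio(f_prev, f_curr, model_curr, prev_step):
    """Actual reduction over the reduction that the current
    model predicts for the previous trial step."""
    actual = f_prev - f_curr
    predicted = model_curr(0.0) - model_curr(prev_step)
    return actual / predicted

def update_radius(radius, rho, eta=0.25, grow=2.0, shrink=0.5):
    # The retrospective ratio drives the usual grow/shrink choice.
    return grow * radius if rho >= eta else shrink * radius

# Quadratic model of f at the current iterate, parameterized by
# the (scalar) length along the previous step direction.
model = lambda s: 1.0 - s + 0.5 * s * s
rho = retrospective_ratio(f_prev=2.0, f_curr=1.6,
                          model_curr=model, prev_step=1.0)
```

Here `model(0.0) - model(1.0) = 0.5` is the predicted reduction, the actual reduction is `0.4`, so `rho = 0.8` and the radius grows.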