EnumX.@enumx RadiusUpdateSchemes begin
    """
    `RadiusUpdateSchemes.Simple`

    The simple or conventional radius update scheme. This scheme is chosen by default and
    follows the conventional approach to updating the trust region radius: if the trial step
    is accepted, the radius is increased by a fixed factor (bounded by a maximum radius), and
    if the trial step is rejected, the radius is shrunk by a fixed factor. A rough sketch of
    this rule is given below the enum definition.
    """
    Simple

    """
    `RadiusUpdateSchemes.Hei`

    This scheme is proposed by [Hei, L.](https://www.jstor.org/stable/43693061). The trust
    region radius depends on the size (norm) of the current step. The hypothesis is to let
    the radius converge to zero as the iterations progress, which is more reliable and robust
    for ill-conditioned as well as degenerate problems.
    """
    Hei

    """
    `RadiusUpdateSchemes.Yuan`

    This scheme is proposed by [Yuan, Y.](https://www.researchgate.net/publication/249011466_A_new_trust_region_algorithm_with_trust_region_radius_converging_to_zero).
    Similar to Hei's scheme, the trust region radius is updated so that it converges to zero;
    however, here the radius depends on the size (norm) of the current gradient of the
    objective (merit) function. The hypothesis is that the step size is bounded by the
    gradient size, so it makes sense to let the radius depend on the gradient.
    """
    Yuan

    """
    `RadiusUpdateSchemes.Bastin`

    This scheme is proposed by [Bastin, et al.](https://www.researchgate.net/publication/225100660_A_retrospective_trust-region_method_for_unconstrained_optimization).
    The scheme is called a retrospective update scheme as it uses the model function at the
    current iteration to compute the ratio of the actual reduction to the predicted reduction
    in the previous trial step, and uses this ratio to update the trust region radius. The
    hypothesis is to exploit the information made available during the optimization process
    in order to vary the accuracy of the objective function computation.
    """
    Bastin

    """
    `RadiusUpdateSchemes.Fan`

    This scheme is proposed by [Fan, J.](https://link.springer.com/article/10.1007/s10589-005-3078-8).
    It is very similar to Hei's and Yuan's schemes as it lets the trust region radius depend
    on the current size (norm) of the objective (merit) function itself. These update schemes
    are known to improve local convergence.
    """
    Fan
end
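
# A minimal sketch of the conventional rule described in `RadiusUpdateSchemes.Simple`,
# for illustration only: the function name, the acceptance flag, and the factor values
# (`expand_factor`, `shrink_factor`) are assumptions here, not the package's actual
# names or constants.
function simple_radius_update(radius, max_radius, step_accepted::Bool;
        expand_factor = 2.0, shrink_factor = 0.25)
    if step_accepted
        # Grow the radius by a fixed factor, bounded above by the maximum radius.
        return min(expand_factor * radius, max_radius)
    else
        # Shrink the radius by a fixed factor when the trial step is rejected.
        return shrink_factor * radius
    end
end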
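
# Selecting one of these schemes is typically done through a trust-region solver's
# `radius_update_scheme` keyword. Assuming the NonlinearSolve.jl `TrustRegion`
# constructor (not shown in this file), usage might look like the following sketch:
#
#     using NonlinearSolve
#
#     f(u, p) = u .* u .- p
#     prob = NonlinearProblem(f, [1.0], 2.0)
#     sol = solve(prob, TrustRegion(; radius_update_scheme = RadiusUpdateSchemes.Yuan))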