@@ -241,7 +241,7 @@ print.TMBCalibrator = function(x, ...) {
 #' optimization is started from the currently best parameter set.
 #' Therefore, if you find yourself in a local minimum you might need
 #' to either recreate the model using \code{\link{mp_tmb_calibrator}}
-#' or use `optimzer = "DEoptim`, which is more robust to objective
+#' or use `optimizer = "DEoptim"`, which is more robust to objective
 #' functions with multiple optima.
 #'
 #' @param model A model object capable of being optimized. Typically
@@ -258,16 +258,14 @@ print.TMBCalibrator = function(x, ...) {
 #' ## `nlminb`
 #'
 #' The default optimizer is \code{\link{nlminb}}. This optimizer uses
-#' gradients and Hessians computed by the
-#' [template model builder](https://kaskr.github.io/adcomp/_book/Introduction.html)
+#' gradients computed by the
+#' [Template Model Builder](https://kaskr.github.io/adcomp/_book/Introduction.html)
 #' engine of `macpan2`. This optimizer is efficient at local optimization
-#' by exploiting the Hessian matrix of second derivatives, which gives
-#' the optimizer information about how far to step during each iteration.
-#' However, this optimzer can struggle if the objective function has
-#' multiple optima.
+#' by exploiting the gradient information computed by automatic differentiation.
+#' However, like many nonlinear optimizers, it can struggle if the objective function has multiple optima.
 #'
-#' To set control parameters (e.g., maximum number of iterations), one
-#' may use the following.
+#' To set control parameters (e.g., maximum number of iterations), you
+#' can use the `control` argument:
 #' ```
 #' mp_optimize(model, "nlminb", control = list(iter.max = 800))
 #' ```
@@ -276,7 +274,9 @@ print.TMBCalibrator = function(x, ...) {
 #'
 #' ## `optim`
 #'
-#' The \code{\link{optim}} optimizer does not use second derivatives
+#' The \code{\link{optim}} optimizer lets you choose from a variety
+#' of optimization algorithms. The default, `method = "Nelder-Mead"`,
+#' does not use second derivatives
 #' (compare with the description of \code{\link{nlminb}}), and so
 #' could be less efficient at taking each step. However, we
 #' find that it can be somewhat better at getting out of local optima,
@@ -291,44 +291,41 @@ print.TMBCalibrator = function(x, ...) {
 #' See the \code{\link{optim}} help page for the complete list of
 #' control parameters and what the output means.
 #'
-#' Note that if your model is parameterized by only a single parameter,
-#' you will get a warning asking you to use "Brent" or `optimize()`
-#' directly. You can ignore this warning if you are happy with your
+#' If your model is parameterized by only a single parameter,
+#' you'll get a warning asking you to use `method = "Brent"` or `optimize()`.
+#' You can ignore this warning if you are happy with your
 #' answer, or can do either of the suggested options as follows.
 #' ```
 #' mp_optimize(model, "optim", method = "Brent", lower = 0, upper = 1.2)
-#' mp_optimize(model, "optimize", c(0, 1.2))
+#' mp_optimize(model, "optimize", interval = c(0, 1.2))
 #' ```
-#' Note that we have to specify the upper and lower values, between
-#' which the optimizer searches for the optimum.
+#' In this case you have to specify lower and upper bounds for the optimization.
 #'
 #' ## `DEoptim`
 #'
-#' The `DEoptim` optimizer is a function in the `DEoptim` package, and
-#' so you will need to have this package installed to use this option.
+#' The `DEoptim` optimizer comes from the `DEoptim` package;
+#' you'll need to have that package installed to use this option.
 #' It is designed for objective functions with multiple optima. Use
 #' this method if you do not believe the fit you get by other methods.
-#' The downsides of this method is that it doesn't use gradient or
-#' Hessian information, and so it is likely to be inefficient when
-#' the default starting values are close to the optimum, and it just
-#' generally will be slower because it utilizes multiple starting points
-#' to try to get out of local optima.
+#' The downsides of this method are that it doesn't use gradient
+#' information and evaluates the objective function at many different
+#' points, so it is likely to be much slower than gradient-based optimizers
+#' such as the default `nlminb` optimizer, or `optim` with `method = "BFGS"`.
+#'
+#' Because this optimizer starts from multiple points in the parameter
+#' space, you need to specify lower and upper bounds for each parameter
+#' in the parameter vector.
 #'
-#' Because this optimizer starts from multiple places on the parameter
-#' space, you need to specify upper and lower values for the parameter
-#' vector -- between which different starting values will be chosen.
-#' Here is how that is done.
 #' ```
 #' mp_optimize(model, "DEoptim", lower = c(0, 0), upper = c(1.2, 1.2))
 #' ```
-#' Note that in this example we have two parameters, and therefore need
-#' to specify two `lower` and two `upper` sets of values.
+#' In this example we have two parameters, and therefore need
+#' to specify two values each for `lower` and `upper`.
 #'
 #' ## `optimize` and `optimise`
 #'
-#' This optimizer can only be used for models parameterized with one
-#' parameter, and it is necessary to specify upper and lower values for
-#' this parameter using the following approach.
+#' This optimizer can only be used for models parameterized with a single
+#' parameter. You need to specify lower and upper bounds, e.g.
 #' ```
 #' mp_optimize(model, "optimize", c(0, 1.2))
 #' ```
@@ -351,12 +348,13 @@ print.TMBCalibrator = function(x, ...) {
 #' data = data[data$time > 24, ]
 #' data$time = data$time - 24
 #'
-#' ## time scale object that accounts for the 24-steps of
-#' ## the epidemic that are not captured in the data.
-#' ## in real life we would need to guess at this number 24.
+#' ## time scale object that accounts for the true starting time
+#' ## of the epidemic (in this example we are not trying to estimate
+#' ## the initial number infected, so the starting time strongly
+#' ## affects the fitting procedure)
 #' time = mp_sim_offset(24, 0, "steps")
 #'
-#' ## model calibrator with one transmission parameter to calibrate
+#' ## model calibrator, estimating only the transmission parameter
 #' cal = mp_tmb_calibrator(spec
 #'   , data = data
 #'   , traj = "infection"
@@ -365,13 +363,13 @@ print.TMBCalibrator = function(x, ...) {
 #'   , time = time
 #' )
 #'
-#' ## this takes us into a local optimum at beta = 0.13,
-#' ## which is far away from the true value of beta = 0.6.
+#' ## From the starting point at beta = 1 this takes us into a
+#' ## local optimum at beta = 0.13, far from the true value of beta = 0.6.
 #' mp_optimize(cal)
 #'
-#' ## this gets us out of the optimum and to the true value.
+#' ## In this case, one-dimensional optimization finds the true value.
 #' mp_optimize(cal, "optimize", c(0, 1.2))
-#' 
+#'
 #' @export
 mp_optimize = function(model, optimizer, ...) UseMethod("mp_optimize")
 
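The last code line of the diff defines `mp_optimize` as an S3 generic, so the call `mp_optimize(cal)` dispatches on the class of its first argument. A minimal standalone sketch of that dispatch pattern (the names `mp_optimize_demo` and the stub `TMBCalibrator` object here are illustrative stand-ins, not the package's actual implementation):

```r
## S3 generic: forwards to a method chosen by the class of `model`
mp_optimize_demo = function(model, optimizer = "nlminb", ...) UseMethod("mp_optimize_demo")

## method for a (stubbed) TMBCalibrator class
mp_optimize_demo.TMBCalibrator = function(model, optimizer = "nlminb", ...) {
  paste("optimizing with", optimizer)
}

## an object tagged with the TMBCalibrator class triggers that method
cal = structure(list(), class = "TMBCalibrator")
mp_optimize_demo(cal)            # dispatches to mp_optimize_demo.TMBCalibrator
```

Extra arguments such as `control` or `lower`/`upper` pass through the generic's `...` to whichever optimizer the method ultimately calls.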