docs/src/geometry-zero-finding.md (1 addition, 1 deletion)
@@ -90,7 +90,7 @@ annotate!([(α, 0, "α", :top)])
 p
 ```

-The secant method is implemented in `Secant()`.
+The secant method is implemented in `Secant()`. As the tangent line is the best local approximation to the function near a point, the secant method should be expected to converge at a slower rate than Newton's method.

 Steffensen's method (`Roots.Steffensen()`) is related to the secant method, though the points used are not ``x_n`` and ``x_{n-1}``, but rather ``x_n + f(x_n)`` and ``x_n``. As ``x_n`` gets close to ``\alpha``, ``f(x_n)`` gets close to ``0``, so this method converges at an asymptotic rate like Newton's method. (Though with a tradeoff: the secant method needs only one new function evaluation per step, while Steffensen's requires two.)
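A sketch comparing the two derivative-free methods on the polynomial used elsewhere in these docs (the function `f` here is an illustrative choice, not taken from this diff):

```julia
using Roots

f(x) = x^5 - x + 1/2

# Secant: two starting points, one new function evaluation per step
find_zero(f, (0.5, 0.6), Secant())

# Steffensen: one starting point, two function evaluations per step,
# but Newton-like asymptotic convergence
find_zero(f, 0.6, Roots.Steffensen())
```

Both calls locate the zero near `0.5506`.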
docs/src/index.md (0 additions, 48 deletions)
@@ -49,51 +49,3 @@ specification of a method. These include:
 free. The `X` denotes the number of derivatives that need
 specifying. The `Roots.LithBoonkkampIJzerman{S,D}` methods remember
 `S` steps and use `D` derivatives.
-
-## Basic usage
-
-Consider the polynomial function ``f(x) = x^5 - x + 1/2``. As a polynomial, its roots, or zeros, could be identified with the `roots` function of the `Polynomials` package. However, even that function uses a numeric method to identify the values, as no solution with radicals is available. That is, even for polynomials, non-linear root finders are needed to solve ``f(x)=0``. (Though polynomial root-finders can exploit certain properties not available for general non-linear functions.)
-
-The `Roots` package provides a variety of algorithms for this task. In this quick overview, only the default ones are illustrated.
-
-For the function ``f(x) = x^5 - x + 1/2`` a simple plot will show a zero somewhere between ``-1.2`` and ``-1.0`` and two zeros near ``0.6``.
-
-For the zero between two values at which the function changes sign, a
-bracketing method is useful, as bracketing methods are guaranteed to
-converge for continuous functions by the intermediate value
-theorem. A bracketing algorithm will be used when the initial data is
-The default algorithm is guaranteed to have an answer nearly as accurate as is possible given the limitations of floating point computations.
-
-For the zeros "near" a point, a non-bracketing method is often used, as generally the algorithms are more efficient and can be used in cases where a zero does not cross the ``x`` axis. Passing just an initial guess will dispatch to such a method:
-
-```jldoctest find_zero
-julia> find_zero(f, 0.6) ≈ 0.550606579334135
-true
-```
-
-This finds the answer to the left of the starting point. To get the other nearby zero, a starting point closer to the answer can be used.
-
-However, an initial graph might convince one that any of the up-to-``5`` real roots will occur between ``-5`` and ``5``. The `find_zeros` function uses heuristics and a few of the algorithms to identify all zeros between the specified range. Here the method successfully identifies all ``3``:
docs/src/reference.md (1 addition, 1 deletion)
@@ -306,7 +306,7 @@ Roots.dfree
 ## MATLAB interface

-The initial naming scheme used `fzero` instead of `fzeros`, following the name of the MATLAB function [fzero](https://www.mathworks.com/help/matlab/ref/fzero.html). This interface is not recommended, but, for now, still maintained.
+The initial naming scheme used `fzero` instead of `find_zero`, following the name of the MATLAB function [fzero](https://www.mathworks.com/help/matlab/ref/fzero.html). This interface is not recommended but is, for now, still maintained.
docs/src/roots.md (61 additions, 14 deletions)
@@ -14,7 +14,55 @@ julia> using Roots, ForwardDiff
 ```

-## Bracketing
+## Basic usage
+
+Consider the polynomial function ``f(x) = x^5 - x + 1/2``. As a polynomial, its roots, or zeros, could be identified with the `roots` function of the `Polynomials` package. However, even that function uses a numeric method to identify the values, as no solution with radicals is available. That is, even for polynomials, non-linear root finders are needed to solve ``f(x)=0``. (Though polynomial root-finders can exploit certain properties not available for general non-linear functions.)
+
+The `Roots` package provides a variety of algorithms for this task. In this quick overview, only the default ones are illustrated.
+
+For the function ``f(x) = x^5 - x + 1/2`` a simple plot over ``[-2,2]`` will show a zero somewhere **between** ``-1.5`` and ``-0.5`` and two zeros near ``0.6``. ("Between," as the continuous function has different signs at ``-1.5`` and ``-0.5``.)
+
+For the zero between two values at which the function changes sign, a
+bracketing method is useful, as bracketing methods are guaranteed to
+converge for continuous functions by the intermediate value
+theorem. A bracketing algorithm will be used when the initial data is
+The default algorithm is guaranteed to have an answer nearly as accurate as is possible given the limitations of floating point computations.
+
+For the zeros **near** a point, a non-bracketing method is often used, as generally the algorithms are more efficient and can be used in cases where the function does not cross the ``x`` axis at the zero. Passing just an initial guess will dispatch to such a method:
+
+```jldoctest find_zero
+julia> find_zero(f, 0.6) ≈ 0.550606579334135
+true
+```
+
+This finds the answer to the left of the starting point. To get the other nearby zero, a starting point closer to the answer can be used.
+
+However, an initial graph might convince one that any of the up-to-``5`` real roots will occur between ``-2`` and ``2``. The `find_zeros` function uses heuristics and a few of the algorithms to identify all zeros within the specified range. Here the method successfully identifies all ``3``:
+
+```jldoctest find_zero
+julia> find_zeros(f, -2, 2)
+3-element Vector{Float64}:
+ -1.0983313019186334
+  0.5506065793341349
+  0.7690997031778959
+```
+
+This shows the two main entry points of `Roots`: `find_zero`, to locate a zero between or near values using one of many methods, and `find_zeros`, to heuristically identify all zeros within some interval.
+
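The bracketing entry point can be sketched for the leftmost zero as well (assuming `f` is the polynomial above; the value agrees with the `find_zeros` output):

```julia
using Roots

f(x) = x^5 - x + 1/2

# passing an interval where f changes sign dispatches to a bracketing method
find_zero(f, (-1.5, -0.5))  # the zero near -1.0983313019186334
```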
+## Bracketing methods

 For a function $f$ (univariate, real-valued) a *bracket* is a pair $a < b$
 for which $f(a) \cdot f(b) < 0$. That is the function values have
@@ -159,11 +207,11 @@ julia> rt - pi
 ```

-## Non-bracketing problems
+## Non-bracketing methods

 Bracketing methods have guaranteed convergence, but in general may
 require many more function calls than are otherwise needed to produce
-an answer and not all zeros of a function are bracketed. If a good
+an answer, and not all zeros of a function may be bracketed. If a good
 initial guess is known, then the `find_zero` function provides an
 interface to some different iterative algorithms that are more
 efficient. Unlike bracketing methods, these algorithms may not
@@ -191,16 +239,16 @@ julia> x, f(x)
 ```

-For the polynomial $f(x) = x^3 - 2x - 5$, an initial guess of 2 seems reasonable:
+For the polynomial $f(x) = x^3 - 2x - 5$, an initial guess of $2$ seems reasonable:
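The accompanying doctest is elided in this diff view; a minimal sketch of the call it presumably makes:

```julia
using Roots

f(x) = x^3 - 2x - 5
x = find_zero(f, 2)  # default derivative-free method from an initial guess
(x, f(x))            # x is near 2.0945514815423265, with f(x) essentially 0
```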
-`Order5`, `Order8`, and `Order16`. The order 1 or 2 methods are
+`Order5`, `Order8`, and `Order16`. The order $1$ or $2$ methods are
 generally quite efficient in terms of steps needed over floating point
 values. The even-higher-order ones are potentially useful when more
 precision is used. These algorithms are accessed by specifying the
@@ -263,8 +311,7 @@ julia> x, f(x)
 ```

-Starting at ``2`` the algorithm converges to ``1``, showing that zeros need not be simple zeros to be found.
-A simple zero, $c$, has $f(x) = (x-c) \cdot g(x)$ where $g(c) \neq 0$.
+Starting at ``2`` the algorithm converges towards ``1``, showing that zeros need not be simple zeros to be found. A simple zero, $c$, has $f(x) = (x-c) \cdot g(x)$ where $g(c) \neq 0$.
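The effect of a non-simple zero can be sketched with a hypothetical function (not the example from the surrounding text):

```julia
using Roots

# g(x) = (x-1)·(x-1): the factor left after dividing out (x-1) still
# vanishes at 1, so 1 is a zero of multiplicity 2 and is not simple
g(x) = (x - 1)^2

# derivative-free methods still converge here, but only at a linear
# rate, so many more function calls are needed than for a simple zero
find_zero(g, 2, Order2())
```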
 Generally speaking, non-simple zeros are
 expected to take many more function calls, as the methods are no
 longer super-linear. This is the case here, where `Order2` uses $51$
 The problem-algorithm-solve interface is a pattern popularized in `Julia` by the `DifferentialEquations.jl` suite of packages. The pattern consists of setting up a *problem* then *solving* the problem by specifying an *algorithm*. This is very similar to what is specified in the `find_zero(f, x0, M)` interface where `f` and `x0` specify the problem, `M` the algorithm, and `find_zero` calls the solver.

-To break this up into steps, `Roots` has methods `ZeroProblem` and `init`, `solve`, and `solve!` from the `CommonSolve.jl` package.
+To break this up into steps, `Roots` has the type `ZeroProblem` and methods for `init`, `solve`, and `solve!` from the `CommonSolve.jl` package.

 Consider solving ``\sin(x) = 0`` using the `Secant` method starting with the interval ``[3,4]``.
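A sketch of this pattern, using the `CommonSolve` verbs the text names:

```julia
using Roots

prob = ZeroProblem(sin, (3, 4))  # the problem: a function plus initial points
solve(prob, Secant())            # the algorithm is passed to `solve`; result is near π
```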
@@ -704,7 +751,7 @@ julia> x, f(x)
 Finally, for many functions, all of these methods need a good initial
 guess. For example, the polynomial function $f(x) = x^5 - x - 1$ has
-its one zero near $1.16$.
+its one real zero near $1.16$.

 If we start far from the zero, convergence may happen, but it isn't