
Commit 0ca2344: slight doc reorganization (#420)

* slight doc reorganization
* add typos

1 parent aa1446d

File tree: 9 files changed (+79, -72 lines)

_typos.toml

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+[default.extend-words]
+abd = "abd"

docs/make.jl

Lines changed: 7 additions & 1 deletion
@@ -8,7 +8,13 @@ DocMeta.setdocmeta!(Roots, :DocTestSetup, :(using Roots); recursive=true)
 makedocs(
     sitename = "Roots",
     format = Documenter.HTML(ansicolor=true),
-    modules = [Roots]
+    modules = [Roots],
+    pages=[
+        "Home" => "index.md",
+        "Overview" => "roots.md",
+        "Reference/API" => "reference.md",
+        "Geometry" => "geometry-zero-finding.md"
+    ]
 )

 deploydocs(

docs/src/geometry-zero-finding.md

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ annotate!([(α, 0, "α", :top)])
 p
 ```

-The secant method is implemented in `Secant()`.
+The secant method is implemented in `Secant()`. As the tangent line is the best local approximation to the function near a point, it should be expected that the secant method (which only approximates the tangent line) converges at a slower rate than Newton's method.

 Steffensen's method (`Roots.Steffensen()`) is related to the secant method, though the points are not ``x_n`` and ``x_{n-1}``, rather ``x_n + f(x_n)`` and ``x_n``. As ``x_n`` gets close to ``\alpha``, ``f(x_n)`` gets close to ``0``, so this method converges at an asymptotic rate like Newton's method. (Though with a tradeoff: while the secant method needs only one new function evaluation per step, Steffensen's requires two.)
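As a sketch of the tradeoff described above, both methods can be passed explicitly to `find_zero` (the test function is the one used elsewhere in these docs):

```julia
using Roots

f(x) = x^5 - x + 1/2

# Secant: one new function evaluation per step, superlinear (order ≈ 1.6) convergence
x_secant = find_zero(f, 0.6, Roots.Secant())

# Steffensen: two function evaluations per step, quadratic convergence near the zero
x_steffensen = find_zero(f, 0.6, Roots.Steffensen())

x_secant ≈ x_steffensen   # both locate the zero near 0.5506
```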

docs/src/index.md

Lines changed: 0 additions & 48 deletions
@@ -49,51 +49,3 @@ specification of a method. These include:
 free. The `X` denotes the number of derivatives that need
 specifying. The `Roots.LithBoonkkampIJzerman{S,D}` methods remember
 `S` steps and use `D` derivatives.
-
-
-
-## Basic usage
-
-Consider the polynomial function ``f(x) = x^5 - x + 1/2``. As a polynomial, its roots, or zeros, could be identified with the `roots` function of the `Polynomials` package. However, even that function uses a numeric method to identify the values, as no solution with radicals is available. That is, even for polynomials, non-linear root finders are needed to solve ``f(x)=0``. (Though polynomial root-finders can exploit certain properties not available for general non-linear functions.)
-
-The `Roots` package provides a variety of algorithms for this task. In this quick overview, only the default ones are illustrated.
-
-For the function ``f(x) = x^5 - x + 1/2`` a simple plot will show a zero somewhere between ``-1.2`` and ``-1.0`` and two zeros near ``0.6``.
-
-For the zero between two values at which the function changes sign, a
-bracketing method is useful, as bracketing methods are guaranteed to
-converge for continuous functions by the intermediate value
-theorem. A bracketing algorithm will be used when the initial data is
-passed as a tuple:
-
-```jldoctest find_zero
-julia> using Roots
-
-julia> f(x) = x^5 - x + 1/2
-f (generic function with 1 method)
-
-julia> find_zero(f, (-1.2, -1)) ≈ -1.0983313019186336
-true
-```
-
-The default algorithm is guaranteed to have an answer nearly as accurate as is possible given the limitations of floating point computations.
-
-For the zeros "near" a point, a non-bracketing method is often used, as generally the algorithms are more efficient and can be used in cases where a zero does not cross the ``x`` axis. Passing just an initial guess will dispatch to such a method:
-
-```jldoctest find_zero
-julia> find_zero(f, 0.6) ≈ 0.550606579334135
-true
-```
-
-
-This finds the answer to the left of the starting point. To get the other nearby zero, a starting point closer to the answer can be used.
-
-However, an initial graph might convince one that any of the up-to-``5`` real roots will occur between ``-5`` and ``5``. The `find_zeros` function uses heuristics and a few of the algorithms to identify all zeros between the specified range. Here the method successfully identifies all ``3``:
-
-```jldoctest find_zero
-julia> find_zeros(f, -5, 5)
-3-element Vector{Float64}:
- -1.0983313019186334
-  0.550606579334135
-  0.7690997031778959
-```

docs/src/reference.md

Lines changed: 1 addition & 1 deletion
@@ -306,7 +306,7 @@ Roots.dfree

 ## MATLAB interface

-The initial naming scheme used `fzero` instead of `fzeros`, following the name of the MATLAB function [fzero](https://www.mathworks.com/help/matlab/ref/fzero.html). This interface is not recommended, but, for now, still maintained.
+The initial naming scheme used `fzero` instead of `find_zero`, following the name of the MATLAB function [fzero](https://www.mathworks.com/help/matlab/ref/fzero.html). This interface is not recommended, but, for now, still maintained.

 ```@docs
 fzero
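As a sketch of the two spellings contrasted above (both `fzero` and `find_zero` are exported by `Roots`):

```julia
using Roots

f(x) = x^3 - 2x - 5

fzero(f, 2)       # MATLAB-style spelling, kept for compatibility
find_zero(f, 2)   # recommended interface; both return ≈ 2.0945514815423265
```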

docs/src/roots.md

Lines changed: 61 additions & 14 deletions
@@ -14,7 +14,55 @@ julia> using Roots, ForwardDiff

 ```

-## Bracketing
+## Basic usage
+
+Consider the polynomial function ``f(x) = x^5 - x + 1/2``. As a polynomial, its roots, or zeros, could be identified with the `roots` function of the `Polynomials` package. However, even that function uses a numeric method to identify the values, as no solution with radicals is available. That is, even for polynomials, non-linear root finders are needed to solve ``f(x)=0``. (Though polynomial root-finders can exploit certain properties not available for general non-linear functions.)
+
+The `Roots` package provides a variety of algorithms for this task. In this quick overview, only the default ones are illustrated.
+
+For the function ``f(x) = x^5 - x + 1/2`` a simple plot over ``[-2,2]`` will show a zero somewhere **between** ``-1.5`` and ``-0.5`` and two zeros near ``0.6``. ("Between", as the continuous function has different signs at ``-1.5`` and ``-0.5``.)
+
+For the zero between two values at which the function changes sign, a
+bracketing method is useful, as bracketing methods are guaranteed to
+converge for continuous functions by the intermediate value
+theorem. A bracketing algorithm will be used when the initial data is
+passed as a tuple:
+
+```jldoctest find_zero
+julia> using Roots
+
+julia> f(x) = x^5 - x + 1/2
+f (generic function with 1 method)
+
+julia> find_zero(f, (-3/2, -1/2)) ≈ -1.0983313019186336
+true
+```
+
+The default algorithm is guaranteed to have an answer nearly as accurate as is possible given the limitations of floating point computations.
+
+For the zeros **near** a point, a non-bracketing method is often used, as generally the algorithms are more efficient and can be used in cases where a zero does not cross the ``x`` axis. Passing just an initial guess will dispatch to such a method:
+
+```jldoctest find_zero
+julia> find_zero(f, 0.6) ≈ 0.550606579334135
+true
+```
+
+
+This finds the answer to the left of the starting point. To get the other nearby zero, a starting point closer to the answer can be used.
+
+However, an initial graph might convince one that any of the up-to-``5`` real roots will occur between ``-2`` and ``2``. The `find_zeros` function uses heuristics and a few of the algorithms to identify all zeros between the specified range. Here the method successfully identifies all ``3``:
+
+```jldoctest find_zero
+julia> find_zeros(f, -2, 2)
+3-element Vector{Float64}:
+ -1.0983313019186334
+  0.5506065793341349
+  0.7690997031778959
+```
+
+This shows the two main entry points of `Roots`: `find_zero` to locate a zero between or near values using one of many methods and `find_zeros` to heuristically identify all zeros within some interval.
+
+## Bracketing methods

 For a function $f$ (univariate, real-valued) a *bracket* is a pair $ a < b $
 for which $f(a) \cdot f(b) < 0$. That is the function values have
@@ -159,11 +207,11 @@ julia> rt - pi

 ```

-## Non-bracketing problems
+## Non-bracketing methods

 Bracketing methods have guaranteed convergence, but in general may
 require many more function calls than are otherwise needed to produce
-an answer and not all zeros of a function are bracketed. If a good
+an answer and not all zeros of a function may be bracketed. If a good
 initial guess is known, then the `find_zero` function provides an
 interface to some different iterative algorithms that are more
 efficient. Unlike bracketing methods, these algorithms may not
@@ -191,16 +239,16 @@ julia> x, f(x)

 ```

-For the polynomial $f(x) = x^3 - 2x - 5$, an initial guess of 2 seems reasonable:
+For the polynomial $f(x) = x^3 - 2x - 5$, an initial guess of $2$ seems reasonable:

 ```jldoctest roots
 julia> f(x) = x^3 - 2x - 5;

 julia> x = find_zero(f, 2)
 2.0945514815423265

-julia> x, f(x), sign(f(prevfloat(x)) * f(nextfloat(x)))
-(2.0945514815423265, -8.881784197001252e-16, -1.0)
+julia> f(x), sign(f(prevfloat(x)) * f(x)), sign(f(x) * f(nextfloat(x)))
+(-8.881784197001252e-16, 1.0, -1.0)

 ```
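The replaced doctest line checks for a sign change of `f` across the floating-point neighbors of the returned value; the same check written as a small stand-alone helper (the helper name is illustrative, not part of `Roots`):

```julia
using Roots

# Does f change sign just to the left or just to the right of a computed answer x?
# (Illustrative helper; it is not part of Roots.)
function straddles_zero(f, x)
    left  = sign(f(prevfloat(x)) * f(x)) < 0
    right = sign(f(x) * f(nextfloat(x))) < 0
    (; left, right)
end

g(x) = x^3 - 2x - 5
x = find_zero(g, 2)
straddles_zero(g, x)   # (left = false, right = true), matching the doctest output above
```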

@@ -215,15 +263,15 @@ julia> x, sin(x), x - pi

 ```

-### Higher order methods
+### Higher-order methods

 The default call to `find_zero` uses a first order method and then
 possibly bracketing, which potentially involves more function
 calls than necessary. There may be times where a more efficient algorithm is sought.
 For such, a higher-order method might be better suited. There are
 algorithms `Order1` (secant method), `Order2`
 ([Steffensen](http://en.wikipedia.org/wiki/Steffensen's_method)),
-`Order5`, `Order8`, and `Order16`. The order 1 or 2 methods are
+`Order5`, `Order8`, and `Order16`. The order $1$ or $2$ methods are
 generally quite efficient in terms of steps needed over floating point
 values. The even-higher-order ones are potentially useful when more
 precision is used. These algorithms are accessed by specifying the
@@ -263,8 +311,7 @@ julia> x, f(x)

 ```

-Starting at ``2`` the algorithm converges to ``1``, showing that zeros need not be simple zeros to be found.
-A simple zero, $c$, has $f(x) = (x-c) \cdot g(x)$ where $g(c) \neq 0$.
+Starting at ``2`` the algorithm converges towards ``1``, showing that zeros need not be simple zeros to be found. A simple zero, $c,$ has $f(x) = (x-c) \cdot g(x)$ where $g(c) \neq 0$.
 Generally speaking, non-simple zeros are
 expected to take many more function calls, as the methods are no
 longer super-linear. This is the case here, where `Order2` uses $51$
@@ -370,7 +417,7 @@ julia> find_zero(dfᵏs(f, 2), 2, Roots.LithBoonkkampIJzerman(2,2)) # like Halle

 The problem-algorithm-solve interface is a pattern popularized in `Julia` by the `DifferentialEquations.jl` suite of packages. The pattern consists of setting up a *problem* then *solving* the problem by specifying an *algorithm*. This is very similar to what is specified in the `find_zero(f, x0, M)` interface where `f` and `x0` specify the problem, `M` the algorithm, and `find_zero` calls the solver.

-To break this up into steps, `Roots` has methods `ZeroProblem` and `init`, `solve`, and `solve!` from the `CommonSolve.jl` package.
+To break this up into steps, `Roots` has the type `ZeroProblem` and methods for `init`, `solve`, and `solve!` from the `CommonSolve.jl` package.

 Consider solving ``\sin(x) = 0`` using the `Secant` method starting with the interval ``[3,4]``.
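A sketch of that problem-solve workflow with the names mentioned above (`ZeroProblem`, `init`, `solve`, `solve!`); the bracket `(3, 4)` comes from the sentence above:

```julia
using Roots

Z = ZeroProblem(sin, (3, 4))      # the problem: function and initial bracket

solve(Z, Roots.Secant())          # one-shot solve with a chosen algorithm; ≈ π

it = init(Z, Roots.Secant())      # or build an iterator ...
solve!(it)                        # ... and solve it in place; ≈ π
```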

@@ -704,7 +751,7 @@ julia> x, f(x)

 Finally, for many functions, all of these methods need a good initial
 guess. For example, the polynomial function $f(x) = x^5 - x - 1$ has
-its one zero near $1.16$.
+its one real zero near $1.16$.

 If we start far from the zero, convergence may happen, but it isn't
 guaranteed:
@@ -740,7 +787,7 @@ savefig("newton.svg"); nothing # hide

 ![](newton.svg)

-Even with real graphics, only a few of the steps are discernible, as the function's relative maximum
+Graphically only a few of the steps are discernible, as the function's relative maximum
 causes a trap for this algorithm. Starting to the right of the
 relative minimum--nearer the zero--would avoid this trap. The default
 method employs a trick to bounce out of such traps, though it doesn't
@@ -854,7 +901,7 @@ julia> findall(iszero, fs)


 For `f1(x)` there is only one zero, but it isn't the floating
-point value for `1/3` but rather 10 floating point numbers
+point value for `1/3` but rather $10$ floating point numbers
 away.

src/Bracketing/alefeld_potra_shi.jl

Lines changed: 2 additions & 2 deletions
@@ -218,7 +218,7 @@ end
 """
     Roots.AlefeldPotraShi()

-Follows algorithm 4.1 in "ON ENCLOSING SIMPLE ROOTS OF NONLINEAR
+Follows Algorithm 4.1 in "ON ENCLOSING SIMPLE ROOTS OF NONLINEAR
 EQUATIONS", by Alefeld, Potra, Shi; DOI:
 [10.1090/S0025-5718-1993-1192965-2](https://doi.org/10.1090/S0025-5718-1993-1192965-2).

@@ -279,7 +279,7 @@ end

 Bracketing method which finds the root of a continuous function within
 a provided bracketing interval `[a, b]`, without requiring derivatives. It is based
-on algorithm 4.2 described in: G. E. Alefeld, F. A. Potra, and
+on Algorithm 4.2 described in: G. E. Alefeld, F. A. Potra, and
 Y. Shi, "Algorithm 748: enclosing zeros of continuous functions," ACM
 Trans. Math. Softw. 21, 327–344 (1995), DOI: [10.1145/210089.210111](https://doi.org/10.1145/210089.210111).
 The asymptotic efficiency index, ``q^{1/k}``, is ``(2 + 7^{1/2})^{1/3} = 1.6686...``.
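Both docstrings describe bracketing methods that can be passed to `find_zero` along with a bracketing interval; a brief sketch (the function and bracket are chosen only for illustration):

```julia
using Roots

f(x) = exp(x) - x^4   # changes sign on the bracket (8, 9)

find_zero(f, (8, 9), Roots.A42())              # Algorithm 748 variant described above
find_zero(f, (8, 9), Roots.AlefeldPotraShi())  # Algorithm 4.1 variant described above
```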

src/Derivative/halley_like.jl

Lines changed: 4 additions & 4 deletions
@@ -59,8 +59,8 @@ end
 """
     Roots.Halley()

-Implements Halley's [method](https://en.wikipedia.org/wiki/Halley%27s_method), `xᵢ₊₁ = xᵢ
-- (f/f')(xᵢ) * (1 - (f/f')(xᵢ) * (f''/f')(xᵢ) * 1/2)^(-1)` This method
+Implements Halley's [method](https://en.wikipedia.org/wiki/Halley%27s_method),
+`xᵢ₊₁ = xᵢ - (f/f')(xᵢ) * (1 - (f/f')(xᵢ) * (f''/f')(xᵢ) * 1/2)^(-1)` This method
 is cubically converging, it requires ``3`` function calls per
 step. Halley's method finds `xₙ₊₁` as the zero of a hyperbola at the
 point `(xₙ, f(xₙ))` matching the first and second derivatives of `f`.
@@ -131,12 +131,12 @@ The error, `eᵢ = xᵢ - α`, [satisfies](https://dl.acm.org/doi/10.1080/002071
 struct QuadraticInverse <: AbstractΔMethod end

 """
-CHEBYSHEV-LIKE METHODS AND QUADRATIC EQUATIONS (J. A. EZQUERRO, J. M. GUTIÉRREZ, M. A. HERNÁNDEZ and M. A. SALANOVA)
+Chebyshev-like methods and quadratic equations (J. A. Ezquerro, J. M. Gutiérrez, M. A. Hernández and M. A. Salanova)
 """
 struct ChebyshevLike <: AbstractΔMethod end

 """
-An acceleration of Newton's method: Super-Halley method (J.M. Gutierrez, M.A. Hernandez
+An acceleration of Newton's method: Super-Halley method (J.M. Gutierrez, M.A. Hernandez)
 """
 struct SuperHalley <: AbstractΔMethod end
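The `Roots.Halley()` docstring above requires the function and its first two derivatives; a minimal sketch of a call:

```julia
using Roots

f(x)   = x^3 - 2x - 5
f′(x)  = 3x^2 - 2
f′′(x) = 6x

# Halley's method takes f and its first two derivatives as a tuple
find_zero((f, f′, f′′), 2, Roots.Halley())   # ≈ 2.0945514815423265
```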

src/alternative_interfaces.jl

Lines changed: 1 addition & 1 deletion
@@ -249,7 +249,7 @@ end
     fzeros(f, a, b; kwargs...)
     fzeros(f, ab; kwargs...)

-Searches for all zeros of `f` within an interval `(a,b)`. Assume neither `a` or `b` is a zero.
+Searches for all zeros of `f` within an interval `(a,b)`. Assumes neither `a` nor `b` is a zero.

 Compatibility interface for [`find_zeros`](@ref).
 """
