
Commit 27b87a9

haampie authored and andreasnoack committed
Remove FactCheck, use Base.Test and simplify tests (#150)
* Make sure CG gets the correct types
* Use Base.Test in bicgstab
* Use Base.Test in cg
* Simplify cg
* Chebyshev to Base.Test and simplified tests
* Remove LinearMaps tests (basically verifying LinearMaps internals) and move to Base.Test
* Simplify GMRES tests and move partially to Base.Test
* Use new API for LinearMap
* Move factorization to Base.Test
* Use Base.Test in Hessenberg test
* Simplify tests and use Base.Test
* Indentation and move initialization to the start of the test
* Use Base.Test in lsmr
* Use Base.Test in orthogonalization tests
* Use Base.Test in rlinalg
* Use Base.Test in rsvd and use broadcasting
* Use Base.Test in rsvd_fnkz
* Use Base.Test in simple eigensolvers and update the API for LinearMaps
* Simplify stationary solvers tests and use Base.Test
* Update the Lanczos test
* Remove FactCheck-related things
* Remove FactCheck dependency
* Simplify ls__ tests
* Suppress broadcast warning
* Use broadcasting properly
* Use Base.Test in minres
* Remove empty tests in history smoke test
* Move matrix stuff to benchmark folder
* Reduce dependencies in test REQUIRE file
* Test preconditioners with UmfpackLU on Julia 0.6+ only, since Julia 0.5 does not have support for A_ldiv_B!(::UmfpackLU, ::SubArray)
* Fix builds on Julia nightly by escaping strings properly
* Support 0.5
* Escape even more strings
* Remove LinearMaps where it isn't used
* Remove checking for 0.5
* Use latest API of LinearMaps
* Use I over (sp)eye
* Actually run the orthogonalization test
* Simplify orth. test
* Fix one() in CG
1 parent 968d3a2 commit 27b87a9

30 files changed: 528 additions, 825 deletions
File renamed without changes.
File renamed without changes.

src/cg.jl

Lines changed: 2 additions & 4 deletions
```diff
@@ -118,17 +118,15 @@ function cg_iterator!(x, A, b, Pl = Identity();
         reltol = norm(b) * tol
     end
 
-    # Stopping criterion
-    ρ = one(residual)
-
     # Return the iterable
     if isa(Pl, Identity)
         return CGIterable(A, x, b,
             r, c, u,
-            reltol, residual, ρ,
+            reltol, residual, one(residual),
             maxiter, mv_products
         )
     else
+        ρ = one(eltype(r))
         return PCGIterable(Pl, A, x, b,
             r, c, u,
             reltol, residual, ρ,
```
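The "Fix one() in CG" item in the commit message comes down to `one` of a value versus `one` of a type: the residual norm is always real, so `one(residual)` yields a real unit even when the iterate is complex, whereas `one(eltype(r))` matches the iterate's element type. A minimal sketch of the distinction (the variables are hypothetical stand-ins, not the package's actual state):

```julia
# Hypothetical stand-ins for the CG iterate `r` and its norm `residual`.
r = ComplexF64[1.0 + 2.0im, 3.0 - 1.0im]  # complex iterate (`Complex128` on Julia 0.5/0.6)
residual = sqrt(sum(abs2, r))             # norms are real, so this is a Float64

ρ_real    = one(residual)   # 1.0 — a real unit, wrong type for a complex recurrence
ρ_complex = one(eltype(r))  # 1.0 + 0.0im — a unit of the iterate's element type
```

Seeding the preconditioned recurrence with `one(eltype(r))` keeps `ρ` type-stable when `A` and `b` are complex.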

src/factorization.jl

Lines changed: 5 additions & 5 deletions
````diff
@@ -12,7 +12,7 @@ and `P` computed by `idfact()`. See the documentation of `idfact()` for details.
 
 # References
 
-\cite{Cheng2005, Liberty2007}
+\\cite{Cheng2005, Liberty2007}
 """
 immutable Interpolative{T} <: Factorization{T}
     B :: AbstractMatrix{T}
@@ -43,13 +43,13 @@ Where:
 
 # Implementation note
 
-This is a hacky version of the algorithms described in \cite{Liberty2007}
-and \cite{Cheng2005}. The former refers to the factorization (3.1) of the
+This is a hacky version of the algorithms described in \\cite{Liberty2007}
+and \\cite{Cheng2005}. The former refers to the factorization (3.1) of the
 latter. However, it is not actually necessary to compute this
 factorization in its entirely to compute an interpolative decomposition.
 
 Instead, it suffices to find some permutation of the first k columns of Y =
-R * A, extract the subset of A into B, then compute the P matrix as B\A
+R * A, extract the subset of A into B, then compute the P matrix as B\\A
 which will automatically compute P using a suitable least-squares
 algorithm.
@@ -59,7 +59,7 @@ pivoted QR process.
 
 # References
 
-\cite[Algorithm I]{Liberty2007}
+\\cite[Algorithm I]{Liberty2007}
 
 ```bibtex
 @article{Cheng2005,
````
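The `\cite` → `\\cite` changes in this and the following files exist because a lone backslash begins an escape sequence inside a Julia string literal (docstrings included), and sequences like `\c` are invalid — a deprecation warning on Julia 0.6 and an error on nightly, which is what the "escaping strings" bullets in the commit message address. A small sketch of the behavior:

```julia
# In a Julia string literal, `\\` collapses to a single backslash;
# an unescaped `\c` would be an invalid escape sequence.
s = "\\cite{Halko2011}"
length(s)      # 16 characters: the two source backslashes are one `\` in the string
s[1] == '\\'   # true — the string really starts with a single backslash
```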

src/rlinalg.jl

Lines changed: 9 additions & 9 deletions
````diff
@@ -86,7 +86,7 @@ see [`rnorms`](@ref) for a different estimator that uses premultiplying by both
 
 # References
 
-\cite[Lemma 4.1]{Halko2011}
+\\cite[Lemma 4.1]{Halko2011}
 """
 function rnorm(A, r::Int, p::Real=0.05)
     @assert 0<p≤1
@@ -113,7 +113,7 @@ bound on the true norm by a factor
 
 ρ ≤ α ‖A‖
 
-with probability greater than `1 - p`, where `p = 4\sqrt(n/(iters-1)) α^(-2iters)`.
+with probability greater than `1 - p`, where `p = 4\\sqrt(n/(iters-1)) α^(-2iters)`.
 
 # Arguments
@@ -138,7 +138,7 @@ premultiplying by `A'`
 
 # References
 
-Appendix of \cite{Liberty2007}.
+Appendix of \\cite{Liberty2007}.
 
 ```bibtex
 @article{Liberty2007,
@@ -175,7 +175,7 @@ Estimate matrix condition number randomly.
 # Arguments
 
 `A`: matrix whose condition number to estimate. Must be square and
-support premultiply (`A*⋅`) and solve (`A\⋅`).
+support premultiply (`A*⋅`) and solve (`A\\⋅`).
 
 `iters::Int = 1`: number of power iterations to run.
@@ -189,7 +189,7 @@ Interval `(x, y)` which contains `κ(A)` with probability `1 - p`.
 
 # Implementation note
 
-\cite{Dixon1983} originally describes this as a computation that
+\\cite{Dixon1983} originally describes this as a computation that
 can be done by computing the necessary number of power iterations given p
 and the desired accuracy parameter `θ=y/x`. However, these bounds were only
 derived under the assumptions of exact arithmetic. Empirically, `iters≥4` has
@@ -200,7 +200,7 @@ parameter and hence the interval containing `κ(A)`.
 
 # References
 
-\cite[Theorem 2]{Dixon1983}
+\\cite[Theorem 2]{Dixon1983}
 
 ```bibtex
 @article{Dixon1983,
@@ -256,7 +256,7 @@ probability `1 - p`.
 
 # References
 
-\cite[Corollary of Theorem 1]{Dixon1983}.
+\\cite[Corollary of Theorem 1]{Dixon1983}.
 """
 function reigmax(A, k::Int=1, p::Real=0.05)
     @assert 0<p≤1
@@ -294,7 +294,7 @@ probability `1 - p`.
 
 # References
 
-\cite[Corollary of Theorem 1]{Dixon1983}.
+\\cite[Corollary of Theorem 1]{Dixon1983}.
 """
 function reigmin(A, k::Int=1, p::Real=0.05)
     @assert 0<p≤1
@@ -345,7 +345,7 @@ Apply a subsampled random Fourier transform to the columns of `A`.
 
 # References
 
-\[Equation 4.6]{Halko2011}
+\\[Equation 4.6]{Halko2011}
 """
 #Define two methods here to avoid method ambiguity with f::Function*b::Any
 *(A::Function, Ω::srft) = function *(A, Ω::srft)
````

src/rsvd.jl

Lines changed: 17 additions & 17 deletions
```diff
@@ -40,9 +40,9 @@ largest.
 
 This function calls `rrange`, which uses naive randomized rangefinding to
 compute a basis for a subspace of dimension `n` (Algorithm 4.1 of
-\cite{Halko2011}), followed by `svdfact_restricted()`, which computes the
+\\cite{Halko2011}), followed by `svdfact_restricted()`, which computes the
 exact SVD factorization on the restriction of `A` to this randomly selected
-subspace (Algorithm 5.1 of \cite{Halko2011}).
+subspace (Algorithm 5.1 of \\cite{Halko2011}).
 
 Alternatively, you can mix and match your own randomized algorithm using
 any of the randomized range finding algorithms to find a suitable subspace
@@ -100,9 +100,9 @@ largest.
 
 This function calls `rrange`, which uses naive randomized rangefinding to
 compute a basis for a subspace of dimension `n` (Algorithm 4.1 of
-\cite{Halko2011}), followed by `svdfact_restricted()`, which computes the
+\\cite{Halko2011}), followed by `svdfact_restricted()`, which computes the
 exact SVD factorization on the restriction of `A` to this randomly selected
-subspace (Algorithm 5.1 of \cite{Halko2011}).
+subspace (Algorithm 5.1 of \\cite{Halko2011}).
 
 Alternatively, you can mix and match your own randomized algorithm using
 any of the randomized range finding algorithms to find a suitable subspace
@@ -153,7 +153,7 @@ The Reference explicitly discourages using this algorithm.
 
 # Implementation note
 
-Whereas \cite{Halko2011} recommends classical Gram-Schmidt with double
+Whereas \\cite{Halko2011} recommends classical Gram-Schmidt with double
 reorthogonalization, we instead compute the basis with `qrfact()`, which
 for dense `A` computes the QR factorization using Householder reflectors.
 """
@@ -198,7 +198,7 @@ vectors of the computed subspace of `A`.
 
 # References
 
-Algorithm 4.2 of \cite{Halko2011}
+Algorithm 4.2 of \\cite{Halko2011}
 """
 function rrange_adaptive(A, r::Integer, ϵ::Real=eps(); maxiter::Int=10)
     m, n = size(A)
@@ -265,7 +265,7 @@ for dense A computes the QR factorization using Householder reflectors.
 
 # References
 
-Algorithm 4.4 of \cite{Halko2011}
+Algorithm 4.4 of \\cite{Halko2011}
 """
 function rrange_si(A, l::Int; At=A', q::Int=0)
     basis=x->full(qrfact(x)[:Q])
@@ -312,7 +312,7 @@ for dense `A` computes the QR factorization using Householder reflectors.
 
 # References
 
-Algorithm 4.5 of \cite{Halko2011}
+Algorithm 4.5 of \\cite{Halko2011}
 """
 function rrange_f(A, l::Int)
     n = size(A, 2)
@@ -340,7 +340,7 @@ desired.
 
 # References
 
-Algorithm 5.1 of \cite{Halko2011}
+Algorithm 5.1 of \\cite{Halko2011}
 """
 function svdfact_restricted(A, Q, n::Int)
     B=Q'A
@@ -367,7 +367,7 @@ desired.
 
 # References
 
-Algorithm 5.1 of \cite{Halko2011}
+Algorithm 5.1 of \\cite{Halko2011}
 """
 function svdvals_restricted(A, Q, n::Int)
     B=Q'A
@@ -380,7 +380,7 @@ end
 Compute the SVD factorization of `A` restricted to the subspace spanned by `Q`
 using row extraction.
 
-*Note:* \cite[Remark 5.2]{Halko2011} recommends input of `Q` of the form `Q=A*Ω`
+*Note:* \\cite[Remark 5.2]{Halko2011} recommends input of `Q` of the form `Q=A*Ω`
 where `Ω` is a sample computed by `randn(n,l)` or even `srft(l)`.
 
 # Arguments
@@ -401,7 +401,7 @@ interpolative decomposition `idfact`.
 
 # References
 
-Algorithm 5.2 of \cite{Halko2011}
+Algorithm 5.2 of \\cite{Halko2011}
 """
 function svdfact_re(A, Q)
     F = idfact(Q)
@@ -431,7 +431,7 @@ restriction to is desired.
 
 # References
 
-Algorithm 5.3 of \cite{Halko2011}
+Algorithm 5.3 of \\cite{Halko2011}
 """
 function eigfact_restricted(A::Hermitian, Q)
     B = Q'A*Q
@@ -445,7 +445,7 @@ end
 Compute the spectral (`Eigen`) factorization of `A` restricted to the subspace
 spanned by `Q` using row extraction.
 
-*Note:* \cite[Remark 5.2]{Halko2011} recommends input of `Q` of the form `Q=A*Ω`
+*Note:* \\cite[Remark 5.2]{Halko2011} recommends input of `Q` of the form `Q=A*Ω`
 where `Ω` is a sample computed by `randn(n,l)` or even `srft(l)`.
 
 # Arguments
@@ -466,7 +466,7 @@ interpolative decomposition `idfact()`.
 
 # References
 
-Algorithm 5.4 of \cite{Halko2011}
+Algorithm 5.4 of \\cite{Halko2011}
 """
 function eigfact_re(A::Hermitian, Q)
     X, J = idfact(Q)
@@ -501,7 +501,7 @@ that can be Cholesky decomposed.
 
 # References
 
-Algorithm 5.5 of \cite{Halko2011}
+Algorithm 5.5 of \\cite{Halko2011}
 """
 function eigfact_nystrom(A, Q)
     B₁=A*Q
@@ -534,7 +534,7 @@ product involving `A`.
 
 # References
 
-Algorithm 5.6 of \cite{Halko2011}
+Algorithm 5.6 of \\cite{Halko2011}
 """
 function eigfact_onepass(A::Hermitian, Ω)
     Y=A*Ω; Q = full(qrfact!(Y)[:Q])
```

src/rsvd_fnkz.jl

Lines changed: 3 additions & 3 deletions
```diff
@@ -75,7 +75,7 @@ function rsvd_fnkz(A, k::Int;
         A[π, :]
     end
     X, RRR = qr(B₀)
-    X = X[:, abs(diag(RRR)) .> ϵ] #Remove linear dependent columns
+    X = X[:, abs.(diag(RRR)) .> ϵ] #Remove linear dependent columns
     B = tallandskinny ? X A'X : X X'A
 
@@ -86,7 +86,7 @@ function rsvd_fnkz(A, k::Int;
         π = randperm(k)[1:l]
         #Update B using Theorem 2.4
         X, RRR = qr([B.X A[:, π]]) #We are doing more work than needed here
-        X = X[:, abs(diag(RRR)) .> ϵ] #Remove linearly dependent columns
+        X = X[:, abs.(diag(RRR)) .> ϵ] #Remove linearly dependent columns
         Y = A'X
         if dosvd
             S = svdfact!(Y)
@@ -111,5 +111,5 @@ function rsvd_fnkz(A, k::Int;
     for i=1:size(B.Y, 2)
         scale!(view(B.Y, :, i), 1/√Λ[i])
     end
-    Base.LinAlg.SVD(B.X, Λ, B.Y')
+    Base.LinAlg.SVD(B.X, sqrt.(Λ), B.Y')
 end
```
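Two separate fixes are visible in this file: `abs.(diag(RRR))` uses dot-broadcasting, since the vectorized form `abs(::Vector)` was deprecated around Julia 0.6, and the final line now takes `sqrt.(Λ)` so the `SVD` object is built from singular values rather than the eigenvalues of the Gram matrix. A sketch with made-up values:

```julia
# Dot-broadcasting applies a scalar function elementwise.
d = [-3.0, 0.5, 4.0]
kept = d[abs.(d) .> 1.0]   # keeps entries with magnitude above 1: [-3.0, 4.0]

# Eigenvalues of A'A are squared singular values, hence the sqrt.(Λ) fix.
Λ = [9.0, 4.0]             # hypothetical eigenvalues
σ = sqrt.(Λ)               # singular values [3.0, 2.0]
```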

src/svdl.jl

Lines changed: 3 additions & 3 deletions
````diff
@@ -83,7 +83,7 @@ end
     svdl(A)
 
 Compute some singular values (and optionally vectors) using Golub-Kahan-Lanczos
-bidiagonalization \cite{Golub1965} with thick restarting \cite{Wu2000}.
+bidiagonalization \\cite{Golub1965} with thick restarting \\cite{Wu2000}.
 
 If `log` is set to `true` is given, method will output a tuple `X, L, ch`. Where
 `ch` is a `ConvergenceHistory` object. Otherwise it will only return `X, L`.
@@ -346,7 +346,7 @@ function isconverged(L::PartialFactorization, F::Base.LinAlg.SVD, k::Int, tol::R
     @assert tol ≥ 0
 
     σ = F[:S][1:k]
-    Δσ= L.β*abs(F[:U][end, 1:k])
+    Δσ= L.β * abs.(F[:U][end, 1 : k])
 
     #Best available eigenvalue bounds
     δσ = copy(Δσ)
@@ -577,7 +577,7 @@ which case it will be necessary to orthogonalize both sets of vectors. See
 
 ```bibtex
 @book{Bjorck2015,
-    author = {Bj{\"{o}}rck, {\AA}ke},
+    author = {Bj{\\"{o}}rck, {\\AA}ke},
     doi = {10.1007/978-3-319-05089-8},
     publisher = {Springer},
     series = {Texts in Applied Mathematics},
````

test/IterativeSolvers.jld

-30.5 KB
Binary file not shown.

test/REQUIRE

Lines changed: 0 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -1,7 +1,3 @@
1-
FactCheck
2-
MAT
3-
MatrixMarket
41
Plots
52
UnicodePlots
63
LinearMaps
7-
JLD
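With FactCheck gone from test/REQUIRE, the suite relies on the test framework that ships with Julia (`Base.Test` on the 0.5/0.6 versions this commit targets; the stdlib `Test` on 1.0+). A minimal sketch of the replacement style — the test-set name and values here are hypothetical, not taken from the package's tests:

```julia
using Test  # `using Base.Test` on Julia 0.5/0.6

@testset "cg" begin            # hypothetical test-set name
    x = [1.0, 2.0]
    @test length(x) == 2
    @test sum(x) ≈ 3.0         # `@test` with `≈` replaces FactCheck's roughly()
end
```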

0 commit comments