Commit 8a717da

committed: add code for chapter 21
1 parent 6fd2e86
3 files changed: +56 −5 lines

demos/book/21/airyeigmax.jl

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
```julia
#airyeigmax.jl
#Algorithm 21.1 of Random Eigenvalues by Alan Edelman

#Experiment: Largest eigenvalue of a Stochastic Airy Operator
#Plot: Histogram of the largest eigenvalues
#Theory: The Tracy-Widom Law

## Parameters # Include the most interesting configurable values at the top, with brief descriptions
t = 10000 # trials
n = 1e9 # level of discretization
beta = 2 # Dyson index (1, 2, or 4 for the classical ensembles)
h = n^(-1/3) # grid spacing; h serves as dx

function airyeigmax_experiment(t,n,beta,h) # wrap the experimental logic in a function to enable faster JIT
## Experiment
x = 0:h:10 # discretization of x
N = length(x)

#Parallel experiment
b = (1 / h^2) * ones(N-1) # off-diagonal of the second-difference stencil
v = pmap(1:t) do i # samples of the largest eigenvalue
    ## discretize the stochastic Airy operator
    # discretize the Airy operator
    a = -(2 / h^2) * ones(N) # differential operator: d^2/dx^2
    a -= x # d^2/dx^2 - x
    # add the stochastic part: white noise scaled by 2/sqrt(beta)
    dW = randn(N) * sqrt(h) # Brownian increments with variance h
    a += (2 / sqrt(beta)) * dW / h
    ## calculate the largest eigenvalue of the tridiagonal matrix T
    # diagonal of T: a
    # subdiagonal of T: b
    eigmax(SymTridiagonal(a,b))
end

binsize = 1/6
return hist(v, -6:binsize:6)
end
grid, count = airyeigmax_experiment(t,n,beta,h) #run the experiment, making the global variables local for speed

## Plot
using Winston
include("../1/tracywidom.jl") # assumed to define the Tracy-Widom density plot p
hst = Histogram(count/(sum(count)*step(grid)), step(grid)) # normalized histogram of the data
hst.x0 = first(grid)
add(p, hst) # overlay the histogram on the Tracy-Widom curve

if isinteractive()
    Winston.display(p)
else
    file(p, "airyeigmax.png")
end
```
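As the comments in the file indicate, the loop discretizes the stochastic Airy operator

$$\mathcal{A}_\beta = \frac{d^2}{dx^2} - x + \frac{2}{\sqrt{\beta}}\,W'(x),$$

whose largest eigenvalue follows the Tracy-Widom(β) law. On the grid $x_k = kh$, the standard second-difference stencil gives

$$(Tu)_k = \frac{u_{k+1} - 2u_k + u_{k-1}}{h^2} - x_k u_k + \frac{2}{\sqrt{\beta}}\,\frac{\Delta W_k}{h}\,u_k, \qquad \Delta W_k \sim N(0,\,h),$$

so `T = SymTridiagonal(a, b)` has diagonal $a_k = -2/h^2 - x_k + (2/\sqrt{\beta})\,\Delta W_k/h$ and off-diagonal $b_k = 1/h^2$, matching the vectors `a` and `b` built in the code.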

demos/book/4/mpexperiment.jl

Lines changed: 2 additions & 2 deletions
```diff
@@ -11,7 +11,7 @@ r = 1 # aspect ratio
 n = 100 # matrix column size
 dx = 0.1 # binsize
 
-function mp_experiment()
+function mp_experiment(n,r,t,dx)
 m = iround(n/r)
 ## Experiment
 v = Float64[] # eigenvalue samples
@@ -28,7 +28,7 @@ function mp_experiment()
 y = real(sqrt((x.^2-a^2).*(2^2-x.^2) + 0im)./(pi*x*r))
 return (hist(v, x), (x, y))
 end
-((grid, count), (x,y)) = mp_experiment()
+((grid, count), (x,y)) = mp_experiment(n,r,t,dx)
 
 ## Plot
 using Winston
```
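The change above passes the formerly global parameters in as function arguments. The reason (the README template below calls it "making the global variables local for speed") is that non-const globals in Julia are untyped, so code that reads them cannot be specialized by the JIT. A minimal sketch with illustrative names:

```julia
# f_glob reads the non-const global n: its type could change at any time,
# so the compiler emits generic, boxed code for the loop.
# f_arg receives n as an argument: inside the function n has a concrete
# type and the loop compiles to specialized machine code.
n = 100

function f_glob()
    s = 0.0
    for i in 1:n
        s += i
    end
    s
end

function f_arg(n)
    s = 0.0
    for i in 1:n
        s += i
    end
    s
end
```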

demos/book/README

Lines changed: 3 additions & 3 deletions
````diff
@@ -2,7 +2,7 @@ The codes in this folder are translations of MATLAB code by Alan Edelman in his
 
 It is intended that these codes can be run in batch mode to generate the sample figures as output files, but that the codes are most useful when used interactively in the exploration of random matrix theory. For this reason, when run from the Julia REPL, the plots will be output to the screen.
 
-Additional codes should use the following template (modified from book/4/mpexperiment.jl). While some experiments will need to be single-threaded, many codes in Random Matrix Theory are Monte Carlo simulations which are trivial to parallelize using the pmap function. To make use of the parallelism this provides, either start Julia with the `-p N` flag or run `addprocs(N-1)` at the REPL (where N>=3 and equals CPU_CORES+1 optimally). When doing Julia-level parallelism, it is typically best to disable OpenBLAS parallelism by calling `Base.openblas_set_num_threads(1)`.
+Additional codes should use the following template (modified from book/4/mpexperiment.jl). While some experiments will need to be single-threaded, many codes in Random Matrix Theory are Monte Carlo simulations which are trivial to parallelize using the pmap function. To make use of the parallelism this provides, either start Julia with the `-p N` flag or run `addprocs(N-1)` at the REPL (where N>=3; N = CPU_CORES+1 is optimal). When doing Julia-level parallelism, it is typically best to disable OpenBLAS parallelism by calling `Base.openblas_set_num_threads(1)`. Run code at the REPL using `include("<filename>.jl")` so that it runs only on the local process (`require` runs it on all processes).
 
 ```julia
 #<filename>.jl
@@ -17,7 +17,7 @@ t = 10000 # trials
 n = 100 # matrix column size
 dx = 0.1 # binsize
 
-function mp_experiment(t,n,dx) # wrap the experimental logic in a function to enable faster JIT
+function an_experiment(t,n,dx) # wrap the experimental logic in a function to enable faster JIT
 ## Experiment
 
 #Single threaded experiment
@@ -35,7 +35,7 @@ function mp_experiment(t,n,dx) # wrap the experimental logic in a function to en
 y = x .^ 2
 return (hist(v, x), (x, y))
 end
-((grid, count), (x,y)) = mp_experiment(t,n,dx) #run the experiment, making the global variables local for speed
+((grid, count), (x,y)) = an_experiment(t,n,dx) #run the experiment, making the global variables local for speed
 
 ## Plot
 using Winston
````
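A minimal sketch of the setup the README paragraph prescribes, using the same era-appropriate API as the rest of the repo; the per-trial computation here is only a stand-in:

```julia
# Either start Julia with the -p flag (e.g. `julia -p 3`) or add worker
# processes from the REPL:
addprocs(3)                      # N-1 workers; the README suggests N = CPU_CORES+1
Base.openblas_set_num_threads(1) # one BLAS thread per process, since the
                                 # parallelism lives at the Julia level

# Monte Carlo trials then distribute across the workers with pmap:
samples = pmap(1:1000) do i
    A = randn(50, 50)
    eigmax(A + A')               # stand-in for the real per-trial work
end
```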
