demos/book/README

The codes in this folder are translations of MATLAB code by Alan Edelman in his …

These codes can be run in batch mode to generate the sample figures as output files, but they are most useful when used interactively to explore random matrix theory. For this reason, when run from the Julia REPL, the plots are displayed on screen.

Additional codes should use the following template (modified from book/4/mpexperiment.jl). While some experiments will need to be single-threaded, many codes in random matrix theory are Monte Carlo simulations that are trivial to parallelize using the `pmap` function. To make use of this parallelism, either start Julia with the `-p N` flag or run `addprocs(N-1)` at the REPL (where N >= 3; N = CPU_CORES + 1 is usually optimal). When using Julia-level parallelism, it is typically best to disable OpenBLAS parallelism by calling `Base.openblas_set_num_threads(1)`. Run code at the REPL using `include("<filename.jl>")` so that it runs only on the local processor (`require` runs on all processors).

```julia
#<filename>.jl
# …
t = 10000 # trials
n = 100 # matrix column size
dx = 0.1 # binsize
function an_experiment(t,n,dx) # wrap the experimental logic in a function to enable faster JIT
## Experiment
#Single threaded experiment
# …
y = x .^ 2
return (hist(v, x), (x, y))
end
((grid, count), (x,y)) = an_experiment(t,n,dx) #run the experiment, making the global variables local for speed
```
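As a complement to the single-threaded template, the `pmap` pattern described above can be sketched as follows. This is an illustrative adaptation, not part of the original template: it is written for current Julia, where `addprocs` and `pmap` live in the `Distributed` standard library, and the trial kernel (eigenvalues of a Wigner-type random matrix) is a placeholder for whatever computation an experiment performs per trial.

```julia
using Distributed

addprocs(2)                       # or start Julia itself with `julia -p N`

@everywhere using LinearAlgebra   # make the kernel's dependencies available on all workers

# One Monte Carlo trial: the spectrum of a symmetrized n-by-n Gaussian matrix.
# (A placeholder kernel; a real experiment would put its own computation here.)
@everywhere function one_trial(n)
    A = randn(n, n)
    eigvals(Symmetric((A + A') / (2 * sqrt(n))))
end

t = 200   # trials
n = 50    # matrix column size

# pmap farms the independent trials out to the worker processes.
results = pmap(_ -> one_trial(n), 1:t)

v = vcat(results...)              # pool all t*n sampled eigenvalues
```

Because each trial is independent, the only shared state is the captured parameter `n`; `@everywhere` ensures the kernel and its dependencies are defined on every worker process.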