
Commit 702f681

Merge pull request #249 from SciML/fm/fix
Quality of life fixes
2 parents a053a2b + 7fa2c77 commit 702f681

File tree: 2 files changed, +89 −26 lines


README.md

Lines changed: 27 additions & 7 deletions
@@ -17,11 +17,21 @@
 
 # ReservoirComputing.jl
 
-ReservoirComputing.jl provides an efficient, modular and easy to use implementation of Reservoir Computing models such as Echo State Networks (ESNs). For information on using this package please refer to the [stable documentation](https://docs.sciml.ai/ReservoirComputing/stable/). Use the [in-development documentation](https://docs.sciml.ai/ReservoirComputing/dev/) to take a look at at not yet released features.
+ReservoirComputing.jl provides an efficient, modular and easy to use
+implementation of Reservoir Computing models such as Echo State Networks (ESNs).
+For information on using this package please refer to the
+[stable documentation](https://docs.sciml.ai/ReservoirComputing/stable/).
+Use the
+[in-development documentation](https://docs.sciml.ai/ReservoirComputing/dev/)
+to take a look at not yet released features.
 
 ## Quick Example
 
-To illustrate the workflow of this library we will showcase how it is possible to train an ESN to learn the dynamics of the Lorenz system. As a first step we will need to gather the data. For the `Generative` prediction we need the target data to be one step ahead of the training data:
+To illustrate the workflow of this library we will showcase
+how it is possible to train an ESN to learn the dynamics of the
+Lorenz system. As a first step we gather the data.
+For the `Generative` prediction we need the target data
+to be one step ahead of the training data:
 
 ```julia
 using ReservoirComputing, OrdinaryDiffEq

@@ -52,7 +62,9 @@ target_data = data[:, (shift + 1):(shift + train_len)]
 test = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]
 ```
 
-Now that we have the data we can initialize the ESN with the chosen parameters. Given that this is a quick example we are going to change the least amount of possible parameters. For more detailed examples and explanations of the functions please refer to the documentation.
+Now that we have the data we can initialize the ESN with the chosen parameters.
+Given that this is a quick example we are going to change the least amount of
+possible parameters:
 
 ```julia
 input_size = 3

@@ -63,14 +75,17 @@ esn = ESN(input_data, input_size, res_size;
     nla_type=NLAT2())
 ```
 
-The echo state network can now be trained and tested. If not specified, the training will always be ordinary least squares regression. The full range of training methods is detailed in the documentation.
+The echo state network can now be trained and tested.
+If not specified, the training will always be ordinary least squares regression:
 
 ```julia
 output_layer = train(esn, target_data)
 output = esn(Generative(predict_len), output_layer)
 ```
 
-The data is returned as a matrix, `output` in the code above, that contains the predicted trajectories. The results can now be easily plotted (for the actual script used to obtain this plot please refer to the documentation):
+The data is returned as a matrix, `output` in the code above,
+that contains the predicted trajectories.
+The results can now be easily plotted:
 
 ```julia
 using Plots

@@ -80,7 +95,8 @@ plot!(transpose(test); layout=(3, 1), label="actual")
 
 ![lorenz_basic](https://user-images.githubusercontent.com/10376688/166227371-8bffa318-5c49-401f-9c64-9c71980cb3f7.png)
 
-One can also visualize the phase space of the attractor and the comparison with the actual one:
+One can also visualize the phase space of the attractor and the
+comparison with the actual one:
 
 ```julia
 plot(transpose(output)[:, 1],

@@ -111,4 +127,8 @@ If you use this library in your work, please cite:
 
 ## Acknowledgements
 
-This project was possible thanks to initial funding through the [Google summer of code](https://summerofcode.withgoogle.com/) 2020 program. Francesco M. further acknowledges [ScaDS.AI](https://scads.ai/) and [RSC4Earth](https://rsc4earth.de/) for supporting the current progress on the library.
+This project was possible thanks to initial funding through
+the [Google summer of code](https://summerofcode.withgoogle.com/)
+2020 program. Francesco M. further acknowledges [ScaDS.AI](https://scads.ai/)
+and [RSC4Earth](https://rsc4earth.de/) for supporting the current progress
+on the library.

src/esn/esn_inits.jl

Lines changed: 62 additions & 19 deletions
@@ -13,6 +13,9 @@ a range defined by `scaling`.
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.
+
+# Keyword arguments
+
 - `scaling`: A scaling factor to define the range of the uniform distribution.
     The matrix elements will be randomly chosen from the
     range `[-scaling, scaling]`. Defaults to `0.1`.

@@ -55,6 +58,9 @@ elements distributed uniformly within the range [-`scaling`, `scaling`] [^Lu2017
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.
+
+# Keyword arguments
+
 - `scaling`: The scaling factor for the weight distribution.
     Defaults to `0.1`.
 - `return_sparse`: flag for returning a `sparse` matrix.

@@ -106,6 +112,9 @@ Create an input layer for informed echo state networks [^Pathak2018].
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.
+
+# Keyword arguments
+
 - `scaling`: The scaling factor for the input matrix.
     Default is 0.1.
 - `model_in_size`: The size of the input model.

@@ -167,6 +176,9 @@ is randomly determined by the `sampling` chosen.
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.
+
+# Keyword arguments
+
 - `weight`: The weight used to fill the layer matrix. Default is 0.1.
 - `sampling_type`: The sampling parameters used to generate the input matrix.
     Default is `:bernoulli`.

@@ -239,7 +251,10 @@ function minimal_init(rng::AbstractRNG, ::Type{T}, dims::Integer...;
         rng,
         T)
     else
-        error("Sampling type not allowed. Please use one of :bernoulli or :irrational")
+        error("""\n
+            Sampling type not allowed.
+            Please use one of :bernoulli or :irrational\n
+        """)
     end
     return layer_matrix
 end

@@ -282,7 +297,7 @@ function _create_irrational(irrational::Irrational, start::Int, res_size::Int,
     return T.(input_matrix)
 end
 
-"""
+@doc raw"""
     chebyshev_mapping([rng], [T], dims...;
        amplitude=one(T), sine_divisor=one(T),
        chebyshev_parameter=one(T), return_sparse=true)
@@ -292,14 +307,15 @@ using a sine function and subsequent rows are iteratively generated
 via the Chebyshev mapping. The first row is defined as:
 
 ```math
-w(1, j) = amplitude * sin(j * π / (sine_divisor * n_cols))
+W[1, j] = \text{amplitude} \cdot \sin(j \cdot \pi / (\text{sine_divisor}
+    \cdot \text{n_cols}))
 ```
 
 for j = 1, 2, …, n_cols (with n_cols typically equal to K+1, where K is the number of input layer neurons).
 Subsequent rows are generated by applying the mapping:
 
 ```math
-w(i+1, j) = cos(chebyshev_parameter * acos(w(i, j)))
+W[i+1, j] = \cos(\text{chebyshev_parameter} \cdot \arccos(W[i, j]))
 ```
 
 # Arguments
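The two formulas above are easy to check in isolation. Below is a minimal, self-contained sketch of the mapping (first row from a sine, later rows via the Chebyshev recurrence); the function name and structure are illustrative only, not the package implementation:

```julia
# Illustrative sketch of the Chebyshev input mapping described above.
# NOT the package implementation; keyword names mirror the docstring.
function chebyshev_sketch(res_size, in_size;
        amplitude=1.0, sine_divisor=1.0, chebyshev_parameter=1.0)
    W = zeros(res_size, in_size)
    # First row: W[1, j] = amplitude * sin(j * pi / (sine_divisor * n_cols))
    for j in 1:in_size
        W[1, j] = amplitude * sin(j * pi / (sine_divisor * in_size))
    end
    # Subsequent rows: W[i+1, j] = cos(chebyshev_parameter * acos(W[i, j]))
    # (with amplitude <= 1 the entries stay in [-1, 1], so acos is defined)
    for i in 1:(res_size - 1), j in 1:in_size
        W[i + 1, j] = cos(chebyshev_parameter * acos(W[i, j]))
    end
    return W
end
```

Note that with `chebyshev_parameter = 1` the recurrence reduces to the identity, so all rows repeat the first one; nontrivial parameters produce the chaotic spread of weights.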
@@ -364,22 +380,23 @@ function chebyshev_mapping(rng::AbstractRNG, ::Type{T}, dims::Integer...;
 end
 
 @doc raw"""
-    logistic_mapping(rng::AbstractRNG, ::Type{T}, dims::Integer...;
-        amplitude=0.3, sine_divisor=5.9, logistic_parameter = 3.7,
+    logistic_mapping([rng], [T], dims...;
+        amplitude=0.3, sine_divisor=5.9, logistic_parameter=3.7,
         return_sparse=true)
 
 Generate an input weight matrix using a logistic mapping [^wang2022]. The first
 row is initialized using a sine function:
 
 ```math
-W(1, j) = amplitude * sin(j * π / (sine_divisor * in_size))
+W[1, j] = \text{amplitude} \cdot \sin(j \cdot \pi /
+    (\text{sine_divisor} \cdot \text{in_size}))
 ```
 
 for each input index `j`, with `in_size` being the number of columns provided in `dims`. Subsequent rows
 are generated recursively using the logistic map recurrence:
 
 ```math
-W(i+1, j) = logistic_parameter * W(i, j) * (1 - W(i, j))
+W[i+1, j] = \text{logistic_parameter} \cdot W[i, j] \cdot (1 - W[i, j])
 ```
 
 # Arguments
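As with the Chebyshev case, the logistic mapping from this hunk can be sketched directly from the two formulas; this is a hedged illustration with the docstring's default values, not the package code:

```julia
# Illustrative sketch of the logistic input mapping described above.
# NOT the package implementation; defaults follow the docstring.
function logistic_sketch(res_size, in_size;
        amplitude=0.3, sine_divisor=5.9, logistic_parameter=3.7)
    W = zeros(res_size, in_size)
    # First row from the sine initialization
    for j in 1:in_size
        W[1, j] = amplitude * sin(j * pi / (sine_divisor * in_size))
    end
    # Remaining rows from the logistic map recurrence
    for i in 1:(res_size - 1), j in 1:in_size
        W[i + 1, j] = logistic_parameter * W[i, j] * (1 - W[i, j])
    end
    return W
end
```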
@@ -389,7 +406,8 @@ are generated recursively using the logistic map recurrence:
     Default is `Float32`.
 - `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.
 
-# keyword arguments
+# Keyword arguments
+
 - `amplitude`: Scaling parameter used in the sine initialization of the
     first row. Default is 0.3.
 - `sine_divisor`: Parameter used to adjust the phase in the sine initialization.
@@ -452,14 +470,15 @@ as follows:
 - The first element of the chain is initialized using a sine function:
 
 ```math
-W(1,j) = amplitude * sin( (j * π) / (factor * n * sine_divisor) )
+W[1,j] = \text{amplitude} \cdot \sin((j \cdot \pi) /
+    (\text{factor} \cdot \text{n} \cdot \text{sine_divisor}))
 ```
 
 where `j` is the index corresponding to the input and `n` is the number of inputs.
 
 - Subsequent elements are recursively computed using the logistic mapping:
 
 ```math
-W(i+1,j) = logistic_parameter * W(i,j) * (1 - W(i,j))
+W[i+1,j] = \text{logistic_parameter} \cdot W[i,j] \cdot (1 - W[i,j])
 ```
 
 The resulting matrix has dimensions `(factor * in_size) x in_size`, where
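The chained variant in this hunk differs from the plain logistic mapping in that each input owns a chain of `factor` rows. A sketch under one explicit assumption: the chain for input `j` is laid out as `factor` contiguous rows (the actual row layout in the package may differ). Not the package implementation:

```julia
# Illustrative sketch of the chained logistic mapping described above:
# each input j seeds a chain of `factor` rows with a sine value, then
# extends it with the logistic map. Row layout (contiguous chains per
# input) is an ASSUMPTION for illustration; NOT the package code.
function chain_logistic_sketch(in_size; factor=3,
        amplitude=0.3, sine_divisor=5.9, logistic_parameter=3.7)
    n = in_size
    W = zeros(factor * n, n)            # (factor * in_size) x in_size
    for j in 1:n
        row = (j - 1) * factor + 1
        # chain seed: amplitude * sin((j * pi) / (factor * n * sine_divisor))
        W[row, j] = amplitude * sin((j * pi) / (factor * n * sine_divisor))
        # logistic recurrence along the chain
        for k in 1:(factor - 1)
            w = W[row + k - 1, j]
            W[row + k, j] = logistic_parameter * w * (1 - w)
        end
    end
    return W
end
```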
@@ -474,7 +493,8 @@ the number of rows is overridden.
     Default is `Float32`.
 - `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.
 
-# keyword arguments
+# Keyword arguments
+
 - `factor`: The number of logistic map iterations (chain length) per input,
     determining the number of rows per input.
 - `amplitude`: Scaling parameter A for the sine-based initialization of

@@ -563,6 +583,9 @@ and scaled spectral radius according to `radius`.
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the reservoir matrix.
+
+# Keyword arguments
+
 - `radius`: The desired spectral radius of the reservoir.
     Defaults to 1.0.
 - `sparsity`: The sparsity level of the reservoir matrix,
@@ -590,7 +613,10 @@ function rand_sparse(rng::AbstractRNG, ::Type{T}, dims::Integer...;
     rho_w = maximum(abs.(eigvals(reservoir_matrix)))
     reservoir_matrix .*= radius / rho_w
     if Inf in unique(reservoir_matrix) || -Inf in unique(reservoir_matrix)
-        error("Sparsity too low for size of the matrix. Increase res_size or increase sparsity")
+        error("""\n
+            Sparsity too low for size of the matrix.
+            Increase res_size or increase sparsity.\n
+        """)
     end
 
     return return_sparse ? sparse(reservoir_matrix) : reservoir_matrix
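The two context lines above (`rho_w = maximum(abs.(eigvals(...)))` and the in-place rescale) are the heart of spectral-radius control in `rand_sparse`. Stripped of the sparsity handling, the idea can be reproduced on a plain dense matrix; the function below is a sketch, not the package code:

```julia
using LinearAlgebra, Random

# Minimal sketch of the spectral-radius rescaling used in `rand_sparse`:
# divide by the current largest |eigenvalue| and multiply by the target
# radius. Dense matrix for illustration; NOT the package implementation.
function scale_radius(res_size; radius=1.0, rng=MersenneTwister(42))
    W = randn(rng, res_size, res_size)
    rho_w = maximum(abs.(eigvals(W)))   # current spectral radius
    W .*= radius / rho_w                # rescale eigenvalues linearly
    return W
end
```

Since scaling a matrix scales every eigenvalue by the same factor, the resulting spectral radius equals `radius` up to floating-point error.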
@@ -609,6 +635,9 @@ Create and return a delay line reservoir matrix [^Rodan2010].
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the reservoir matrix.
+
+# Keyword arguments
+
 - `weight`: Determines the value of all connections in the reservoir.
     Default is 0.1.
 - `return_sparse`: flag for returning a `sparse` matrix.

@@ -640,8 +669,10 @@ julia> res_matrix = delay_line(5, 5; weight=1)
 function delay_line(rng::AbstractRNG, ::Type{T}, dims::Integer...;
         weight=T(0.1), return_sparse::Bool=true) where {T <: Number}
     reservoir_matrix = DeviceAgnostic.zeros(rng, T, dims...)
-    @assert length(dims) == 2&&dims[1] == dims[2] "The dimensions
-    must define a square matrix (e.g., (100, 100))"
+    @assert length(dims) == 2&&dims[1] == dims[2] """\n
+        The dimensions must define a square matrix
+        (e.g., (100, 100))
+    """
 
     for i in 1:(dims[1] - 1)
         reservoir_matrix[i + 1, i] = weight
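The loop shown in the hunk above fills only the first subdiagonal, which is all a delay line reservoir is. A dependency-free sketch of the same structure (not the package implementation, which also handles RNG, element type, and sparse output):

```julia
# Illustrative sketch of a delay line reservoir: `weight` on the first
# subdiagonal, zeros elsewhere. NOT the package implementation.
function delay_line_sketch(res_size; weight=0.1)
    W = zeros(res_size, res_size)
    for i in 1:(res_size - 1)
        W[i + 1, i] = weight   # state i feeds state i+1 one step later
    end
    return W
end
```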
@@ -652,7 +683,7 @@ end
 
 """
     delay_line_backward([rng], [T], dims...;
-        weight = 0.1, fb_weight = 0.2, return_sparse=true)
+        weight=0.1, fb_weight=0.2, return_sparse=true)
 
 Create a delay line backward reservoir with the dimensions specified by `dims` and weights.
 Creates a matrix with backward connections as described in [^Rodan2010].

@@ -664,6 +695,9 @@ Creates a matrix with backward connections as described in [^Rodan2010].
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the reservoir matrix.
+
+# Keyword arguments
+
 - `weight`: The weight determines the absolute value of
     forward connections in the reservoir. Default is 0.1
 - `fb_weight`: Determines the absolute value of backward connections

@@ -709,7 +743,7 @@ end
 
 """
     cycle_jumps([rng], [T], dims...;
-        cycle_weight = 0.1, jump_weight = 0.1, jump_size = 3, return_sparse=true)
+        cycle_weight=0.1, jump_weight=0.1, jump_size=3, return_sparse=true)
 
 Create a cycle jumps reservoir with the specified dimensions,
 cycle weight, jump weight, and jump size.

@@ -721,6 +755,9 @@ cycle weight, jump weight, and jump size.
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the reservoir matrix.
+
+# Keyword arguments
+
 - `cycle_weight`: The weight of cycle connections.
     Default is 0.1.
 - `jump_weight`: The weight of jump connections.

@@ -779,7 +816,7 @@ end
 
 """
     simple_cycle([rng], [T], dims...;
-        weight = 0.1, return_sparse=true)
+        weight=0.1, return_sparse=true)
 
 Create a simple cycle reservoir with the specified dimensions and weight.

@@ -789,6 +826,9 @@ Create a simple cycle reservoir with the specified dimensions and weight.
     from WeightInitializers.
 - `T`: Type of the elements in the reservoir matrix. Default is `Float32`.
 - `dims`: Dimensions of the reservoir matrix.
+
+# Keyword arguments
+
 - `weight`: Weight of the connections in the reservoir matrix.
     Default is 0.1.
 - `return_sparse`: flag for returning a `sparse` matrix.

@@ -831,7 +871,7 @@ end
 
 """
     pseudo_svd([rng], [T], dims...;
-        max_value=1.0, sparsity=0.1, sorted = true, reverse_sort = false,
+        max_value=1.0, sparsity=0.1, sorted=true, reverse_sort=false,
         return_sparse=true)
 
 Returns an initializer to build a sparse reservoir matrix with the given

@@ -844,6 +884,9 @@ Returns an initializer to build a sparse reservoir matrix with the given
 - `T`: Type of the elements in the reservoir matrix.
     Default is `Float32`.
 - `dims`: Dimensions of the reservoir matrix.
+
+# Keyword arguments
+
 - `max_value`: The maximum absolute value of elements in the matrix.
     Default is 1.0
 - `sparsity`: The desired sparsity level of the reservoir matrix.
