
Commit 655cac4

Merge pull request #227 from SciML/fm/fixes
Fixing `minimal_init`
2 parents: a71209d + 498f1f4

File tree

5 files changed: +74 −6 lines

5 files changed

+74
-6
lines changed

Project.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,7 +1,7 @@
 name = "ReservoirComputing"
 uuid = "7c2d2b1e-3dd4-11ea-355a-8f6a8116e294"
 authors = ["Francesco Martinuzzi"]
-version = "0.10.3"
+version = "0.10.4"
 
 [deps]
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
```

docs/Project.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -19,5 +19,5 @@ Documenter = "1"
 OrdinaryDiffEq = "6"
 Plots = "1"
 PredefinedDynamicalSystems = "1"
-ReservoirComputing = "0.9, 0.10"
+ReservoirComputing = "0.10"
 StatsBase = "0.33, 0.34"
```

docs/pages.jl

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ pages = [
     "Echo State Network Tutorials" => Any[
         "Lorenz System Forecasting" => "esn_tutorials/lorenz_basic.md",
         #"Mackey-Glass Forecasting on GPU" => "esn_tutorials/mackeyglass_basic.md",
-        #"Using Different Layers" => "esn_tutorials/change_layers.md",
+        "Using Different Layers" => "esn_tutorials/change_layers.md",
         "Using Different Reservoir Drivers" => "esn_tutorials/different_drivers.md",
         #"Using Different Training Methods" => "esn_tutorials/different_training.md",
         "Deep Echo State Networks" => "esn_tutorials/deep_esn.md",
```

docs/src/esn_tutorials/change_layers.md (new file)

Lines changed: 68 additions & 0 deletions
# Using different layers

A great deal of effort in the ESN field is devoted to finding an ideal construction for the reservoir matrices. ReservoirComputing.jl offers multiple implementations of reservoir and input matrix initializations found in the literature. The API is standardized and follows that of [WeightInitializers.jl](https://github.com/LuxDL/WeightInitializers.jl):
```julia
weights = init(rng, dims...)
# rng is optional
weights = init(dims...)
```
Additional keywords can be added when needed:

```julia
weights_init = init(rng; kwargs...)
weights = weights_init(rng, dims...)
# or
weights_init = init(; kwargs...)
weights = weights_init(dims...)
```
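For instance, both call forms can be used with the package's `minimal_init` (a sketch; reading `dims` as `(res_size, in_size)` is an assumption here):

```julia
using ReservoirComputing, Random

# direct form: pass an rng and the dimensions
W_in = minimal_init(Random.default_rng(), 300, 2; weight = 0.85)

# curried form: fix the keywords first, supply the dimensions later
layer_init = minimal_init(; weight = 0.85, sampling_type = :irrational)
W_in2 = layer_init(300, 2)
```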
Custom layers only need to follow these APIs to be compatible with ReservoirComputing.jl.
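As a sketch of what that compatibility means, a custom initializer only has to accept `dims...` (optionally an `rng` and keywords) and return an array. `my_sign_init` below is a hypothetical example, not part of the package:

```julia
using Random

# Hypothetical custom initializer following the API above:
# entries are ±weight with random signs.
function my_sign_init(rng::Random.AbstractRNG, dims::Integer...; weight = 0.1)
    return weight .* sign.(randn(rng, dims...))
end
# rng-less convenience method, mirroring `init(dims...)`
my_sign_init(dims::Integer...; kwargs...) = my_sign_init(Random.default_rng(), dims...; kwargs...)

W = my_sign_init(300, 2; weight = 0.5)  # 300×2 matrix of ±0.5 entries
```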
## Example of minimally complex ESN

Using [^rodan2012] and [^rodan2010] as references, this section provides an example of how to change both the input layer and the reservoir for ESNs.

The task for this example is one-step-ahead prediction of the Hénon map. To obtain the data, one can leverage the package [PredefinedDynamicalSystems.jl](https://juliadynamics.github.io/PredefinedDynamicalSystems.jl/dev/). The data is scaled to be between -1 and 1.
```@example minesn
using PredefinedDynamicalSystems
train_len = 3000
predict_len = 2000

# generate a long trajectory of the Hénon map
ds = Systems.henon()
traj, t = trajectory(ds, 7000)
data = Matrix(traj)'
# rescale the data
data = (data .- 0.5) .* 2
shift = 200

# one-step-ahead training and testing splits
training_input = data[:, shift:(shift + train_len - 1)]
training_target = data[:, (shift + 1):(shift + train_len)]
testing_input = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]
testing_target = data[:, (shift + train_len + 1):(shift + train_len + predict_len)]
```
Now it is possible to define the input layers and reservoirs we want to compare, and to run the comparison in a simple for loop. The accuracy will be assessed using the mean squared deviation `msd` from StatsBase.
```@example minesn
using ReservoirComputing, StatsBase

res_size = 300
input_layer = [minimal_init(; weight = 0.85, sampling_type = :irrational),
    minimal_init(; weight = 0.95, sampling_type = :irrational)]
reservoirs = [simple_cycle(; weight = 0.7),
    cycle_jumps(; cycle_weight = 0.7, jump_weight = 0.2, jump_size = 5)]

for i in 1:length(reservoirs)
    esn = ESN(training_input, 2, res_size;
        input_layer = input_layer[i],
        reservoir = reservoirs[i])
    # train a linear readout with ridge regression and predict the test set
    wout = train(esn, training_target, StandardRidge(0.001))
    output = esn(Predictive(testing_input), wout)
    println(msd(testing_target, output))
end
```
As shown, changing layers in ESN models is straightforward. Be sure to check the API documentation for a full list of reservoirs and input layers.
## Bibliography

[^rodan2012]: Rodan, Ali, and Peter Tiňo. "Simple deterministically constructed cycle reservoirs with regular jumps." Neural Computation 24.7 (2012): 1822-1852.

[^rodan2010]: Rodan, Ali, and Peter Tiňo. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2011): 131-144.

src/esn/esn_input_layers.jl

Lines changed: 3 additions & 3 deletions
```diff
@@ -190,10 +190,10 @@ function minimal_init(rng::AbstractRNG, ::Type{T}, dims::Integer...;
     return layer_matrix
 end
 
-function _create_bernoulli(p::T,
+function _create_bernoulli(p::Number,
         res_size::Int,
         in_size::Int,
-        weight::T,
+        weight::Number,
         rng::AbstractRNG,
         ::Type{T}) where {T <: Number}
     input_matrix = zeros(T, res_size, in_size)
@@ -210,7 +210,7 @@ function _create_irrational(irrational::Irrational,
         start::Int,
         res_size::Int,
         in_size::Int,
-        weight::T,
+        weight::Number,
         rng::AbstractRNG,
         ::Type{T}) where {T <: Number}
     setprecision(BigFloat, Int(ceil(log2(10) * (res_size * in_size + start + 1))))
```
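The substance of the `minimal_init` fix is the loosened argument annotations: previously `p` and `weight` were typed `::T`, tying them to the requested element type `T`, so a plain `Float64` keyword value could fail to dispatch when a different element type was requested. A minimal standalone sketch of that dispatch behavior (hypothetical `old_create`/`new_create`, not the library's code):

```julia
# Sketch of the dispatch issue: the old signature requires weight's type
# to match T exactly; the new one accepts any Number and converts.
old_create(weight::T, ::Type{T}) where {T <: Number} = fill(weight, 2, 2)
new_create(weight::Number, ::Type{T}) where {T <: Number} = fill(T(weight), 2, 2)

new_create(0.85, Float32)    # works: Float64 weight converted to Float32
# old_create(0.85, Float32)  # MethodError: T cannot be both Float64 and Float32
```

With the `::Number` annotation the output element type is still fixed by `T`, since the matrix is allocated with `zeros(T, res_size, in_size)`.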
