# Using different layers

A great deal of effort in the ESN field is devoted to finding an ideal construction for the reservoir matrices. ReservoirComputing.jl offers multiple implementations of reservoir and input matrix initializations found in the literature. The API is standardized, and follows [WeightInitializers.jl](https://github.com/LuxDL/WeightInitializers.jl):

```julia
weights = init(rng, dims...)
# rng is optional
weights = init(dims...)
```

Additional keywords can be added when needed:

```julia
weights_init = init(rng; kwargs...)
weights = weights_init(rng, dims...)
# or
weights_init = init(; kwargs...)
weights = weights_init(dims...)
```

Custom layers only need to follow these APIs to be compatible with ReservoirComputing.jl. A sketch of such a custom layer is shown below.
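
As a minimal sketch of a custom initializer (the name `const_diag` and its `weight` keyword are hypothetical, not part of the package):

```julia
using Random

# hypothetical initializer: a matrix with a constant diagonal
function const_diag(rng::AbstractRNG, dims::Integer...; weight = 0.9)
    # rng is accepted for API compatibility, but this initializer is deterministic
    layer = zeros(dims...)
    for i in 1:minimum(dims)
        layer[i, i] = weight
    end
    return layer
end
# rng-free form, mirroring init(dims...)
const_diag(dims::Integer...; kwargs...) = const_diag(Random.default_rng(), dims...; kwargs...)
# keyword-only form returning a reusable initializer, mirroring init(; kwargs...)
const_diag(; kwargs...) = (args...) -> const_diag(args...; kwargs...)
```

An initializer written this way can then be passed through the same keywords as the built-in ones, as in the example below.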

## Example of minimally complex ESN

Using [^rodan2012] and [^rodan2010] as references, this section provides an example of how to change both the input layer and the reservoir for ESNs.

The task for this example is one-step-ahead prediction of the Hénon map. To obtain the data, one can leverage the package [PredefinedDynamicalSystems.jl](https://juliadynamics.github.io/PredefinedDynamicalSystems.jl/dev/). The data is scaled to be between -1 and 1.

```@example minesn
using PredefinedDynamicalSystems

train_len = 3000
predict_len = 2000

# generate a trajectory of the Hénon map
ds = Systems.henon()
traj, t = trajectory(ds, 7000)
# ReservoirComputing.jl expects data as a (features x timesteps) matrix
data = Matrix(traj)'
# rescale the data
data = (data .- 0.5) .* 2
shift = 200

training_input = data[:, shift:(shift + train_len - 1)]
training_target = data[:, (shift + 1):(shift + train_len)]
testing_input = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]
testing_target = data[:, (shift + train_len + 1):(shift + train_len + predict_len)]
```
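
As a quick sanity check of the slicing above (the expected sizes follow directly from `train_len`, `predict_len`, and the two-dimensional Hénon state):

```@example minesn
@assert size(training_input) == (2, train_len)
@assert size(testing_target) == (2, predict_len)
```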

Now it is possible to define the input layers and reservoirs to compare and run the comparison in a simple for loop. The accuracy is tested using the mean squared deviation (`msd`) from StatsBase.

```@example minesn
using ReservoirComputing, StatsBase

res_size = 300
input_layers = [minimal_init(; weight = 0.85, sampling_type = :irrational),
    minimal_init(; weight = 0.95, sampling_type = :irrational)]
reservoirs = [simple_cycle(; weight = 0.7),
    cycle_jumps(; cycle_weight = 0.7, jump_weight = 0.2, jump_size = 5)]

for i in eachindex(reservoirs)
    esn = ESN(training_input, 2, res_size;
        input_layer = input_layers[i],
        reservoir = reservoirs[i])
    # fit the readout with ridge regression
    wout = train(esn, training_target, StandardRidge(0.001))
    # one-step-ahead (predictive) mode on the test inputs
    output = esn(Predictive(testing_input), wout)
    println(msd(testing_target, output))
end
```
As shown, changing layers in ESN models is straightforward. Be sure to check the API documentation for the full list of reservoirs and input layers.

## Bibliography

[^rodan2012]: Rodan, Ali, and Peter Tiňo. “Simple deterministically constructed cycle reservoirs with regular jumps.” Neural Computation 24.7 (2012): 1822-1852.

[^rodan2010]: Rodan, Ali, and Peter Tiňo. “Minimum complexity echo state network.” IEEE Transactions on Neural Networks 22.1 (2011): 131-144.