@@ -27,22 +27,18 @@ Use the
 [in-development documentation](https://docs.sciml.ai/ReservoirComputing/dev/)
 to take a look at not-yet-released features.
 
-## Citing
+## Features
 
-If you use this library in your work, please cite:
+ReservoirComputing.jl provides layers, models, and functions to help build and train
+reservoir computing models. More specifically, the software offers:
 
-```bibtex
-@article{martinuzzi2022reservoircomputing,
-  author = {Francesco Martinuzzi and Chris Rackauckas and Anas Abdelrehim and Miguel D. Mahecha and Karin Mora},
-  title = {ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models},
-  journal = {Journal of Machine Learning Research},
-  year = {2022},
-  volume = {23},
-  number = {288},
-  pages = {1--8},
-  url = {http://jmlr.org/papers/v23/22-0611.html}
-}
-```
+- Base layers for reservoir computing model construction such as `ReservoirChain`,
+  `Readout`, `Collect`, and `ESNCell`
+- Fully built models such as `ESN` and `DeepESN`
+- 15+ reservoir initializers and 5+ input layer initializers (see the sketch below)
+- 5+ reservoir state modification algorithms
+- Sparse matrix computation through
+  [SparseArrays.jl](https://docs.julialang.org/en/v1/stdlib/SparseArrays/)
 
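+As a quick illustration of the initializers, the sketch below builds a sparse
+reservoir matrix on its own. It assumes `rand_sparse` can be called standalone
+with an RNG and the matrix dimensions, mirroring the keyword arguments used in
+the quickstart below; treat it as a sketch rather than the definitive API:
+
+```julia
+using Random, ReservoirComputing
+
+# a 300x300 reservoir matrix rescaled to spectral radius 1.2,
+# with roughly 2% of its entries nonzero
+W = rand_sparse(MersenneTwister(42), 300, 300; radius=1.2, sparsity=0.02)
+```
+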
 ## Installation
 
@@ -63,67 +59,87 @@ Pkg.add("ReservoirComputing")
 
 To illustrate the workflow of this library we will showcase
 how it is possible to train an ESN to learn the dynamics of the
-Lorenz system. As a first step we gather the data.
-For the `Generative` prediction we need the target data
-to be one step ahead of the training data:
+Lorenz system.
+
+### 1. Generate data
+
+As a general first step, we fix the random seed for reproducibility:
 
 ```julia
-using ReservoirComputing, OrdinaryDiffEq, Random
+using Random
 Random.seed!(42)
 rng = MersenneTwister(17)
+```
 
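+The `rng` created here is passed to `setup` later on to initialize the
+model parameters.
+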
-# lorenz system parameters
-u0 = [1.0, 0.0, 0.0]
-tspan = (0.0, 200.0)
-p = [10.0, 28.0, 8 / 3]
+For an autoregressive prediction, we need the target data
+to be one step ahead of the training data:
+
+```julia
+using OrdinaryDiffEq
 
 # define lorenz system
 function lorenz(du, u, p, t)
     du[1] = p[1] * (u[2] - u[1])
     du[2] = u[1] * (p[2] - u[3]) - u[2]
     du[3] = u[1] * u[2] - p[3] * u[3]
 end
+
 # solve and take data
-prob = ODEProblem(lorenz, u0, tspan, p)
+prob = ODEProblem(lorenz, [1.0f0, 0.0f0, 0.0f0], (0.0, 200.0), [10.0f0, 28.0f0, 8.0f0 / 3])
 data = Array(solve(prob, ABM54(); dt=0.02))
-
 shift = 300
 train_len = 5000
 predict_len = 1250
 
 # one step ahead for autoregressive prediction
 input_data = data[:, shift:(shift + train_len - 1)]
 target_data = data[:, (shift + 1):(shift + train_len)]
-
 test = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]
 ```
 
-Now that we have the data we can initialize the ESN with the chosen parameters.
-Given that this is a quick example we are going to change the least amount of
-possible parameters:
+### 2. Build the Echo State Network
+
+We can either use the provided `ESN` or build one from scratch.
+We showcase the second option:
 
 ```julia
+using ReservoirComputing
+
 input_size = 3
 res_size = 300
 esn = ReservoirChain(
-    StatefulLayer(ESNCell(input_size => res_size; init_reservoir=rand_sparse(; radius=1.2, sparsity=6 / 300))),
+    StatefulLayer(
+        ESNCell(
+            input_size => res_size;
+            init_reservoir=rand_sparse(; radius=1.2, sparsity=6 / 300)
+        )
+    ),
     NLAT2(),
-    Readout(res_size => input_size)
-) # or ESN(input_size, res_size, input_size; init_reservoir=rand_sparse(; radius=1.2, sparsity=6/300))
+    Readout(res_size => input_size) # autoregressive, so out_dims == in_dims
+)
+# alternative:
+# esn = ESN(input_size, res_size, input_size;
+#     init_reservoir=rand_sparse(; radius=1.2, sparsity=6 / 300)
+# )
 ```
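+
+Here `NLAT2` applies one of the library's nonlinear state modification
+algorithms to the reservoir states before they reach the readout; it can be
+swapped for any of the other provided transforms.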
 
-The echo state network can now be trained and tested.
-If not specified, the training will always be ordinary least squares regression:
+### 3. Train the Echo State Network
+
+ReservoirComputing.jl builds on Lux (via LuxCore), so in order to train the model
+we first need to instantiate the parameters and the states:
 
 ```julia
 ps, st = setup(rng, esn)
 ps, st = train!(esn, input_data, target_data, ps, st)
-output, _ = predict(esn, 1250, ps, st; initialdata=test[:, 1])
 ```
 
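+If not otherwise specified, `train!` fits the readout weights with ordinary
+least squares regression, returning the updated parameters and states.
+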
-The data is returned as a matrix, `output` in the code above,
-that contains the predicted trajectories.
-The results can now be easily plotted:
+### 4. Predict and visualize
+
+We can now use the trained ESN to forecast the Lorenz system dynamics:
+
+```julia
+output, st = predict(esn, 1250, ps, st; initialdata=test[:, 1])
+```
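+
+Run this way, the ESN operates autoregressively: starting from `initialdata`,
+each predicted step is fed back as the next input. The forecast is returned
+as a matrix, `output`, containing the predicted trajectories.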
+
+Finally, we visualize the results:
 
 ```julia
 using Plots
@@ -146,6 +162,23 @@ plot!(transpose(test)[:, 1], transpose(test)[:, 2], transpose(test)[:, 3]; label
 
 ![lorenz_attractor](https://user-images.githubusercontent.com/10376688/81470281-5a34b580-91ea-11ea-9eea-d2b266da19f4.png)
 
+## Citing
+
+If you use this library in your work, please cite:
+
+```bibtex
+@article{martinuzzi2022reservoircomputing,
+  author = {Francesco Martinuzzi and Chris Rackauckas and Anas Abdelrehim and Miguel D. Mahecha and Karin Mora},
+  title = {ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models},
+  journal = {Journal of Machine Learning Research},
+  year = {2022},
+  volume = {23},
+  number = {288},
+  pages = {1--8},
+  url = {http://jmlr.org/papers/v23/22-0611.html}
+}
+```
+
 ## Acknowledgements
 
 This project was possible thanks to initial funding through