2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "ReservoirComputing"
uuid = "7c2d2b1e-3dd4-11ea-355a-8f6a8116e294"
authors = ["Francesco Martinuzzi"]
version = "0.11.2"
version = "0.11.3"

[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
56 changes: 37 additions & 19 deletions README.md
@@ -27,6 +27,38 @@ Use the
[in-development documentation](https://docs.sciml.ai/ReservoirComputing/dev/)
to take a look at not yet released features.

## Citing

If you use this library in your work, please cite:

```bibtex
@article{martinuzzi2022reservoircomputing,
author = {Francesco Martinuzzi and Chris Rackauckas and Anas Abdelrehim and Miguel D. Mahecha and Karin Mora},
title = {ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models},
journal = {Journal of Machine Learning Research},
year = {2022},
volume = {23},
number = {288},
pages = {1--8},
url = {http://jmlr.org/papers/v23/22-0611.html}
}
```

## Installation

ReservoirComputing.jl can be installed with either of the following:

```julia_repl
julia> ] # press the closing square bracket
pkg> add ReservoirComputing
```
or

```julia
using Pkg
Pkg.add("ReservoirComputing")
```

## Quick Example

To illustrate the workflow of this library we will showcase
@@ -36,7 +68,9 @@ For the `Generative` prediction we need the target data
to be one step ahead of the training data:

```julia
using ReservoirComputing, OrdinaryDiffEq
using ReservoirComputing, OrdinaryDiffEq, Random
Random.seed!(42)
rng = MersenneTwister(17)

#lorenz system parameters
u0 = [1.0, 0.0, 0.0]
@@ -74,7 +108,8 @@ res_size = 300
esn = ESN(input_data, input_size, res_size;
reservoir=rand_sparse(; radius=1.2, sparsity=6 / res_size),
input_layer=weighted_init,
nla_type=NLAT2())
nla_type=NLAT2(),
rng=rng)
```

The echo state network can now be trained and tested.
@@ -110,23 +145,6 @@ plot!(transpose(test)[:, 1], transpose(test)[:, 2], transpose(test)[:, 3]; label

![lorenz_attractor](https://user-images.githubusercontent.com/10376688/81470281-5a34b580-91ea-11ea-9eea-d2b266da19f4.png)

## Citing

If you use this library in your work, please cite:

```bibtex
@article{JMLR:v23:22-0611,
author = {Francesco Martinuzzi and Chris Rackauckas and Anas Abdelrehim and Miguel D. Mahecha and Karin Mora},
title = {ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models},
journal = {Journal of Machine Learning Research},
year = {2022},
volume = {23},
number = {288},
pages = {1--8},
url = {http://jmlr.org/papers/v23/22-0611.html}
}
```

## Acknowledgements

This project was possible thanks to initial funding through
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -2,6 +2,7 @@
CellularAutomata = "878138dc-5b27-11ea-1a71-cb95d38d6b29"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
DocumenterCitations = "daee34ce-89f3-4625-b898-19384cb65244"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
8 changes: 7 additions & 1 deletion docs/make.jl
@@ -1,4 +1,4 @@
using Documenter, ReservoirComputing
using Documenter, DocumenterCitations, ReservoirComputing

cp("./docs/Manifest.toml", "./docs/src/assets/Manifest.toml"; force = true)
cp("./docs/Project.toml", "./docs/src/assets/Project.toml"; force = true)
@@ -8,9 +8,15 @@ ENV["GKSwstype"] = "100"
include("pages.jl")
mathengine = Documenter.MathJax()

bib = CitationBibliography(
joinpath(@__DIR__, "src", "refs.bib");
style = :authoryear
)

makedocs(; modules = [ReservoirComputing],
sitename = "ReservoirComputing.jl",
clean = true, doctest = false, linkcheck = true,
plugins = [bib],
format = Documenter.HTML(;
mathengine,
assets = ["assets/favicon.ico"],
2 changes: 1 addition & 1 deletion docs/pages.jl
@@ -20,5 +20,5 @@ pages = [
"ESN Initializers" => "api/inits.md",
"ESN Drivers" => "api/esn_drivers.md",
"ESN Variations" => "api/esn_variations.md",
"ReCA" => "api/reca.md"]
"ReCA" => "api/reca.md"] #"References" => "references.md"
]
7 changes: 7 additions & 0 deletions docs/src/api/esn_drivers.md
@@ -14,3 +14,10 @@ The `GRU` driver also provides the user with the choice of the possible variants
```

Please refer to the original papers for more detail about these architectures.

## References

```@bibliography
Pages = ["esn_drivers.md"]
Canonical = false
```
7 changes: 7 additions & 0 deletions docs/src/api/inits.md
@@ -44,3 +44,10 @@
self_loop!
add_jumps!
```

## References

```@bibliography
Pages = ["inits.md"]
Canonical = false
```
7 changes: 7 additions & 0 deletions docs/src/api/states.md
@@ -25,3 +25,10 @@
```@docs
ReservoirComputing.create_states
```

## References

```@bibliography
Pages = ["states.md"]
Canonical = false
```
14 changes: 6 additions & 8 deletions docs/src/esn_tutorials/change_layers.md
@@ -26,7 +26,7 @@ Custom layers only need to follow these APIs to be compatible with ReservoirComp

## Example of minimally complex ESN

Using [^rodan2012] and [^rodan2010] as references this section will provide an
Using [Rodan2012](@cite) and [Rodan2011](@cite) as references, this section will provide an
example on how to change both the input layer and the reservoir for ESNs.

The task for this example will be the one step ahead prediction of the Henon map.
@@ -77,11 +77,9 @@ end
As it is possible to see, changing layers in ESN models is straightforward.
Be sure to check the API documentation for a full list of reservoirs and input layers.
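
As a quick illustration of the point above, a custom input layer can be sketched as a plain function following the same `(rng, dims...)` convention as built-in initializers such as `weighted_init` and `rand_sparse`. The snippet below is an assumed example, not part of the library: `signed_constant_init`, its `weight` value, and the toy data are made up for illustration, so check the API documentation for the exact requirements.

```julia
using ReservoirComputing, Random

# Hypothetical custom input layer: every weight is ±0.1 with a random sign.
# Assumes custom initializers follow the `(rng, dims...) -> matrix` convention
# of the built-in ones; see the API docs for the exact requirements.
function signed_constant_init(rng::AbstractRNG, dims::Integer...; weight=0.1)
    return weight .* rand(rng, (-1.0, 1.0), dims...)
end

rng = MersenneTwister(42)
toy_data = rand(rng, 2, 200)            # toy 2-dimensional input series

esn = ESN(toy_data, 2, 100;
    input_layer=signed_constant_init,   # drops in like a built-in initializer
    reservoir=rand_sparse(; radius=1.0, sparsity=0.05),
    rng=rng)
```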

## Bibliography
## References

[^rodan2012]: Rodan, Ali, and Peter Tiňo.
“Simple deterministically constructed cycle reservoirs with regular jumps.”
Neural computation 24.7 (2012): 1822-1852.
[^rodan2010]: Rodan, Ali, and Peter Tiňo.
“Minimum complexity echo state network.”
IEEE transactions on neural networks 22.1 (2010): 131-144.
```@bibliography
Pages = ["change_layers.md"]
Canonical = false
```
9 changes: 6 additions & 3 deletions docs/src/esn_tutorials/deep_esn.md
@@ -2,7 +2,7 @@

Deep Echo State Network architectures started to gain some traction recently. In this guide, we illustrate how it is possible to use ReservoirComputing.jl to build a deep ESN.

The network implemented in this library is taken from [^1]. It works by stacking reservoirs on top of each other, feeding the output from one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation, refer to the paper linked above.
The network implemented in this library is taken from [Gallicchio2017](@cite). It works by stacking reservoirs on top of each other, feeding the output from one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation, refer to the paper linked above.
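
To make the stacking idea concrete, here is a rough plain-Julia sketch of the state computation, not the library's API: the matrices, sizes, and the direct layer-to-layer feed below are simplifying assumptions made only for illustration.

```julia
using Random

# Conceptual deep ESN state update: each reservoir is driven by the state of the
# previous layer, and the network state is the concatenation of all layer states.
# Plain-Julia illustration only; ReservoirComputing.jl handles this internally.
function deep_states(input_series, W_in, Ws)
    layer_states = [zeros(size(W, 1)) for W in Ws]
    collected = Vector{Vector{Float64}}()
    for u in eachcol(input_series)
        signal = W_in * u                        # external input drives layer 1
        for (i, W) in enumerate(Ws)
            layer_states[i] = tanh.(W * layer_states[i] .+ signal)
            signal = layer_states[i]             # layer i feeds layer i + 1
        end
        push!(collected, vcat(layer_states...))  # merge all inner states
    end
    return reduce(hcat, collected)
end

rng = MersenneTwister(0)
series = rand(rng, 3, 50)                        # toy 3-dimensional input series
W_in = 0.1 .* randn(rng, 20, 3)
Ws = [0.1 .* randn(rng, 20, 20) for _ in 1:3]    # three stacked reservoirs
states = deep_states(series, W_in, Ws)           # 60 × 50 merged state matrix
```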

## Lorenz Example

@@ -88,6 +88,9 @@ plot(p1, p2, p3; plot_title="Lorenz System Coordinates",
legendfontsize=12, titlefontsize=20)
```

## Documentation
## References

[^1]: Gallicchio, Claudio, and Alessio Micheli. "_Deep echo state network (deepesn): A brief survey._" arXiv preprint arXiv:1712.04323 (2017).
```@bibliography
Pages = ["deep_esn.md"]
Canonical = false
```
23 changes: 11 additions & 12 deletions docs/src/esn_tutorials/different_drivers.md
@@ -4,7 +4,7 @@ While the original implementation of the Echo State Network implemented the mode

## Multiple Activation Function RNN

Based on the double activation function ESN (DAFESN) proposed in [^1], the Multiple Activation Function ESN expands the idea and allows a custom number of activation functions to be used in the reservoir dynamics. This can be thought of as a linear combination of multiple activation functions with corresponding parameters.
Based on the double activation function ESN (DAFESN) proposed in [Lun2015](@cite), the Multiple Activation Function ESN expands the idea and allows a custom number of activation functions to be used in the reservoir dynamics. This can be thought of as a linear combination of multiple activation functions with corresponding parameters.

```math
\mathbf{x}(t+1) = (1-\alpha)\mathbf{x}(t) + \lambda_1 f_1(\mathbf{W}\mathbf{x}(t)+\mathbf{W}_{in}\mathbf{u}(t)) + \dots + \lambda_D f_D(\mathbf{W}\mathbf{x}(t)+\mathbf{W}_{in}\mathbf{u}(t))
@@ -14,7 +14,7 @@ where ``D`` is the number of activation functions and respective parameters chos

The method to call to use the multiple activation function ESN is `MRNN(activation_function, leaky_coefficient, scaling_factor)`. The arguments can be used as both `args` and `kwargs`. `activation_function` and `scaling_factor` have to be vectors (or tuples) containing the chosen activation functions and respective scaling factors (``f_1,...,f_D`` and ``\lambda_1,...,\lambda_D`` following the nomenclature introduced above). The `leaky_coefficient` represents ``\alpha`` and it is a single value.
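
As a sketch of what this looks like in code (not a canonical snippet: the activation choices, scaling factors, toy data, and the use of the `reservoir_driver` keyword are assumptions for illustration, with the keyword name taken from the `GRU()` description later on this page):

```julia
using ReservoirComputing

# Assumed usage: f_1 = tanh with λ_1 = 0.8, f_2 = identity with λ_2 = 0.1,
# and leaky coefficient α = 0.9, following the argument order
# MRNN(activation_function, leaky_coefficient, scaling_factor).
mrnn_driver = MRNN([tanh, identity], 0.9, [0.8, 0.1])

toy_data = rand(Float64, 1, 300)   # toy one-dimensional series
esn = ESN(toy_data, 1, 100; reservoir_driver=mrnn_driver)
```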

Starting with the example, the data used is based on the following function based on the DAFESN paper [^1].
Starting with the example, the data used is generated from the following function, taken from the DAFESN paper [Lun2015](@cite).

```@example mrnn
u(t) = sin(t) + sin(0.51 * t) + sin(0.22 * t) + sin(0.1002 * t) + sin(0.05343 * t)
@@ -87,7 +87,7 @@ In this example, it is also possible to observe the input of parameters to the m

## Gated Recurrent Unit

Gated Recurrent Units (GRUs) [^2] have been proposed in more recent years with the intent of limiting notable problems of RNNs, like the vanishing gradient. This change in the underlying equations can be easily transported into the Reservoir Computing paradigm, by switching the RNN equations in the reservoir with the GRU equations. This approach has been explored in [^3] and [^4]. Different variations of GRU have been proposed [^5][^6]; this section is subdivided into different sections that go into detail about the governing equations and the implementation of them into ReservoirComputing.jl. Like before, to access the GRU reservoir driver, it suffices to change the `reservoir_diver` keyword argument for `ESN` with `GRU()`. All the variations that will be presented can be used in this package by leveraging the keyword argument `variant` in the method `GRU()` and specifying the chosen variant: `FullyGated()` or `Minimal()`. Other variations are possible by modifying the inner layers and reservoirs. The default is set to the standard version `FullyGated()`. The first section will go into more detail about the default of the `GRU()` method, and the following ones will refer to it to minimize repetitions. This example was run on Julia v1.7.2.
Gated Recurrent Units (GRUs) [Cho2014](@cite) have been proposed in more recent years with the intent of limiting notable problems of RNNs, like the vanishing gradient. This change in the underlying equations can be easily transported into the Reservoir Computing paradigm by switching the RNN equations in the reservoir with the GRU equations. This approach has been explored in [Wang2020](@cite) and [Sarli2020](@cite). Different variations of GRU have been proposed [Dey2017](@cite); the subsections below go into detail about the governing equations and their implementation in ReservoirComputing.jl. Like before, to access the GRU reservoir driver it suffices to set the `reservoir_driver` keyword argument of `ESN` to `GRU()`. All the variations presented here can be used in this package by leveraging the keyword argument `variant` in the method `GRU()` and specifying the chosen variant: `FullyGated()` or `Minimal()`. Other variations are possible by modifying the inner layers and reservoirs. The default is set to the standard version `FullyGated()`. The first subsection goes into more detail about the defaults of the `GRU()` method, and the following ones refer to it to minimize repetitions.

### Standard GRU

@@ -104,7 +104,7 @@ Going over the `GRU` keyword argument, it will be explained how to feed the desi

- `activation_function` is a vector with default values `[NNlib.sigmoid, NNlib.sigmoid, tanh]`. This argument controls the activation functions of the GRU, going from top to bottom. Changing the first element corresponds to changing the activation function for ``\mathbf{r}(t)`` and so on.
- `inner_layer` is a vector with default values `fill(DenseLayer(), 2)`. This keyword argument controls the ``\mathbf{W}_{\text{in}}``s going from top to bottom like before.
- `reservoir` is a vector with default value `fill(RandSparseReservoir(), 2)`. In a similar fashion to `inner_layer`, this keyword argument controls the reservoir matrix construction in a top to bottom order.
- `reservoir` is a vector with default value `fill(RandSparseReservoir(), 2)`. Similarly to `inner_layer`, this keyword argument controls the reservoir matrix construction in a top to bottom order.
- `bias` is again a vector with default value `fill(DenseLayer(), 2)`. It is meant to control the ``\mathbf{b}``s, going as usual from top to bottom.
- `variant` controls the GRU variant. The default value is set to `FullyGated()`.
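
Putting the defaults listed above together, a minimal construction could look like the sketch below; this is an assumed example rather than a canonical one, and only the keyword arguments you want to change from their defaults need to be passed.

```julia
using ReservoirComputing, NNlib

# Assumed sketch: the documented default activations are made explicit and the
# variant is pinned to FullyGated(); the other keyword arguments listed above
# (inner_layer, reservoir, bias) keep their defaults.
gru_driver = GRU(; activation_function=[NNlib.sigmoid, NNlib.sigmoid, tanh],
    variant=FullyGated())

# The driver is then passed to the ESN, e.g. (assumed call):
# esn = ESN(train_data, in_size, res_size; reservoir_driver=gru_driver)
```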

@@ -161,7 +161,7 @@ This variation can be obtained by setting `variation=Minimal()`. The `inner_laye

To showcase the use of the `GRU()` method, this section will only illustrate the standard `FullyGated()` version. The full script for this example with the data can be found [here](https://github.com/MartinuzziFrancesco/reservoir-computing-examples/tree/main/change_drivers/gru).

The data used for this example is the Santa Fe laser dataset [^7] retrieved from [here](https://web.archive.org/web/20160427182805/http://www-psych.stanford.edu/%7Eandreas/Time-Series/SantaFe.html). The data is split to account for a next step prediction.
The data used for this example is the Santa Fe laser dataset [Hbner1989](@cite) retrieved from [here](https://web.archive.org/web/20160427182805/http://www-psych.stanford.edu/%7Eandreas/Time-Series/SantaFe.html). The data is split to account for a next step prediction.

```@example gru
using DelimitedFiles
@@ -241,10 +241,9 @@ println(msd(testing_target, output))
println(msd(testing_target, output_rnn))
```

[^1]: Lun, Shu-Xian, et al. "_A novel model of leaky integrator echo state network for time-series prediction._" Neurocomputing 159 (2015): 58-66.
[^2]: Cho, Kyunghyun, et al. “_Learning phrase representations using RNN encoder-decoder for statistical machine translation._” arXiv preprint arXiv:1406.1078 (2014).
[^3]: Wang, Xinjie, Yaochu Jin, and Kuangrong Hao. "_A Gated Recurrent Unit based Echo State Network._" 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.
[^4]: Di Sarli, Daniele, Claudio Gallicchio, and Alessio Micheli. "_Gated Echo State Networks: a preliminary study._" 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2020.
[^5]: Dey, Rahul, and Fathi M. Salem. "_Gate-variants of gated recurrent unit (GRU) neural networks._" 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS). IEEE, 2017.
[^6]: Zhou, Guo-Bing, et al. "_Minimal gated unit for recurrent neural networks._" International Journal of Automation and Computing 13.3 (2016): 226-234.
[^7]: Hübner, Uwe, Nimmi B. Abraham, and Carlos O. Weiss. "_Dimensions and entropies of chaotic intensity pulsations in a single-mode far-infrared NH 3 laser._" Physical Review A 40.11 (1989): 6354.
## References

```@bibliography
Pages = ["different_drivers.md"]
Canonical = false
```
9 changes: 6 additions & 3 deletions docs/src/esn_tutorials/hybrid.md
@@ -1,6 +1,6 @@
# Hybrid Echo State Networks

Following the idea of giving physical information to machine learning models, the hybrid echo state networks [^1] try to achieve this results by feeding model data into the ESN. In this example, it is explained how to create and leverage such models in ReservoirComputing.jl.
Following the idea of giving physical information to machine learning models, the hybrid echo state networks [Pathak2018](@cite) try to achieve this result by feeding model data into the ESN. This example explains how to create and leverage such models in ReservoirComputing.jl.
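
Conceptually, "feeding model data into the ESN" means augmenting every measured input with the prediction of an approximate, knowledge-based model before it reaches the reservoir and the readout. The plain-Julia sketch below only illustrates that idea and is not the library's API; `approx_model` and the toy data are made up for the example.

```julia
# Conceptual illustration only: a crude "knowledge-based" prediction is
# concatenated with the measured input, so the reservoir sees both.
# ReservoirComputing.jl automates this augmentation internally.
approx_model(u) = 0.9 .* u                     # stand-in for an imperfect physical model

function augment_inputs(data)
    augmented = zeros(2 * size(data, 1), size(data, 2))
    for (t, u) in enumerate(eachcol(data))
        augmented[:, t] = vcat(u, approx_model(u))
    end
    return augmented
end

measurements = rand(3, 100)                    # toy 3-dimensional measured series
hybrid_inputs = augment_inputs(measurements)   # 6 × 100 augmented input series
```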

## Generating the data

@@ -94,6 +94,9 @@ plot(p1, p2, p3; plot_title="Lorenz System Coordinates",
legendfontsize=12, titlefontsize=20)
```

## Bibliography
## References

[^1]: Pathak, Jaideep, et al. "_Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model._" Chaos: An Interdisciplinary Journal of Nonlinear Science 28.4 (2018): 041101.
```@bibliography
Pages = ["hybrid.md"]
Canonical = false
```