@@ -11,7 +11,7 @@ abstract type AbstractReservoirTrainableLayer <: AbstractLuxLayer end
 Linear readout layer with optional bias and elementwise activation. Intended as
 the final, trainable mapping from collected features (e.g., reservoir state) to
 outputs. When `include_collect=true`, training will collect features immediately
-before this layer (logically inserting a [`Collect()`](@ref) right before it).
+before this layer (logically inserting a [`Collect`](@ref) right before it).
 
 ## Equation
 
@@ -29,7 +29,7 @@ before this layer (logically inserting a [`Collect()`](@ref) right before it).
 
 - `use_bias`: Include an additive bias vector `b`. Default: `false`.
 - `include_collect`: If `true` (default), training collects features immediately
-  before this layer (as if a [`Collect()`](@ref) were inserted right before it).
+  before this layer (as if a [`Collect`](@ref) were inserted right before it).
 
 ## Parameters
 
@@ -42,9 +42,11 @@ before this layer (logically inserting a [`Collect()`](@ref) right before it).
 
 ## Notes
 
-- In ESN workflows, readout weights are typically set via ridge regression in
-  `train!(...)`. Therefore, how `Readout` gets initialized is of no consequence.
-- If you set `include_collect=false`, make sure a [`Collect()`](@ref) appears earlier in the chain.
+- In ESN workflows, readout weights are typically replaced via ridge regression in
+  [`train!`](@ref). Therefore, how `LinearReadout` gets initialized is of no consequence.
+  Additionally, the declared dimensions are not taken into account, as [`train!`](@ref)
+  will replace the weights.
+- If you set `include_collect=false`, make sure a [`Collect`](@ref) appears earlier in the chain.
   Otherwise training may operate on the post-readout signal,
   which is usually unintended.
 """
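
To show how the pieces above are meant to combine, here is a minimal sketch. Only `ReservoirChain`, `LinearReadout`, `Collect`, `include_collect`, and `train!` come from the docstrings in this diff; the `in => out` constructor form, the `some_reservoir_layer` placeholder, and the commented `train!` call shape are assumptions for illustration, not the documented API.

```julia
using Lux, Random, ReservoirComputing  # assumed package context

# Placeholder for whatever recurrent/reservoir layer the chain uses; not a real export.
reservoir = some_reservoir_layer

# Implicit collection point: features are gathered right before the readout.
rc = ReservoirChain(
    reservoir,
    LinearReadout(100 => 3; include_collect = true),  # assumed Lux-style `in => out` form
)

# Equivalent chain with the collection point written out explicitly.
rc_explicit = ReservoirChain(
    reservoir,
    Collect(),
    LinearReadout(100 => 3; include_collect = false),
)

ps, st = Lux.setup(Random.default_rng(), rc)  # standard Lux initialization

# `train!` re-estimates the readout weights (e.g., via ridge regression), so the
# readout's initial weights and declared dimensions are irrelevant.
# The call shape below is an assumption; see the `train!` docstring for the real signature.
# ps, st = train!(rc, input_data, target_data, ps, st)
```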
@@ -106,9 +108,9 @@
     Collect()
 
 Marker layer that passes data through unchanged but marks a feature
-checkpoint for [`collectstates`](@ref). At each time step, whenever a `Collect()` is
+checkpoint for [`collectstates`](@ref). At each time step, whenever a `Collect` is
 encountered in the chain, the current vector is recorded as part of the feature
-vector used to train the readout. If multiple `Collect()` layers exist, their
+vector used to train the readout. If multiple `Collect` layers exist, their
 vectors are concatenated with `vcat` in order of appearance.
 
 ## Arguments
@@ -137,13 +139,13 @@ vectors are concatenated with `vcat` in order of appearance.
 
 ## Notes
 
-- When used with a single `Collect()` before a [`LinearReadout`](@ref), training uses exactly
+- When used with a single `Collect` before a [`LinearReadout`](@ref), training uses exactly
   the tensor right before the readout (e.g., the reservoir state).
-- With **multiple** `Collect()` layers (e.g., after different submodules), the
+- With **multiple** `Collect` layers (e.g., after different submodules), the
   per-step features are `vcat`-ed in chain order to form one feature vector.
 - If the readout is constructed with `include_collect=true`, an *implicit*
   collection point is assumed immediately before the readout. Use an explicit
-  `Collect()` only when you want to control where/what is collected (or to stack
+  `Collect` only when you want to control where/what is collected (or to stack
   multiple features).
 
 ```julia
@@ -167,9 +169,9 @@ Base.show(io::IO, cl::Collect) = print(io, "Collection point of states")
     collectstates(rc, data, ps, st)
 
 Run the sequence `data` once through the reservoir chain `rc`, advancing the
-model state over time, and collect feature vectors at every [`Collect()`](@ref) layer.
-If more than one [`Collect()`](ref) is encountered in a step, their vectors are
-concatenated with `vcat` in order of appearance. If no [`Collect()`](@ref) is seen
+model state over time, and collect feature vectors at every [`Collect`](@ref) layer.
+If more than one [`Collect`](@ref) is encountered in a step, their vectors are
+concatenated with `vcat` in order of appearance. If no [`Collect`](@ref) is seen
 in a step, the feature defaults to the final vector exiting the chain for
 that time step.
 
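
To make the concatenation rule concrete, here is a tiny sketch with hypothetical per-step vectors, showing the feature a single time step contributes when two collection points are crossed:

```julia
# Hypothetical vectors seen at two Collect points during one time step.
h_reservoir = [0.1, 0.4, -0.2]   # e.g., a 3-dimensional reservoir state
u_input     = [1.0]              # e.g., a 1-dimensional passthrough of the input

# collectstates stacks them in chain order, so this step's feature vector is:
feature = vcat(h_reservoir, u_input)   # 4-element vector
```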
@@ -180,15 +182,15 @@ that time step.
 
 ## Arguments
 
-- `rc`: A [`ReservoirChain`](@ref) (or compatible [`AbstractLuxLayer`](@extref) with `.layers`).
+- `rc`: A [`ReservoirChain`](@ref) (or compatible `AbstractLuxLayer` with `.layers`).
 - `data`: Input sequence of shape `(in_dims, T)`, where columns are time steps.
 - `ps`, `st`: Current parameters and state for `rc`.
 
 ## Returns
 
 - `states`: Reservoir states, i.e. a feature matrix with one column per
   time step. The feature dimension `n_features` equals the vertical concatenation
-  of all vectors captured at [`Collect()`](@ref) layers in that step.
+  of all vectors captured at [`Collect`](@ref) layers in that step.
 - `st`: Updated model states.
 
 """
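
As a usage sketch, assuming `rc` has already been built as a [`ReservoirChain`](@ref) containing at least one collection point, collecting features for a 200-step sequence of 3-dimensional inputs could look like the following; `Lux.setup` is the standard Lux initialization and is the only call here not taken from the docstring above.

```julia
using Lux, Random

rng = Random.default_rng()
ps, st = Lux.setup(rng, rc)        # standard Lux parameter/state initialization

data = rand(Float32, 3, 200)       # shape (in_dims, T): columns are time steps

states, st = collectstates(rc, data, ps, st)

# One feature column per time step; the row count is the total length of the
# vectors captured at the chain's Collect points in a single step.
@assert size(states, 2) == size(data, 2)
```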