
Commit f53a5f4

whitespace
1 parent ddc688f commit f53a5f4

File tree: 1 file changed

src/layers/basic.jl (53 additions, 1 deletion)

@@ -1,16 +1,23 @@
 """
     Chain(layers...)
+
 Chain multiple layers / functions together, so that they are called in sequence
 on a given input.
+
 `Chain` also supports indexing and slicing, e.g. `m[2]` or `m[1:end-1]`.
 `m[1:3](x)` will calculate the output of the first three layers.
+
 # Examples
 ```jldoctest
 julia> m = Chain(x -> x^2, x -> x+1);
+
 julia> m(5) == 26
 true
+
 julia> m = Chain(Dense(10, 5), Dense(5, 2));
+
 julia> x = rand(10);
+
 julia> m(x) == m[2](m[1](x))
 true
 ```
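
Aside (not part of this commit): the slicing behaviour described in this docstring can be checked with a small sketch like the one below, assuming a Flux version where `Chain` supports indexing with ranges as stated above.

```julia
using Flux

# A three-stage chain; m[1:2] is itself a Chain holding the first two layers.
m = Chain(Dense(10, 5, relu), Dense(5, 3), softmax)
x = rand(Float32, 10)

m(x) == m[3](m[1:2](x))  # true: the forward pass can be split at any index
```
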
@@ -63,30 +70,40 @@ extraChain(::Tuple{}, x) = ()
 """
     Dense(in, out, σ=identity; bias=true, init=glorot_uniform)
     Dense(W::AbstractMatrix, [bias, σ])
+
 Create a traditional `Dense` layer, whose forward pass is given by:
+
     y = σ.(W * x .+ bias)
+
 The input `x` should be a vector of length `in`, or batch of vectors represented
 as an `in × N` matrix, or any array with `size(x,1) == in`.
 The output `y` will be a vector of length `out`, or a batch with
 `size(y) == (out, size(x)[2:end]...)`
+
 Keyword `bias=false` will switch off trainable bias for the layer.
 The initialisation of the weight matrix is `W = init(out, in)`, calling the function
 given to keyword `init`, with default [`glorot_uniform`](@doc Flux.glorot_uniform).
 The weight matrix and/or the bias vector (of length `out`) may also be provided explicitly.
+
 # Examples
 ```jldoctest
 julia> d = Dense(5, 2)
 Dense(5, 2)
+
 julia> d(rand(Float32, 5, 64)) |> size
 (2, 64)
+
 julia> d(rand(Float32, 5, 1, 1, 64)) |> size # treated as three batch dimensions
 (2, 1, 1, 64)
+
 julia> d1 = Dense(ones(2, 5), false, tanh) # using provided weight matrix
 Dense(5, 2, tanh; bias=false)
+
 julia> d1(ones(5))
 2-element Array{Float64,1}:
  0.9999092042625951
  0.9999092042625951
+
 julia> Flux.params(d1) # no trainable bias
 Params([[1.0 1.0 … 1.0 1.0; 1.0 1.0 … 1.0 1.0]])
 ```
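
Aside (not part of this commit): the documented forward pass can be verified directly by building a `Dense` layer from an explicit weight matrix and bias vector, using the `Dense(W::AbstractMatrix, [bias, σ])` form described above. A minimal sketch:

```julia
using Flux

W = randn(Float32, 2, 5)   # out × in
b = zeros(Float32, 2)
d = Dense(W, b, tanh)      # explicit weight, bias and activation

x = rand(Float32, 5)
d(x) ≈ tanh.(W * x .+ b)   # true: matches y = σ.(W * x .+ bias)
```
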
@@ -142,10 +159,14 @@ end
 """
     Diagonal(α, β)
     Diagonal(size::Integer...)
+
 Create an element-wise linear layer, which performs
+
     y = α .* x .+ β
+
 The learnable arrays are initialised `α = ones(Float32, size)` and
 `β = zeros(Float32, size)`.
+
 Used by [`LayerNorm`](@ref).
 """
 struct Diagonal{T}
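
Aside (not part of this commit): the `Diagonal` docstring has no example, so here is a minimal sketch. It assumes only what the docstring states, namely that `Diagonal(size...)` initialises `α` to ones and `β` to zeros, so the layer starts out as the identity map.

```julia
using Flux

d = Flux.Diagonal(5)   # α = ones(Float32, 5), β = zeros(Float32, 5)
x = rand(Float32, 5)

d(x) ≈ x               # true at initialisation, since y = α .* x .+ β
```
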
@@ -179,9 +200,11 @@ end
 
 """
     Maxout(over)
+
 The [Maxout](https://arxiv.org/abs/1302.4389) layer has a number of
 internal layers which all receive the same input. It returns the elementwise
 maximum of the internal layers' outputs.
+
 Maxout over linear dense layers satisfies the universal approximation theorem.
 """
 struct Maxout{FS<:Tuple}
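
Aside (not part of this commit): a sketch of the element-wise maximum described above. It assumes the internal layers are stored in an `over` field holding a tuple, as the `Maxout(over)` signature and the struct definition suggest.

```julia
using Flux

m = Maxout((Dense(3, 2), Dense(3, 2), Dense(3, 2)))    # `over` given as a tuple
x = rand(Float32, 3)

# The output is the element-wise maximum of the internal layers' outputs.
m(x) ≈ max.(m.over[1](x), m.over[2](x), m.over[3](x))  # true
```
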
@@ -190,15 +213,20 @@ end
 
 """
     Maxout(f, n_alts)
+
 Construct a Maxout layer over `n_alts` instances of the layer given by `f`.
 The function takes no arguments and should return some callable layer.
 Conventionally, this is a linear dense layer.
+
 # Examples
+
 This constructs a `Maxout` layer over 4 internal dense linear layers, each
 identical in structure (784 inputs, 128 outputs):
 ```jldoctest
 julia> insize = 784;
+
 julia> outsize = 128;
+
 julia> Maxout(()->Dense(insize, outsize), 4);
 ```
 """
@@ -215,19 +243,25 @@ end
 
 """
     SkipConnection(layer, connection)
+
 Create a skip connection which consists of a layer or `Chain` of consecutive
 layers and a shortcut connection linking the block's input to the output
 through a user-supplied 2-argument callable. The first argument to the callable
 will be propagated through the given `layer` while the second is the unchanged,
 "skipped" input.
+
 The simplest "ResNet"-type connection is just `SkipConnection(layer, +)`.
 Here is a more complicated example:
 ```jldoctest
 julia> m = Conv((3,3), 4 => 7, pad=(1,1));
+
 julia> x = ones(Float32, 5, 5, 4, 10);
+
 julia> size(m(x)) == (5, 5, 7, 10)
 true
+
 julia> sm = SkipConnection(m, (mx, x) -> cat(mx, x, dims=3));
+
 julia> size(sm(x)) == (5, 5, 11, 10)
 true
 ```
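
Aside (not part of this commit): the simplest "ResNet"-type connection mentioned above, `SkipConnection(layer, +)`, just adds the block's output to its unchanged input. A minimal sketch:

```julia
using Flux

block = Chain(Dense(8, 8, relu), Dense(8, 8))
sc = SkipConnection(block, +)

x = rand(Float32, 8, 4)
sc(x) ≈ block(x) .+ x  # true: connection(layer(x), x) with connection = +
```
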
@@ -250,32 +284,45 @@ end
 """
     Bilinear(in1, in2, out, σ=identity; bias=true, init=glorot_uniform)
     Bilinear(W::AbstractArray, [bias, σ])
+
 Creates a Bilinear layer, which operates on two inputs at the same time.
 Its output, given vectors `x` & `y`, is another vector `z` with,
 for all `i ∈ 1:out`:
+
     z[i] = σ(x' * W[i,:,:] * y + bias[i])
+
 If `x` and `y` are matrices, then each column of the output `z = B(x, y)` is of this form,
 with `B` a Bilinear layer.
+
 If `y` is not given, it is taken to be equal to `x`, i.e. `B(x) == B(x, x)`
+
 The two inputs may also be provided as a tuple, `B((x, y)) == B(x, y)`,
 which is accepted as the input to a `Chain`.
+
 The initialisation works as for [`Dense`](@ref) layer, with `W = init(out, in1, in2)`.
 By default the bias vector is `zeros(Float32, out)`, option `bias=false` will switch off
 trainable bias. Either of these may be provided explicitly.
+
 # Examples
 ```jldoctest
 julia> x, y = randn(Float32, 5, 32), randn(Float32, 5, 32);
+
 julia> B = Flux.Bilinear(5, 5, 7);
+
 julia> B(x) |> size # interactions based on one input
 (7, 32)
+
 julia> B(x,y) == B((x,y)) # two inputs, may be given as a tuple
 true
+
 julia> sc = SkipConnection(
          Chain(Dense(5, 20, tanh), Dense(20, 9, tanh)),
          Flux.Bilinear(9, 5, 3, bias=false),
        ); # used as the recombinator, with skip as the second input
+
 julia> sc(x) |> size
 (3, 32)
+
 julia> Flux.Bilinear(rand(4,8,16), false, tanh) # first dim of weight is the output
 Bilinear(8, 16, 4, tanh, bias=false)
 ```
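
Aside (not part of this commit): the per-element formula above can be checked by hand, using the `Bilinear(W::AbstractArray, [bias, σ])` form with an explicit weight array and bias vector. A minimal sketch:

```julia
using Flux

W = randn(Float32, 3, 4, 5)   # size (out, in1, in2); the first dim is the output
b = zeros(Float32, 3)
B = Flux.Bilinear(W, b, tanh)

x, y = randn(Float32, 4), randn(Float32, 5)
z = B(x, y)

# Recompute z[i] = σ(x' * W[i,:,:] * y + bias[i]) directly:
z ≈ [tanh(x' * W[i, :, :] * y + b[i]) for i in 1:3]  # true
```
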
@@ -329,17 +376,22 @@ end
 
 """
     Parallel(connection, layers...)
+
 Create a 'Parallel' layer that passes an input array to each path in
 `layers`, reducing the output with `connection`.
+
 Called with one input `x`, this is equivalent to `reduce(connection, [l(x) for l in layers])`.
 If called with multiple inputs, they are `zip`ped with the layers, thus `Parallel(+, f, g)(x, y) = f(x) + g(y)`.
+
 # Examples
 ```jldoctest
 julia> model = Chain(Dense(3, 5),
                      Parallel(vcat, Dense(5, 4), Chain(Dense(5, 7), Dense(7, 4))),
                      Dense(8, 17));
+
 julia> size(model(rand(3)))
 (17,)
+
 julia> model = Parallel(+, Dense(10, 2), Dense(5, 2))
 Parallel(+, Dense(10, 2), Dense(5, 2))
 julia> size(model(rand(10), rand(5)))
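
Aside (not part of this commit): the multi-input behaviour stated above, `Parallel(+, f, g)(x, y) = f(x) + g(y)`, can be checked with a minimal sketch:

```julia
using Flux

f, g = Dense(10, 2), Dense(5, 2)
p = Parallel(+, f, g)

x, y = rand(Float32, 10), rand(Float32, 5)
p(x, y) ≈ f(x) .+ g(y)  # true: inputs are zipped with the layers, then reduced with +
```
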
@@ -366,4 +418,4 @@ function Base.show(io::IO, m::Parallel)
   print(io, "Parallel(", m.connection, ", ")
   join(io, m.layers, ", ")
   print(io, ")")
-end
+end
