
Commit d4e306c

Fix slight typos in LayerNorm docs
1 parent e4f8678 commit d4e306c

1 file changed: +6 −7 lines changed


src/layers/normalise.jl

Lines changed: 6 additions & 7 deletions
@@ -143,17 +143,16 @@ testmode!(m::AlphaDropout, mode=true) =
 
 A [normalisation layer](https://arxiv.org/abs/1607.06450) designed to be
 used with recurrent hidden states.
-The argument `sz` should be an integer or a tuple of integers.
+The argument `size` should be an integer or a tuple of integers.
 In the forward pass, the layer normalises the mean and standard
-deviation of the input, the applied the elementwise activation `λ`.
-The input is normalised along the first `length(sz)` dimensions
-for tuple `sz`, along the first dimension for integer `sz`.
-The input is expected to have first dimensions' size equal to `sz`.
+deviation of the input, then applies the elementwise activation `λ`.
+The input is normalised along the first `length(size)` dimensions
+for tuple `size`, and along the first dimension for integer `size`.
+The input is expected to have first dimensions' size equal to `size`.
 
-If `affine=true` also applies a learnable shift and rescaling
+If `affine=true`, it also applies a learnable shift and rescaling
 using the [`Scale`](@ref) layer.
 
-
 See also [`BatchNorm`](@ref), [`InstanceNorm`](@ref), [`GroupNorm`](@ref), and [`normalise`](@ref).
 """
 struct LayerNorm{F,D,T,N}
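
For reference, a minimal usage sketch of the behaviour the corrected docstring describes. This is not part of the commit; it assumes the `LayerNorm(size..., λ=identity; affine=true)` constructor as documented above:

    using Flux

    # LayerNorm(size...) normalises over the first length(size) dimensions
    # of the input, then applies the elementwise activation λ (identity by
    # default). With affine=true (the default), a learnable Scale layer
    # applies a shift and rescaling after normalisation.
    ln = LayerNorm(5, 3)           # size = (5, 3)

    x = rand(Float32, 5, 3, 10)    # first dimensions' size equals (5, 3)
    y = ln(x)                      # before training, each 5×3 slice has
                                   # mean ≈ 0 and std ≈ 1

    size(y) == size(x)             # true: the output shape is unchanged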
