@@ -31,7 +31,7 @@ The ascii name `sigmoid` is also exported.
 
 See also [`sigmoid_fast`](@ref).
 
-```
+```julia-repl
 julia> using UnicodePlots
 
 julia> lineplot(sigmoid, -5, 5, height=7)
@@ -63,7 +63,7 @@ const sigmoid = σ
 
 Piecewise linear approximation of [`sigmoid`](@ref).
 
-```
+```julia-repl
 julia> lineplot(hardsigmoid, -5, 5, height=7)
  ┌────────────────────────────────────────┐
  1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠉⠉⠉⠉⠉⠉⠉⠉│ hardσ(x)
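Since the docstring only says "piecewise linear approximation", a one-line reference form may help readers of this hunk; the `(x + 3)/6` constants below follow the common hard-sigmoid convention (as in MobileNetV3) and are an assumption here, not taken from this diff:

```julia
# Common hard-sigmoid convention: 0 below -3, 1 above 3, linear in between.
# The exact constants are an assumption, not confirmed by this diff.
hardsigmoid_ref(x) = clamp((x + 3) / 6, zero(x), one(x))

hardsigmoid_ref(-5.0)  # 0.0
hardsigmoid_ref(0.0)   # 0.5
hardsigmoid_ref(5.0)   # 1.0
```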
@@ -102,7 +102,7 @@ const hardsigmoid = hardσ
 
 Return `log(σ(x))` which is computed in a numerically stable way.
 
-```
+```julia-repl
 julia> lineplot(logsigmoid, -5, 5, height=7)
  ┌────────────────────────────────────────┐
  0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡧⠤⠔⠒⠒⠒⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│ logσ(x)
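The "numerically stable" claim is the whole point of having `logsigmoid` as a separate function; a minimal sketch of what it buys, assuming NNlib is loaded and exports `sigmoid` and `logsigmoid` as the surrounding docstrings indicate:

```julia
using NNlib

# Naive composition underflows for very negative inputs...
log(sigmoid(-800.0))   # -Inf, because sigmoid(-800.0) rounds to 0.0
# ...while the fused, numerically stable version stays finite:
logsigmoid(-800.0)     # -800.0
```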
@@ -128,7 +128,7 @@ Segment-wise linear approximation of `tanh`, much cheaper to compute.
 See ["Large Scale Machine Learning"](https://ronan.collobert.com/pub/matos/2004_phdthesis_lip6.pdf).
 
 See also [`tanh_fast`](@ref).
-```
+```julia-repl
 julia> lineplot(hardtanh, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⠔⠋⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│ hardtanh(x)
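The `clamp` definition quoted in the next hunk header makes the saturation behaviour easy to check by hand; a small sketch using that same definition:

```julia
# hardtanh clamps its input to [-1, 1] (definition copied from the hunk header below)
hardtanh(x) = clamp(x, oftype(x, -1), oftype(x, 1))

hardtanh(0.3)   # 0.3, identity inside the linear region
hardtanh(5.0)   # 1.0, saturates above
hardtanh(-7.0)  # -1.0, saturates below
```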
@@ -164,7 +164,7 @@ hardtanh(x) = clamp(x, oftype(x, -1), oftype(x, 1)) # clamp(x, -1, 1) is type-s
 [Rectified Linear Unit](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))
 activation function.
 
-```
+```julia-repl
 julia> lineplot(relu, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│ relu(x)
@@ -188,7 +188,7 @@ Leaky [Rectified Linear Unit](https://en.wikipedia.org/wiki/Rectifier_(neural_ne
 activation function.
 You can also specify the coefficient explicitly, e.g. `leakyrelu(x, 0.01)`.
 
-```julia
+```julia-repl
 julia> lineplot(x -> leakyrelu(x, 0.5), -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│ #42(x)
@@ -220,7 +220,7 @@ const leakyrelu_a = 0.01 # also used in gradient below
 activation function capped at 6.
 See ["Convolutional Deep Belief Networks"](https://www.cs.toronto.edu/~kriz/conv-cifar10-aug2010.pdf) from CIFAR-10.
 
-```
+```julia-repl
 julia> lineplot(relu6, -10, 10, height=7)
  ┌────────────────────────────────────────┐
  6 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠎⠉⠉⠉⠉⠉⠉⠉⠉│ relu6(x)
@@ -245,7 +245,7 @@ Randomized Leaky Rectified Linear Unit activation function.
 See ["Empirical Evaluation of Rectified Activations"](https://arxiv.org/abs/1505.00853)
 You can also specify the bound explicitly, e.g. `rrelu(x, 0.0, 1.0)`.
 
-```julia
+```julia-repl
 julia> lineplot(rrelu, -20, 10, height=7)
  ┌────────────────────────────────────────┐
  10 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│ rrelu(x)
@@ -275,7 +275,7 @@ Exponential Linear Unit activation function.
 See ["Fast and Accurate Deep Network Learning by Exponential Linear Units"](https://arxiv.org/abs/1511.07289).
 You can also specify the coefficient explicitly, e.g. `elu(x, 1)`.
 
-```
+```julia-repl
 julia> lineplot(elu, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│ elu(x)
@@ -305,7 +305,7 @@ deriv_elu(Ω, α=1) = ifelse(Ω ≥ 0, one(Ω), Ω + oftype(Ω, α))
 
 Activation function from ["Gaussian Error Linear Units"](https://arxiv.org/abs/1606.08415).
 
-```
+```julia-repl
 julia> lineplot(gelu, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊│ gelu(x)
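The `deriv_elu(Ω, α=1) = ifelse(Ω ≥ 0, one(Ω), Ω + oftype(Ω, α))` context line in the hunk above deserves a one-line justification, since it differentiates in terms of the *output*: for `x < 0`, `Ω = α*(exp(x) - 1)`, so `dΩ/dx = α*exp(x) = Ω + α`. A quick numeric check of that identity:

```julia
# Verify dΩ/dx = Ω + α on the negative branch of elu (α = 1 here)
α, x = 1.0, -1.3
Ω = α * (exp(x) - 1)   # elu output for x < 0
α * exp(x) ≈ Ω + α     # true: the derivative can be recovered from Ω alone
```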
@@ -363,7 +363,7 @@
 Self-gated activation function.
 See ["Swish: a Self-Gated Activation Function"](https://arxiv.org/abs/1710.05941).
 
-```
+```julia-repl
 julia> lineplot(swish, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤│ swish(x)
@@ -386,7 +386,7 @@ julia> lineplot(swish, -2, 2, height=7)
 Hard-Swish activation function.
 See ["Searching for MobileNetV3"](https://arxiv.org/abs/1905.02244).
 
-```
+```julia-repl
 julia> lineplot(hardswish, -2, 5, height = 7)
  ┌────────────────────────────────────────┐
  5 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠒⠉│ hardswish(x)
@@ -430,7 +430,7 @@ deriv_hardswish(x) = ifelse(x < -3, oftf(x,0), ifelse(x > 3, oftf(x,1), x/3 + of
 Activation function from
 ["LiSHT: Non-Parametric Linearly Scaled Hyperbolic Tangent ..."](https://arxiv.org/abs/1901.05894)
 
-```
+```julia-repl
 julia> lineplot(lisht, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔│ lisht(x)
@@ -469,7 +469,7 @@ lisht(x) = x * tanh_fast(x)
 Scaled exponential linear units.
 See ["Self-Normalizing Neural Networks"](https://arxiv.org/abs/1706.02515).
 
-```
+```julia-repl
 julia> lineplot(selu, -3, 2, height=7)
  ┌────────────────────────────────────────┐
  3 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ selu(x)
@@ -507,7 +507,7 @@
 
 Activation function from ["Continuously Differentiable Exponential Linear Units"](https://arxiv.org/abs/1704.07483).
 
-```
+```julia-repl
 julia> lineplot(celu, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│ celu(x)
@@ -535,7 +535,7 @@ deriv_celu(Ω, α=1) = ifelse(Ω > 0, oftf(Ω, 1), Ω / oftf(Ω, α) + 1)
 Threshold gated rectified linear activation function.
 See ["Zero-bias autoencoders and the benefits of co-adapting features"](https://arxiv.org/abs/1402.3337)
 
-```
+```julia-repl
 julia> lineplot(trelu, -2, 4, height=7)
  ┌────────────────────────────────────────┐
  4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│ trelu(x)
@@ -559,7 +559,7 @@ const thresholdrelu = trelu
 
 See ["Quadratic Polynomials Learn Better Image Features"](http://www.iro.umontreal.ca/~lisa/publications2/index.php/attachments/single/205) (2009).
 
-```
+```julia-repl
 julia> lineplot(softsign, -5, 5, height=7)
  ┌────────────────────────────────────────┐
  1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⣀⣀⠤⠤⠤⠤⠤│ softsign(x)
@@ -602,7 +602,7 @@ deriv_softsign(x) = 1 / (1 + abs(x))^2
 
 See ["Deep Sparse Rectifier Neural Networks"](http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf), JMLR 2011.
 
-```
+```julia-repl
 julia> lineplot(softplus, -3, 3, height=7)
  ┌────────────────────────────────────────┐
  4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ softplus(x)
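The next hunk's context line quotes the implementation, `softplus(x) = log1p(exp(-abs(x))) + relu(x)`; the identity `log(1 + e^x) = max(x, 0) + log1p(e^{-|x|})` is why that rewrite avoids overflow. A quick check using only Base functions:

```julia
# Numerically stable softplus rewrite vs the naive formula
naive(x)  = log(1 + exp(x))
stable(x) = log1p(exp(-abs(x))) + max(x, zero(x))   # max(x, 0) == relu(x)

naive(30.0) ≈ stable(30.0)   # true while the naive form is still finite
naive(800.0)                 # Inf: exp(800.0) overflows
stable(800.0)                # 800.0: no overflow
```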
@@ -640,7 +640,7 @@ softplus(x) = log1p(exp(-abs(x))) + relu(x)
 
 Return `log(cosh(x))` which is computed in a numerically stable way.
 
-```
+```julia-repl
 julia> lineplot(logcosh, -5, 5, height=7)
  ┌────────────────────────────────────────┐
  5 │⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ logcosh(x)
@@ -664,7 +664,7 @@ const log2 = log(2)
 
 Activation function from ["Mish: A Self Regularized Non-Monotonic Neural Activation Function"](https://arxiv.org/abs/1908.08681).
 
-```
+```julia-repl
 julia> lineplot(mish, -5, 5, height=7)
  ┌────────────────────────────────────────┐
  5 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋│ mish(x)
@@ -686,7 +686,7 @@ mish(x) = x * tanh(softplus(x))
 
 See ["Tanhshrink Activation Function"](https://www.gabormelli.com/RKB/Tanhshrink_Activation_Function).
 
-```
+```julia-repl
 julia> lineplot(tanhshrink, -3, 3, height=7)
  ┌────────────────────────────────────────┐
  3 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ tanhshrink(x)
@@ -712,7 +712,7 @@ tanhshrink(x) = x - tanh_fast(x)
 
 See ["Softshrink Activation Function"](https://www.gabormelli.com/RKB/Softshrink_Activation_Function).
 
-```
+```julia-repl
 julia> lineplot(softshrink, -2, 2, height=7)
  ┌────────────────────────────────────────┐
  2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀│ softshrink(x)
@@ -770,7 +770,7 @@ For any other number types, it just calls `tanh`.
 
 See also [`sigmoid_fast`](@ref).
 
-```
+```julia-repl
 julia> tanh(0.5f0)
 0.46211717f0
 
@@ -808,11 +808,11 @@ tanh_fast(x::Number) = Base.tanh(x)
     sigmoid_fast(x)
 
 This is a faster, and very slightly less accurate, version of `sigmoid`.
-For `x::Float32, perhaps 3 times faster, and maximum errors 2 eps instead of 1.
+For `x::Float32`, perhaps 3 times faster, and maximum errors 2 eps instead of 1.
 
 See also [`tanh_fast`](@ref).
 
-```
+```julia-repl
 julia> sigmoid(0.2f0)
 0.54983395f0
 
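The "2 eps instead of 1" claim above is easy to spot-check from the REPL; a minimal sketch, assuming NNlib is loaded and exports `sigmoid`, `sigmoid_fast`, and `tanh_fast` as these docstrings indicate:

```julia
using NNlib

x = 0.2f0
sigmoid(x), sigmoid_fast(x)        # both ≈ 0.54983395f0
abs(sigmoid(x) - sigmoid_fast(x))  # tiny: on the order of eps(Float32) at most

tanh(0.5f0), tanh_fast(0.5f0)      # both ≈ 0.46211717f0
```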