
Commit a162245

Merge pull request #1985 from skleinbo/patch-1
deprecations.jl: depwarn -> Base.depwarn
2 parents b6b3569 + 65adbf4 commit a162245

2 files changed: +4, -4 lines

docs/src/training/optimisers.md

Lines changed: 3 additions & 3 deletions
@@ -21,17 +21,17 @@ grads = gradient(() -> loss(x, y), θ)
 We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:
 
 ```julia
-using Flux.Optimise: update!
-
 η = 0.1 # Learning Rate
 for p in (W, b)
-  update!(p, η * grads[p])
+  p .-= η * grads[p]
 end
 ```
 
 Running this will alter the parameters `W` and `b` and our loss should go down. Flux provides a more general way to do optimiser updates like this.
 
 ```julia
+using Flux: update!
+
 opt = Descent(0.1) # Gradient descent with learning rate 0.1
 
 for p in (W, b)
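For context, a minimal sketch of the full pattern the revised docs page describes, assuming the implicit-parameters API of this Flux version; the toy model and data (`W`, `b`, `predict`, `loss`, `x`, `y`) are hypothetical placeholders, not part of this diff:

```julia
# Toy setup in the style of the surrounding docs page; names here are
# hypothetical and only illustrate the two update styles shown in the diff.
using Flux
using Flux: update!   # the qualified import the revised docs now use

W = rand(2, 5)
b = rand(2)
predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)
θ = Flux.params(W, b)                    # implicit-parameter style
grads = gradient(() -> loss(x, y), θ)

# Manual update, as in the first (edited) snippet:
η = 0.1                                  # learning rate
for p in (W, b)
    p .-= η * grads[p]
end

# Optimiser-driven update, as in the second snippet:
opt = Descent(0.1)
for p in (W, b)
    update!(opt, p, grads[p])
end
```

Both loops should reduce `loss(x, y)`; the second form generalises to other optimisers supplied in place of `Descent`.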

src/deprecations.jl

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ end
 Zeros(args...) = Zeros() # was used both Dense(10, 2, initb = Zeros) and Dense(rand(2,10), Zeros())
 
 function Optimise.update!(x::AbstractArray, x̄)
-  depwarn("`Flux.Optimise.update!(x, x̄)` was not used internally and has been removed. Please write `x .-= x̄` instead.", :update!)
+  Base.depwarn("`Flux.Optimise.update!(x, x̄)` was not used internally and has been removed. Please write `x .-= x̄` instead.", :update!)
   x .-= x̄
 end
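For reference, a minimal sketch of the `Base.depwarn` pattern this commit relies on, using hypothetical module and function names (`MyPkg`, `oldsum`, `newsum`) for illustration:

```julia
# Self-contained sketch of a deprecation shim built on Base.depwarn;
# MyPkg, oldsum and newsum are hypothetical names, not Flux code.
module MyPkg

newsum(x) = sum(x)

function oldsum(x)
    # depwarn is not exported from Base, so it must be qualified:
    Base.depwarn("`oldsum(x)` is deprecated, use `newsum(x)` instead.", :oldsum)
    return newsum(x)
end

end # module

MyPkg.oldsum([1, 2, 3])  # warns when Julia is started with --depwarn=yes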
