Commit 65adbf4

Update optimisers.md
Second code block: `update!` -> `.-=`. Third code block: added `using Flux: update!`.
1 parent: d34d317

File tree

1 file changed: +3 −3

docs/src/training/optimisers.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -21,17 +21,17 @@ grads = gradient(() -> loss(x, y), θ)
 We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:
 
 ```julia
-using Flux.Optimise: update!
-
 η = 0.1 # Learning Rate
 for p in (W, b)
-  update!(p, η * grads[p])
+  p .-= η * grads[p]
 end
 ```
 
 Running this will alter the parameters `W` and `b` and our loss should go down. Flux provides a more general way to do optimiser updates like this.
 
 ```julia
+using Flux: update!
+
 opt = Descent(0.1) # Gradient descent with learning rate 0.1
 
 for p in (W, b)
````
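
For reference, here is a self-contained sketch of how the two corrected snippets fit together. The model, loss, `W`, `b`, `x`, and `y` below are illustrative placeholders, not part of the commit; the sketch assumes the implicit-parameter (`Flux.params`) style used on this docs page, and the body of the third block's loop (cut off by the hunk) is presumed to be the standard `update!(opt, p, grads[p])` call.

```julia
using Flux
using Flux: update!

# Illustrative model and data (placeholders, not from the commit).
W = rand(2, 5)
b = rand(2)
predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)
θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)

# Manual update, as in the corrected second block: step each parameter in place.
η = 0.1  # learning rate
for p in (W, b)
  p .-= η * grads[p]
end

# The more general optimiser interface, as in the third block.
opt = Descent(0.1)  # gradient descent with learning rate 0.1
for p in (W, b)
  update!(opt, p, grads[p])
end
```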
