Commit b5e5741

Merge pull request #1488 from DhairyaLGandhi/dg/traindocs
Add training loop to docs
2 parents fdf7152 + 7d8e233

File tree: 1 file changed (+25, -2)

docs/src/training/training.md

Lines changed: 25 additions & 2 deletions
@@ -7,13 +7,36 @@ To actually train a model we need four things:
 * A collection of data points that will be provided to the objective function.
 * An [optimiser](optimisers.md) that will update the model parameters appropriately.
 
-With these we can call `train!`:
+Training a model is typically an iterative process, where we go over the data set,
+calculate the objective function over the datapoints, and optimise that.
+This can be visualised in the form of a simple loop.
+
+```julia
+for d in datapoints
+
+  # `d` should produce a collection of arguments
+  # to the loss function
+
+  # Calculate the gradients of the parameters
+  # with respect to the loss function
+  grads = Flux.gradient(parameters) do
+    loss(d...)
+  end
+
+  # Update the parameters based on the chosen
+  # optimiser (opt)
+  Flux.Optimise.update!(opt, parameters, grads)
+end
+```
+
+To make it easy, Flux defines `train!`:
 
 ```@docs
 Flux.Optimise.train!
 ```
 
-There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo).
+There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo), and
+more information can be found on [Custom Training Loops](../models/advanced.md).
 
 ## Loss Functions
 
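The `train!` docstring referenced in the diff rolls this loop into a single call. A minimal sketch under the same illustrative assumptions:

```julia
using Flux

# Same assumed setup as the sketch above
model = Dense(2, 1)
loss(x, y) = Flux.Losses.mse(model(x), y)
data = [(rand(Float32, 2), rand(Float32, 1)) for _ in 1:10]
opt = Descent(0.1)

# One call runs the gradient/update loop over `data`
Flux.train!(loss, Flux.params(model), data, opt)
```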