Commit d6fa200

committed
Revert some mess
1 parent e4a08b1 commit d6fa200

File tree

1 file changed: 4 additions, 4 deletions

docs/src/saving.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -116,15 +116,15 @@ This will update the `"model-checkpoint.bson"` file every thirty seconds.
 You can get more advanced by saving a series of models throughout training, for example
 
 ```julia
-julia> @save "model-$(now()).bson" model
+@save "model-$(now()).bson" model
 ```
 
 will produce a series of models like `"model-2018-03-06T02:57:10.41.bson"`. You
 could also store the current test set loss, so that it's easy to (for example)
 revert to an older copy of the model if it starts to overfit.
 
 ```julia
-julia> @save "model-$(now()).bson" model loss = testloss()
+@save "model-$(now()).bson" model loss = testloss()
 ```
 
 Note that to resume a model's training, you might need to restore other stateful parts of your training loop. Possible examples are stateful optimizers (which usually utilize an `IdDict` to store their state), and the randomness used to partition the original data into the training and validation sets.
@@ -133,7 +133,7 @@ You can store the optimiser state alongside the model, to resume training
 exactly where you left off. BSON is smart enough to [cache values](https://github.com/JuliaIO/BSON.jl/blob/v0.3.4/src/write.jl#L71) and insert links when saving, but only if it knows everything to be saved up front. Thus models and optimizers must be saved together to have the latter work after restoring.
 
 ```julia
-julia> opt = ADAM()
-julia> @save "model-$(now()).bson" model opt
+opt = ADAM()
+@save "model-$(now()).bson" model opt
 ```
````
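The change above strips the `julia>` REPL prompts from the documented snippets so they can be copied and run directly. A minimal round-trip sketch of what the documentation describes (assuming Flux and BSON are installed; the small `Chain` here is a placeholder model, not from the original docs):

```julia
# Save a model together with its optimizer state, then restore both.
# Saving them in one @save call lets BSON cache shared values and
# insert links, so the restored optimizer still references the model's
# parameters correctly.
using Flux, Dates
using BSON: @save, @load

model = Chain(Dense(10, 5, relu), Dense(5, 2))  # placeholder model
opt = ADAM()  # optimizer state lives in an IdDict keyed by the params

path = "model-$(now()).bson"
@save path model opt

# Later, to resume training exactly where you left off:
@load path model opt
```

Note that saving the optimizer in a separate file from the model would break the link between the optimizer's `IdDict` and the model's parameter arrays after loading.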