diff --git a/docs/old_tutorials/2024-04-10-blitz.md b/docs/old_tutorials/2024-04-10-blitz.md
index 5ba368e38a..19f328b4a4 100755
--- a/docs/old_tutorials/2024-04-10-blitz.md
+++ b/docs/old_tutorials/2024-04-10-blitz.md
@@ -200,7 +200,7 @@ for (k, p) in trainables(model, path=true)
 end
 ```
 
-You don't have to use layers, but they can be convient for many simple kinds of models and fast iteration.
+You don't have to use layers, but they can be convenient for many simple kinds of models and fast iteration.
 
 The next step is to update our weights and perform optimisation. As you might be familiar, *Gradient Descent* is a simple algorithm that takes the weights and steps using a learning rate and the gradients. `weights = weights - learning_rate * gradient`.
 
diff --git a/docs/old_tutorials/2024-04-10-mlp.md b/docs/old_tutorials/2024-04-10-mlp.md
index be4dd34cf1..33304e4136 100644
--- a/docs/old_tutorials/2024-04-10-mlp.md
+++ b/docs/old_tutorials/2024-04-10-mlp.md
@@ -101,7 +101,7 @@
 end
 ```
 
-In addition, we define the function (`accuracy`) to report the accuracy of our model during the training process. To compute the accuray, we need to decode the output of our model using the [onecold](https://fluxml.ai/Flux.jl/stable/data/onehot/#Flux.onecold) function.
+In addition, we define the function (`accuracy`) to report the accuracy of our model during the training process. To compute the accuracy, we need to decode the output of our model using the [onecold](https://fluxml.ai/Flux.jl/stable/data/onehot/#Flux.onecold) function.
 
 ```julia
 function accuracy(dataloader, model)
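
The blitz hunk above quotes the plain update rule `weights = weights - learning_rate * gradient`. A minimal Julia sketch of one such step, using hypothetical weight and gradient values purely for illustration:

```julia
# One gradient-descent step, illustrating the rule quoted in the hunk:
# weights = weights - learning_rate * gradient
learning_rate = 0.1
weights  = [1.0, 2.0, 3.0]    # hypothetical parameters
gradient = [0.2, -0.1, 0.4]   # hypothetical gradients for those parameters

weights .-= learning_rate .* gradient   # step against the gradient
```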
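
The mlp hunk cuts off at the opening line of `function accuracy(dataloader, model)`. A hedged sketch of how such a function could use Flux's `onecold` to decode one-hot outputs; the body below is an assumption, not the tutorial's actual code:

```julia
using Flux: onecold

# Hypothetical completion of the `accuracy` stub shown in the hunk:
# decode one-hot model outputs with `onecold` and compare to the labels.
function accuracy(dataloader, model)
    correct, total = 0, 0
    for (x, y) in dataloader    # assumes (input, one-hot label) batches
        correct += sum(onecold(model(x)) .== onecold(y))
        total   += size(y, 2)   # columns = observations in the batch
    end
    return correct / total
end
```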