Commit d3c799f

Fix typo in readme and qualify Flux.gradient (#30)
* fix typo
* qualify Flux.gradient
1 parent 1c47508 commit d3c799f

File tree: 2 files changed (+2 −2 lines)


README.md (1 addition, 1 deletion)

```diff
@@ -8,7 +8,7 @@

 Learning to Optimize (LearningToOptimize) package that provides basic functionalities to help fit proxy models for parametric optimization problems.

-Have a look at our sister [HugginFace Organization](https://huggingface.co/LearningToOptimize), for datasets, pre-trained models and benchmarks.
+Have a look at our sister [HuggingFace Organization](https://huggingface.co/LearningToOptimize), for datasets, pre-trained models and benchmarks.

 [![Stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://andrewrosemberg.github.io/LearningToOptimize.jl/stable/)
 [![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://andrewrosemberg.github.io/LearningToOptimize.jl/dev/)
```

src/FullyConnected.jl (1 addition, 1 deletion)

```diff
@@ -157,7 +157,7 @@ function train!(model, loss, opt_state, X, Y; batchsize = 32, shuffle = true)
     Y = Y |> gpu
     data = Flux.DataLoader((X, Y), batchsize = batchsize, shuffle = shuffle)
     for d in data
-        ∇model, _ = gradient(model, d...) do m, x, y # calculate the gradients
+        ∇model, _ = Flux.gradient(model, d...) do m, x, y # calculate the gradients
            loss(m(x), y)
        end
        # insert whatever code you want here that needs gradient
```
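Qualifying the call as `Flux.gradient` removes ambiguity when another loaded package (e.g. ForwardDiff) also exports a `gradient` function. A minimal, self-contained sketch of the qualified call with a do-block, mirroring the pattern in the diff above (the `Dense` model, `mse` loss, and random data are illustrative assumptions, not part of this commit):

```julia
using Flux

# Illustrative model and data (assumptions, not from the commit).
model = Dense(2 => 1)
x = rand(Float32, 2, 8)
y = rand(Float32, 1, 8)

# Fully qualified call: unambiguous even if another package
# in scope also exports a `gradient` function.
∇model, _, _ = Flux.gradient(model, x, y) do m, xb, yb
    Flux.mse(m(xb), yb)   # loss evaluated inside the do-block
end
```

The first returned value is the gradient with respect to the model itself, as a `NamedTuple` matching the model's structure; the commit's `∇model, _ = ...` pattern discards the gradients with respect to the data batch.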
