
Commit 987fd03

Replace deprecated Flux.ADAM with Flux.Adam (#203)
* replace deprecated Flux.ADAM with Flux.Adam
* update Flux compat to 0.13.4
1 parent 93b6fa2 commit 987fd03

11 files changed, +12 -12 lines
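The change at each call site is a one-token rename. A minimal sketch of the two spellings (the learning rate is illustrative; per the commit title, the old name is deprecated in Flux 0.13.4 rather than removed, so existing code should still run with a warning):

    using Flux

    # Deprecated spelling: kept as an alias in Flux 0.13.4, so existing
    # code keeps running but warns and is slated for removal.
    opt_old = ADAM(1e-3)

    # New spelling, adopted throughout this commit.
    opt_new = Adam(1e-3)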

Project.toml

Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@ Adapt = "3"
 CUDA = "3.3"
 ChainRulesCore = "1"
 DataStructures = "0.18"
-Flux = "0.13"
+Flux = "0.13.4"
 Functors = "0.2, 0.3"
 Graphs = "1.4"
 KrylovKit = "0.5"

(`Adam` is only available from Flux 0.13.4 onward, hence the raised lower bound on the compat entry.)

docs/src/index.md

Lines changed: 1 addition & 1 deletion

@@ -54,7 +54,7 @@ model = GNNChain(GCNConv(16 => 64),
                  Dense(64, 1)) |> device
 
 ps = Flux.params(model)
-opt = ADAM(1f-4)
+opt = Adam(1f-4)
 ```
 
 ### Training

docs/src/tutorials/gnn_intro_pluto.jl

Lines changed: 2 additions & 2 deletions

@@ -266,7 +266,7 @@ Since everything in our model is differentiable and parameterized, we can add so
 Here, we make use of a semi-supervised or transductive learning procedure: We simply train against one node per class, but are allowed to make use of the complete input graph data.
 
 Training our model is very similar to any other Flux model.
-In addition to defining our network architecture, we define a loss criterion (here, `logitcrossentropy` and initialize a stochastic gradient optimizer (here, `ADAM`).
+In addition to defining our network architecture, we define a loss criterion (here, `logitcrossentropy` and initialize a stochastic gradient optimizer (here, `Adam`).
 After that, we perform multiple rounds of optimization, where each round consists of a forward and backward pass to compute the gradients of our model parameters w.r.t. to the loss derived from the forward pass.
 If you are not new to Flux, this scheme should appear familar to you.
 
@@ -285,7 +285,7 @@ Let us now start training and see how our node embeddings evolve over time (best
 begin
     model = GCN(num_features, num_classes)
     ps = Flux.params(model)
-    opt = ADAM(1e-2)
+    opt = Adam(1e-2)
     epochs = 2000
 
     emb = h
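The training scheme this hunk describes is the standard Flux implicit-parameters loop. Below is a minimal sketch under the notebook's setup; `model`, `g`, `X`, `y`, and `train_mask` are assumed to be defined as in the tutorial, and the call signature `model(g, X)` mirrors its `GCN`:

    using Flux
    using Flux: logitcrossentropy

    ps = Flux.params(model)  # implicit trainable parameters
    opt = Adam(1e-2)
    for epoch in 1:2000
        # Forward pass inside `gradient`, backward pass producing `gs`.
        gs = Flux.gradient(ps) do
            ŷ = model(g, X)
            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])
        end
        Flux.Optimise.update!(opt, ps, gs)  # one optimization round
    end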

docs/src/tutorials/graph_classification_pluto.jl

Lines changed: 1 addition & 1 deletion

@@ -202,7 +202,7 @@ function train!(model; epochs=200, η=1e-2, infotime=10)
     device = Flux.cpu
     model = model |> device
     ps = Flux.params(model)
-    opt = ADAM(1e-3)
+    opt = Adam(1e-3)
 
 
     function report(epoch)

examples/graph_classification_tudataset.jl

Lines changed: 1 addition & 1 deletion

@@ -82,7 +82,7 @@ function train(; kws...)
                      Dense(nhidden, 1)) |> device
 
     ps = Flux.params(model)
-    opt = ADAM(args.η)
+    opt = Adam(args.η)
 
     # LOGGING FUNCTION
 

examples/link_prediction_pubmed.jl

Lines changed: 1 addition & 1 deletion

@@ -77,7 +77,7 @@ function train(; kws...)
     pred = DotPredictor()
 
     ps = Flux.params(model)
-    opt = ADAM(args.η)
+    opt = Adam(args.η)
 
     ### LOSS FUNCTION ############
 

examples/neural_ode_cora.jl

Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ model = GNNChain(GCNConv(nin => nhidden, relu),
 ps = Flux.params(model);
 
 # ## Optimizer
-opt = ADAM(0.01)
+opt = Adam(0.01)
 
 
 function eval_loss_accuracy(X, y, mask)

examples/node_classification_cora.jl

Lines changed: 1 addition & 1 deletion

@@ -57,7 +57,7 @@ function train(; kws...)
                     Dense(nhidden, nout)) |> device
 
     ps = Flux.params(model)
-    opt = ADAM(args.η)
+    opt = Adam(args.η)
 
     display(g)
 

perf/neural_ode_mnist.jl

Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ model = Chain(Flux.flatten,
 ps = Flux.params(model);
 
 # ## Optimizer
-opt = ADAM(0.01)
+opt = Adam(0.01)
 
 function eval_loss_accuracy(X, y)
     ŷ = model(X)

perf/node_classification_cora_geometricflux.jl

Lines changed: 1 addition & 1 deletion

@@ -59,7 +59,7 @@ function train(; kws...)
                     Dense(nhidden, nout)) |> device
 
     ps = Flux.params(model)
-    opt = ADAM(args.η)
+    opt = Adam(args.η)
 
     @info g
 
