Commit 4e33163

Fix few typos in tutorials (#278)
* Remove pytorch sentence. The original author evidently forgot to remove the sentence that this commit deletes.
* Add missing links.
* Fix broken link. I'm guessing the right one given the context and previous references to the Karate Club tutorial.
* Fix typo. Missing closing parenthesis.
1 parent a7de26f commit 4e33163

File tree

2 files changed: +4 -6 lines changed


docs/tutorials/introductory_tutorials/gnn_intro_pluto.jl

Lines changed: 2 additions & 2 deletions
````diff
@@ -258,9 +258,9 @@ Since everything in our model is differentiable and parameterized, we can add so
 Here, we make use of a semi-supervised or transductive learning procedure: We simply train against one node per class, but are allowed to make use of the complete input graph data.
 
 Training our model is very similar to any other Flux model.
-In addition to defining our network architecture, we define a loss criterion (here, `logitcrossentropy` and initialize a stochastic gradient optimizer (here, `Adam`).
+In addition to defining our network architecture, we define a loss criterion (here, `logitcrossentropy`), and initialize a stochastic gradient optimizer (here, `Adam`).
 After that, we perform multiple rounds of optimization, where each round consists of a forward and backward pass to compute the gradients of our model parameters w.r.t. the loss derived from the forward pass.
-If you are not new to Flux, this scheme should appear familiar to you.
+If you are not new to Flux, this scheme should appear familiar to you.
 
 Note that our semi-supervised learning scenario is achieved by the following line:
 ```
````
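The corrected sentence names the two training ingredients, `logitcrossentropy` and `Adam`. A minimal sketch of that scheme in current Flux might look as follows; the model, data, and sizes here are hypothetical toy choices, not the tutorial's actual Cora setup:

```julia
using Flux
using Flux: logitcrossentropy, onehotbatch

# Hypothetical toy classifier: 4 input features, 3 classes, 8 dummy samples.
model = Dense(4 => 3)
X = randn(Float32, 4, 8)
y = onehotbatch(rand(1:3, 8), 1:3)

# Initialize the stochastic gradient optimizer (here, Adam).
opt_state = Flux.setup(Adam(0.01), model)

for epoch in 1:10
    # Forward pass to compute the loss, backward pass to get the gradients
    # of the model parameters w.r.t. that loss.
    loss, grads = Flux.withgradient(model) do m
        logitcrossentropy(m(X), y)
    end
    Flux.update!(opt_state, model, grads[1])
end
```

In the tutorial's semi-supervised setting, the loss would additionally be restricted to the labeled training nodes via a mask; the sketch above omits that for brevity.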

docs/tutorials/introductory_tutorials/node_classification_pluto.jl

Lines changed: 2 additions & 4 deletions
````diff
@@ -166,7 +166,7 @@ md"""
 Our MLP is defined by two linear layers and enhanced by [ReLU](https://fluxml.ai/Flux.jl/stable/models/nnlib/#NNlib.relu) non-linearity and [Dropout](https://fluxml.ai/Flux.jl/stable/models/layers/#Flux.Dropout).
 Here, we first reduce the 1433-dimensional feature vector to a low-dimensional embedding (`hidden_channels=16`), while the second linear layer acts as a classifier that should map each low-dimensional node embedding to one of the 7 classes.
 
-Let's train our simple MLP by following a similar procedure as described in [the first part of this tutorial](https://carlolucibello.github.io/GraphNeuralNetworks.jl/dev/tutorials/gnn_intro_pluto).
+Let's train our simple MLP by following a similar procedure as described in [the first part of this tutorial](https://carlolucibello.github.io/GraphNeuralNetworks.jl/dev/tutorials/introductory_tutorials/gnn_intro_pluto/#Hands-on-introduction-to-Graph-Neural-Networks).
 We again make use of the **cross entropy loss** and **Adam optimizer**.
 This time, we also define a **`accuracy` function** to evaluate how well our final model performs on the test node set (which labels have not been observed during training).
 """
@@ -214,9 +214,7 @@ That is exactly where Graph Neural Networks come into play and can help to boost
 md"""
 ## Training a Graph Convolutional Neural Network (GNN)
 
-We can easily convert our MLP to a GNN by swapping the `torch.nn.Linear` layers with PyG's GNN operators.
-
-Following-up on [the first part of this tutorial](), we replace the linear layers by the [`GCNConv`]() module.
+Following-up on [the first part of this tutorial](https://carlolucibello.github.io/GraphNeuralNetworks.jl/dev/tutorials/introductory_tutorials/node_classification_pluto/#Multi-layer-Perception-Network-(MLP)), we replace the `Dense` linear layers by the [`GCNConv`](https://carlolucibello.github.io/GraphNeuralNetworks.jl/dev/api/conv/#GraphNeuralNetworks.GCNConv) module.
 To recap, the **GCN layer** ([Kipf et al. (2017)](https://arxiv.org/abs/1609.02907)) is defined as
 
 ```math
````
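The swap the corrected sentence describes, replacing `Dense` layers with `GCNConv`, can be sketched as follows. The layer sizes (1433 input features, 16 hidden channels, 7 classes) come from the tutorial's Cora setup; the exact layer arrangement is an illustrative assumption, not the tutorial's verbatim model:

```julia
using Flux, GraphNeuralNetworks

# MLP version, for comparison: two linear layers with ReLU and Dropout.
mlp = Chain(Dense(1433 => 16, relu), Dropout(0.5), Dense(16 => 7))

# GNN version: the Dense layers become GCNConv layers. A GNNChain threads
# the graph through the graph layers and applies plain layers (Dropout)
# to the features only.
gnn = GNNChain(GCNConv(1433 => 16, relu), Dropout(0.5), GCNConv(16 => 7))

# Usage sketch: x is the (num_features × num_nodes) feature matrix and
# g a GNNGraph; the GNN additionally takes g as its first argument.
#   ŷ_mlp = mlp(x)
#   ŷ_gnn = gnn(g, x)
```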
