docs/src/gnngraph.md (2 additions, 2 deletions)

@@ -117,7 +117,7 @@ g.edata.e
 ## Edge weights
 
 It is common to denote scalar edge features as edge weights. The `GNNGraph` has specific support
-for edge weights: they can be stored as part of internal representions of the graph (COO or adjacency matrix). Some graph convolutional layers, most notably the [`GCNConv`](@ref), can use the edge weights to perform weighted sums over the nodes' neighborhoods.
+for edge weights: they can be stored as part of internal representations of the graph (COO or adjacency matrix). Some graph convolutional layers, most notably the [`GCNConv`](@ref), can use the edge weights to perform weighted sums over the nodes' neighborhoods.
 
 ```julia
 julia> source = [1, 1, 2, 2, 3, 3];
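
For reference, the edge-weight support described in the corrected sentence can be exercised with a minimal sketch like the one below (assuming GraphNeuralNetworks.jl is loaded; the `target` and `weight` values are illustrative, not taken from the docs page):

```julia
using GraphNeuralNetworks

# COO form: edge i runs from source[i] to target[i] and carries weight[i].
source = [1, 1, 2, 2, 3, 3]
target = [2, 3, 1, 3, 1, 2]
weight = [1.0, 0.5, 2.1, 2.3, 4.0, 4.1]

# Passing the weight vector stores it inside the graph representation,
# where weight-aware layers such as GCNConv can pick it up.
g = GNNGraph(source, target, weight)

get_edge_weight(g)  # returns the stored weight vector
```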
@@ -143,7 +143,7 @@ julia> get_edge_weight(g)
 
 ## Batches and Subgraphs
 
-Multiple `GNNGraph`s can be batched togheter into a single graph
+Multiple `GNNGraph`s can be batched together into a single graph
 that contains the total number of the original nodes
 and where the original graphs are disjoint subgraphs.
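
A short sketch of the batching behavior that sentence describes, assuming `batch` and `unbatch` are provided through MLUtils (older package versions expose `Flux.batch` instead) and using `rand_graph` for toy inputs:

```julia
using GraphNeuralNetworks
using MLUtils: batch, unbatch

g1 = rand_graph(5, 10)   # 5 nodes, 10 edges
g2 = rand_graph(7, 14)   # 7 nodes, 14 edges

g = batch([g1, g2])      # disjoint union: one graph, two subgraphs
@assert g.num_nodes == 12
@assert g.num_graphs == 2

gs = unbatch(g)          # recover the two original graphs
```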

docs/src/tutorials/gnn_intro_pluto.jl (5 additions, 5 deletions)

@@ -40,7 +40,7 @@ end;
 md"""
 # Introduction: Hands-on Graph Neural Networks
 
-*This Pluto noteboook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
+*This Pluto notebook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
 
 Recently, deep learning on graphs has emerged to one of the hottest research fields in the deep learning community.
 Here, **Graph Neural Networks (GNNs)** aim to generalize classical deep learning concepts to irregular structured data (in contrast to images or texts) and to enable neural networks to reason about objects and their relations.
@@ -135,7 +135,7 @@ The `g` object holds 3 attributes:
 These attributes are `NamedTuples` that can store multiple feature arrays: we can access a specific set of features e.g. `x`, with `g.ndata.x`.
 
 
-In our task, `g.ndata.train_mask` describes for which nodes we already know their community assigments. In total, we are only aware of the ground-truth labels of 4 nodes (one for each community), and the task is to infer the community assignment for the remaining nodes.
+In our task, `g.ndata.train_mask` describes for which nodes we already know their community assignments. In total, we are only aware of the ground-truth labels of 4 nodes (one for each community), and the task is to infer the community assignment for the remaining nodes.
 
 The `g` object also provides some **utility functions** to infer some basic properties of the underlying graph.
 For example, we can easily infer whether there exists isolated nodes in the graph (*i.e.* there exists no edge to any node), whether the graph contains self-loops (*i.e.*, ``(v, v) \in \mathcal{E}``), or whether the graph is bidirected (*i.e.*, for each edge ``(v, w) \in \mathcal{E}`` there also exists the edge ``(w, v) \in \mathcal{E}``).
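
The `ndata` fields and graph-level predicates mentioned above can be used roughly as follows (a sketch; the random graph is a hypothetical stand-in for the tutorial's Karate Club data, and the predicate names follow the current GraphNeuralNetworks.jl exports):

```julia
using GraphNeuralNetworks

# Hypothetical stand-in for the tutorial's dataset: 34 nodes, 156 edges,
# one 34-dimensional feature column per node.
g = rand_graph(34, 156, ndata = (; x = rand(Float32, 34, 34)))

g.ndata.x              # node feature matrix, one column per node

# The predicates the tutorial text refers to:
has_isolated_nodes(g)  # any node with no incident edge?
has_self_loops(g)      # any edge (v, v)?
is_bidirected(g)       # every (v, w) matched by (w, v)?
```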
@@ -262,13 +262,13 @@ This leads to the conclusion that GNNs introduce a strong inductive bias, leadin
 
 But can we do better? Let's look at an example on how to train our network parameters based on the knowledge of the community assignments of 4 nodes in the graph (one for each community):
 
-Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observse how the embeddings react.
+Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observe how the embeddings react.
 Here, we make use of a semi-supervised or transductive learning procedure: We simply train against one node per class, but are allowed to make use of the complete input graph data.
 
 Training our model is very similar to any other Flux model.
 In addition to defining our network architecture, we define a loss criterion (here, `logitcrossentropy` and initialize a stochastic gradient optimizer (here, `Adam`).
 After that, we perform multiple rounds of optimization, where each round consists of a forward and backward pass to compute the gradients of our model parameters w.r.t. to the loss derived from the forward pass.
-If you are not new to Flux, this scheme should appear familar to you.
+If you are not new to Flux, this scheme should appear familiar to you.
 
 Note that our semi-supervised learning scenario is achieved by the following line:
 ```
@@ -277,7 +277,7 @@ loss = logitcrossentropy(ŷ[:,train_mask], y[:,train_mask])
 While we compute node embeddings for all of our nodes, we **only make use of the training nodes for computing the loss**.
 Here, this is implemented by filtering the output of the classifier `out` and ground-truth labels `data.y` to only contain the nodes in the `train_mask`.
 
-Let us now start training and see how our node embeddings evolve over time (best experienced by explicitely running the code):
+Let us now start training and see how our node embeddings evolve over time (best experienced by explicitly running the code):
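
The hunks above describe the training setup and the masked, semi-supervised loss. A minimal sketch of that scheme, using Flux's explicit-gradient API (model size, labels, and mask are hypothetical stand-ins; the notebook's own cells are authoritative):

```julia
using Flux, GraphNeuralNetworks
using Flux: onehotbatch
using Flux.Losses: logitcrossentropy

g = rand_graph(34, 156, ndata = (; x = rand(Float32, 34, 34)))
y = onehotbatch(rand(1:4, 34), 1:4)  # 4 communities, one-hot encoded

# Stand-in for g.ndata.train_mask: one labeled node per community.
train_mask = falses(34)
train_mask[[1, 12, 20, 30]] .= true

model = GNNChain(GCNConv(34 => 4))
opt = Flux.setup(Adam(1e-2), model)

for epoch in 1:100
    grads = Flux.gradient(model) do m
        ŷ = m(g, g.ndata.x)
        # Semi-supervised: only the labeled (masked) nodes enter the loss,
        # even though embeddings are computed for every node.
        logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])
    end
    Flux.update!(opt, model, grads[1])
end
```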

docs/src/tutorials/graph_classification_pluto.jl (3 additions, 3 deletions)

@@ -40,10 +40,10 @@ end;
 md"""
 # Graph Classification with Graph Neural Networks
 
-*This Pluto noteboook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
+*This Pluto notebook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
 
 In this tutorial session we will have a closer look at how to apply **Graph Neural Networks (GNNs) to the task of graph classification**.
-Graph classification refers to the problem of classifiying entire graphs (in contrast to nodes), given a **dataset of graphs**, based on some structural graph properties.
+Graph classification refers to the problem of classifying entire graphs (in contrast to nodes), given a **dataset of graphs**, based on some structural graph properties.
 Here, we want to embed entire graphs, and we want to embed those graphs in such a way so that they are linearly separable given a task at hand.
 
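
Embedding entire graphs, as described above, is usually done with a pooling readout after message passing. A sketch with hypothetical dimensions (7 input features, 2 classes; `batch` assumed available through MLUtils, `Flux.batch` on older versions):

```julia
using Flux, GraphNeuralNetworks
using MLUtils: batch
using Statistics: mean

model = GNNChain(GCNConv(7 => 64, relu),
                 GCNConv(64 => 64),
                 GlobalPool(mean),   # readout: node embeddings -> one vector per graph
                 Dense(64 => 2))     # graph-level classifier head

gs = [rand_graph(10, 30, ndata = rand(Float32, 7, 10)) for _ in 1:32]
g = batch(gs)            # 32 graphs as one disjoint union
ŷ = model(g, g.ndata.x)  # size (2, 32): one prediction per graph
```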
@@ -242,7 +242,7 @@ end
 # ╔═╡ 3454b311-9545-411d-b47a-b43724b84c36
 md"""
 As one can see, our model reaches around **74% test accuracy**.
-Reasons for the fluctations in accuracy can be explained by the rather small dataset (only 38 test graphs), and usually disappear once one applies GNNs to larger datasets.
+Reasons for the fluctuations in accuracy can be explained by the rather small dataset (only 38 test graphs), and usually disappear once one applies GNNs to larger datasets.