
Commit 23af13a

Fixed a few typos in notebooks, docs and docstrings (#205)
1 parent 20c591c commit 23af13a

6 files changed: 13 additions & 13 deletions

docs/src/gnngraph.md

Lines changed: 2 additions & 2 deletions
@@ -117,7 +117,7 @@ g.edata.e
 ## Edge weights
 
 It is common to denote scalar edge features as edge weights. The `GNNGraph` has specific support
-for edge weights: they can be stored as part of internal representions of the graph (COO or adjacency matrix). Some graph convolutional layers, most notably the [`GCNConv`](@ref), can use the edge weights to perform weighted sums over the nodes' neighborhoods.
+for edge weights: they can be stored as part of internal representations of the graph (COO or adjacency matrix). Some graph convolutional layers, most notably the [`GCNConv`](@ref), can use the edge weights to perform weighted sums over the nodes' neighborhoods.
 
 ```julia
 julia> source = [1, 1, 2, 2, 3, 3];
@@ -143,7 +143,7 @@ julia> get_edge_weight(g)
 
 ## Batches and Subgraphs
 
-Multiple `GNNGraph`s can be batched togheter into a single graph
+Multiple `GNNGraph`s can be batched together into a single graph
 that contains the total number of the original nodes
 and where the original graphs are disjoint subgraphs.
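
Since the hunk cuts off the docs' own example, here is a minimal, self-contained sketch of the two APIs the corrected paragraphs describe: constructing a weighted graph in COO form and batching graphs. Names and values are illustrative and not part of this commit, assuming the GraphNeuralNetworks.jl API.

```julia
using GraphNeuralNetworks, Flux

# Weighted COO construction, continuing the example the hunk truncates.
source = [1, 1, 2, 2, 3, 3]
target = [2, 3, 1, 3, 1, 2]
weight = [1.0, 0.5, 1.0, 1.5, 0.5, 1.5]
g = GNNGraph(source, target, weight)   # weights stored in the COO representation

get_edge_weight(g)       # returns the weight vector given above

# Batching: two graphs become disjoint subgraphs of one GNNGraph.
gall = Flux.batch([g, g])
gall.num_graphs          # 2
g1 = getgraph(gall, 1)   # recover the first subgraph
```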

docs/src/messagepassing.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ manipulating arrays of size ``D_{node} \times num\_nodes`` and
 
 [`propagate`](@ref) is composed of two steps, also available as two independent methods:
 
-1. [`apply_edges`](@ref) materializes node features on edges and applyes the message function.
+1. [`apply_edges`](@ref) materializes node features on edges and applies the message function.
 2. [`aggregate_neighbors`](@ref) applies a reduction operator on the messages coming from the neighborhood of each node.
 
 The whole propagation mechanism internally relies on the [`NNlib.gather`](@ref)
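
To make the two-step decomposition concrete, a minimal sketch using the exported `copy_xj` message function; the graph and features here are illustrative:

```julia
using GraphNeuralNetworks

g = rand_graph(5, 10)        # 5 nodes, 10 random edges
x = rand(Float32, 4, 5)      # D_node × num_nodes features

# One-shot: gather each neighbor's features and sum over the neighborhood.
h = propagate(copy_xj, g, +; xj = x)

# The same computation as the two independent steps named above:
m  = apply_edges(copy_xj, g; xj = x)   # step 1: materialize messages on edges
h2 = aggregate_neighbors(g, +, m)      # step 2: reduce messages onto nodes
@assert h ≈ h2
```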

docs/src/tutorials/gnn_intro_pluto.jl

Lines changed: 5 additions & 5 deletions
@@ -40,7 +40,7 @@ end;
 md"""
 # Introduction: Hands-on Graph Neural Networks
 
-*This Pluto noteboook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
+*This Pluto notebook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
 
 Recently, deep learning on graphs has emerged to one of the hottest research fields in the deep learning community.
 Here, **Graph Neural Networks (GNNs)** aim to generalize classical deep learning concepts to irregular structured data (in contrast to images or texts) and to enable neural networks to reason about objects and their relations.
@@ -135,7 +135,7 @@ The `g` object holds 3 attributes:
 These attributes are `NamedTuples` that can store multiple feature arrays: we can access a specific set of features e.g. `x`, with `g.ndata.x`.
 
 
-In our task, `g.ndata.train_mask` describes for which nodes we already know their community assigments. In total, we are only aware of the ground-truth labels of 4 nodes (one for each community), and the task is to infer the community assignment for the remaining nodes.
+In our task, `g.ndata.train_mask` describes for which nodes we already know their community assignments. In total, we are only aware of the ground-truth labels of 4 nodes (one for each community), and the task is to infer the community assignment for the remaining nodes.
 
 The `g` object also provides some **utility functions** to infer some basic properties of the underlying graph.
 For example, we can easily infer whether there exists isolated nodes in the graph (*i.e.* there exists no edge to any node), whether the graph contains self-loops (*i.e.*, ``(v, v) \in \mathcal{E}``), or whether the graph is bidirected (*i.e.*, for each edge ``(v, w) \in \mathcal{E}`` there also exists the edge ``(w, v) \in \mathcal{E}``).
@@ -262,13 +262,13 @@ This leads to the conclusion that GNNs introduce a strong inductive bias, leadin
 
 But can we do better? Let's look at an example on how to train our network parameters based on the knowledge of the community assignments of 4 nodes in the graph (one for each community):
 
-Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observse how the embeddings react.
+Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observe how the embeddings react.
 Here, we make use of a semi-supervised or transductive learning procedure: We simply train against one node per class, but are allowed to make use of the complete input graph data.
 
 Training our model is very similar to any other Flux model.
 In addition to defining our network architecture, we define a loss criterion (here, `logitcrossentropy` and initialize a stochastic gradient optimizer (here, `Adam`).
 After that, we perform multiple rounds of optimization, where each round consists of a forward and backward pass to compute the gradients of our model parameters w.r.t. to the loss derived from the forward pass.
-If you are not new to Flux, this scheme should appear familar to you.
+If you are not new to Flux, this scheme should appear familiar to you.
 
 Note that our semi-supervised learning scenario is achieved by the following line:
 ```
@@ -277,7 +277,7 @@ loss = logitcrossentropy(ŷ[:,train_mask], y[:,train_mask])
 While we compute node embeddings for all of our nodes, we **only make use of the training nodes for computing the loss**.
 Here, this is implemented by filtering the output of the classifier `out` and ground-truth labels `data.y` to only contain the nodes in the `train_mask`.
 
-Let us now start training and see how our node embeddings evolve over time (best experienced by explicitely running the code):
+Let us now start training and see how our node embeddings evolve over time (best experienced by explicitly running the code):
 """
 
 
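
The masked-loss idea the notebook describes can be sketched as a standard Flux loop. This is not the notebook's code: `model`, `g`, `x`, `y`, and `train_mask` are assumed given, and the explicit-gradient Flux API shown here may differ from the Flux version the tutorial targets.

```julia
using Flux, GraphNeuralNetworks
using Flux: logitcrossentropy

# Assumed given: `model` (a GNNChain), graph `g` with node features `x`,
# one-hot labels `y`, and the boolean node mask `train_mask`.
opt = Flux.setup(Adam(1e-2), model)
for epoch in 1:200
    grads = Flux.gradient(model) do m
        ŷ = m(g, x)
        # Semi-supervised trick: the loss sees only the labeled nodes.
        logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])
    end
    Flux.update!(opt, model, grads[1])
end
```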

docs/src/tutorials/graph_classification_pluto.jl

Lines changed: 3 additions & 3 deletions
@@ -40,10 +40,10 @@ end;
 md"""
 # Graph Classification with Graph Neural Networks
 
-*This Pluto noteboook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
+*This Pluto notebook is a julia adaptation of the Pytorch Geometric tutorials that can be found [here](https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html).*
 
 In this tutorial session we will have a closer look at how to apply **Graph Neural Networks (GNNs) to the task of graph classification**.
-Graph classification refers to the problem of classifiying entire graphs (in contrast to nodes), given a **dataset of graphs**, based on some structural graph properties.
+Graph classification refers to the problem of classifying entire graphs (in contrast to nodes), given a **dataset of graphs**, based on some structural graph properties.
 Here, we want to embed entire graphs, and we want to embed those graphs in such a way so that they are linearly separable given a task at hand.
 
 
@@ -242,7 +242,7 @@ end
 # ╔═╡ 3454b311-9545-411d-b47a-b43724b84c36
 md"""
 As one can see, our model reaches around **74% test accuracy**.
-Reasons for the fluctations in accuracy can be explained by the rather small dataset (only 38 test graphs), and usually disappear once one applies GNNs to larger datasets.
+Reasons for the fluctuations in accuracy can be explained by the rather small dataset (only 38 test graphs), and usually disappear once one applies GNNs to larger datasets.
 
 ## (Optional) Exercise
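
To embed entire graphs, per-node embeddings are typically collapsed into one vector per graph by a global pooling layer. A hypothetical model in that style follows; the layer sizes are illustrative and this commit does not touch the model code.

```julia
using Flux, GraphNeuralNetworks
using Statistics: mean

nin, nh, nclasses = 7, 64, 2    # illustrative feature/hidden/class sizes
model = GNNChain(GCNConv(nin => nh, relu),
                 GCNConv(nh => nh, relu),
                 GlobalPool(mean),      # one nh-vector per graph in the batch
                 Dense(nh, nclasses))

# For a batched graph `gbatch`, this yields an nclasses × num_graphs output:
# ŷ = model(gbatch, gbatch.ndata.x)
```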

src/mldatasets.jl

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# We load a Graph Dataset from MLDatasets without explicitely depending on it
+# We load a Graph Dataset from MLDatasets without explicitly depending on it
 
 """
     mldataset2gnngraph(dataset)
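
For context, the helper touched here converts an MLDatasets.jl graph dataset into `GNNGraph` form without a hard dependency on that package. A usage sketch, with Cora chosen purely for illustration:

```julia
using MLDatasets, GraphNeuralNetworks

dataset = MLDatasets.Cora()        # any MLDatasets graph dataset works
g = mldataset2gnngraph(dataset)    # GNNGraph with ndata/edata populated
```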

src/msgpass.jl

Lines changed: 1 addition & 1 deletion
@@ -174,7 +174,7 @@ end
 """
     w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj
 
-Similar to [`e_mul_xj`](@ref) but specialized on scalar edge feautures (weights).
+Similar to [`e_mul_xj`](@ref) but specialized on scalar edge features (weights).
 """
 w_mul_xj(xi, xj::AbstractArray, w::Nothing) = xj # same as copy_xj if no weights
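
A sketch of how this specialization is typically used, pairing `w_mul_xj` with `propagate` to take weighted neighborhood sums; the graph and weights below are illustrative:

```julia
using GraphNeuralNetworks

s, t = [1, 1, 2, 3], [2, 3, 3, 1]
w = Float32[0.5, 1.0, 2.0, 1.5]     # scalar edge weights
g = GNNGraph(s, t, w)
x = rand(Float32, 4, g.num_nodes)

# Each message is w_e .* x_j, then summed over the neighborhood.
h = propagate(w_mul_xj, g, +; xj = x, e = w)

# With no weights, the `w::Nothing` method shown in the diff applies,
# making this equivalent to an unweighted copy_xj sum.
h0 = propagate(w_mul_xj, g, +; xj = x, e = nothing)
```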
