
Commit d394b91

Fix typos (#290)
1 parent 9240dba commit d394b91

File tree

5 files changed: +9 −10 lines changed

docs/pluto_output/gnn_intro_pluto.md

Lines changed: 2 additions & 2 deletions
Large diffs are not rendered by default.

docs/pluto_output/graph_classification_pluto.md

Lines changed: 1 addition & 1 deletion
@@ -134,7 +134,7 @@ end</code></pre>
 x =&gt; 7×1191 Matrix{Float32}</pre>

-<div class="markdown"><p>Each batched graph object is equipped with a <strong><code>graph_indicator</code> vector</strong>, which maps each node to its respective graph in the batch:</p><p class="tex">$$\textrm{graph-indicator} = [1, \ldots, 1, 2, \ldots, 2, 3, \ldots ]$$</p></div>
+<div class="markdown"><p>Each batched graph object is equipped with a <strong><code>graph_indicator</code> vector</strong>, which maps each node to its respective graph in the batch:</p><p class="tex">$$\textrm{graph\_indicator} = [1, \ldots, 1, 2, \ldots, 2, 3, \ldots ]$$</p></div>

 ```
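The `graph_indicator` vector renamed in this hunk can be illustrated with a short sketch. This is an illustration only: it assumes GraphNeuralNetworks.jl's `rand_graph` constructor and that `Flux.batch` dispatches to the library's graph batching, as in the tutorials this commit edits.

```julia
# Sketch: requires the GraphNeuralNetworks.jl and Flux packages.
using GraphNeuralNetworks, Flux

g1 = rand_graph(3, 4)   # 3 nodes, 4 edges
g2 = rand_graph(2, 2)   # 2 nodes, 2 edges

# Batching concatenates the graphs into one large disconnected graph.
gbatch = Flux.batch([g1, g2])

# graph_indicator maps each node to the graph it came from,
# expected here: [1, 1, 1, 2, 2]
gbatch.graph_indicator
```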

docs/tutorials/introductory_tutorials/gnn_intro_pluto.jl

Lines changed: 3 additions & 3 deletions
@@ -147,7 +147,7 @@ We can see that for each edge, `edge_index` holds a tuple of two node indices, w
 This representation is known as the **COO format (coordinate format)** commonly used for representing sparse matrices.
 Instead of holding the adjacency information in a dense representation ``\mathbf{A} \in \{ 0, 1 \}^{|\mathcal{V}| \times |\mathcal{V}|}``, GraphNeuralNetworks.jl represents graphs sparsely, which refers to only holding the coordinates/values for which entries in ``\mathbf{A}`` are non-zero.

-Importantly, GraphNeuralNetworks.jl does not distinguish between directed and undirected graphs, and treats undirected graphs as a special case of directed graphs in which reverse edges exist for every entry in the edge_index.
+Importantly, GraphNeuralNetworks.jl does not distinguish between directed and undirected graphs, and treats undirected graphs as a special case of directed graphs in which reverse edges exist for every entry in the `edge_index`.

 Since a `GNNGraph` is an `AbstractGraph` from the `Graphs.jl` library, it supports graph algorithms and visualization tools from the wider julia graph ecosystem:
 """
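The COO representation and the reverse-edge convention described in this hunk can be sketched directly with the `GNNGraph` constructor, which accepts source/target index vectors (a minimal sketch; the explicit doubling of edges for the undirected case is for illustration):

```julia
# Sketch: requires the GraphNeuralNetworks.jl package.
using GraphNeuralNetworks

# COO format: two parallel vectors of source and target node indices.
source = [1, 1, 2, 3]
target = [2, 3, 3, 1]
g = GNNGraph(source, target)   # a directed graph with 4 edges

# An undirected graph is a special case of a directed graph in which
# the reverse of every edge is also stored:
s_undir = [source; target]
t_undir = [target; source]
g_undir = GNNGraph(s_undir, t_undir)   # 8 directed edges
```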
@@ -252,10 +252,10 @@ This leads to the conclusion that GNNs introduce a strong inductive bias, leadin
 ### Training on the Karate Club Network

-But can we do better? Let's look at an example on how to train our network parameters based on the knowledge of the community assignments of 4 nodes in the graph (one for each community):
+But can we do better? Let's look at an example on how to train our network parameters based on the knowledge of the community assignments of 4 nodes in the graph (one for each community).

 Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observe how the embeddings react.
-Here, we make use of a semi-supervised or transductive learning procedure: We simply train against one node per class, but are allowed to make use of the complete input graph data.
+Here, we make use of a semi-supervised or transductive learning procedure: we simply train against one node per class, but are allowed to make use of the complete input graph data.

 Training our model is very similar to any other Flux model.
 In addition to defining our network architecture, we define a loss criterion (here, `logitcrossentropy`), and initialize a stochastic gradient optimizer (here, `Adam`).
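The transductive procedure described in this hunk can be sketched as a standard Flux training loop. This is a sketch under stated assumptions: `model`, `g`, `x`, `y`, and `train_mask` are hypothetical placeholders (a GNN taking `(graph, features)`, one-hot labels, and a mask selecting the one labeled node per community); only `Flux.setup`, `Flux.withgradient`, `Flux.update!`, `Adam`, and `logitcrossentropy` are taken from the Flux API named in the tutorial.

```julia
using Flux
using Flux: logitcrossentropy

# Sketch of a transductive training loop: the forward pass sees the whole
# graph, but the loss is computed only on the masked (labeled) nodes.
function train!(model, g, x, y, train_mask; epochs = 200, lr = 0.01)
    opt = Flux.setup(Adam(lr), model)
    for epoch in 1:epochs
        loss, grads = Flux.withgradient(model) do m
            ŷ = m(g, x)                        # predictions for every node
            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])
        end
        Flux.update!(opt, model, grads[1])
    end
    return model
end
```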

docs/tutorials/introductory_tutorials/graph_classification_pluto.jl

Lines changed: 1 addition & 1 deletion
@@ -129,7 +129,7 @@ md"""
 Each batched graph object is equipped with a **`graph_indicator` vector**, which maps each node to its respective graph in the batch:

 ```math
-\textrm{graph-indicator} = [1, \ldots, 1, 2, \ldots, 2, 3, \ldots ]
+\textrm{graph\_indicator} = [1, \ldots, 1, 2, \ldots, 2, 3, \ldots ]
 ```
 """

src/GNNGraphs/gnnheterograph.jl

Lines changed: 2 additions & 3 deletions
@@ -6,7 +6,7 @@ const NDict{T} = Dict{Symbol, T}
     GNNHeteroGraph(data; ndata, edata, gdata, num_nodes, graph_indicator, dir])

 A type representing a heterogeneous graph structure.
-it is similar [`GNNGraph`](@ref) but node and edges are of different types.
+It is similar to [`GNNGraph`](@ref) but nodes and edges are of different types.

 # Arguments
@@ -69,8 +69,7 @@ julia> hg.ndata[:A].x
 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307
 ```

-See also [`GNNGraph`](@ref) for a homogeneous graph type.
-and [`rand_heterograph`](@ref) for a function to generate random heterographs.
+See also [`GNNGraph`](@ref) for a homogeneous graph type and [`rand_heterograph`](@ref) for a function to generate random heterographs.
 """
 struct GNNHeteroGraph
     graph::EDict
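The docstring edited above can be exercised with a short construction sketch. The `(src_type, edge_type, dst_type) => (source_ids, target_ids)` pair form follows the examples in this docstring; the `rand_heterograph` argument shapes are an assumption inferred from its cross-reference, not confirmed here.

```julia
# Sketch: requires the GraphNeuralNetworks.jl package.
using GraphNeuralNetworks

# One edge type (:user, :rate, :movie) given in COO form; node counts for
# each type are inferred from the largest index seen.
hg = GNNHeteroGraph((:user, :rate, :movie) => ([1, 1, 2, 3], [7, 13, 5, 7]))

hg.num_nodes   # a Dict mapping node type to count, e.g. :user and :movie

# Hypothetical usage of rand_heterograph: node counts and edge counts
# given per type (argument form assumed, see lead-in).
hg2 = rand_heterograph(Dict(:user => 10, :movie => 5),
                       Dict((:user, :rate, :movie) => 20))
```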
