
Commit 9118038

fix docs
1 parent c653600 commit 9118038

2 files changed: +21 -22 lines changed

docs/src/messagepassing.md

Lines changed: 13 additions & 15 deletions
@@ -20,13 +20,15 @@ In GraphNeuralNetworks.jl, the function [`propagate`](@ref) takes care of materi
 node features on each edge, applying the message function, performing the
 aggregation, and returning ``\bar{\mathbf{m}}``.
 It is then left to the user to perform further node and edge updates,
-manypulating arrays of size ``D_{node} \times num\_nodes`` and
+manipulating arrays of size ``D_{node} \times num\_nodes`` and
 ``D_{edge} \times num\_edges``.
 
-As part of the [`propagate`](@ref) pipeline, we have the function
-[`apply_edges`](@ref). It can be independently used to materialize
-node features on edges and perform edge-related computation without
-the following neighborhood aggregation one finds in `propagate`.
+[`propagate`](@ref) is composed of two steps corresponding to two
+exported methods:
+1. [`apply_edges`](@ref) materializes node features on edges and
+   performs edge-related computation.
+2. [`aggregate_neighbors`](@ref) applies a reduction operator on the messages coming
+   from the neighborhood of each node.
 
 The whole propagation mechanism internally relies on the [`NNlib.gather`](@ref)
 and [`NNlib.scatter`](@ref) methods.
@@ -46,17 +48,11 @@ GNNGraph:
 num_nodes = 10
 num_edges = 20
 
+julia> x = ones(2,10);
 
-julia> x = ones(2,10)
-2×10 Matrix{Float64}:
- 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
- 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
-
-julia> z = 2ones(2,10)
-2×10 Matrix{Float64}:
- 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0
- 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0
+julia> z = 2ones(2,10);
 
+# Returns an edge feature array of size (D × num_edges)
 julia> apply_edges((xi, xj, e) -> xi .+ xj, g, xi=x, xj=z)
 2×20 Matrix{Float64}:
 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0
@@ -72,14 +68,16 @@ julia> apply_edges((xi, xj, e) -> xi.a + xi.b .* xj, g, xi=(a=x,b=z), xj=z)
 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0
 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0
 ```
+
 The function [`propagate`](@ref) instead performs the [`apply_edges`](@ref) operation
-but then also applies a reduction over each node's neighborhood.
+but then also applies a reduction over each node's neighborhood (see [`aggregate_neighbors`](@ref)).
 ```julia
 julia> propagate((xi, xj, e) -> xi .+ xj, g, +, xi=x, xj=z)
 2×10 Matrix{Float64}:
 3.0 6.0 9.0 9.0 0.0 6.0 6.0 3.0 15.0 3.0
 3.0 6.0 9.0 9.0 0.0 6.0 6.0 3.0 15.0 3.0
 
+# The previous output can be understood by looking at the node degrees
 julia> degree(g)
 10-element Vector{Int64}:
 1
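
For reference, the two exported steps compose into `propagate` roughly as below. A minimal sketch, assuming the `apply_edges(f, g; xi, xj, e)` and `aggregate_neighbors(g, aggr, m)` signatures used in the examples above:

```julia
using GraphNeuralNetworks

g = rand_graph(10, 20)  # same sizes as the docs example
x = ones(2, 10)
z = 2ones(2, 10)

# Step 1: materialize node features on edges and compute per-edge messages.
m = apply_edges((xi, xj, e) -> xi .+ xj, g, xi=x, xj=z)  # 2 × num_edges

# Step 2: reduce the messages over each node's neighborhood.
y = aggregate_neighbors(g, +, m)                         # 2 × num_nodes

# The fused call gives the same result:
y == propagate((xi, xj, e) -> xi .+ xj, g, +, xi=x, xj=z)  # true
```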

src/layers/conv.jl

Lines changed: 8 additions & 7 deletions
@@ -269,7 +269,7 @@ Implements the operation
 ```
 where the attention coefficients ``\alpha_{ij}`` are given by
 ```math
-\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i \,\|\, W \mathbf{x}_j]))
+\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))
 ```
 with ``z_i`` a normalization factor.
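
The hunk's context leaves ``z_i`` implicit; in the GAT paper it is the softmax denominator over the attended neighborhood. A sketch of that standard definition (self-loop handling aside):

```math
z_i = \sum_{k \in \mathcal{N}(i)} \exp\big(\mathrm{LeakyReLU}(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_k])\big)
```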
275275
@@ -568,7 +568,7 @@ GraphSAGE convolution layer from paper [Inductive Representation Learning on Lar
 
 Performs:
 ```math
-\mathbf{x}_i' = W \cdot [\mathbf{x}_i \,\|\, \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]
+\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]
 ```
 
 where the aggregation type is selected by `aggr`.
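
As a usage note for the formula above, a minimal sketch; the `in => out` constructor and the `l(g, x)` call follow the conventions shown in this file's other examples:

```julia
using GraphNeuralNetworks

g = rand_graph(10, 30)
x = randn(Float32, 3, 10)

# W multiplies the concatenation of each node's own features with the
# `aggr`-reduction of its neighbors' features.
l = SAGEConv(3 => 5)
y = l(g, x)  # 5 × 10 node-feature matrix
```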
@@ -697,7 +697,7 @@ Performs the operation
 ```
 
 where ``\mathbf{z}_{ij}`` is the node and edge features concatenation
-``[\mathbf{x}_i \| \mathbf{x}_j \| \mathbf{e}_{j\to i}]``
+``[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]``
 and ``\sigma`` is the sigmoid function.
 The residual ``\mathbf{x}_i`` is added only if `residual=true` and the output size is the same
 as the input size.
@@ -829,12 +829,13 @@ end
 
 Convolution from [Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals](https://arxiv.org/pdf/1812.05055.pdf)
 paper. In the forward pass, takes as inputs node features `x` and edge features `e` and returns
-updated features `x̄, ē` according to
+updated features `x'` and `e'` according to
 
 ```math
-ē = ϕe(vcat(xi, xj, e))
-= ϕv(vcat(x, \square_{j\in \mathcal{N}(i)} ē_{j\to i}))
+\mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{i\to j}])\\
+\mathbf{x}_{i}' = \phi_v([\mathbf{x}_i; \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}'])
 ```
+
 `aggr` defines the aggregation to be performed.
 
 If the neural networks `ϕe` and `ϕv` are not provided, they will be constructed from
@@ -849,7 +850,7 @@ g = rand_graph(10, 30)
 x = randn(3, 10)
 e = randn(3, 30)
 m = MEGNetConv(3 => 3)
-x̄, ē = m(g, x, e)
+x′, e′ = m(g, x, e)
 ```
 """
 struct MEGNetConv <: GNNLayer
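
The corrected formulas map directly onto the two-step API from `docs/src/messagepassing.md`. A hedged sketch of the MEGNet-style update written with `apply_edges` and `aggregate_neighbors`; the `ϕe`/`ϕv` networks and the `mean` aggregation here are stand-ins for illustration, not the package's internals:

```julia
using GraphNeuralNetworks, Flux
using Statistics: mean

g = rand_graph(10, 30)
x = randn(Float32, 3, 10)  # node features
e = randn(Float32, 3, 30)  # edge features

ϕe = Dense(9 => 3)  # edge network, input [xi; xj; e]
ϕv = Dense(6 => 3)  # node network, input [xi; aggregated e′]

# e′ = ϕe([xi; xj; e]) on every edge
e′ = apply_edges((xi, xj, e) -> ϕe(vcat(xi, xj, e)), g, xi=x, xj=x, e=e)

# x′ = ϕv([x; □_{j∈N(i)} e′]) with mean as the □ aggregation
ē = aggregate_neighbors(g, mean, e′)
x′ = ϕv(vcat(x, ē))
```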
