Commit 11d7039

improve docs
1 parent 3043cce commit 11d7039

File tree: 2 files changed, +28 −5 lines changed

src/gnngraph.jl

Lines changed: 22 additions & 0 deletions
@@ -23,6 +23,11 @@ is governed by `graph_type`.
 When constructed from another graph `g`, the internal graph representation
 is preserved and shared.
 
+A `GNNGraph` can also represent multiple graphs batched together
+(see [`Flux.batch`](@ref) or [`SparseArrays.blockdiag`](@ref)).
+The field `g.graph_indicator` contains the graph membership
+of each node.
+
 A `GNNGraph` is a LightGraphs' `AbstractGraph`, therefore any functionality
 from the LightGraphs' graph library can be used on it.
@@ -432,6 +437,15 @@ function _catgraphs(g1::GNNGraph{<:COO_T}, g2::GNNGraph{<:COO_T})
 end
 
 # Cat public interfaces
+
+"""
+    blockdiag(xs::GNNGraph...)
+
+Batch together multiple `GNNGraph`s into a single one
+containing the total number of nodes and edges of the original graphs.
+
+Equivalent to [`Flux.batch`](@ref).
+"""
 function SparseArrays.blockdiag(g1::GNNGraph, gothers::GNNGraph...)
     @assert length(gothers) >= 1
     g = g1
@@ -441,6 +455,14 @@ function SparseArrays.blockdiag(g1::GNNGraph, gothers::GNNGraph...)
     return g
 end
 
+"""
+    batch(xs::Vector{<:GNNGraph})
+
+Batch together multiple `GNNGraph`s into a single one
+containing the total number of nodes and edges of the original graphs.
+
+Equivalent to [`SparseArrays.blockdiag`](@ref).
+"""
 Flux.batch(xs::Vector{<:GNNGraph}) = blockdiag(xs...)
 #########################

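Both docstrings above describe the same block-diagonal batching. A rough sketch of the idea, written in Python/NumPy rather than the package's Julia (the `batch_graphs` helper and the toy adjacency matrices are invented for this illustration, not part of the package):

```python
import numpy as np

def batch_graphs(adjs):
    """Stack adjacency matrices block-diagonally; also return the
    graph_indicator vector giving each node's graph membership."""
    n_total = sum(a.shape[0] for a in adjs)
    big = np.zeros((n_total, n_total), dtype=adjs[0].dtype)
    indicator = np.empty(n_total, dtype=int)
    offset = 0
    for k, a in enumerate(adjs):
        n = a.shape[0]
        big[offset:offset + n, offset:offset + n] = a
        indicator[offset:offset + n] = k + 1  # 1-based, as in Julia
        offset += n
    return big, indicator

# Two small graphs: a 2-node edge and a 3-node path.
g1 = np.array([[0, 1], [1, 0]])
g2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A, ind = batch_graphs([g1, g2])
# A is 5x5 with no edges between the two blocks;
# nodes 1-2 belong to graph 1, nodes 3-5 to graph 2.
```

The batched graph keeps the total node and edge counts of the originals, and `graph_indicator` is what lets pooling layers aggregate per-graph later.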
src/layers/conv.jl

Lines changed: 6 additions & 5 deletions
@@ -147,8 +147,9 @@ Graph convolution layer from Reference: [Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks]
 
 Performs:
 ```math
-\mathbf{x}_i' = W^1 \mathbf{x}_i + \box_{j \in \mathcal{N}(i)} W^2 \mathbf{x}_j)
+\mathbf{x}_i' = W^1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W^2 \mathbf{x}_j
 ```
+
 where the aggregation type is selected by `aggr`.
 
 # Arguments
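The formula above is easy to check numerically. A minimal NumPy sketch with sum aggregation (the weights, toy path graph, and shapes are invented for illustration; in the real layer `W^1` and `W^2` are trainable parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
out_dim, in_dim = 4, 3
W1 = rng.normal(size=(out_dim, in_dim))
W2 = rng.normal(size=(out_dim, in_dim))

# 3-node path graph 1-2-3; node features are the columns of X.
X = rng.normal(size=(in_dim, 3))
neighbors = {0: [1], 1: [0, 2], 2: [1]}

# x_i' = W1 x_i + sum_{j in N(i)} W2 x_j   (aggr = sum)
X_new = np.stack(
    [W1 @ X[:, i] + sum(W2 @ X[:, j] for j in nbrs)
     for i, nbrs in neighbors.items()], axis=1)
# X_new has shape (out_dim, num_nodes)
```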
@@ -206,7 +207,7 @@ end
 concat=true,
 init=glorot_uniform
 bias=true,
-negative_slope=0.2)
+negative_slope=0.2f0)
 
 Graph attentional layer from the paper [Graph Attention Networks](https://arxiv.org/abs/1710.10903).
 
@@ -216,7 +217,7 @@ Implements the operation
 ```
 where the attention coefficient ``\alpha_{ij}`` is given by
 ```math
-\alpha_{ij} = \frac{1}{z_i} exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i || W \mathbf{x}_j]))
+\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i || W \mathbf{x}_j]))
 ```
 with ``z_i`` a normalization factor.
 
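The attention coefficients above can be sketched in a few lines of NumPy (Python rather than Julia; the random `W`, `a`, and neighborhood are invented for this example, and `z_i` is taken as the sum of the exponentials over the neighborhood so the coefficients normalize):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.2):
    return np.where(x > 0, x, negative_slope * x)

rng = np.random.default_rng(1)
d = 3
W = rng.normal(size=(d, d))
a = rng.normal(size=2 * d)       # attention vector
X = rng.normal(size=(d, 4))      # features of 4 nodes

# Unnormalized scores e_ij for node i = 0 attending over its neighbors j.
i, nbrs = 0, [1, 2, 3]
e = np.array([leaky_relu(a @ np.concatenate([W @ X[:, i], W @ X[:, j]]))
              for j in nbrs])

# alpha_ij = exp(e_ij) / z_i, with z_i the sum over the neighborhood.
alpha = np.exp(e) / np.exp(e).sum()
# alpha is positive and sums to 1 over N(i).
```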
@@ -301,7 +302,7 @@ Gated graph convolution layer from [Gated Graph Sequence Neural Networks](https:
 Implements the recursion
 ```math
 \mathbf{h}^{(0)}_i = \mathbf{x}_i || \mathbf{0} \\
-\mathbf{h}^{(l)}_i = GRU(\mathbf{h}^{(l-1)}_i, \box_{j \in N(i)} W \mathbf{h}^{(l-1)}_j)
+\mathbf{h}^{(l)}_i = GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j)
 ```
 
 where ``\mathbf{h}^{(l)}_i`` denotes the ``l``-th hidden variables passing through GRU. The dimension of input ``\mathbf{x}_i`` needs to be less or equal to `out`.
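One step of this recursion can be sketched in NumPy (a toy stand-in, not the layer's actual parameterization: the `gru_cell` below is a textbook GRU update with random weights, and the neighbor states are random):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(h, m, Wz, Uz, Wr, Ur, Wh, Uh):
    """Standard GRU update: previous state h, aggregated message m."""
    z = sigmoid(Wz @ m + Uz @ h)              # update gate
    r = sigmoid(Wr @ m + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ m + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(2)
out, in_dim = 4, 2
params = [rng.normal(size=(out, out)) for _ in range(6)]
W = rng.normal(size=(out, out))

# h^(0)_i = x_i || 0 : pad the input features up to `out` with zeros,
# which is why in_dim must not exceed `out`.
x = rng.normal(size=in_dim)
h = np.concatenate([x, np.zeros(out - in_dim)])

# One recursion step for node i, sum-aggregating the neighbors' states.
neighbor_h = [rng.normal(size=out), rng.normal(size=out)]
m = sum(W @ hj for hj in neighbor_h)
h = gru_cell(h, m, *params)
```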
@@ -369,7 +370,7 @@ Edge convolutional layer from paper [Dynamic Graph CNN for Learning on Point Clouds]
 
 Performs the operation
 ```math
-\mathbf{x}_i' = \box_{j \in N(i)} f(\mathbf{x}_i || \mathbf{x}_j - \mathbf{x}_i)
+\mathbf{x}_i' = \square_{j \in N(i)} f(\mathbf{x}_i || \mathbf{x}_j - \mathbf{x}_i)
 ```
 
 where `f` typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.
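A NumPy sketch of this operation with `f` as a linear map and elementwise max aggregation (the matrix `M`, toy graph, and shapes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
d, out = 3, 4
M = rng.normal(size=(out, 2 * d))   # stand-in for the learnable f

def f(xi, xj):
    # f(x_i || x_j - x_i): concatenate, then apply the linear map.
    return M @ np.concatenate([xi, xj - xi])

X = rng.normal(size=(d, 4))
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

# x_i' = max_{j in N(i)} f(x_i || x_j - x_i)   (elementwise max)
X_new = np.stack(
    [np.max([f(X[:, i], X[:, j]) for j in nbrs], axis=0)
     for i, nbrs in neighbors.items()], axis=1)
# X_new has shape (out, num_nodes)
```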
