
Commit 8bbfb66

Update conv.md (#148)

* Update conv.md: added layer `GMMConv`
* Improved GMM docstring; added `math` token in GAT docstring
* Update conv.jl (Sigma inverse updated)
* Update src/layers/conv.jl

Co-authored-by: Carlo Lucibello <[email protected]>
1 parent 8eddb5f commit 8bbfb66

File tree

2 files changed (+10, -8 lines)


docs/src/api/conv.md

Lines changed: 1 addition & 0 deletions
@@ -24,6 +24,7 @@ The table below lists all graph convolutional layers implemented in the *GraphNeuralNetworks.jl*
 | [`GatedGraphConv`](@ref) | ✓ |   |   |
 | [`GCNConv`](@ref)        | ✓ | ✓ |   |
 | [`GINConv`](@ref)        | ✓ |   |   |
+| [`GMMConv`](@ref)        |   |   | ✓ |
 | [`GraphConv`](@ref)      | ✓ |   |   |
 | [`MEGNetConv`](@ref)     |   |   | ✓ |
 | [`NNConv`](@ref)         |   |   | ✓ |
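The new row marks `GMMConv` as an edge-feature layer. For orientation, a minimal usage sketch in Julia; the `(in, ein) => out` constructor form, the `K` and `residual` keywords, and the `rand_graph` helper are assumptions drawn from the docstring updated below, not guaranteed by this diff.

```julia
using GraphNeuralNetworks  # assumed package providing GMMConv and rand_graph

nin, ein, nout = 8, 3, 16                 # node features, pseudo-coordinates, outputs
g = rand_graph(10, 30)                    # small random graph: 10 nodes, 30 edges
x = randn(Float32, nin, g.num_nodes)      # node feature array, (num_features, num_nodes)
e = randn(Float32, ein, g.num_edges)      # edge pseudo-coordinates, (num_features, num_edges)

# hypothetical constructor call: K Gaussian kernels, no residual connection
l = GMMConv((nin, ein) => nout, tanh; K = 2, residual = false)
y = l(g, x, e)                            # expected size: (nout, num_nodes)
```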

src/layers/conv.jl

Lines changed: 9 additions & 8 deletions
@@ -273,7 +273,7 @@ with ``z_i`` a normalization factor.
 
 In case `ein > 0` is given, edge features of dimension `ein` will be expected in the forward pass
 and the attention coefficients will be calculated as
-```
+```math
 \alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))
 ````
 
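To make the corrected attention formula concrete, here is a stand-alone Julia sketch of the coefficient for a single edge; `We`, `W`, and `a` stand in for the layer's learnable parameters, and all dimensions are made up for illustration.

```julia
using LinearAlgebra

leakyrelu(x, slope = 0.2) = max(x, slope * x)  # the LeakyReLU used in the formula

ein, nin, nout = 4, 8, 16
We = randn(nout, ein)        # W_e, projects the edge features e_{j→i}
W  = randn(nout, nin)        # W, projects the node features
a  = randn(3 * nout)         # attention vector applied to the concatenation

e_ji   = randn(ein)          # features of the edge j → i
xi, xj = randn(nin), randn(nin)

# exp(LeakyReLU(aᵀ [W_e e_{j→i}; W x_i; W x_j])), before normalization
alpha_unnorm = exp(leakyrelu(dot(a, vcat(We * e_ji, W * xi, W * xj))))
# α_{ij} is obtained by dividing by z_i, the sum of this quantity over j ∈ N(i)
```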
@@ -1071,17 +1071,18 @@ end
 Graph mixture model convolution layer from the paper [Geometric deep learning on graphs and manifolds using mixture model CNNs](https://arxiv.org/abs/1611.08402)
 Performs the operation
 ```math
-\mathbf{x}_i' = \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^k \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j
+\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j
 ```
-where
+where ``w^a_{k}(e^a)`` for feature `a` and kernel `k` is given by
 ```math
-w^a_{k}(e^a) = \exp(\frac{-1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))
+w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))
 ```
-$\Theta_k$, $\mu^a_k$, $\Sigma^{-1})^a_k$ are learnable parameters.
-
-The input to the layer is a node feature array 'X' of size `(num_features, num_nodes)` and
-edge pseudo-cordinate array 'U' of size `(num_features, num_edges)`
+``\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k`` are learnable parameters.
 
+The input to the layer is a node feature array `x` of size `(num_features, num_nodes)` and
+edge pseudo-coordinate array `e` of size `(num_features, num_edges)`
+The residual ``\mathbf{x}_i`` is added only if `residual=true` and the output size is the same
+as the input size.
 # Arguments
 
 - `in`: Number of input node features.
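A worked sketch of the corrected kernel weight may help: with a diagonal inverse covariance, each pseudo-coordinate `a` contributes an independent scalar factor, so the quadratic form reduces to a square. Here `mu` and `sigma_inv` are stand-ins for the learnable ``\mu^a_k`` and ``(\Sigma^{-1})^a_k``, and the values are made up.

```julia
ein, K = 3, 4
mu        = randn(ein, K)   # μ[a, k], kernel means per pseudo-coordinate
sigma_inv = rand(ein, K)    # diagonal entries of (Σ⁻¹)[a, k], kept positive here
e = randn(ein)              # pseudo-coordinates of one edge j → i

# w[a, k] = exp(-(e[a] - μ[a, k])² Σ⁻¹[a, k] / 2), the scalar case of the formula
w = [exp(-0.5 * (e[a] - mu[a, k])^2 * sigma_inv[a, k]) for a in 1:ein, k in 1:K]
# averaging w_k(e) ⊙ Θ_k x_j over the K kernels and neighbors j ∈ N(i) gives x_i′
```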
