Commit 2d1a643

Add GATConv docstring
1 parent 8b30efd commit 2d1a643

1 file changed: +63 -0

GNNLux/src/layers/conv.jl

@@ -784,6 +784,69 @@ function Base.show(io::IO, l::DConv)
    print(io, "DConv($(l.in_dims) => $(l.out_dims), k=$(l.k))")
end

@doc raw"""
    GATConv(in => out, σ = identity; heads = 1, concat = true, negative_slope = 0.2, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, dropout = 0.0)
    GATConv((in, ein) => out, ...)

Graph attentional layer from the paper [Graph Attention Networks](https://arxiv.org/abs/1710.10903).

Implements the operation
```math
\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j
```
where the attention coefficients ``\alpha_{ij}`` are given by
```math
\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))
```
with ``z_i = \sum_{k \in N(i) \cup \{i\}} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_k]))`` a normalization factor, so that the coefficients form a softmax over each node's neighborhood.
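For concreteness, the score for a single edge can be sketched directly in Julia. This is only an illustration of the formula above, not the layer's internal implementation; `W`, `a`, and the node feature vectors are assumed given:

```julia
using LinearAlgebra: dot
using NNlib: leakyrelu

# Unnormalized score for the edge j -> i (single head): W is the weight
# matrix, a the attention vector, xi and xj are node feature vectors.
score(W, a, xi, xj; negative_slope = 0.2f0) =
    exp(leakyrelu(dot(a, vcat(W * xi, W * xj)), negative_slope))

# α_ij is score(W, a, xi, xj) divided by z_i, the sum of these scores
# over all j in N(i) ∪ {i}.
```
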
If `ein > 0` is given, edge features of dimension `ein` are expected in the forward pass,
and the attention coefficients are calculated as
```math
\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))
```
(see the second example below).

# Arguments

- `in`: The dimension of input node features.
- `ein`: The dimension of input edge features. Default 0 (i.e. no edge features are passed in the forward).
- `out`: The dimension of output node features.
- `σ`: Activation function. Default `identity`.
- `heads`: Number of attention heads. Default `1`.
- `concat`: Concatenate layer output or not. If not, the layer output is averaged over the heads (see the sketch after this list). Default `true`.
- `negative_slope`: The parameter of LeakyReLU. Default `0.2`.
- `init_weight`: Weights' initializer. Default `glorot_uniform`.
- `init_bias`: Bias initializer. Default `zeros32`.
- `use_bias`: Add learnable bias. Default `true`.
- `add_self_loops`: Add self loops to the graph before performing the convolution. Default `true`.
- `dropout`: Dropout probability on the normalized attention coefficients. Default `0.0`.

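As a small, self-contained illustration of how `concat` combines the `heads` outputs (hypothetical sizes, independent of the layer itself):

```julia
using Statistics: mean

# per-head outputs for a single node: out = 2, heads = 3
head_outs = [randn(Float32, 2) for _ in 1:3]

vcat(head_outs...)    # concat = true:  length out * heads = 6
mean(head_outs)       # concat = false: length out = 2
```
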
# Examples

```julia
using GNNLux, Lux, Random

# initialize random number generator
rng = Random.default_rng()

# create data
s = [1, 1, 2, 3]
t = [2, 3, 1, 1]
in_channel = 3
out_channel = 5
g = GNNGraph(s, t)
x = randn(rng, Float32, in_channel, g.num_nodes)

# create layer
l = GATConv(in_channel => out_channel; add_self_loops = false, use_bias = false, heads = 2, concat = true)

# setup layer
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)    # size(y) == (out_channel * heads, g.num_nodes)
```
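
The edge-feature variant can be used along the same lines. The following is a sketch assuming the forward pass takes the edge feature matrix `e` as an extra argument after `x`, and `ein` is a hypothetical edge feature dimension:

```julia
# create a layer expecting edge features (self loops are not added,
# since the inserted loops would carry no edge features)
ein = 4    # hypothetical edge feature dimension
l = GATConv((in_channel, ein) => out_channel; add_self_loops = false)
e = randn(rng, Float32, ein, g.num_edges)

ps, st = LuxCore.setup(rng, l)
y, st = l(g, x, e, ps, st)
```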
"""
@concrete struct GATConv <: GNNLayer
    dense_x
    dense_e
