
Commit 702cdc1

Merge pull request #176 from Animiral/pr/doc-GATv2Conv
Documentation fixes for GATv2Conv
2 parents aa755dd + 62111f0 commit 702cdc1

File tree

1 file changed: +8 −8 lines changed


src/layers/conv.jl

Lines changed: 8 additions & 8 deletions
@@ -275,16 +275,16 @@ In case `ein > 0` is given, edge features of dimension `ein` will be expected in
 and the attention coefficients will be calculated as
 ```math
 \alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))
-````
+```
 
 # Arguments
 
 - `in`: The dimension of input node features.
 - `ein`: The dimension of input edget features. Default 0 (i.e. no edge features passed in the forward).
 - `out`: The dimension of output node features.
 - `σ`: Activation function. Default `identity`.
-- `bias`: Learn the additive bias if true. Dafault `true`.
-- `heads`: Number attention heads. Dafault `1.
+- `bias`: Learn the additive bias if true. Default `true`.
+- `heads`: Number attention heads. Default `1`.
 - `concat`: Concatenate layer output or not. If not, layer output is averaged over the heads. Default `true`.
 - `negative_slope`: The parameter of LeakyReLU.Default `0.2`.
 - `add_self_loops`: Add self loops to the graph before performing the convolution. Default `true`.
@@ -388,14 +388,14 @@ Implements the operation
 ```
 where the attention coefficients ``\alpha_{ij}`` are given by
 ```math
-\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU([W_2 \mathbf{x}_i; W_1 \mathbf{x}_j]))
+\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_2 \mathbf{x}_i + W_1 \mathbf{x}_j))
 ```
 with ``z_i`` a normalization factor.
 
 In case `ein > 0` is given, edge features of dimension `ein` will be expected in the forward pass
 and the attention coefficients will be calculated as
 ```math
-\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU([W_3 \mathbf{e}_{j\to i}; W_2 \mathbf{x}_i; W_1 \mathbf{x}_j])).
+\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_3 \mathbf{e}_{j\to i} + W_2 \mathbf{x}_i + W_1 \mathbf{x}_j)).
 ```
 
 # Arguments
@@ -404,8 +404,8 @@ and the attention coefficients will be calculated as
 - `ein`: The dimension of input edget features. Default 0 (i.e. no edge features passed in the forward).
 - `out`: The dimension of output node features.
 - `σ`: Activation function. Default `identity`.
-- `bias`: Learn the additive bias if true. Dafault `true`.
-- `heads`: Number attention heads. Dafault `1.
+- `bias`: Learn the additive bias if true. Default `true`.
+- `heads`: Number attention heads. Default `1`.
 - `concat`: Concatenate layer output or not. If not, layer output is averaged over the heads. Default `true`.
 - `negative_slope`: The parameter of LeakyReLU.Default `0.2`.
 - `add_self_loops`: Add self loops to the graph before performing the convolution. Default `true`.
@@ -477,7 +477,7 @@ function (l::GATv2Conv)(g::GNNGraph, x::AbstractMatrix, e::Union{Nothing, Abstra
 
 
     function message(Wix, Wjx, e)
-        Wx = Wix + Wjx
+        Wx = Wix + Wjx # Note: this is equivalent to W * vcat(x_i, x_j) as in "How Attentive are Graph Attention Networks?"
         if e !== nothing
            Wx += reshape(l.dense_e(e), out, heads, :)
         end
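
For reference, the corrected docstring formula and the new code comment rest on the same identity: applying a single weight matrix to the concatenated pair ``[\mathbf{x}_i; \mathbf{x}_j]`` equals the sum of two per-node projections ``W_2 \mathbf{x}_i + W_1 \mathbf{x}_j``. The sketch below is not part of the commit; it is a minimal plain-Julia check of that identity under illustrative assumptions (the matrices `W1`, `W2`, the vector `a`, the dimensions, and the local `leakyrelu` definition are placeholders, not the layer's actual fields).

```julia
# Sketch only (not part of the commit): verify that W * vcat(x_i, x_j) with
# W = [W2 W1] equals W2 * x_i + W1 * x_j, the sum used in the GATv2 attention formula.
using LinearAlgebra, Random

Random.seed!(0)
in_dim, out_dim = 4, 3                  # illustrative feature dimensions
W1 = randn(out_dim, in_dim)             # projection of the neighbour feature x_j
W2 = randn(out_dim, in_dim)             # projection of the centre feature x_i
x_i, x_j = randn(in_dim), randn(in_dim)

W = hcat(W2, W1)                        # block matrix [W2 W1]
@assert W * vcat(x_i, x_j) ≈ W2 * x_i + W1 * x_j

# Unnormalized attention logit for the edge j → i, mirroring the corrected
# formula α_ij ∝ exp(aᵀ LeakyReLU(W2 x_i + W1 x_j)); `a` and the slope are placeholders.
leakyrelu(x; slope = 0.2) = max.(x, slope .* x)
a = randn(out_dim)
logit = dot(a, leakyrelu(W2 * x_i + W1 * x_j))
```

In the layer itself, `Wix` and `Wjx` are the per-head projections gathered over all edges, and the optional edge term `l.dense_e(e)` added in the `message` function plays the role of ``W_3 \mathbf{e}_{j\to i}`` in the docstring formula.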
