Commit 1116b41

Merge pull request #224 from CarloLucibello/cl/egnn

doc fixes

2 parents 24a22aa + 6edd405

File tree: 1 file changed, +20 −12 lines

src/layers/conv.jl

Lines changed: 20 additions & 12 deletions
@@ -142,9 +142,11 @@ X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}
 where ``Z^{(k)}`` is the ``k``-th term of Chebyshev polynomials, and can be calculated by the following recursive form:
 
 ```math
-Z^{(0)} = X \\
-Z^{(1)} = \hat{L} X \\
-Z^{(k)} = 2 \hat{L} Z^{(k-1)} - Z^{(k-2)}
+\begin{aligned}
+Z^{(0)} &= X \\
+Z^{(1)} &= \hat{L} X \\
+Z^{(k)} &= 2 \hat{L} Z^{(k-1)} - Z^{(k-2)}
+\end{aligned}
 ```
 
 with ``\hat{L}`` the [`scaled_laplacian`](@ref).
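Read as plain Julia, the recursion being reformatted here amounts to the following sketch. It only illustrates the formula, not the package's `ChebConv`; the nodes × features matrix orientation and the helper names are assumptions of the sketch.

```julia
# Illustrative sketch only (not GraphNeuralNetworks.jl's ChebConv).
# X: nodes × features matrix, L̂: precomputed scaled Laplacian, K: order.
function cheb_polynomials(L̂, X, K)
    Z = [X]                                    # Z⁽⁰⁾ = X
    K ≥ 2 && push!(Z, L̂ * X)                   # Z⁽¹⁾ = L̂ X
    for k in 3:K                               # Z⁽ᵏ⁾ = 2 L̂ Z⁽ᵏ⁻¹⁾ - Z⁽ᵏ⁻²⁾
        push!(Z, 2 * (L̂ * Z[end]) - Z[end-1])
    end
    return Z
end

# X' = Σₖ Z⁽ᵏ⁾ W⁽ᵏ⁾, one weight matrix per Chebyshev order.
chebconv(Z, W) = sum(Z[k] * W[k] for k in eachindex(Z))
```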
@@ -514,11 +516,13 @@ Gated graph convolution layer from [Gated Graph Sequence Neural Networks](https:
 
 Implements the recursion
 ```math
-\mathbf{h}^{(0)}_i = [\mathbf{x}_i; \mathbf{0}] \\
-\mathbf{h}^{(l)}_i = GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j)
+\begin{aligned}
+\mathbf{h}^{(0)}_i &= [\mathbf{x}_i; \mathbf{0}] \\
+\mathbf{h}^{(l)}_i &= GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j)
+\end{aligned}
 ```
 
-where ``\mathbf{h}^{(l)}_i`` denotes the ``l``-th hidden variables passing through GRU. The dimension of input ``\mathbf{x}_i`` needs to be less or equal to `out`.
+where ``\mathbf{h}^{(l)}_i`` denotes the ``l``-th hidden variables passing through GRU. The dimension of input ``\mathbf{x}_i`` needs to be less or equal to `out`.
 
 # Arguments
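For reference, this recursion can be sketched in a few lines of Julia. Here `gru`, `W`, and the adjacency matrix `A` are placeholders of the sketch (the real layer uses a Flux GRU cell and message passing), and sum aggregation stands in for ``\square``:

```julia
# Illustrative sketch of the GatedGraphConv recursion, not the real layer.
# X: in × n node features (in ≤ out), A: n × n adjacency matrix,
# W: out × out weight matrix, gru(h, m): any GRU-style update function.
function gated_graph_conv(gru, W, A, X, num_layers)
    out, n = size(W, 1), size(X, 2)
    H = vcat(X, zeros(eltype(X), out - size(X, 1), n))  # h⁽⁰⁾ = [x; 0]
    for _ in 1:num_layers
        M = (W * H) * A    # sum-aggregate W h⁽ˡ⁻¹⁾ over each node's neighbors
        H = gru(H, M)      # h⁽ˡ⁾ = GRU(h⁽ˡ⁻¹⁾, m)
    end
    return H
end
```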
@@ -1005,8 +1009,10 @@ paper. In the forward pass, takes as inputs node features `x` and edge features
 updated features `x'` and `e'` according to
 
 ```math
+\begin{aligned}
 \mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\
 \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']).
+\end{aligned}
 ```
 
 `aggr` defines the aggregation to be performed.
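A sketch of the two updates follows, under the assumptions that `phi_e` and `phi_v` are MLPs applied column-wise (e.g. Flux `Chain`s), that mean aggregation is the chosen `aggr`, and that `s`/`t` hold the source/target node of each edge; all names are illustrative, not the package API:

```julia
# Illustrative sketch of the edge/node updates, not the package code.
# x: node features (dx × n), e: edge features (de × E),
# s, t: source/target node index per edge; mean stands in for `aggr`.
function megnet_update(phi_e, phi_v, x, e, s, t)
    enew = phi_e(vcat(x[:, s], x[:, t], e))  # e'_{i→j} = ϕₑ([xᵢ; xⱼ; e_{i→j}])
    n = size(x, 2)
    agg = zeros(eltype(enew), size(enew, 1), n)
    cnt = zeros(Int, n)
    for (k, j) in pairs(t)                   # mean of incoming e'_{j→i}
        agg[:, j] .+= enew[:, k]
        cnt[j] += 1
    end
    agg ./= max.(cnt, 1)'
    xnew = phi_v(vcat(x, agg))               # x'ᵢ = ϕᵥ([xᵢ; □ⱼ e'_{j→i}])
    return xnew, enew
end
```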
@@ -1312,14 +1318,16 @@ Neural Networks](https://arxiv.org/abs/2102.09844).
 The layer performs the following operation:
 
 ```math
-\mathbf{m}_{j\to i}=\phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\
-\mathbf{x}_i' = \mathbf{h}_i{x_i} + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\
-\mathbf{m}_i = C_i\sum_{j\in\mathcal{N}(i)} \mathbf{m}_{j\to i},\\
-\mathbf{h}_i' = \mathbf{h}_i + \phi_h(\mathbf{h}_i, \mathbf{m}_i)
+\begin{aligned}
+\mathbf{m}_{j\to i} &= \phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\
+\mathbf{x}_i' &= \mathbf{x}_i + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\
+\mathbf{m}_i &= C_i\sum_{j\in\mathcal{N}(i)} \mathbf{m}_{j\to i},\\
+\mathbf{h}_i' &= \mathbf{h}_i + \phi_h(\mathbf{h}_i, \mathbf{m}_i)
+\end{aligned}
 ```
-where ``h_i``, ``x_i``, ``e_{ij}`` are invariant node features, equivariance node
+where ``\mathbf{h}_i``, ``\mathbf{x}_i``, ``\mathbf{e}_{j\to i}`` are invariant node features, equivariant node
 features, and edge features respectively. ``\phi_e``, ``\phi_h``, and
-``\phi_x`` are two-layer MLPs. :math:`C` is a constant for normalization,
+``\phi_x`` are two-layer MLPs. `C` is a constant for normalization,
 computed as ``1/|\mathcal{N}(i)|``.
 
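The corrected coordinate update ``\mathbf{x}_i' = \mathbf{x}_i + \dots`` is the crux of this hunk. A per-node sketch of the full step follows; edge features are omitted for brevity, ``\phi_x`` is assumed to emit one scalar per message, and `neighbors(i)` is a placeholder for ``\mathcal{N}(i)``, not the library API:

```julia
# Illustrative sketch of one EGNN step, not GraphNeuralNetworks.jl's layer.
# h: invariant features (dh × n), x: coordinates (dx × n);
# edge features are omitted for brevity.
function egnn_step(phi_e, phi_x, phi_h, h, x, neighbors)
    hnew, xnew = similar(h), similar(x)
    for i in 1:size(h, 2)
        N = neighbors(i)
        C = 1 / length(N)                            # Cᵢ = 1/|N(i)|
        ms = [phi_e(vcat(h[:, i], h[:, j],            # m_{j→i}
                         sum(abs2, x[:, i] - x[:, j]))) for j in N]
        xnew[:, i] = x[:, i] + C * sum((x[:, i] - x[:, j]) * only(phi_x(ms[k]))
                                       for (k, j) in enumerate(N))
        m = C * sum(ms)                              # mᵢ
        hnew[:, i] = h[:, i] + phi_h(vcat(h[:, i], m))
    end
    return hnew, xnew
end
```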
