
Commit ba11ae1

more docs
1 parent b7bdaf4

File tree: 10 files changed, +27 −23 lines


docs/Project.toml

Lines changed: 2 additions & 0 deletions
@@ -1,3 +1,5 @@
 [deps]
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 GraphNeuralNetworks = "cffab07f-9bc2-4db1-8861-388f63bf7694"
+NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"

docs/make.jl

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-using GraphNeuralNetworks
+using Flux, NNlib, GraphNeuralNetworks
 using Documenter

 DocMeta.setdocmeta!(GraphNeuralNetworks, :DocTestSetup, :(using GraphNeuralNetworks); recursive=true)
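
The commit loads Flux and NNlib in `make.jl`, presumably so that Documenter can resolve references to symbols coming from those packages when building the pages. For context, a minimal sketch of how such a build script typically fits together is below; only the `using` line and the `setdocmeta!` call come from this commit, and the `makedocs` options (`sitename`, `pages`) are illustrative assumptions:

```julia
using Flux, NNlib, GraphNeuralNetworks
using Documenter

# Run `using GraphNeuralNetworks` before every doctest block.
DocMeta.setdocmeta!(GraphNeuralNetworks, :DocTestSetup,
                    :(using GraphNeuralNetworks); recursive=true)

makedocs(;
    modules  = [GraphNeuralNetworks],
    sitename = "GraphNeuralNetworks.jl",                  # assumed name
    pages    = ["Home" => "index.md",                     # assumed layout
                "Message Passing" => "messagepassing.md",
                "API" => ["api/messagepassing.md"]],
)
```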

docs/src/api/messagepassing.md

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+# Message Passing
+
+```@docs
+GraphNeuralNetworks.message
+GraphNeuralNetworks.update
+GraphNeuralNetworks.propagate
+```

docs/src/messagepassing.md

Lines changed: 0 additions & 5 deletions
@@ -1,7 +1,2 @@
 # Message Passing

-```@docs
-message
-update
-propagate
-```

docs/src/models.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ their models.

 In what follows, we discuss two different styles for model creation:
 the *explicit modeling* style, more verbose but more flexible,
-and the *implicity modeling* style based on [`GNNChain`](@ref), more concise but less flexible.
+and the *implicit modeling* style based on [`GNNChain`](@ref), more concise but less flexible.

 ## Explicit modeling

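The corrected sentence contrasts the two modeling styles. As a hedged illustration of the distinction, the `GCNConv` and `GNNChain` names follow the docstrings elsewhere in this commit, while the constructor forms and the custom struct are assumptions:

```julia
using Flux, GraphNeuralNetworks

# Explicit modeling: a custom struct; verbose, but the forward pass is
# fully under the user's control.
struct MyGNN
    conv
    dense
end
Flux.@functor MyGNN

(m::MyGNN)(g::GNNGraph, x) = m.dense(m.conv(g, x))

model = MyGNN(GCNConv(3 => 8, relu), Dense(8, 2))

# Implicit modeling: the same network as a GNNChain; concise, but limited
# to a linear sequence of layers.
model_chain = GNNChain(GCNConv(3 => 8, relu), Dense(8, 2))
```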
src/gnngraph.jl

Lines changed: 3 additions & 3 deletions
@@ -26,7 +26,7 @@ is preserved and shared.
 A `GNNGraph` is a LightGraphs' `AbstractGraph`, therefore any functionality
 from the LightGraphs' graph library can be used on it.

-## Arguments
+# Arguments

 - `data`: Some data representing the graph topology. Possible types are
     - An adjacency matrix
@@ -301,7 +301,7 @@ end

 Normalized Laplacian matrix of graph `g`.

-## Arguments
+# Arguments

 - `g`: A `GNNGraph`.
 - `T`: result element type.
@@ -331,7 +331,7 @@ end
 Scaled Laplacian matrix of graph `g`,
 defined as ``\hat{L} = \frac{2}{\lambda_{max}} L - I`` where ``L`` is the normalized Laplacian matrix.

-## Arguments
+# Arguments

 - `g`: A `GNNGraph`.
 - `T`: result element type.
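
A short sketch of the documented pieces, assuming `GNNGraph` accepts an adjacency matrix (as the `data` argument list states) and that both Laplacian functions take the graph plus a result element type `T`:

```julia
using GraphNeuralNetworks

# 3-node path graph given as an adjacency matrix.
A = [0 1 0;
     1 0 1;
     0 1 0]
g = GNNGraph(A)

L  = normalized_laplacian(g, Float32)  # I - D^(-1/2) A D^(-1/2)
Lh = scaled_laplacian(g, Float32)      # (2 / λ_max) L - I
```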

src/layers/basic.jl

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ and if names are given, `m[:name] == m[1]` etc.

 ## Examples

-```jldoctest
+```
 julia> m = GNNChain(x -> x^2, x -> x+1);

 julia> m(5) == 26
src/layers/conv.jl

Lines changed: 7 additions & 7 deletions
@@ -12,7 +12,7 @@ where ``c_{ij} = \sqrt{N(i)\,N(j)}``.
 The input to the layer is a node feature array `X`
 of size `(num_features, num_nodes)`.

-## Arguments
+# Arguments

 - `in`: Number of input features.
 - `out`: Number of output features.
@@ -89,7 +89,7 @@ Z^{(k)} = 2 \hat{L} Z^{(k-1)} - Z^{(k-2)}

 with ``\hat{L}`` the [`scaled_laplacian`](@ref).

-## Arguments
+# Arguments

 - `in`: The dimension of input features.
 - `out`: The dimension of output features.
@@ -151,7 +151,7 @@ Performs:
 ```
 where the aggregation type is selected by `aggr`.

-## Arguments
+# Arguments

 - `in`: The dimension of input features.
 - `out`: The dimension of output features.
@@ -220,7 +220,7 @@ where the attention coefficient ``\alpha_{ij}`` is given by
 ```
 with ``z_i`` a normalization factor.

-## Arguments
+# Arguments

 - `in`: The dimension of input features.
 - `out`: The dimension of output features.
@@ -306,7 +306,7 @@ Implements the recursion

 where ``\mathbf{h}^{(l)}_i`` denotes the ``l``-th hidden variable passing through the GRU. The dimension of input ``\mathbf{x}_i`` needs to be less than or equal to `out`.

-## Arguments
+# Arguments

 - `out`: The dimension of output features.
 - `num_layers`: The number of gated recurrent units.
@@ -374,7 +374,7 @@ Performs the operation

 where `f` typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

-## Arguments
+# Arguments

 - `f`: A (possibly learnable) function acting on edge features.
 - `aggr`: Aggregation operator for the incoming messages (e.g. `+`, `*`, `max`, `min`, and `mean`).
@@ -418,7 +418,7 @@ Graph Isomorphism convolutional layer from paper [How Powerful are Graph Neural
 ```
 where `f` typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

-## Arguments
+# Arguments

 - `f`: A (possibly learnable) function acting on node features.
 - `eps`: Weighting factor.
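
For orientation, a hedged usage sketch combining two of the layers documented above; the constructor forms are inferred from the argument lists (`in`, `out`, `aggr`) and the feature-array convention `(num_features, num_nodes)`, so they should be checked against the actual methods:

```julia
using Flux, GraphNeuralNetworks

# 4-node ring graph from an adjacency matrix.
A = [0 1 0 1;
     1 0 1 0;
     0 1 0 1;
     1 0 1 0]
g = GNNGraph(A)
x = rand(Float32, 3, 4)                 # (num_features, num_nodes)

l1 = GCNConv(3 => 8, relu)
l2 = GraphConv(8 => 8, relu, aggr = +)  # aggr keyword is an assumption
y  = l2(g, l1(g, x))                    # layers applied as layer(graph, features)
```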

src/layers/pool.jl

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@ Global pooling layer.

 It pools all features with `aggr` operation.

-## Arguments
+# Arguments

 - `aggr`: An aggregate function applied to pool all features.
 """
@@ -29,7 +29,7 @@ Local pooling layer.

 It pools features with `aggr` operation according to `cluster`. It is implemented with `scatter` operation.

-## Arguments
+# Arguments

 - `aggr`: An aggregate function applied to pool all features.
 - `cluster`: An index structure which indicates what features to aggregate with.
@@ -46,7 +46,7 @@ end

 Top-k pooling layer.

-## Arguments
+# Arguments

 - `adj`: Adjacency matrix of a graph.
 - `k`: Top-k nodes are selected to pool together.

src/msgpass.jl

Lines changed: 2 additions & 2 deletions
@@ -63,7 +63,7 @@ in order to [`update`](@ref) the features of node `i`.
 By default, the function returns `x_j`.
 Custom layers should specialize this method with the desired behavior.

-## Arguments
+# Arguments

 - `mp`: A gnn layer.
 - `x_i`: Features of the central node `i`.
@@ -86,7 +86,7 @@ aggregation `m̄`.
 By default, the function returns `m̄`.
 Custom layers should specialize this method with the desired behavior.

-## Arguments
+# Arguments

 - `mp`: A gnn layer.
 - `m̄`: Aggregated edge messages from the [`message`](@ref) function.
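
Putting the two documented functions together, a hedged sketch of a custom message-passing layer: the `message` and `update` signatures follow the argument lists above, while the exact `propagate` call form (layer, graph, aggregator, node features) is an assumption not shown in this commit:

```julia
using Flux, GraphNeuralNetworks
import GraphNeuralNetworks: message, update

struct SumPass
    weight::Matrix{Float32}
end
Flux.@functor SumPass

# Message sent from neighbor j to node i: a linear transform of x_j.
message(l::SumPass, x_i, x_j, e_ij) = l.weight * x_j

# Combine the aggregated messages m̄ with the previous node features x.
update(l::SumPass, m̄, x) = relu.(m̄ .+ x)

# Forward pass; the propagate signature here is assumed.
(l::SumPass)(g::GNNGraph, x) = propagate(l, g, +, x)
```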
