Commit 3ab7bf6
fix docs
1 parent a32fb04

16 files changed: +4408 −1832 lines

.github/workflows/docs.yml

Lines changed: 8 additions & 2 deletions
@@ -14,9 +14,15 @@ jobs:
       - uses: actions/checkout@v4
       - uses: julia-actions/setup-julia@latest
         with:
-          version: '1.9.1'
+          version: '1.10.4'
       - name: Install dependencies
-        run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
+        shell: julia --project=docs/ {0}
+        run: |
+          using Pkg
+          # dev mono repo versions
+          pkg"registry up"
+          Pkg.update()
+          pkg"dev ./GNNGraphs ."
       - name: Build and deploy
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # If authenticating with GitHub Actions token
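The new step uses Julia itself as the Actions shell: the runner writes the `run:` body to a temporary script and substitutes its path for `{0}`. The same docs environment can be reproduced locally with a sketch like the following (assumes you run it from the repository root; the final `Pkg.instantiate()` is an assumption added for local use, not part of the workflow):

```julia
# Local equivalent of the new "Install dependencies" step (sketch).
# Run from the repository root of GraphNeuralNetworks.jl.
using Pkg
Pkg.activate("docs")
pkg"registry up"         # refresh the package registry
Pkg.update()
pkg"dev ./GNNGraphs ."   # dev the GNNGraphs subpackage and the root package
Pkg.instantiate()        # assumption: not in the workflow, but useful locally
```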

docs/Project.toml

Lines changed: 2 additions & 2 deletions
@@ -2,11 +2,11 @@
 DemoCards = "311a05b2-6137-4a5a-b473-18580a3d38b5"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
+GNNGraphs = "aed8fd31-079b-4b5a-b342-a13352159b8c"
 GraphNeuralNetworks = "cffab07f-9bc2-4db1-8861-388f63bf7694"
 Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
-MarkdownLiteral = "736d6165-7244-6769-4267-6b50796e6954"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
 Pluto = "c3e4b0f8-55cb-11ea-2926-15256bba5781"

@@ -17,4 +17,4 @@ Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"

 [compat]
 DemoCards = "0.5.0"
-Documenter = "0.27"
+Documenter = "1.5"

docs/make.jl

Lines changed: 6 additions & 5 deletions
@@ -1,11 +1,12 @@
 using Flux, NNlib, GraphNeuralNetworks, Graphs, SparseArrays
+using GNNGraphs
 using Pluto, PlutoStaticHTML # for tutorials
 using Documenter, DemoCards

-tutorials, tutorials_cb, tutorial_assets = makedemos("tutorials")
+# tutorials, tutorials_cb, tutorial_assets = makedemos("tutorials")

 assets = []
-isnothing(tutorial_assets) || push!(assets, tutorial_assets)
+# isnothing(tutorial_assets) || push!(assets, tutorial_assets)

 DocMeta.setdocmeta!(GraphNeuralNetworks, :DocTestSetup,
                     :(using GraphNeuralNetworks, Graphs, SparseArrays, NNlib, Flux);

@@ -15,7 +16,7 @@ prettyurls = get(ENV, "CI", nothing) == "true"
 mathengine = MathJax3()

 makedocs(;
-         modules = [GraphNeuralNetworks, NNlib, Flux, Graphs, SparseArrays],
+         modules = [GraphNeuralNetworks, GNNGraphs],
          doctest = false,
          clean = true,
          format = Documenter.HTML(; mathengine, prettyurls, assets = assets),

@@ -25,7 +26,7 @@ makedocs(;
          "Message Passing" => "messagepassing.md",
          "Model Building" => "models.md",
          "Datasets" => "datasets.md",
-         "Tutorials" => tutorials,
+         # "Tutorials" => tutorials,
          "API Reference" => [
              "GNNGraph" => "api/gnngraph.md",
              "Basic Layers" => "api/basic.md",

@@ -40,6 +41,6 @@ makedocs(;
          "Summer Of Code" => "gsoc.md",
          ])

-tutorials_cb()
+# tutorials_cb()

 deploydocs(repo = "github.com/CarloLucibello/GraphNeuralNetworks.jl.git")

docs/pluto_output/gnn_intro_pluto.md

Lines changed: 15 additions & 15 deletions
Large diffs are not rendered by default.

docs/pluto_output/graph_classification_pluto.md

Lines changed: 11 additions & 11 deletions
@@ -1,13 +1,13 @@
 ```@raw html
 <style>
-    table {
+    #documenter-page table {
         display: table !important;
         margin: 2rem auto !important;
         border-top: 2pt solid rgba(0,0,0,0.2);
         border-bottom: 2pt solid rgba(0,0,0,0.2);
     }

-    pre, div {
+    #documenter-page pre, #documenter-page div {
         margin-top: 1.4rem !important;
         margin-bottom: 1.4rem !important;
     }

@@ -25,8 +25,8 @@
 <!--
 # This information is used for caching.
 [PlutoStaticHTML.State]
-input_sha = "f145b80b8f1e399d4cd5686b529cf173942102c538702952fe0743defca62210"
-julia_version = "1.9.1"
+input_sha = "62d9b08cdb51a5d174d1d090f3e4834f98df0c30b8b515e5befdd8fa22bd5c7f"
+julia_version = "1.10.4"
 -->
 <pre class='language-julia'><code class='language-julia'>begin
     using Flux

@@ -102,7 +102,7 @@ end</code></pre>
 <div class="markdown"><p>We have some useful utilities for working with graph datasets, <em>e.g.</em>, we can shuffle the dataset and use the first 150 graphs as training graphs, while using the remaining ones for testing:</p></div>

 <pre class='language-julia'><code class='language-julia'>train_data, test_data = splitobs((graphs, y), at = 150, shuffle = true) |&gt; getobs</code></pre>
-<pre class="code-output documenter-example-output" id="var-train_data">((GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(12, 24) with x: 7×12 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(25, 56) with x: 7×25 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(18, 38) with x: 7×18 data, GNNGraph(23, 52) with x: 7×23 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(20, 46) with x: 7×20 data … GNNGraph(16, 34) with x: 7×16 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(12, 24) with x: 7×12 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(16, 36) with x: 7×16 data], Bool[1 0 … 1 0; 0 1 … 0 1]), (GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(21, 44) with x: 7×21 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(27, 66) with x: 7×27 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(13, 28) with x: 7×13 data … GNNGraph(11, 22) with x: 7×11 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(18, 40) with x: 7×18 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(22, 50) with x: 7×22 data], Bool[0 0 … 0 0; 1 1 … 1 1]))</pre>
+<pre class="code-output documenter-example-output" id="var-train_data">((GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(18, 38) with x: 7×18 data … GNNGraph(12, 26) with x: 7×12 data, GNNGraph(19, 40) with x: 7×19 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(26, 60) with x: 7×26 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(22, 50) with x: 7×22 data], Bool[0 0 … 0 0; 1 1 … 1 1]), (GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(26, 60) with x: 7×26 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(24, 50) with x: 7×24 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(17, 38) with x: 7×17 data … GNNGraph(12, 26) with x: 7×12 data, GNNGraph(23, 52) with x: 7×23 data, GNNGraph(12, 24) with x: 7×12 data, GNNGraph(23, 50) with x: 7×23 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(18, 40) with x: 7×18 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(28, 62) with x: 7×28 data, GNNGraph(11, 22) with x: 7×11 data], Bool[0 0 … 0 1; 1 1 … 1 0]))</pre>

 <pre class='language-julia'><code class='language-julia'>begin
     train_loader = DataLoader(train_data, batchsize = 32, shuffle = true)

@@ -113,7 +113,7 @@ end</code></pre>
 (32-element Vector{GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}, 2×32 OneHotMatrix(::Vector{UInt32}) with eltype Bool,)</pre>


-<div class="markdown"><p>Here, we opt for a <code>batch_size</code> of 32, leading to 5 (randomly shuffled) mini-batches, containing all <span class="tex">$4 \cdot 32+22 = 150$</span> graphs.</p></div>
+<div class="markdown"><p>Here, we opt for a <code>batch_size</code> of 32, leading to 5 (randomly shuffled) mini-batches, containing all <span class="tex">\(4 \cdot 32+22 = 150\)</span> graphs.</p></div>


 ```

@@ -123,15 +123,15 @@ end</code></pre>
 <p>Since graphs in graph classification datasets are usually small, a good idea is to <strong>batch the graphs</strong> before inputting them into a Graph Neural Network to guarantee full GPU utilization. In the image or language domain, this procedure is typically achieved by <strong>rescaling</strong> or <strong>padding</strong> each example into a set of equally-sized shapes, and examples are then grouped in an additional dimension. The length of this dimension is then equal to the number of examples grouped in a mini-batch and is typically referred to as the <code>batchsize</code>.</p><p>However, for GNNs the two approaches described above are either not feasible or may result in a lot of unnecessary memory consumption. Therefore, GraphNeuralNetworks.jl opts for another approach to achieve parallelization across a number of examples. Here, adjacency matrices are stacked in a diagonal fashion (creating a giant graph that holds multiple isolated subgraphs), and node and target features are simply concatenated in the node dimension (the last dimension).</p><p>This procedure has some crucial advantages over other batching procedures:</p><ol><li><p>GNN operators that rely on a message passing scheme do not need to be modified since messages are not exchanged between two nodes that belong to different graphs.</p></li><li><p>There is no computational or memory overhead since adjacency matrices are saved in a sparse fashion holding only non-zero entries, <em>i.e.</em>, the edges.</p></li></ol><p>GraphNeuralNetworks.jl can <strong>batch multiple graphs into a single giant graph</strong>:</p></div>

 <pre class='language-julia'><code class='language-julia'>vec_gs, _ = first(train_loader)</code></pre>
-<pre class="code-output documenter-example-output" id="var-vec_gs">(GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(17, 38) with x: 7×17 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(24, 50) with x: 7×24 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(15, 34) with x: 7×15 data … GNNGraph(16, 34) with x: 7×16 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(26, 60) with x: 7×26 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(24, 50) with x: 7×24 data], Bool[0 0 … 0 0; 1 1 … 1 1])</pre>
+<pre class="code-output documenter-example-output" id="var-vec_gs">(GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(25, 56) with x: 7×25 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 44) with x: 7×20 data … GNNGraph(12, 24) with x: 7×12 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data], Bool[0 0 … 0 0; 1 1 … 1 1])</pre>

 <pre class='language-julia'><code class='language-julia'>MLUtils.batch(vec_gs)</code></pre>
 <pre class="code-output documenter-example-output" id="var-hash102363">GNNGraph:
-  num_nodes: 585
-  num_edges: 1292
+  num_nodes: 575
+  num_edges: 1276
   num_graphs: 32
   ndata:
-    x = 7×585 Matrix{Float32}</pre>
+    x = 7×575 Matrix{Float32}</pre>


 <div class="markdown"><p>Each batched graph object is equipped with a <strong><code>graph_indicator</code> vector</strong>, which maps each node to its respective graph in the batch:</p><p class="tex">$$\textrm{graph\_indicator} = [1, \ldots, 1, 2, \ldots, 2, 3, \ldots ]$$</p></div>

@@ -154,7 +154,7 @@ end</code></pre>
 <pre class="code-output documenter-example-output" id="var-create_model">create_model (generic function with 1 method)</pre>


-<div class="markdown"><p>Here, we again make use of the <code>GCNConv</code> with <span class="tex">$\mathrm{ReLU}(x) = \max(x, 0)$</span> activation for obtaining localized node embeddings, before we apply our final classifier on top of a graph readout layer.</p><p>Let's train our network for a few epochs to see how well it performs on the training as well as test set:</p></div>
+<div class="markdown"><p>Here, we again make use of the <code>GCNConv</code> with <span class="tex">\(\mathrm{ReLU}(x) = \max(x, 0)\)</span> activation for obtaining localized node embeddings, before we apply our final classifier on top of a graph readout layer.</p><p>Let's train our network for a few epochs to see how well it performs on the training as well as test set:</p></div>

 <pre class='language-julia'><code class='language-julia'>function eval_loss_accuracy(model, data_loader, device)
     loss = 0.0
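The tutorial output above shows `MLUtils.batch` merging 32 graphs into one giant graph. A minimal sketch of this block-diagonal batching, using two `rand_graph` graphs with illustrative sizes rather than the MUTAG data:

```julia
using GraphNeuralNetworks, MLUtils

# Two small random graphs with 7-dimensional node features,
# standing in for the tutorial's MUTAG graphs.
g1 = rand_graph(10, 20, ndata = rand(Float32, 7, 10))
g2 = rand_graph(5, 10, ndata = rand(Float32, 7, 5))

# `batch` stacks adjacency matrices block-diagonally: the result is a
# single GNNGraph holding two isolated subgraphs, with node features
# concatenated along the node dimension.
g = MLUtils.batch([g1, g2])
@assert g.num_nodes == 15 && g.num_edges == 30
@assert g.num_graphs == 2
# graph_indicator maps each node to its source graph: [1,…,1,2,…,2]
@assert g.graph_indicator == [fill(1, 10); fill(2, 5)]
@assert size(g.ndata.x) == (7, 15)
```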

docs/pluto_output/node_classification_pluto.md

Lines changed: 9 additions & 9 deletions
Large diffs are not rendered by default.

docs/src/api/gnngraph.md

Lines changed: 6 additions & 6 deletions
@@ -26,15 +26,15 @@ GNNGraph
 ## DataStore

 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["datastore.jl"]
 Private = false
 ```

 ## Query

 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["query.jl"]
 Private = false
 ```

@@ -47,7 +47,7 @@ Graphs.inneighbors
 ## Transform

 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["transform.jl"]
 Private = false
 ```

@@ -62,7 +62,7 @@ GNNGraphs.color_refinement
 ## Generate

 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["generate.jl"]
 Private = false
 Filter = t -> typeof(t) <: Function && t!=rand_temporal_radius_graph && t!=rand_temporal_hyperbolic_graph

@@ -72,7 +72,7 @@ Filter = t -> typeof(t) <: Function && t!=rand_temporal_radius_graph && t!=rand_
 ## Operators

 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["operators.jl"]
 Private = false
 ```

@@ -84,7 +84,7 @@ Graphs.intersect
 ## Sampling

 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["sampling.jl"]
 Private = false
 ```

docs/src/api/heterograph.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Documentation page for the type `GNNHeteroGraph` representing heterogeneous grap


 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["gnnheterograph.jl"]
 Private = false
 ```

docs/src/api/temporalconv.md

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+```@meta
+CurrentModule = GraphNeuralNetworks
+```
+
+# Temporal Graph-Convolutional Layers
+
+Convolutions for time-varying graphs (temporal graphs) such as the [`TemporalSnapshotsGNNGraph`](@ref).
+
+## Docs
+
+```@autodocs
+Modules = [GraphNeuralNetworks]
+Pages = ["layers/temporalconv.jl"]
+Private = false
+```

docs/src/api/temporalgraph.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 Documentation page for the graph type `TemporalSnapshotsGNNGraph` and related methods, representing time varying graphs with time varying features.

 ```@autodocs
-Modules = [GraphNeuralNetworks.GNNGraphs]
+Modules = [GNNGraphs]
 Pages = ["temporalsnapshotsgnngraph.jl"]
 Private = false
 ```
