Commit 3c877aa

Merge pull request #3 from una-auxme/documentation
Fixed broken references
2 parents 79cde0e + 49a2811 commit 3c877aa

4 files changed: +24 −24 lines changed


src/feature_graph.jl (1 addition, 1 deletion)

@@ -6,7 +6,7 @@
 """
     FeatureGraph(nf, ef, senders, receivers)
 
-Data structure that is used as an input for the [GraphNetCore.GraphNetwork](@ref).
+Data structure that is used as an input for the [`GraphNetwork`](@ref).
 
 # Arguments
 - `nf`: Node features of the graph.
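This one-line fix shows the pattern repeated across the whole PR. In Documenter.jl, an `[text](@ref)` link whose text is plain prose is resolved against section headers, while a backticked code span is resolved against docstrings; since no header named "GraphNetCore.GraphNetwork" exists, the old form was broken. A minimal sketch of the two forms (illustrative docstring text, not taken from the repository):

```markdown
Broken: plain link text makes `@ref` look for a section header,
and no header "GraphNetCore.GraphNetwork" exists:

    ... an input for the [GraphNetCore.GraphNetwork](@ref).

Fixed: a code span makes `@ref` resolve against the docstring
of `GraphNetwork`:

    ... an input for the [`GraphNetwork`](@ref).
```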

src/graph_network.jl (15 additions, 15 deletions)

@@ -19,7 +19,7 @@ include("graph_net_blocks.jl")
 The central data structure that contains the neural network and the normalisers corresponding to the components of the GNN (edge features, node features and output).
 
 # Arguments
-- `model`: The Enocde-Process-Decode model as a [Lux.Chain](@ref).
+- `model`: The Enocde-Process-Decode model as a [Lux](https://github.com/LuxDL/Lux.jl) Chain.
 - `ps`: Parameters of the model.
 - `st`: State of the model.
 - `e_norm`: Normaliser for the edge features of the GNN.
@@ -57,7 +57,7 @@ end
 """
     build_model(quantities_size::Integer, dims, output_size::Integer, mps::Integer, layer_size::Integer, hidden_layers::Integer, device::Function)
 
-Constructs the Encode-Process-Decode model as a [Lux.Chain](@ref) with the given arguments.
+Constructs the Encode-Process-Decode model as a [Lux](https://github.com/LuxDL/Lux.jl) Chain with the given arguments.
 
 # Arguments
 - `quantities_size`: Sum of dimensions of each node feature.
@@ -66,10 +66,10 @@ Constructs the Encode-Process-Decode model as a [Lux.Chain](@ref) with the given
 - `mps`: Number of message passing steps.
 - `layer_size`: Size of hidden layers.
 - `hidden_layers`: Number of hidden layers.
-- `device`: Device where the model should be loaded (see [Lux.gpu_device()](@ref) and [Lux.cpu_device()](@ref)).
+- `device`: Device where the model should be loaded (see [Lux GPU Management](https://lux.csail.mit.edu/dev/manual/gpu_management#gpu-management)).
 
 # Returns
-- `model`: The Encode-Process-Decode model as a [Lux.Chain](@ref).
+- `model`: The Encode-Process-Decode model as a [Lux](https://github.com/LuxDL/Lux.jl) Chain.
 """
 function build_model(quantities_size::Integer, dims, output_size::Integer, mps::Integer, layer_size::Integer, hidden_layers::Integer, device::Function)
     encoder = Encoder(build_mlp(quantities_size, layer_size, layer_size, hidden_layers, dev=device), build_mlp(dims + 1, layer_size, layer_size, hidden_layers, dev=device))
@@ -103,8 +103,8 @@ end
 
 
 # Arguments
-- `gn`: The used [GraphNetCore.GraphNetwork](@ref).
-- `graph`: Input data stored in a [GraphNetCore.FeatureGraph](@ref).
+- `gn`: The used [`GraphNetwork`](@ref).
+- `graph`: Input data stored in a [`FeatureGraph`](@ref).
 - `target_quantities_change`: Derivatives of quantities of interest (e.g. via finite differences from data).
 - `mask`: Mask for excluding node types that should not be updated.
 - `loss_function`: Loss function that is used to calculate the error.
@@ -124,13 +124,13 @@ end
 """
     save!(gn, opt_state, df_train::DataFrame, df_valid::DataFrame, step::Integer, train_loss::Float32, path::String; is_training = true)
 
-Creates a checkpoint of the [GraphNetCore.GraphNetwork](@ref) at the given training step.
+Creates a checkpoint of the [`GraphNetwork`](@ref) at the given training step.
 
 # Arguments
-- `gn`: The [GraphNetCore.GraphNetwork](@ref) that a checkpoint is created of.
+- `gn`: The [`GraphNetwork`](@ref) that a checkpoint is created of.
 - `opt_state`: State of the optimiser.
-- `df_train`: [DataFrames.DataFram](@ref) that stores the train losses at the checkpoints.
-- `df_valid`: [DataFrames.DataFram](@ref) that stores the validation losses at the checkpoints (only improvements are saved).
+- `df_train`: [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl) DataFrame that stores the train losses at the checkpoints.
+- `df_valid`: [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl) DataFrame that stores the validation losses at the checkpoints (only improvements are saved).
 - `step`: Current training step where the checkpoint is created.
 - `train_loss`: Current training loss.
 - `path`: Path to the folder where checkpoints are saved.
@@ -178,7 +178,7 @@ end
 """
     load(quantities, dims, norms, output, message_steps, ls, hl, opt, device::Function, path::String)
 
-Loads the [GraphNetCore.GraphNetwork](@ref) from the latest checkpoint at the given path.
+Loads the [`GraphNetwork`](@ref) from the latest checkpoint at the given path.
 
 # Arguments
 - `quantities`: Sum of dimensions of each node feature.
@@ -189,14 +189,14 @@ Loads the [GraphNetCore.GraphNetwork](@ref) from the latest checkpoint at the gi
 - `ls`: Size of hidden layers.
 - `hl`: Number of hidden layers.
 - `opt`: Optimiser that is used for training. Set this to `nothing` if you want to use the optimiser from the checkpoint.
-- `device`: Device where the model should be loaded (see [Lux.gpu_device()](@ref) and [Lux.cpu_device()](@ref)).
+- `device`: Device where the model should be loaded (see [Lux GPU Management](https://lux.csail.mit.edu/dev/manual/gpu_management#gpu-management)).
 - `path`: Path to the folder where the checkpoint is.
 
 # Returns
-- `gn`: The loaded [GraphNetCore.GraphNetwork](@ref) from the checkpoint.
+- `gn`: The loaded [`GraphNetwork`](@ref) from the checkpoint.
 - `opt_state`: The loaded optimiser state. Is nothing if no checkpoint was found or an optimiser was passed as an argument.
-- `df_train`: [DataFrames.DataFram](@ref) containing the train losses at the checkpoints.
-- `df_valid`: [DataFrames.DataFram](@ref) containing the validation losses at the checkpoints (only improvements are saved).
+- `df_train`: [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl) DataFrame containing the train losses at the checkpoints.
+- `df_valid`: [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl) DataFrame containing the validation losses at the checkpoints (only improvements are saved).
 """
 function load(quantities, dims, norms, output, message_steps, ls, hl, opt, device::Function, path::String)
     if isfile(joinpath(path, "checkpoints"))
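The `build_model` docstring above names the Encode-Process-Decode pattern: an encoder MLP lifts node features into a latent space, `mps` processor blocks refine the latents, and a decoder MLP maps back to the output quantities. A shape-level Python sketch of that composition, under stated assumptions (plain residual MLPs stand in for the real graph-based message-passing blocks; all names are illustrative, not the Lux.jl API):

```python
import numpy as np

def mlp(in_size, layer_size, out_size, hidden_layers, rng):
    """Random-weight MLP: hidden_layers layers of width layer_size."""
    sizes = [in_size] + [layer_size] * hidden_layers + [out_size]
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

def encode_process_decode(nodes, quantities_size, output_size,
                          mps=2, layer_size=16, hidden_layers=2, seed=0):
    rng = np.random.default_rng(seed)
    encoder = mlp(quantities_size, layer_size, layer_size, hidden_layers, rng)
    processors = [mlp(layer_size, layer_size, layer_size, hidden_layers, rng)
                  for _ in range(mps)]        # one block per message passing step
    decoder = mlp(layer_size, layer_size, output_size, hidden_layers, rng)

    latent = forward(encoder, nodes)          # encode node features
    for proc in processors:                   # process: residual latent updates
        latent = latent + forward(proc, latent)
    return forward(decoder, latent)           # decode to output quantities

out = encode_process_decode(np.zeros((5, 3)), quantities_size=3, output_size=2)
print(out.shape)  # (5, 2): one output_size-vector per node
```

The real model additionally routes edge features and the sender/receiver connectivity of the `FeatureGraph` through the processor blocks; this sketch only shows how the three stages chain together dimensionally.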

src/normaliser.jl (5 additions, 5 deletions)

@@ -34,7 +34,7 @@ end
 Inverses the normalised data.
 
 # Arguments
-- `n`: The used [GraphNetCore.NormaliserOffline](@ref).
+- `n`: The used [`NormaliserOffline`](@ref).
 - `data`: Data to be converted back.
 
 # Returns
@@ -75,7 +75,7 @@ It is recommended to use offline normalization since the minimum and maximum do
 
 # Arguments
 - `dims`: Dimension of the quantity to normalize.
-- `device`: Device where the Normaliser should be loaded (see [Lux.gpu_device()](@ref) and [Lux.cpu_device()](@ref)).
+- `device`: Device where the Normaliser should be loaded (see [Lux GPU Management](https://lux.csail.mit.edu/dev/manual/gpu_management#gpu-management)).
 
 # Keyword Arguments
 - `max_acc = 10f6`: Maximum number of accumulation steps.
@@ -92,8 +92,8 @@ Online normalization if the minimum and maximum of the quantity is not known.
 It is recommended to use offline normalization since the minimum and maximum do not need to be inferred from data.
 
 # Arguments
-- `d`: Dictionary containing the fields of the struct [GraphNetCore.NormaliserOnline](@ref).
-- `device`: Device where the Normaliser should be loaded (see [Lux.gpu_device()](@ref) and [Lux.cpu_device()](@ref)).
+- `d`: Dictionary containing the fields of the struct [`NormaliserOnline`](@ref).
+- `device`: Device where the Normaliser should be loaded (see [Lux GPU Management](https://lux.csail.mit.edu/dev/manual/gpu_management#gpu-management)).
 """
 function NormaliserOnline(d::Dict{String, Any}, device::Function)
     NormaliserOnline(d["max_accumulations"], d["std_epsilon"], d["acc_count"], d["num_accumulations"], device(d["acc_sum"]), device(d["acc_sum_squared"]))
@@ -114,7 +114,7 @@ end
 Inverses the normalised data.
 
 # Arguments
-- `n`: The used [GraphNetCore.NormaliserOnline](@ref).
+- `n`: The used [`NormaliserOnline`](@ref).
 - `data`: Data to be converted back.
 
 # Returns
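The `NormaliserOnline` constructor visible in the hunk restores its state from fields such as `acc_sum` and `acc_sum_squared`, which suggests statistics accumulated as running sums from which mean and standard deviation are derived. A hedged Python sketch of that bookkeeping (only the field names come from the diff; the update rule and epsilon clamping are assumptions):

```python
import numpy as np

class OnlineNormaliser:
    """Sketch: accumulate per-dimension sums, normalise to zero mean / unit std."""

    def __init__(self, dims, max_accumulations=1e7, std_epsilon=1e-8):
        self.max_accumulations = max_accumulations
        self.std_epsilon = std_epsilon
        self.acc_count = 0.0          # total samples seen
        self.num_accumulations = 0.0  # total update calls
        self.acc_sum = np.zeros(dims)
        self.acc_sum_squared = np.zeros(dims)

    def _stats(self):
        count = max(self.acc_count, 1.0)
        mean = self.acc_sum / count
        var = np.maximum(self.acc_sum_squared / count - mean ** 2, 0.0)
        std = np.maximum(np.sqrt(var), self.std_epsilon)  # clamp tiny std
        return mean, std

    def __call__(self, data):  # data: (dims, n_samples)
        if self.num_accumulations < self.max_accumulations:
            self.acc_count += data.shape[1]
            self.num_accumulations += 1
            self.acc_sum += data.sum(axis=1)
            self.acc_sum_squared += (data ** 2).sum(axis=1)
        mean, std = self._stats()
        return (data - mean[:, None]) / std[:, None]

    def inverse(self, data):  # "Inverses the normalised data."
        mean, std = self._stats()
        return data * std[:, None] + mean[:, None]

norm = OnlineNormaliser(dims=2)
x = np.array([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]])
assert np.allclose(norm.inverse(norm(x)), x)  # round trip recovers the data
```

The round-trip assertion is why the docstrings pair each normaliser with an `inverse_data` method: predictions made in normalised space must be mapped back before use.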

src/utils.jl (3 additions, 3 deletions)

@@ -14,7 +14,7 @@ Converts the given faces of a mesh to edges.
 - `faces`: Two-dimensional array with the node indices in the first dimension.
 
 # Returns
-- A tuple containing the edge pairs. (See [parse_edges](@ref))
+- A tuple containing the edge pairs. (See [`parse_edges`](@ref))
 """
 function triangles_to_edges(faces::AbstractArray{T, 2} where T <: Integer)
     edges = hcat(faces[1:2, :], faces[2:3, :], permutedims(hcat(faces[3, :], faces[1, :])))
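The `triangles_to_edges` line visible in the hunk builds edge candidates by concatenating the three side pairs of every triangle: (a, b), (b, c) and (c, a). A Python sketch of that construction (the deduplication into unique undirected pairs is an assumption about the part of the Julia function not shown in the diff):

```python
import numpy as np

def triangles_to_edges(faces):
    """faces: integer array of shape (3, n_triangles), node indices per column."""
    edges = np.concatenate([faces[0:2, :],               # (a, b)
                            faces[1:3, :],               # (b, c)
                            faces[[2, 0], :]], axis=1)   # (c, a)
    # Treat edges as undirected: sort each pair, then drop duplicates
    # so that a side shared by two triangles appears only once.
    edges = np.sort(edges, axis=0)
    edges = np.unique(edges, axis=1)
    return edges[0], edges[1]  # tuple of endpoint index arrays

# Two triangles sharing the side (2, 3):
faces = np.array([[1, 2],
                  [2, 3],
                  [3, 4]])
senders, receivers = triangles_to_edges(faces)
print(np.stack([senders, receivers]))  # 5 unique edges, not 6
```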
@@ -94,7 +94,7 @@ end
 """
     mse_reduce(target, output)
 
-Calculates the mean squared error of the given arguments with [Tullio](@ref) for GPU compatibility.
+Calculates the mean squared error of the given arguments with [Tullio](https://github.com/mcabbott/Tullio.jl) for GPU compatibility.
 
 # Arguments
 - `target`: Ground truth from the data.
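`mse_reduce` is described as a Tullio-based squared-error reduction written so the kernel stays GPU compatible. A plain NumPy sketch of an equivalent einsum-style reduction (the exact reduction axes are an assumption; the diff only shows the docstring):

```python
import numpy as np

def mse_reduce(target, output):
    """Per-sample squared-error reduction over the feature axis.

    target, output: arrays of shape (features, samples); returns (samples,).
    Mirrors what a Tullio expression like
    @tullio res[s] := (target[f, s] - output[f, s])^2 would compute.
    """
    diff = target - output
    return np.einsum("fs,fs->s", diff, diff)

target = np.array([[1.0, 2.0], [3.0, 4.0]])
output = np.array([[1.0, 0.0], [3.0, 1.0]])
print(mse_reduce(target, output))  # squared error summed per column
```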
@@ -111,7 +111,7 @@ end
 """
     tullio_reducesum(a, dims)
 
-Implementation of the function [reducesum](@ref) with [Tullio](@ref) for GPU compatibility.
+Implementation of the function [`reducesum`](@ref) with [Tullio](https://github.com/mcabbott/Tullio.jl) for GPU compatibility.
 
 # Arguments
 - `a`: Array as input for reducesum.
