
Commit bc48e5f

Merge pull request #76 from SciML/fix-spelling-typos
[ci skip] Fix spelling errors
2 parents 184d980 + 1e2ae74 commit bc48e5f

3 files changed: +11 −8 lines changed

3 files changed

+11
-8
lines changed

.typos.toml

Lines changed: 4 additions & 1 deletion
@@ -8,4 +8,7 @@ ists = "ists"
 ispcs = "ispcs"
 eqs = "eqs"
 rhs = "rhs"
-MTK = "MTK"
+MTK = "MTK"
+
+# Julia data handling terms
+Missings = "Missings" # Julia's Missing data type (plural form)

docs/src/nnblock.md

Lines changed: 5 additions & 5 deletions
@@ -8,8 +8,8 @@
 This tutorial will introduce the [`NeuralNetworkBlock`](@ref). This representation is useful in the context of hierarchical acausal component-based model.

 For such models we have a component representation that is converted to a a differential-algebraic equation (DAE) system, where the algebraic equations are given by the constraints and equalities between different component variables.
-The process of going from the component representation to the full DAE system at the end is reffered to as [structural simplification](https://docs.sciml.ai/ModelingToolkit/stable/API/model_building/#System-simplification).
-In order to formulate Universal Differential Equations (UDEs) in this context, we could operate eiter operate before the structural simplification step or after that, on the
+The process of going from the component representation to the full DAE system at the end is referred to as [structural simplification](https://docs.sciml.ai/ModelingToolkit/stable/API/model_building/#System-simplification).
+In order to formulate Universal Differential Equations (UDEs) in this context, we could operate either operate before the structural simplification step or after that, on the
 resulting DAE system. We call these the component UDE formulation and the system UDE formulation.

 The advantage of the component UDE formulation is that it allows us to represent the model

@@ -181,7 +181,7 @@ end
 @named model = NeuralPot()
 sys3 = mtkcompile(model)

-# Let's check that we can succesfully simulate the system in the
+# Let's check that we can successfully simulate the system in the
 # initial state
 prob3 = ODEProblem(sys3, Pair[], (0, 100.0))
 sol3 = solve(prob3, Tsit5(), abstol=1e-6, reltol=1e-6)

@@ -192,8 +192,8 @@ Now that we have the system with the embedded neural network, we can start train
 The training will be formulated as an optimization problem where we will minimize the mean absolute squared distance
 between the predictions of the new system and the data obtained from the original system.
 In order to gain some insight into the training process we will also add a callback that will plot various quantities
-in the system versus their equivalents in the original system. In a more realistic scenarion we would not have access
-to the original system, but we could still monitor how well we fit the traning data and the system predictions.
+in the system versus their equivalents in the original system. In a more realistic scenario we would not have access
+to the original system, but we could still monitor how well we fit the training data and the system predictions.

 ```@example potplate
 using SymbolicIndexingInterface

src/utils.jl

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@
     depth::Int = 1, activation = tanh, use_bias = true, initial_scaling_factor = 1e-8)

 Create a Lux.jl `Chain` for use in [`NeuralNetworkBlock`](@ref)s. The weights of the last layer
-are multipled by the `initial_scaling_factor` in order to make the initial contribution
-of the network small and thus help with acheiving a stable starting position for the training.
+are multiplied by the `initial_scaling_factor` in order to make the initial contribution
+of the network small and thus help with achieving a stable starting position for the training.
 """
 function multi_layer_feed_forward(; n_input, n_output, width::Int = 4,
     depth::Int = 1, activation = tanh, use_bias = true, initial_scaling_factor = 1e-8)
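The diff above only corrects the docstring, but the behavior it describes (down-scaling the last layer's initial weights so the network starts with a near-zero contribution) can be illustrated with a short Lux.jl sketch. This is a hedged illustration, not SciML's actual implementation: the function name `small_output_chain` and the use of `glorot_uniform` for the base initializer are assumptions.

```julia
using Lux, Random

# Hypothetical sketch of the idea documented for `multi_layer_feed_forward`:
# build a feed-forward Chain whose final Dense layer is initialized with
# weights multiplied by a small scaling factor, so the untrained network
# contributes almost nothing to the model it is embedded in.
function small_output_chain(n_input, n_output; width = 4, activation = tanh,
                            initial_scaling_factor = 1e-8)
    # Wrap the default Glorot-uniform initializer and shrink its output.
    scaled_init(rng, dims...) = initial_scaling_factor .*
                                Lux.glorot_uniform(rng, dims...)
    Chain(Dense(n_input => width, activation),
          Dense(width => n_output; init_weight = scaled_init))
end

rng = Xoshiro(0)
model = small_output_chain(2, 1)
ps, st = Lux.setup(rng, model)
y, _ = model([0.5, -0.3], ps, st)  # initial output is close to zero
```

Starting the network near zero means the hybrid (UDE) model initially behaves like the mechanistic model alone, which tends to give the optimizer a stable starting point for training.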

0 commit comments
