
Commit 732876a

typos

1 parent c92b60f commit 732876a

File tree

11 files changed: +14 −14 lines changed


dev/doubleMM.jl

Lines changed: 2 additions & 2 deletions
@@ -42,7 +42,7 @@ train_dataloader = MLUtils.DataLoader(
     (xM, xP, y_o, y_unc, 1:n_site);
     batchsize = n_batch, partial = false)
 σ_o = exp.(y_unc[:, 1] / 2)
-# assign the train_loader, otherwise it eatch time creates another version of synthetic data
+# assign the train_loader, otherwise it each time creates another version of synthetic data
 prob0 = HybridProblem(prob0_; train_dataloader)
 #tmp = HVI.get_hybridproblem_ϕunc(prob0; scenario)
 #prob0.covar
@@ -248,7 +248,7 @@ end
     (y2_K1global, θsP2_K1global, θsMs2_K1global) = (y, θsP, θsMs);
 end

-() -> begin # otpimize using LUX
+() -> begin # optimize using LUX
     #using Lux
     g_lux = Lux.Chain(
         # dense layer with bias that maps to 8 outputs and applies `tanh` activation

docs/src/explanation/theory_hvi.md

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ In order to learn $\phi_g$, the user needs to provide a batch of $i \in \{1 \ldo
 ## Estimation using the ELBO

 In order to find the parameters of the approximation of the posterior, HVI
-minizes the KL divergence between the approximation and the true posterior.
+minimizes the KL divergence between the approximation and the true posterior.
 This is achieve by maximizing the evidence lower bound (ELBO).

 $$\mathcal{L}(\phi) = \mathbb{E}_{q(\theta)} \left[\log p(y,\theta) \right] - \mathbb{E}_{q(\theta)} \left[\log q(\theta) \right]$$
@@ -128,7 +128,7 @@ $\phi = (\phi_P, \phi_g, \phi_u)$, comprises
 - $\phi_P = \mu_{\zeta_P}$: the means of the distributions of the transformed global
   parameters,
 - $\phi_g$: the parameters of the machine learning model, and
-- $\phi_u$: paramerization of $\Sigma_\zeta$ that is additional to the means.
+- $\phi_u$: parameterization of $\Sigma_\zeta$ that is additional to the means.

 ### Details
 Specifically, $\phi_u= (log\sigma^2_P, log\sigma^2_{M0}, log\sigma^2_{M\eta}, a_P, a_M)$,
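For context on the hunk above: the standard identity linking the two objectives mentioned there (an editorial addition for this page, not part of the diff) is

$$\log p(y) = \mathcal{L}(\phi) + \mathrm{KL}\!\left(q_\phi(\theta)\,\|\,p(\theta \mid y)\right)$$

Since $\log p(y)$ does not depend on $\phi$, maximizing the ELBO $\mathcal{L}(\phi)$ is equivalent to minimizing the KL divergence between the approximation $q_\phi(\theta)$ and the true posterior.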

docs/src/tutorials/basic_cpu.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ access the components by its symbolic names in the provided `ComponentArray`.
 HVI requires the evaluation of the likelihood of the predictions.
 It corresponds to the cost of predictions given some observations.

-The user specifies a function of the negative log-Likehood
+The user specifies a function of the negative log-Likelihood
 `neg_logden(obs, pred, uncertainty_parameters)`,
 where all of the parameters are arrays with columns for sites.

docs/src/tutorials/basic_cpu.qmd

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ access the components by its symbolic names in the provided `ComponentArray`.
 HVI requires the evaluation of the likelihood of the predictions.
 It corresponds to the cost of predictions given some observations.

-The user specifies a function of the negative log-Likehood
+The user specifies a function of the negative log-Likelihood
 `neg_logden(obs, pred, uncertainty_parameters)`,
 where all of the parameters are arrays with columns for sites.

docs/src/tutorials/blocks_corr.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# How to model indenpendent parameter-blocks in the posterior
+# How to model independent parameter-blocks in the posterior


 ``` @meta

docs/src/tutorials/blocks_corr.qmd

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: "How to model indenpendent parameter-blocks in the posterior"
+title: "How to model independent parameter-blocks in the posterior"
 engine: julia
 execute:
   echo: true

src/AbstractHybridProblem.jl

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ end
 """
     get_hybridproblem_transforms(::AbstractHybridProblem; scenario)

-Return a NamedTupe of
+Return a NamedTuple of
 - `transP`: Bijectors.Transform for the global PBM parameters, θP
 - `transM`: Bijectors.Transform for the single-site PBM parameters, θM
 """

src/RRuleMonitor.jl

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ and raises an error if the supplied cotangent or the jacobian
 contains non-finitie entries.

 Arguments
-- label: id (String, or symbole) used in the error message.
+- label: id (String, or symbol) used in the error message.
- `ad_backend`: the AD backend used in `DifferentiationInterface.jacobian`.
   Defaults to `AutoZygote().`
 """

src/util_ca.jl

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@ component information that might be present in the dimensions.
 function compose_axes(axtuples::NamedTuple)
     ls = map(axtuple -> Val(prod(axis_length.(axtuple))), axtuples)
     # to work on types, need to construct value types of intervals
-    intervals = _construct_invervals(;lengths=ls)
+    intervals = _construct_intervals(;lengths=ls)
     named_intervals = (;zip(keys(axtuples),intervals)...)
     axc = map(named_intervals, axtuples) do interval, axtuple
         ax = length(axtuple) == 1 ? axtuple[1] : CA.ShapedAxis(axis_length.(axtuple))
@@ -40,7 +40,7 @@ function compose_axes(axtuples::NamedTuple)
     CA.Axis(; axc...)
 end

-function _construct_invervals(;lengths)
+function _construct_intervals(;lengths)
     reduce((ranges,length) -> _add_interval(;ranges, length),
         Iterators.tail(lengths), init=(Val(1:_val_value(first(lengths))),))
 end
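The renamed `_construct_intervals` builds a tuple of contiguous index ranges from a tuple of lengths, wrapped in `Val` types for type-stability. A minimal sketch of the same idea without the `Val` machinery (the name `construct_intervals` and the simplified form are assumptions, not the package's implementation):

```julia
# Build contiguous index ranges from component lengths,
# e.g. lengths (3, 6, 2) -> ranges (1:3, 4:9, 10:11).
function construct_intervals(lengths)
    reduce(Base.tail(lengths); init = (1:first(lengths),)) do ranges, len
        stop = last(last(ranges))              # end of the previous range
        (ranges..., (stop + 1):(stop + len))   # append the next range
    end
end

construct_intervals((3, 6, 2))  # → (1:3, 4:9, 10:11)
```

The package version carries the ranges as `Val` value types so that downstream `ComponentArrays` axes can be computed at compile time.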

test/test_ComponentArrayInterpreter.jl

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ using Suppressor
 gdev = Suppressor.@suppress gpu_device() # not loaded CUDA
 cdev = cpu_device()

-@testset "construct StaticComponentArrayInterepreter" begin
+@testset "construct StaticComponentArrayInterpreter" begin
     intv = @inferred CP.StaticComponentArrayInterpreter(CA.ComponentVector(a=1:3, b=reshape(4:9,3,2)))
     ints = @inferred CP.StaticComponentArrayInterpreter((;a=Val(3), b = Val((3,2))))
     # @descend_code_warntype CP.StaticComponentArrayInterpreter((;a=Val(3), b = Val((3,2))))

0 commit comments
