docs/src/internals/transformations.md (+7 -7)
@@ -21,11 +21,11 @@ For certain inference methods, it's necessary / much more convenient to work with
 
 We write "unconstrained" with quotes because there are many ways to transform a constrained variable to an unconstrained one, *and* DynamicPPL can work with a much broader class of bijective transformations of variables, not just ones that go to the entire real line. But for MCMC, unconstraining is the most common transformation so we'll stick with that terminology.
 
-For a large family of constraints encoucntered in practice, it is indeed possible to transform a (partially) contrained model to a completely unconstrained one in such a way that sampling in the unconstrained space is equivalent to sampling in the constrained space.
+For a large family of constraints encountered in practice, it is indeed possible to transform a (partially) constrained model to a completely unconstrained one in such a way that sampling in the unconstrained space is equivalent to sampling in the constrained space.
 
 In DynamicPPL.jl, this is often referred to as *linking* (a term originating in the statistics literature) and is done using transformations from [Bijectors.jl](https://github.com/TuringLang/Bijectors.jl).
 
-For example, the above model could be transformed into (the following psuedo-code; it's not working code):
+For example, the above model could be transformed into (the following pseudo-code; it's not working code):
 
 ```julia
 @model function demo()
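For reference, the linking this hunk discusses boils down to what Bijectors.jl provides directly. A minimal sketch, assuming Bijectors' exported `bijector`, `inverse`, and `logabsdetjac` (none of which appear in the diff itself):

```julia
# Minimal sketch of "linking" a single constrained value with Bijectors.jl.
using Bijectors, Distributions

dist = InverseGamma(2, 3)
b = bijector(dist)   # maps the support (0, Inf) to the whole real line
binv = inverse(b)

s = 1.5              # a constrained value, s > 0
y = b(s)             # its unconstrained ("linked") counterpart
s ≈ binv(y)          # true: the transformation round-trips

# Densities in unconstrained space pick up a log-abs-det-Jacobian correction:
logpdf(dist, s) + logabsdetjac(binv, y)
```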
@@ -37,7 +37,7 @@ end
 
 Here `log_s` is an unconstrained variable, and `s` is a constrained variable that is a deterministic function of `log_s`.
 
-But to ensure that we stay consistent with what the user expects, DynamicPPL.jl does not actually transform the model as above, but can instead makes use of transformed variables internally to achieve the same effect, when desired.
+But to ensure that we stay consistent with what the user expects, DynamicPPL.jl does not actually transform the model as above, but instead makes use of transformed variables internally to achieve the same effect, when desired.
 
 In the end, we'll end up with something that looks like this:
 
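"Making use of transformed variables internally" looks roughly like the following sketch, assuming `link!!`/`invlink!!` accept `(varinfo, model)` in the DynamicPPL version at hand, with a toy model in the spirit of the one in these docs:

```julia
# Sketch: achieving the transformed-model effect via the varinfo, not the model.
using DynamicPPL, Distributions

@model function demo()
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
end
model = demo()

vi = VarInfo(model)                        # `s` stored in constrained (model) form
vi_linked = DynamicPPL.link!!(vi, model)   # now stored unconstrained (as `log(s)`),
                                           # with the log-density adjusted accordingly
vi_back = DynamicPPL.invlink!!(vi_linked, model)  # transformation undone
```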
@@ -55,7 +55,7 @@ There are two aspects to transforming from the internal representation of a variable
 
 1. Different implementations of [`AbstractVarInfo`](@ref) represent realizations of a model in different ways internally, so we need to transform from this internal representation to the desired representation in the model. For example,
 
-     + [`VarInfo`](@ref) represents a realization of a model as in a "flattened" / vector representation, regardless of form of the variable in the model.
+     + [`VarInfo`](@ref) represents a realization of a model as a "flattened" / vector representation, regardless of the form of the variable in the model.
      + [`SimpleVarInfo`](@ref) represents a realization of a model exactly as in the model (unless it has been transformed; we'll get to that later).
 
 2. We need the ability to transform from "constrained space" to "unconstrained space", as we saw in the previous section.
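To make the two representations in point 1 concrete, a sketch with constructors and indexing assumed from the DynamicPPL API (not shown in this diff):

```julia
# Contrasting the two internal representations of a realization.
using DynamicPPL, Distributions

@model function demo()
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
end

# `VarInfo`: internally one flattened vector, regardless of variable shapes.
vi = VarInfo(demo())
vi[:]             # e.g. a `Vector{Float64}` holding both `s` and `m`
vi[@varname(s)]   # indexing by variable recovers the model-space value

# `SimpleVarInfo`: stores realizations exactly as in the model, e.g. a NamedTuple.
svi = SimpleVarInfo((s=1.5, m=0.3))
svi[@varname(s)]  # 1.5
```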
@@ -94,4 +94,4 @@
-These methods allows us to extract the internal-to-model transformation function depending on the `varinfo`, the variable, and the distribution of the variable:
+These methods allow us to extract the internal-to-model transformation function depending on the `varinfo`, the variable, and the distribution of the variable:
 
   - `varinfo` + `vn` defines the internal representation of the variable.
   - `dist` defines the representation expected within the model scope.
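One plausible shape for such a call, using the hypothetical names `from_internal_transform` and `getindex_internal`, which do not appear in this diff and are assumed from DynamicPPL's internals:

```julia
# Sketch: extract the internal-to-model transform for one variable and apply it.
using DynamicPPL, Distributions

vi = VarInfo(demo())   # `demo` as in the earlier sketch
vn = @varname(s)
dist = InverseGamma(2, 3)

f = DynamicPPL.from_internal_transform(vi, vn, dist)
s_model = f(DynamicPPL.getindex_internal(vi, vn))  # internal (vector) -> model value
```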
@@ -214,7 +214,7 @@ Unfortunately, this is not possible in general. Consider for example the following
 end
 ```
 
-Here the variable `x`has is constrained to be on the domain `(m, Inf)`, where `m` is sampled according to a `Normal`.
+Here the variable `x` is constrained to be in the domain `(m, Inf)`, where `m` is sampled according to a `Normal`.
 
 ```@example transformations-internal
 model = demo_dynamic_constraint()
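The body of `demo_dynamic_constraint` falls outside this hunk; a model with the described realization-dependent support might look like this sketch:

```julia
# A model whose constraint on `x` depends on the sampled value of `m`.
using DynamicPPL, Distributions

@model function demo_dynamic_constraint()
    m ~ Normal()
    x ~ truncated(Normal(); lower=m)  # support of `x` is `(m, Inf)`, depending on `m`
    return (m=m, x=x)
end
```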
@@ -263,7 +263,7 @@ we see that we indeed satisfy the constraint `m < x`, as desired.
 
 The reason for this is that internally in a model evaluation, we construct the transformation from the internal to the model representation based on the *current* realizations in the model! That is, we take the `dist` in a `x ~ dist` expression _at model evaluation time_ and use that to construct the transformation, thus allowing it to change between model evaluations without invalidating the transformation.
 
-But to be able to do this, we need to know whether the variable is linked / "unconstrained" or not, since the transformation is different in the two cases. Hence we need to be able to determine this at model evaluation time. Hence the the internals end up looking something like this:
+But to be able to do this, we need to know whether the variable is linked / "unconstrained" or not, since the transformation is different in the two cases. Hence we need to be able to determine this at model evaluation time. Hence the internals end up looking something like this:
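A sketch of that evaluation-time branching, with `istrans` and the transform accessors assumed names that do not appear in this hunk:

```julia
# Pick the internal-to-model transform based on whether the variable is linked.
using DynamicPPL

function model_space_value(vi, vn, dist)
    f = if DynamicPPL.istrans(vi, vn)
        # stored in linked / unconstrained form: undo the link as well
        DynamicPPL.from_linked_internal_transform(vi, vn, dist)
    else
        DynamicPPL.from_internal_transform(vi, vn, dist)
    end
    return f(DynamicPPL.getindex_internal(vi, vn))
end
```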
docs/src/tutorials/prob-interface.md (+18 -2)
@@ -107,12 +107,28 @@ To give an example of the probability interface in use, we can use it to estimate
 In cross-validation, we split the dataset into several equal parts.
 Then, we choose one of these sets to serve as the validation set.
 Here, we measure fit using the cross entropy (Bayes loss).[^1]
+(For the sake of simplicity, in the following code, we enforce that `nfolds` must divide the number of data points. For a more competent implementation, see [MLUtils.jl](https://juliaml.github.io/MLUtils.jl/dev/api/#MLUtils.kfolds).)
 
 ```@example probinterface
-using MLUtils
+# Calculate the train/validation splits across `nfolds` partitions, assume `length(dataset)` divides `nfolds`
+function kfolds(dataset::Array{<:Real}, nfolds::Int)
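The remainder of the new `kfolds` is cut off here; a self-contained sketch consistent with the stated divisibility assumption (not necessarily the committed implementation):

```julia
# Split `dataset` into `nfolds` equal folds, assuming `nfolds` divides its length.
function kfolds(dataset::Array{<:Real}, nfolds::Int)
    fold_size, remaining = divrem(length(dataset), nfolds)
    @assert remaining == 0 "`nfolds` must divide the number of data points"
    return map(1:nfolds) do i
        val_idxs = ((i - 1) * fold_size + 1):(i * fold_size)
        train = vcat(dataset[1:(first(val_idxs) - 1)], dataset[(last(val_idxs) + 1):end])
        (train, dataset[val_idxs])  # (training set, validation set) for fold `i`
    end
end
```

Each element of the result is a `(train, validation)` pair, one per fold.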