Documentation improvements (inc. lengthscale explanation) and Matern12Kernel alias (#213)
* various edits for clarity and typos
* remove reference to not-yet-implemented feature (#38)
* adds Matern12Kernel as alias for ExponentialKernel (in line with the explicitly defined Matern32Kernel and Matern52Kernel) and gives all aliases docstrings
* incorporates the lengthscales explanation from #212.
Co-authored-by: David Widmann <[email protected]>
Note that the fallback implementation of the base `Kernel` evaluation does not use `Distances.jl` and can therefore be a bit slower.
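For instance, a `SimpleKernel` defined through `kappa` and a Distances.jl metric takes the fast path, whereas a plain `Kernel` only needs to be callable. A minimal sketch (`MyFastKernel` and `MySlowKernel` are hypothetical names):

```julia
using KernelFunctions
using Distances

# SimpleKernel route: `kernelmatrix` can go through Distances.jl's
# optimized pairwise machinery.
struct MyFastKernel <: SimpleKernel end
KernelFunctions.kappa(::MyFastKernel, d2::Real) = exp(-d2)
KernelFunctions.metric(::MyFastKernel) = SqEuclidean()

# Generic Kernel route: the kernel only needs to be callable, and is
# evaluated with the slower fallback.
struct MySlowKernel <: Kernel end
(k::MySlowKernel)(x, y) = exp(-sum(abs2, x - y))
```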
### Additional Options
Finally, there are additional functions you can define to bring in more features:
- `KernelFunctions.dim(x::MyDataType)`: by default, the dimension of the inputs is only checked for vectors of type `AbstractVector{<:Real}`. If you want to check the dimensionality of your inputs, dispatch the `dim` function on your datatype. Note that `0` is the default.
- `dim` is called within `KernelFunctions.validate_inputs(x::MyDataType, y::MyDataType)`, which can instead be directly overloaded if you want to run special checks for your input types.
- `kernelmatrix(k::MyKernel, ...)`: you can redefine the various `kernelmatrix` functions to optimize the computations.
- `Base.print(io::IO, k::MyKernel)`: if you want to specialize the printing of your kernel (see the sketch below).
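A hedged sketch of how some of these hooks fit together, with `MyDataType` and `MyKernel` as hypothetical placeholders:

```julia
using KernelFunctions

# A custom input type: each datapoint wraps a vector.
struct MyDataType
    data::Vector{Float64}
end

struct MyKernel <: Kernel end
(k::MyKernel)(x::MyDataType, y::MyDataType) = exp(-sum(abs2, x.data - y.data))

# Report the dimensionality of an input (the default is 0).
KernelFunctions.dim(x::MyDataType) = length(x.data)

# Specialize printing.
Base.print(io::IO, ::MyKernel) = print(io, "MyKernel()")
```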
KernelFunctions uses [Functors.jl](https://github.com/FluxML/Functors.jl) for specifying trainable kernel parameters in a way that is compatible with the [Flux ML framework](https://github.com/FluxML/Flux.jl).
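For example (a minimal sketch; the exact parameter layout of each kernel is an implementation detail):

```julia
using KernelFunctions
import Functors

# A kernel with a trainable variance parameter.
k = 2.0 * SqExponentialKernel()

# `functor` returns the parameter structure together with a function
# that rebuilds the kernel from (possibly updated) parameters.
params, re = Functors.functor(k)
```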
where $r$ has the same dimension as $x$ and $r_i > 0$.
## Piecewise Polynomial Kernel

The [`PiecewisePolynomialKernel`](@ref) is defined for $x, x' \in \mathbb{R}^D$, a positive-definite matrix $P \in \mathbb{R}^{D \times D}$, and $V \in \{0,1,2,3\}$ as
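```math
k(x, x'; P, V) = \max(1 - r, 0)^{j + V} \, f_V(r, j), \qquad r = \sqrt{(x - x')^\top P (x - x')}, \quad j = \left\lfloor \frac{D}{2} \right\rfloor + V + 1,
```

where $f_V(r, j)$ is a polynomial of degree $V$ in $r$; this display assumes the standard compactly supported piecewise polynomial family, and the exact polynomials are given in the [`PiecewisePolynomialKernel`](@ref) API reference.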
In `docs/src/metrics.md` (9 additions, 6 deletions):
# Metrics
`SimpleKernel` implementations rely on [Distances.jl](https://github.com/JuliaStats/Distances.jl) for efficiently computing the pairwise matrix.
This requires a distance measure or metric, such as the commonly used `SqEuclidean` and `Euclidean`.

The metric used by a given kernel type is specified as
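```julia
# For example, for a hypothetical CustomKernel:
KernelFunctions.metric(::CustomKernel) = SqEuclidean()
```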
However, there are kernels that can be implemented efficiently using "metrics" that do not respect all the definitions expected by Distances.jl. For this reason, KernelFunctions.jl provides additional "metrics" such as `DotProduct` ($\langle x, y \rangle$) and `Delta` ($\delta(x,y)$).
## Adding a new metric
If you want to create a new "metric" just implement the following:
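For instance, a minimal sketch modeled on the package's `Delta` (`MyDelta` is a hypothetical name):

```julia
using Distances

# A pre-metric indicating whether two inputs coincide.
struct MyDelta <: Distances.PreMetric end

(d::MyDelta)(a::AbstractVector, b::AbstractVector) = a == b
(d::MyDelta)(a::Number, b::Number) = a == b

# The generic Distances.jl fallbacks can then compute pairwise matrices,
# e.g. `pairwise(MyDelta(), X; dims=2)`.
```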
`Transform` is the object that takes care of transforming the input data before the distances are computed. It can be as simple as `IdentityTransform`, which returns the input unchanged, or it can multiply the data by a scalar with `ScaleTransform` or elementwise by a vector with `ARDTransform`.
There is a more general `Transform`: `FunctionTransform`, which takes a function and applies it to each vector via `mapslices`.
You can also create a pipeline of `Transform`s via `TransformChain`, for example `LowRankTransform(rand(10,5)) ∘ ScaleTransform(2.0)`.
A transformation `t` can be applied to a matrix or a vector `v` via `KernelFunctions.apply(t, v)`.
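For example (a sketch using the transforms mentioned above):

```julia
using KernelFunctions

t = ScaleTransform(2.0)       # multiply the data by a scalar
v = rand(4)
KernelFunctions.apply(t, v)   # == 2.0 .* v

# A pipeline of transforms:
tc = ARDTransform(rand(4)) ∘ ScaleTransform(2.0)
KernelFunctions.apply(tc, v)
```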
Check the full list of provided transforms on the [API page](@ref Transforms).