## Kernel creation

To create a kernel object, choose one of the pre-implemented kernels, see [Base Kernels](@ref), or create your own, see [Creating your own kernel](@ref).

For example, a square exponential kernel is created by
```julia
k = SqExponentialKernel()
```

Instead of having lengthscale(s) for each kernel we use [`Transform`](@ref) objects which act on the inputs before passing them to the kernel.

For example, to premultiply the input by 2.0 (equivalent to a lengthscale of 0.5) we can use the following options:
```julia
k = transform(SqExponentialKernel(), ScaleTransform(2.0)) # returns a TransformedKernel
k = TransformedKernel(SqExponentialKernel(), ScaleTransform(2.0))
k = @kernel SqExponentialKernel() l=2.0 # will be available soon
```

Check the [`Transform`](@ref) page to see all available transforms.

To premultiply the kernel by a variance, you can use `*` or create a `ScaledKernel`:
```julia
k = 3.0 * SqExponentialKernel()
k = ScaledKernel(SqExponentialKernel(), 3.0)
@kernel 3.0 * SqExponentialKernel()
```

To compute the kernel function on two vectors you can call
```julia
k = SqExponentialKernel()
x1 = rand(3)
x2 = rand(3)
k(x1, x2)
```

## Creating a kernel matrix

Kernel matrices can be created via the `kernelmatrix` function or `kerneldiagmatrix` for only the diagonal.

An important argument to give is the data layout of the input `obsdim`. It specifies whether the number of observed data points is along the first dimension (`obsdim=1`, i.e. the matrix shape is number of samples times number of features) or along the second dimension (`obsdim=2`, i.e. the matrix shape is number of features times number of samples), similarly to [Distances.jl](https://github.com/JuliaStats/Distances.jl). If not given explicitly, `obsdim` defaults to [`defaultobs`](@ref).

For example:
```julia
k = SqExponentialKernel()
A = rand(10, 5)
kernelmatrix(k, A, obsdim=1) # returns a 10x10 matrix
kernelmatrix(k, A, obsdim=2) # returns a 5x5 matrix
k(A, obsdim=1) # syntactic sugar
```
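
The `kerneldiagmatrix` variant mentioned above works analogously; here is a minimal sketch, under the assumption that it accepts the same `obsdim` keyword as `kernelmatrix`:
```julia
k = SqExponentialKernel()
A = rand(10, 5)
# Compute only the diagonal of the kernel matrix over the 10 samples
kerneldiagmatrix(k, A, obsdim=1)
```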

We also support specific kernel matrix outputs:
- For a positive-definite matrix object `PDMat` from [`PDMats.jl`](https://github.com/JuliaStats/PDMats.jl), you can call the following:
```julia
using PDMats
k = SqExponentialKernel()
K = kernelpdmat(k, A, obsdim=1) # PDMat
```
It will create a matrix and, in case of bad conditioning, will add some diagonal noise until the matrix is considered positive-definite; it will then return a `PDMat` object. For this method to work in your code you need to include `using PDMats` first.
- For a Kronecker matrix, we rely on [`Kronecker.jl`](https://github.com/MichielStock/Kronecker.jl). Here are two examples:
```julia
using Kronecker
x = range(0, 1, length=10)
y = range(0, 1, length=50)
K = kernelkronmat(k, [x, y]) # Kronecker matrix
K = kernelkronmat(k, x, 5)   # Kronecker matrix
```
Make sure that `k` is a kernel compatible with such constructions (with `iskroncompatible(k)`). Both methods will return a Kronecker matrix. For those methods to work in your code you need to include `using Kronecker` first.
- For a Nystrom approximation: `kernelmatrix(nystrom(k, X, ρ, obsdim=1))` where `ρ` is the fraction of data samples used in the approximation.
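
As a sketch of the Nystrom call above (the variable names and the 10% sampling fraction are illustrative, not prescribed):
```julia
k = SqExponentialKernel()
X = rand(100, 5) # 100 samples with 5 features each
# Approximate 100x100 kernel matrix using 10% of the samples
K_approx = kernelmatrix(nystrom(k, X, 0.1, obsdim=1))
```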
## Composite kernels
68
69
69
-
One can create combinations of kernels via `KernelSum` and `KernelProduct` or using simple operators `+` and `*`.
70
-
For example:
70
+
Sums and products of kernels are also valid kernels. They can be created via `KernelSum` and `KernelProduct` or using simple operators `+` and `*`.
71
+
For example:
71
72
```julia
k1 = SqExponentialKernel()
k2 = Matern32Kernel()
k = 0.5 * k1 + 0.2 * k2 # KernelSum
k = k1 * k2             # KernelProduct
```

## Kernel parameters

What if you want to differentiate through the kernel parameters? Even in a highly nested structure such as:
```julia
k = transform(
    0.5 * SqExponentialKernel() * MaternKernel() +
    0.2 * (transform(LinearKernel(), 2.0) + PolynomialKernel()),
    [0.1, 0.5],
)
```

One can get the array of parameters to optimize via `params` from `Flux.jl`:
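For instance, a sketch of collecting the parameters of the nested kernel above, assuming `Flux.jl` is loaded (the exact integration may depend on the package version):
```julia
using Flux

k = transform(
    0.5 * SqExponentialKernel() * MaternKernel() +
    0.2 * (transform(LinearKernel(), 2.0) + PolynomialKernel()),
    [0.1, 0.5],
)

# Collect all trainable parameters of the nested kernel structure;
# `ps` can then be passed to a Flux optimizer for gradient-based training.
ps = params(k)
```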