TemporalGPs.jl is registered, so simply type the following at the REPL:

```julia
] add AbstractGPs KernelFunctions TemporalGPs
```

While you can install TemporalGPs without AbstractGPs and KernelFunctions, in practice the latter are needed for all common tasks in TemporalGPs.
Most examples can be found in the [examples](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl/tree/master/examples) directory. In particular, see the associated [README](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl/tree/master/examples/README.md).
The following is a small problem by TemporalGPs' standards. See the timing results below for expected performance on larger problems.
```julia
using AbstractGPs, KernelFunctions, TemporalGPs
```
## Learning kernel parameters with [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl), [ParameterHandling.jl](https://github.com/invenia/ParameterHandling.jl), and [Mooncake.jl](https://github.com/compintell/Mooncake.jl/)
TemporalGPs.jl doesn't provide scikit-learn-like functionality to train your model (find good kernel parameter settings).
Instead, we offer the functionality needed to easily implement your own training loop using standard tools from the Julia ecosystem, as sketched below. For the complete, runnable version, see [exact_time_learning.jl](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl/blob/master/examples/exact_time_learning.jl).
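As a minimal sketch of what that can look like, loosely following that example (the kernel structure, initial values, and the particular Mooncake and Optim calls below are illustrative assumptions rather than the package's only supported pattern):

```julia
using AbstractGPs, KernelFunctions, TemporalGPs
using Optim              # standard optimisation algorithms
using ParameterHandling  # helper functionality for dealing with model parameters
using Mooncake           # algorithmic differentiation

using ParameterHandling: flatten

# Declare model parameters using `ParameterHandling.jl` types.
flat_initial_params, unflatten = flatten((
    var_kernel = positive(0.6),
    λ = positive(2.5),
    var_noise = positive(0.1),
))

# Unpack flattened parameters and pull out the raw (constrained) values.
unpack = ParameterHandling.value ∘ unflatten

# Build the state-space form of the GP from raw parameter values.
function build_gp(params)
    k = params.var_kernel * (Matern52Kernel() ∘ ScaleTransform(params.λ))
    return to_sde(GP(k), SArrayStorage(Float64))
end

# Synthetic data sampled from the prior; in practice, substitute your own.
x = RegularSpacing(0.0, 0.1, 10_000)
y = rand(build_gp(unpack(flat_initial_params))(x, 0.1))

# Negative log marginal likelihood (NLML) as a function of the flat parameters.
function objective(flat_params)
    params = unpack(flat_params)
    return -logpdf(build_gp(params)(x, params.var_noise), y)
end

# Reverse-mode gradient of the objective via Mooncake.
rule = Mooncake.build_rrule(objective, flat_initial_params)
objective_grad(flat_params) =
    Mooncake.value_and_gradient!!(rule, objective, flat_params)[2][2]

# Minimise the NLML with BFGS, then recover the learned parameter values.
result = Optim.optimize(
    objective,
    objective_grad,
    copy(flat_initial_params),
    BFGS();
    inplace=false,
)
final_params = unpack(result.minimizer)
```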
Once you've learned the parameters, you can use `posterior`, `marginals`, and `rand` to make posterior predictions with the optimal parameters.
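Continuing the sketch above, with its hypothetical `build_gp` and `final_params`:

```julia
# Rebuild the model at the learned parameters and condition on the data.
f_trained = build_gp(final_params)
f_post = posterior(f_trained(x, final_params.var_noise), y)

# Posterior marginals at the training inputs, and noisy posterior samples.
ms = marginals(f_post(x))
y_post = rand(f_post(x, final_params.var_noise))
```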
In this example we optimised the parameters, but we could just as easily have utilised e.g. [AdvancedHMC.jl](https://github.com/TuringLang/AdvancedHMC.jl) in conjunction with a prior over the parameters to perform approximate Bayesian inference in them -- indeed, [this is often a very good idea](http://proceedings.mlr.press/v118/lalchand20a/lalchand20a.pdf). We leave this as an exercise for the interested user (see e.g. the examples in [Stheno.jl](https://github.com/willtebbutt/Stheno.jl/) for inspiration).
Moreover, it should be possible to plug this into a probabilistic programming framework such as `Turing` or `Soss` with minimal effort, since `f(x, params.var_noise)` is a plain old `Distributions.MultivariateDistribution`.
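For concreteness, here is a hypothetical (untested) sketch of such a plug-in with Turing; the priors, the `adtype` choice, and the model structure are assumptions rather than something this README demonstrates:

```julia
using Turing, AbstractGPs, KernelFunctions, TemporalGPs
using ADTypes: AutoMooncake
import Mooncake

@model function gp_regression(x, y)
    # Illustrative priors over the inverse length scale and noise variance.
    λ ~ LogNormal(0.0, 1.0)
    var_noise ~ LogNormal(-2.0, 1.0)
    f = to_sde(GP(Matern52Kernel() ∘ ScaleTransform(λ)), SArrayStorage(Float64))
    # Valid as a likelihood because `f(x, var_noise)` is a multivariate distribution.
    y ~ f(x, var_noise)
end

# A reverse-mode AD backend keeps runtime values as plain `Float64`s, which is
# what TemporalGPs' fast path expects.
chain = sample(gp_regression(x, y), NUTS(; adtype=AutoMooncake(; config=nothing)), 200)
```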
Passing `SArrayStorage(Float64)` to `to_sde`, as in the examples above, tells TemporalGPs that you want all parameters of `f` and anything derived from it to be stored in small, statically-sized arrays.
Gradient computations use Mooncake. Custom adjoints have been implemented to achieve this level of performance.
If you're interested in helping out with this stuff, please get in touch by opening an issue, commenting on an open one, or messaging me on the Julia Slack.