Commit 9ddac9c

Update readme + example (#131)
* Improve readme + example
* Remove outdated section of readme
* Fix link in readme
* Fix typo in readme
1 parent d0486e9 commit 9ddac9c

File tree

- README.md
- examples/README.md
- examples/exact_time_learning.jl

3 files changed: +12, -95 lines

README.md

Lines changed: 5 additions & 89 deletions
@@ -15,20 +15,11 @@ TemporalGPs.jl is registered, so simply type the following at the REPL:
 ```
 While you can install TemporalGPs without AbstractGPs and KernelFunctions, in practice the latter are needed for all common tasks in TemporalGPs.

-## Note !!!
-
-This package is currently not guaranteed to work with all current versions of dependencies. If something is not working on the current release of TemporalGPs,
-please try out v0.6.7, which pins some dependencies in order to circumvent some of the problems. You can do so by typing instead:
-```julia
-] add AbstractGPs KernelFunctions TemporalGPs@0.6.7
-```
-Please report an issue if this work-around fails.
-
 # Example Usage

 Most examples can be found in the [examples](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl/tree/master/examples) directory. In particular see the associated [README](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl/tree/master/examples/README.md).

-This is a small problem by TemporalGPs' standard. See timing results below for expected performance on larger problems.
+The following is a small problem by TemporalGPs' standard. See timing results below for expected performance on larger problems.

 ```julia
 using AbstractGPs, KernelFunctions, TemporalGPs
@@ -66,72 +57,11 @@ logpdf(f_post(x), y)
 ## Learning kernel parameters with [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl), [ParameterHandling.jl](https://github.com/invenia/ParameterHandling.jl), and [Mooncake.jl](https://github.com/compintell/Mooncake.jl/)

 TemporalGPs.jl doesn't provide scikit-learn-like functionality to train your model (find good kernel parameter settings).
-Instead, we offer the functionality needed to easily implement your own training functionality using standard tools from the Julia ecosystem, as shown below.
-```julia
-# Load our GP-related packages.
-using AbstractGPs
-using KernelFunctions
-using TemporalGPs
-
-# Load standard packages from the Julia ecosystem
-using Optim # Standard optimisation algorithms.
-using ParameterHandling # Helper functionality for dealing with model parameters.
-using Mooncake # Algorithmic Differentiation
-
-using ParameterHandling: flatten
-
-# Declare model parameters using `ParameterHandling.jl` types.
-flat_initial_params, unflatten = flatten((
-    var_kernel = positive(0.6),
-    λ = positive(2.5),
-    var_noise = positive(0.1),
-))
-
-# Construct a function to unpack flattened parameters and pull out the raw values.
-unpack = ParameterHandling.value ∘ unflatten
-params = unpack(flat_initial_params)
-
-function build_gp(params)
-    f_naive = GP(params.var_kernel * Matern52Kernel() ∘ ScaleTransform(params.λ))
-    return to_sde(f_naive, SArrayStorage(Float64))
-end
-
-# Generate some synthetic data from the prior.
-const x = RegularSpacing(0.0, 0.1, 10_000)
-const y = rand(build_gp(params)(x, params.var_noise))
-
-# Specify an objective function for Optim to minimise in terms of x and y.
-# We choose the usual negative log marginal likelihood (NLML).
-function objective(params)
-    f = build_gp(params)
-    return -logpdf(f(x, params.var_noise), y)
-end
-
-# Check that the objective function works:
-objective(params)
-
-# Optimise using Optim. This optimiser often works fairly well in practice,
-# but it's not going to be the best choice in all situations. Consult
-# Optim.jl for more info on available optimisers and their properties.
-training_results = Optim.optimize(
-    objective ∘ unpack,
-    θ -> only(Mooncake.gradient(objective ∘ unpack, θ)),
-    flat_initial_params + randn(3), # Add some noise to make learning non-trivial
-    BFGS(
-        alphaguess = Optim.LineSearches.InitialStatic(scaled=true),
-        linesearch = Optim.LineSearches.BackTracking(),
-    ),
-    Optim.Options(show_trace = true);
-    inplace=false,
-)
-
-# Extracting the final values of the parameters.
-# Should be close to truth.
-final_params = unpack(training_results.minimizer)
-```
-Once you've learned the parameters, you can use `posterior`, `marginals`, and `rand` to make posterior-predictions with the optimal parameters.
+Instead, we offer the functionality needed to easily implement your own training functionality using standard tools from the Julia ecosystem.
+See [exact_time_learning.jl](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl/blob/master/examples/exact_time_learning.jl).

-In the above example we optimised the parameters, but we could just as easily have utilised e.g. [AdvancedHMC.jl](https://github.com/TuringLang/AdvancedHMC.jl) in conjunction with a prior over the parameters to perform approximate Bayesian inference in them -- indeed, [this is often a very good idea](http://proceedings.mlr.press/v118/lalchand20a/lalchand20a.pdf). We leave this as an exercise for the interested user (see e.g. the examples in [Stheno.jl](https://github.com/willtebbutt/Stheno.jl/) for inspiration).
+In this example we optimised the parameters, but we could just as easily have utilised e.g. [AdvancedHMC.jl](https://github.com/TuringLang/AdvancedHMC.jl) in conjunction with a prior over the parameters to perform approximate Bayesian inference in them -- indeed, [this is often a very good idea](http://proceedings.mlr.press/v118/lalchand20a/lalchand20a.pdf).
+We leave this as an exercise for the interested user (see e.g. the examples in [Stheno.jl](https://github.com/willtebbutt/Stheno.jl/) for inspiration).

 Moreover, it should be possible to plug this into probabilistic programming framework such as `Turing` and `Soss` with minimal effort, since `f(x, params.var_noise)` is a plain old `Distributions.MultivariateDistribution`.

@@ -155,20 +85,6 @@ This tells TemporalGPs that you want all parameters of `f` and anything derived
 Gradient computations use Mooncake. Custom adjoints have been implemented to achieve this level of performance.


-
-# On-going Work
-
-- Optimisation
-  + in-place implementation with `ArrayStorage` to reduce allocations
-  + input data types for posterior inference - the `RegularSpacing` type is great for expressing that the inputs are regularly spaced. A carefully constructed data type to let the user build regularly-spaced data when working with posteriors would also be very beneficial.
-- Interfacing with other packages
-  + When [Stheno.jl](https://github.com/willtebbutt/Stheno.jl/) moves over to the AbstractGPs interface, it should be possible to get some interesting process decomposition functionality in this package.
-- Approximate inference under non-Gaussian observation models
-
-If you're interested in helping out with this stuff, please get in touch by opening an issue, commenting on an open one, or messaging me on the Julia Slack.
-
-
-
 # Relevant literature

 See chapter 12 of [1] for the basics.
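For readers skimming this commit, the posterior-prediction workflow that the updated README points to (via `posterior`, `marginals`, and `rand`) looks roughly like the following sketch. It is illustrative only and not part of the diff: the kernel, spacing, and noise values are made up, while the calls themselves (`to_sde`, `SArrayStorage`, `TemporalGPs.RegularSpacing`, `posterior`, `marginals`, `rand`, `logpdf`) are the ones used elsewhere in the README and example.

```julia
using AbstractGPs, KernelFunctions, TemporalGPs

# Build a GP and convert it to its state-space (SDE) representation, as in the README example.
f = to_sde(GP(Matern52Kernel()), SArrayStorage(Float64))

# Regularly-spaced inputs and some synthetic observations (values are illustrative).
x = TemporalGPs.RegularSpacing(0.0, 0.1, 1_000)
y = rand(f(x, 0.1))

# Condition on the data, then query the posterior.
f_post = posterior(f(x, 0.1), y)
ms = marginals(f_post(x))   # posterior marginals at the training inputs
s = rand(f_post(x))         # a joint posterior sample
logpdf(f_post(x), y)        # log marginal probability of the data
```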

examples/README.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 Ideally, you would have worked through a few examples involving AbstractGPs.jl, as the code
 in TemporalGPs.jl implements the (primary) interface specified there.
-Equally, these examples stand alone, so if you're not familiar with AbstractGPS.jl, don't
+Equally, these examples stand alone, so if you're not familiar with AbstractGPs.jl, don't
 worry too much.

 The examples in this directory are best worked through in the following order:

examples/exact_time_learning.jl

Lines changed: 6 additions & 5 deletions
@@ -6,9 +6,6 @@
 using AbstractGPs
 using TemporalGPs

-# Load up the separable kernel from TemporalGPs.
-using TemporalGPs: RegularSpacing
-
 # Load standard packages from the Julia ecosystem
 using Optim # Standard optimisation algorithms.
 using ParameterHandling # Helper functionality for dealing with model parameters.
@@ -27,14 +24,17 @@ flat_initial_params, unpack = ParameterHandling.value_flatten((
 # Pull out the raw values.
 params = unpack(flat_initial_params);

+# Functionality to build a TemporalGPs.jl GP given the model parameters.
+# Specifying SArrayStorage ensures that StaticArrays.jl is used to represent model
+# parameters under the hood, which enables very strong performance.
 function build_gp(params)
     k = params.var_kernel * Matern52Kernel() ∘ ScaleTransform(params.λ)
     return to_sde(GP(params.mean, k), SArrayStorage(Float64))
 end

 # Specify a collection of inputs. Must be increasing.
 T = 1_000_000;
-x = RegularSpacing(0.0, 1e-4, T);
+x = TemporalGPs.RegularSpacing(0.0, 1e-4, T);

 # Generate some noisy synthetic data from the GP.
 f = build_gp(params)
@@ -48,6 +48,7 @@ function objective(flat_params)
     return -logpdf(f(x, params.var_noise), y)
 end

+# A helper function to get the gradient.
 function objective_grad(rule, flat_params)
     return Mooncake.value_and_gradient!!(rule, objective, flat_params)[2][2]
 end
@@ -74,7 +75,7 @@ f_post = posterior(f_final(x, final_params.var_noise), y);
 
 # Specify some locations at which to make predictions.
 T_pr = 1_200_000;
-x_pr = RegularSpacing(0.0, 1e-4, T_pr);
+x_pr = TemporalGPs.RegularSpacing(0.0, 1e-4, T_pr);

 # Compute the exact posterior marginals at `x_pr`.
 f_post_marginals = marginals(f_post(x_pr));
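To see how the `objective_grad` helper from this diff might be wired into an optimiser, a minimal sketch follows. It is not part of the commit: the `Mooncake.build_rrule` call and the BFGS settings are assumptions carried over from the old README code that this commit removed.

```julia
# Hypothetical glue code (not in this diff): build a Mooncake rule for the objective once,
# then hand the objective and its gradient to Optim's BFGS.
rule = Mooncake.build_rrule(objective, flat_initial_params)

training_results = Optim.optimize(
    objective,
    flat_params -> objective_grad(rule, flat_params),
    flat_initial_params,
    BFGS(
        alphaguess = Optim.LineSearches.InitialStatic(scaled=true),
        linesearch = Optim.LineSearches.BackTracking(),
    ),
    Optim.Options(show_trace = true);
    inplace = false,
)

# Recover the learned parameters in their original named form.
final_params = unpack(training_results.minimizer)
```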

0 commit comments