Change API to support different T values at train and test time (#3032)
Summary:
Pull Request resolved: #3032
Previously, LKGP took a `T` tensor during `__init__` and assumed that those values would be used at both train and test time. However, the math also supports different `T` values at train and test time.
This is useful, for example, when we supply training data for complete learning curves but are only interested in predicting the final value. With the previous API, we would have to predict the whole learning curve with LKGP and select the final value manually afterwards. The new API allows us to specify at test time that we only want the final value, potentially saving substantial compute.
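The compute saving can be illustrated with a toy sketch (hypothetical; `predict`, `train_T`, and the cost model below are stand-ins, not the actual LKGP interface — real posterior inference cost grows with the number of requested time steps):

```python
# Toy stand-in for posterior prediction: each requested time step
# costs one unit of work.
def predict(times):
    return {t: f"prediction@{t}" for t in times}

train_T = list(range(1, 51))  # full learning curve observed at 50 steps

# Old API: predict at every training step, then select the last one.
full_curve = predict(train_T)
final_old = full_curve[train_T[-1]]

# New API: request only the final step at test time.
final_new = predict([train_T[-1]])[train_T[-1]]

assert final_old == final_new  # same answer, ~50x fewer predictions here
```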
Additionally, this diff makes the following updates:
- removed `train_Y_valid` as an explicit argument from `__init__`; the validity mask is now inferred from `train_Y` via `torch.isfinite`
- removed `MinMaxStandardize`, because it was mostly a hack to make a stationary `T` kernel work
- removed the dodgy and rather complex caching mechanism in `_rsample_from_base_samples` (realistically, the caching won't be very useful if we aren't using pathwise conditioning to optimize acquisition functions)
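The mask inference mentioned above can be sketched as follows (a dependency-free sketch using plain lists and `math.isfinite`; the real code operates on tensors via `torch.isfinite`):

```python
import math

# Hypothetical training responses: one learning curve per row, with
# NaN marking unobserved entries (e.g. a curve truncated early).
train_Y = [
    [0.9, 0.7, 0.5],                     # fully observed curve
    [0.8, float("nan"), float("nan")],   # partially observed curve
]

# The validity mask is inferred elementwise, mirroring the effect of
# `torch.isfinite(train_Y)`.
mask = [[math.isfinite(y) for y in row] for row in train_Y]
print(mask)  # [[True, True, True], [True, False, False]]
```

This removes the burden of constructing `train_Y_valid` by hand and keeps the mask consistent with the data by construction.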
Reviewed By: Balandat
Differential Revision: D83694022
fbshipit-source-id: 95586855f502c79d1d716e0d662e80c2cfd81cea