Commit 80f5885

Ethan Che authored and facebook-github-bot committed

Remove `outcome_transform=Standardize` for `SingleTaskGP` [documentation] (#2966)

Summary: Pull Request resolved: #2966. The default behavior of `SingleTaskGP` is to standardize outcomes, so the examples in various documentation were updated to reflect this.

Reviewed By: saitcakmak

Differential Revision: D80029965

fbshipit-source-id: 3a651aa3d21088a90a97c46f5f6e393b50c53b98

1 parent a0c8337 · commit 80f5885
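For context on why the explicit transform is now redundant: standardizing an outcome means shifting it to zero mean and rescaling to unit standard deviation before fitting, which is what `SingleTaskGP` now applies by default. As a quick plain-Python sketch of that operation (illustrative only; BoTorch's `Standardize` transform operates on tensors and supports multiple outputs):

```python
from statistics import fmean, stdev

def standardize(ys):
    """Shift values to zero mean and unit (sample) standard deviation.

    Mirrors the effect of BoTorch's Standardize outcome transform on a
    single output; sketch only, not the actual implementation.
    """
    mu = fmean(ys)
    sigma = stdev(ys)
    return [(y - mu) / sigma for y in ys]

raw = [10.0, 12.0, 14.0, 18.0]
standardized = standardize(raw)
```

After this transform the standardized values have mean 0 and sample standard deviation 1, which keeps GP hyperparameter priors well calibrated regardless of the outcome's original scale.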

File tree: 3 files changed (+5 −17 lines)

README.md

Lines changed: 1 addition & 2 deletions
```diff
@@ -136,7 +136,7 @@ For more details see our [Documentation](https://botorch.org/docs/introduction)
 import torch
 from botorch.models import SingleTaskGP
-from botorch.models.transforms import Normalize, Standardize
+from botorch.models.transforms import Normalize
 from botorch.fit import fit_gpytorch_mll
 from gpytorch.mlls import ExactMarginalLogLikelihood

@@ -150,7 +150,6 @@ For more details see our [Documentation](https://botorch.org/docs/introduction)
     train_X=train_X,
     train_Y=Y,
     input_transform=Normalize(d=2),
-    outcome_transform=Standardize(m=1),
 )
 mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
 fit_gpytorch_mll(mll)
```

botorch/models/gp_regression.py

Lines changed: 3 additions & 13 deletions
```diff
@@ -90,30 +90,20 @@ class SingleTaskGP(BatchedMultiOutputGPyTorchModel, ExactGP, FantasizeMixin):

     >>> import torch
     >>> from botorch.models.gp_regression import SingleTaskGP
-    >>> from botorch.models.transforms.outcome import Standardize
     >>>
     >>> train_X = torch.rand(20, 2, dtype=torch.float64)
     >>> train_Y = torch.sin(train_X).sum(dim=1, keepdim=True)
-    >>> outcome_transform = Standardize(m=1)
-    >>> inferred_noise_model = SingleTaskGP(
-    ...     train_X, train_Y, outcome_transform=outcome_transform,
-    ... )
+    >>> inferred_noise_model = SingleTaskGP(train_X, train_Y)

     Model with a known observation variance of 0.2:

     >>> train_Yvar = torch.full_like(train_Y, 0.2)
-    >>> observed_noise_model = SingleTaskGP(
-    ...     train_X, train_Y, train_Yvar,
-    ...     outcome_transform=outcome_transform,
-    ... )
+    >>> observed_noise_model = SingleTaskGP(train_X, train_Y, train_Yvar)

     With noise-free observations:

     >>> train_Yvar = torch.full_like(train_Y, 1e-6)
-    >>> noise_free_model = SingleTaskGP(
-    ...     train_X, train_Y, train_Yvar,
-    ...     outcome_transform=outcome_transform,
-    ... )
+    >>> noise_free_model = SingleTaskGP(train_X, train_Y, train_Yvar)
     """

     train_targets: Tensor
```

docs/getting_started.mdx

Lines changed: 1 addition & 2 deletions
```diff
@@ -49,7 +49,7 @@ Here's a quick run down of the main components of a Bayesian Optimization loop.
 import torch
 from botorch.models import SingleTaskGP
-from botorch.models.transforms import Normalize, Standardize
+from botorch.models.transforms import Normalize
 from botorch.fit import fit_gpytorch_mll
 from gpytorch.mlls import ExactMarginalLogLikelihood

@@ -62,7 +62,6 @@ Here's a quick run down of the main components of a Bayesian Optimization loop.
     train_X=train_X,
     train_Y=train_Y,
     input_transform=Normalize(d=2),
-    outcome_transform=Standardize(m=1),
 )
 mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
 fit_gpytorch_mll(mll)
```
