# Changelog

The release log for BoTorch.

## [0.1.1] - June 27, 2019

API updates, more robust model fitting

#### Breaking changes
* Rename `botorch.qmc` to `botorch.sampling`, and move the MC samplers from
  `acquisition.sampler` to `botorch.sampling.samplers`; see the import sketch below (#172)
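
A minimal sketch of the import change, using `SobolQMCNormalSampler` (one of the moved MC samplers) as the example:

```python
# Old location (pre-0.1.1): botorch.acquisition.sampler
# New location (0.1.1+): botorch.sampling.samplers
from botorch.sampling.samplers import SobolQMCNormalSampler

sampler = SobolQMCNormalSampler(num_samples=512)
```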

#### New Features
* Add `condition_on_observations` and `fantasize` to the Model-level API (#173);
  see the sketch after this list
* Support pending observations generically for all `MCAcquisitionFunction`s (#176)
* Add a fidelity kernel for training iterations/training data points (#178)
* Support optimization constraints across `q`-batches (to support things like
  sample budget constraints) (2a95a6c3f80e751d5cf8bc7240ca9f5b1529ec5b)
* Add a ModelList <-> batched model converter (#187)
* New test functions:
  * Basic: `neg_ackley`, `cosine8`, `neg_levy`, `neg_rosenbrock`, `neg_shekel`
    (e26dc7576c7bf5fa2ba4cb8fbcf45849b95d324b)
  * For multi-fidelity BO: `neg_aug_branin`, `neg_aug_hartmann6`,
    `neg_aug_rosenbrock` (ec4aca744f65ca19847dc368f9fee4cc297533da)
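
A minimal sketch of the new Model-level calls; the training data and shapes below are illustrative assumptions, not part of the release:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.sampling.samplers import SobolQMCNormalSampler

# Hypothetical training data for a 2-d, single-output problem.
train_X = torch.rand(10, 2)
train_Y = torch.sin(train_X).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# condition_on_observations returns a new model that also conditions on the
# supplied (here made-up) observations, without refitting hyperparameters.
new_X = torch.rand(2, 2)
new_Y = torch.sin(new_X).sum(dim=-1, keepdim=True)
conditioned_model = model.condition_on_observations(new_X, new_Y)

# fantasize draws posterior samples at the given points and conditions on
# them, yielding a batched "fantasy" model (one batch per posterior sample).
sampler = SobolQMCNormalSampler(num_samples=8)
fantasy_model = model.fantasize(torch.rand(3, 2), sampler=sampler)
```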

#### Improved functionality
* More robust model fitting
  * Catch GPyTorch numerical issues and return `NaN` to the optimizer (#184)
  * Restart optimization upon failure by sampling hyperparameters from their priors (#188)
  * Fit batched and `ModelListGP` models sequentially by default (#189)
  * Change the minimum inferred noise level (e2c64fef1e76d526a33951c5eb75ac38d5581257)
* Introduce an optional batch limit in `joint_optimize` to increase the scalability of
  parallel optimization; see the sketch after this list (baab5786e8eaec02d37a511df04442471c632f8a)
* Change the constructor of `ModelListGP` to comply with GPyTorch’s `IndependentModelList`
  constructor (a6cf739e769c75319a67c7525a023ece8806b15d)
* Use `torch.random` (rather than `random`) to set the default seed for samplers, making
  sampling reproducible when setting `torch.manual_seed`
  (ae507ad97255d35f02c878f50ba68a2e27017815)
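
A sketch of batch-limited joint optimization; the model, acquisition function, and the exact option key are assumptions for illustration, not taken from the release:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import qExpectedImprovement
from botorch.optim import joint_optimize

# Hypothetical model and acquisition function on a 2-d problem.
train_X = torch.rand(20, 2)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
qEI = qExpectedImprovement(model, best_f=train_Y.max())

# With a batch limit, the num_restarts starting points are evaluated in
# chunks of at most batch_limit, bounding peak memory during optimization.
candidates = joint_optimize(
    acq_function=qEI,
    bounds=torch.stack([torch.zeros(2), torch.ones(2)]),
    q=3,
    num_restarts=20,
    raw_samples=256,
    options={"batch_limit": 5},  # assumed option key
)
```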

#### Performance Improvements
* Use `einsum` in `LinearMCObjective`; see the sketch after this list
  (22ca29535717cda0fcf7493a43bdf3dda324c22d)
* Change the default Sobol sample size for `MCAcquisitionFunction`s to a power of 2 for
  better MC integration performance (5d8e81866a23d6bfe4158f8c9b30ea14dd82e032)
* Add the ability to fit models in `SumMarginalLogLikelihood` sequentially, and make
  that the default setting (#183)
* Do not construct the full covariance matrix when computing the posterior of a
  single-output `BatchedMultiOutputGPyTorchModel` (#185)
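
A small sketch of the idea behind the `einsum` change (shapes and weights here are made up): the weighted sum over the outcome dimension happens in one fused contraction.

```python
import torch

samples = torch.rand(512, 4, 3)          # sample_shape x q x num_outcomes (illustrative)
weights = torch.tensor([0.5, 0.3, 0.2])  # one weight per outcome

# Single fused contraction over the trailing outcome dimension ...
obj = torch.einsum("...o,o->...", samples, weights)
# ... equivalent to an explicit elementwise multiply followed by a sum:
assert torch.allclose(obj, (samples * weights).sum(dim=-1))
```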

#### Bug fixes
* Properly handle the `observation_noise` kwarg for `BatchedMultiOutputGPyTorchModel`s (#182)
* Fix an issue where `f_best` was always the max for `NoisyExpectedImprovement`
  (de8544a75b58873c449b41840a335f6732754c77)
* Fix a bug and numerical issues in `initialize_q_batch`
  (844dcd1dc8f418ae42639e211c6bb8e31a75d8bf)
* Fix numerical issues with `inv_transform` for qMC sampling (#162)

#### Other
* Bump the GPyTorch minimum requirement to 0.3.3

## [0.1.0] - April 30, 2019

First public beta release.