The release log for BoTorch.

## [0.5.0] - Jun 29, 2021

#### Compatibility
* Require PyTorch >=1.8.1 (#832).
* Require GPyTorch >=1.5 (#848).
* Changes to how input transforms are applied: `transform_inputs` is applied in `model.forward` if the model is in `train` mode, otherwise it is applied in the `posterior` call (#819, #835); see the sketch below.
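
For illustration, a minimal sketch of the new behavior, assuming a standard `SingleTaskGP` with a `Normalize` transform (the data and dimensions below are placeholders):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize

train_X = torch.rand(20, 2)
train_Y = torch.rand(20, 1)

# The transform lives on the model. In `train` mode it is applied inside
# `model.forward`; in `eval` mode it is applied in the `posterior` call.
model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=2))

model.eval()
posterior = model.posterior(torch.rand(5, 2))  # inputs are normalized here
```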

#### New Features
* Improved multi-objective optimization capabilities:
  * `qNoisyExpectedHypervolumeImprovement` acquisition function that improves on `qExpectedHypervolumeImprovement` in terms of tolerating observation noise and speeding up computation for large `q`-batches (#797, #822); see the sketch after this list.
  * `qMultiObjectiveMaxValueEntropy` acquisition function (913aa0e510dde10568c2b4b911124cdd626f6905, #760).
  * Heuristic for reference point selection (#830).
  * `FastNondominatedPartitioning` for hypervolume computations (#699).
  * `DominatedPartitioning` for partitioning the dominated space (#726).
  * `BoxDecompositionList` for handling box decompositions of varying sizes (#712).
  * Direct, batched dominated partitioning for the two-outcome case (#739).
  * `get_default_partitioning_alpha` utility providing a heuristic for selecting the approximation level for partitioning algorithms (#793).
  * New method for computing Pareto frontiers with less memory overhead (#842, #846).
* New `qLowerBoundMaxValueEntropy` acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749); see the sketch after this list.
* Support for discrete and mixed input domains:
  * `CategoricalKernel` for categorical inputs (#771).
  * `MixedSingleTaskGP` for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847); see the sketch after this list.
  * `optimize_acqf_discrete` for optimizing acquisition functions over fully discrete domains (#777).
  * Extend `optimize_acqf_mixed` to allow batch optimization (#804).
* Support for robust / risk-aware optimization:
  * Risk measures for robust / risk-averse optimization (#821).
  * `AppendFeatures` transform (#820).
  * `InputPerturbation` input transform for risk-averse BO with implementation errors (#827); see the sketch after this list.
  * Tutorial notebook for Bayesian Optimization of risk measures (#823).
  * Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (#828).
* More scalable multi-task modeling and sampling:
  * `KroneckerMultiTaskGP` model for efficient multi-task modeling in block-design settings (all tasks observed at all inputs) (#637); see the sketch after this list.
  * Support for transforms in Multi-Task GP models (#681).
  * Posterior sampling based on Matheron's rule for Multi-Task GP models (#841).
* Various changes to simplify and streamline integration with Ax:
  * Handle non-block designs in `TrainingData` (#794).
  * Acquisition function input constructor registry (#788, #802, #845).
* Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
* `DelaunayPolytopeSampler` for fast uniform sampling from (simple) polytopes (#741).
* Add `evaluate` method to `ScalarizedObjective` (#795).
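
To illustrate the multi-objective additions above, here is a minimal sketch of constructing and optimizing `qNoisyExpectedHypervolumeImprovement`; the toy data, reference point, and optimization settings are illustrative assumptions, not recommendations:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.optim import optimize_acqf

train_X = torch.rand(20, 3)
train_Y = torch.rand(20, 2)  # two objectives, both maximized

# A two-output model; hyperparameter fitting is omitted for brevity.
model = SingleTaskGP(train_X, train_Y)

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, 0.0],  # must be dominated by the Pareto front
    X_baseline=train_X,    # previously evaluated points
)

bounds = torch.stack([torch.zeros(3), torch.ones(3)])
candidates, _ = optimize_acqf(
    acq_function=acqf, bounds=bounds, q=2, num_restarts=10, raw_samples=64
)
```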
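
A similar sketch for the GIBBON acquisition function `qLowerBoundMaxValueEntropy`; the single-fidelity setup and the size of the candidate set are arbitrary choices for illustration:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.max_value_entropy_search import qLowerBoundMaxValueEntropy

train_X = torch.rand(20, 2)
train_Y = torch.rand(20, 1)
model = SingleTaskGP(train_X, train_Y)  # fitting omitted for brevity

# GIBBON approximates the max-value distribution over a discrete candidate set.
candidate_set = torch.rand(1000, 2)
acqf = qLowerBoundMaxValueEntropy(model, candidate_set)

value = acqf(torch.rand(1, 1, 2))  # evaluate at a single q=1 candidate
```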
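
A sketch of the discrete/mixed-domain support, assuming the last input column is categorical and that candidates come from an explicit finite choice set (both assumptions are for illustration only):

```python
import torch
from botorch.models.gp_regression_mixed import MixedSingleTaskGP
from botorch.acquisition import ExpectedImprovement
from botorch.optim.optimize import optimize_acqf_discrete

# Last column is a categorical feature encoded as 0/1/2.
train_X = torch.cat(
    [torch.rand(20, 2, dtype=torch.double), torch.randint(3, (20, 1)).double()], dim=-1
)
train_Y = torch.rand(20, 1, dtype=torch.double)

model = MixedSingleTaskGP(train_X, train_Y, cat_dims=[2])  # fitting omitted

acqf = ExpectedImprovement(model, best_f=train_Y.max())

# Optimize over an explicit finite set of feasible configurations.
choices = torch.cat(
    [torch.rand(50, 2, dtype=torch.double), torch.randint(3, (50, 1)).double()], dim=-1
)
candidate, acq_value = optimize_acqf_discrete(acq_function=acqf, q=1, choices=choices)
```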
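
A sketch of `InputPerturbation` for risk-averse BO under implementation errors; the Gaussian perturbation set below is a made-up example:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import InputPerturbation

train_X = torch.rand(20, 2)
train_Y = torch.rand(20, 1)

# Each test point is expanded into 8 perturbed copies, so downstream risk
# measures can be computed over the perturbed posterior.
perturbation_set = 0.05 * torch.randn(8, 2)
model = SingleTaskGP(
    train_X, train_Y, input_transform=InputPerturbation(perturbation_set=perturbation_set)
)

model.eval()
# The posterior is over all perturbed copies of the 5 test points.
posterior = model.posterior(torch.rand(5, 2))
```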
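
A sketch of `KroneckerMultiTaskGP` in the block-design setting (every task observed at every input); the data are random placeholders and model fitting is omitted:

```python
import torch
from botorch.models.multitask import KroneckerMultiTaskGP

# Block design: all 3 tasks observed at all 20 inputs.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.rand(20, 3, dtype=torch.double)

model = KroneckerMultiTaskGP(train_X, train_Y)  # fitting omitted for brevity

model.eval()
posterior = model.posterior(torch.rand(5, 2, dtype=torch.double))  # joint over 3 tasks
```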

#### Bug Fixes
* Handle the case when all features are fixed in `optimize_acqf` (#770).
* Pass `fixed_features` to initial candidate generation functions (#806).
* Handle batched empty Pareto frontiers in `FastPartitioning` (#740).
* Handle empty Pareto set in `is_non_dominated` (#743).
* Handle the edge case of no or only a single observation in `get_chebyshev_scalarization` (#762).
* Fix an issue in `gen_candidates_torch` that caused problems with acquisition functions using fantasy models (#766).
* Fix `HigherOrderGP` `dtype` bug (#728).
* Normalize before clamping in the `Warp` input warping transform (#722).
* Fix bug in GP sampling (#764).

#### Other Changes
* Modify input transforms to support one-to-many transforms (#819, #835).
* Make initial conditions for acquisition function optimization honor parameter constraints (#752).
* Perform optimization only over unfixed features if `fixed_features` is passed (#839); see the sketch after this list.
* Refactor Max Value Entropy Search methods (#734).
* Use linear algebra functions from the `torch.linalg` module (#735).
* Use PyTorch's `Kumaraswamy` distribution (#746).
* Improved capabilities and some bug fixes for batched models (#723, #767).
* Pass the `callback` argument to `scipy.optimize.minimize` in `gen_candidates_scipy` (#744).
* Modify the behavior of `X_pending` in multi-objective acquisition functions (#747).
* Allow multi-dimensional batch shapes in test functions (#757).
* Utility for converting batched multi-output models into batched single-output models (#759).
* Explicitly raise `NotPSDError` in `_scipy_objective_and_grad` (#787).
* Make `raw_samples` optional if `batch_initial_conditions` is passed (#801).
* Use powers of 2 in qMC docstrings & examples (#812).
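
To illustrate the `fixed_features` behavior above, a minimal sketch with `optimize_acqf`; the acquisition function, the fixed index and value, and the optimization settings are arbitrary for illustration:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf

train_X = torch.rand(20, 3)
train_Y = torch.rand(20, 1)
model = SingleTaskGP(train_X, train_Y)  # fitting omitted for brevity
acqf = ExpectedImprovement(model, best_f=train_Y.max())

bounds = torch.stack([torch.zeros(3), torch.ones(3)])
# Feature 1 is clamped to 0.5; only features 0 and 2 are optimized.
candidate, _ = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=5,
    raw_samples=32,
    fixed_features={1: 0.5},
)
```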

## [0.4.0] - Feb 23, 2021