Releases · meta-pytorch/botorch
Bayesian Optimization with Preference Exploration, SAASBO for High-Dimensional Bayesian Optimization
New Features
- Implement SAASBO - `SaasFullyBayesianSingleTaskGP` model for sample-efficient high-dimensional Bayesian optimization (#1123); see the sketch after this list.
- Add SAASBO tutorial (#1127).
- Add `LearnedObjective` (#1131), `AnalyticExpectedUtilityOfBestOption` acquisition function (#1135), and a few auxiliary classes to support Bayesian optimization with preference exploration (BOPE).
- Add BOPE tutorial (#1138).
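A minimal sketch of fitting the new SAASBO model with NUTS; the toy data, MCMC settings, and test points are illustrative, not part of the release:

```python
import torch
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.fit import fit_fully_bayesian_model_nuts

# Toy high-dimensional problem: 20 observations in a 30-dimensional unit cube,
# where only the first two dimensions actually matter.
train_X = torch.rand(20, 30, dtype=torch.double)
train_Y = train_X[:, :2].sum(dim=-1, keepdim=True)

# The SAAS prior shrinks most inverse lengthscales toward zero, which is what
# makes the model sample-efficient in high dimensions.
model = SaasFullyBayesianSingleTaskGP(train_X=train_X, train_Y=train_Y)
fit_fully_bayesian_model_nuts(model, warmup_steps=256, num_samples=128, thinning=16)

# The posterior now mixes over the MCMC hyperparameter samples.
posterior = model.posterior(torch.rand(5, 30, dtype=torch.double))
```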
Other Changes
Bug Fixes
Bug fix release
Non-linear input constraints, new MOO problems, bug fixes, and performance improvements.
New Features
- Add `Standardize` input transform (#1053).
- Low-rank Cholesky updates for NEI (#1056).
- Add support for non-linear input constraints (#1067); see the sketch below.
- New MOO problems: MW7 (#1077), disc brake (#1078), penicillin (#1079), RobustToy (#1082), GMM (#1083).
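A rough sketch of using the non-linear constraint support in `optimize_acqf`. The `constraint(x) >= 0` feasibility convention, the explicit feasible starting points, and the surrounding model setup are assumptions based on #1067, not verbatim from the release:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))
acqf = ExpectedImprovement(model=model, best_f=train_Y.max())

# A point x is feasible iff the callable is non-negative; here x_1 + x_2 + x_3 <= 1.
constraint = lambda x: 1.0 - x.sum(dim=-1)

candidate, acq_value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0] * 3, [1.0] * 3], dtype=torch.double),
    q=1,
    num_restarts=4,
    options={"batch_limit": 1},
    nonlinear_inequality_constraints=[constraint],
    # Non-linear constraints require explicitly supplied feasible starting points.
    batch_initial_conditions=torch.full((4, 1, 3), 0.2, dtype=torch.double),
)
```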
Other Changes
- Add `Dispatcher` (#1009).
- Modify qNEHVI to support deterministic models (#1026).
- Store tensor attributes of input transforms as buffers (#1035).
- Modify NEHVI to support MTGPs (#1037).
- Make `Normalize` input transform input column-specific (#1047); see the sketch after this list.
- Improve `find_interior_point` (#1049).
- Remove deprecated `botorch.distributions` module (#1061).
- Avoid costly application of posterior transform in Kronecker & HOGP models (#1076).
- Support heteroscedastic perturbations in `InputPerturbations` (#1088).
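A small sketch of the column-specific `Normalize` transform; the `indices` argument usage and the toy data are illustrative assumptions based on #1047:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize

# Normalize only the first two of three input columns; the third column
# (e.g. an already-normalized feature) is passed through unchanged.
transform = Normalize(d=3, indices=[0, 1])

train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y, input_transform=transform)
```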
Performance Improvements
- Make risk measures more memory efficient (#1034).
Bug Fixes
- Properly handle empty `fixed_features` in optimization (#1029).
- Fix missing weights in `VaR` risk measure (#1038).
- Fix `find_interior_point` for negative variables & allow unbounded problems (#1045).
- Filter out indefinite bounds in constraint utilities (#1048).
- Make non-interleaved base samples use intuitive shape (#1057).
- Pad small diagonalization with zeros for `KroneckerMultitaskGP` (#1071).
- Disable learning of bounds in `preprocess_transform` (#1089).
- Catch runtime errors with ill-conditioned covar (#1095).
- Fix `compare_mc_analytic_acquisition` tutorial (#1099).
Approximate GP model, Multi-Output Risk Measures, Bug Fixes and Performance Improvements
Compatibility
New Features
- New `ApproximateGPyTorchModel` wrapper for various (variational) approximate GP models (#1012).
- New `SingleTaskVariationalGP` stochastic variational Gaussian Process model (#1012); see the sketch after this list.
- Support for Multi-Output Risk Measures (#906, #965).
- Introduce `ModelList` and `PosteriorList` (#829).
- New Constraint Active Search tutorial (#1010).
- Add additional multi-objective optimization test problems (#958).
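A rough sketch of constructing and training the new variational model. The `VariationalELBO` objective and the Adam-based training loop follow standard GPyTorch practice and are assumptions, not an API prescribed by the release:

```python
import torch
from botorch.models import SingleTaskVariationalGP
from gpytorch.mlls import VariationalELBO

train_X = torch.rand(100, 2)
train_Y = torch.sin(train_X.sum(dim=-1, keepdim=True))

# Sparse variational GP with 20 learned inducing points.
model = SingleTaskVariationalGP(train_X, train_Y, inducing_points=20)
mll = VariationalELBO(model.likelihood, model.model, num_data=train_X.shape[0])

optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(100):
    optimizer.zero_grad()
    output = model.model(train_X)  # variational posterior at the training inputs
    loss = -mll(output, train_Y.squeeze(-1))
    loss.backward()
    optimizer.step()
```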
Other Changes
- Add `covar_module` as an optional input of `MultiTaskGP` models (#941).
- Add `min_range` argument to `Normalize` transform to prevent division by zero (#931).
- Add initialization heuristic for acquisition function optimization that samples around best points (#987).
- Update initialization heuristic to perturb a subset of the dimensions of the best points if the dimension is > 20 (#988).
- Modify `apply_constraints` utility to work with multi-output objectives (#994).
- Short-cut `t_batch_mode_transform` decorator on non-tensor inputs (#991).
Performance Improvements
- Use lazy covariance matrix in `BatchedMultiOutputGPyTorchModel.posterior` (#976).
- Fast low-rank Cholesky updates for `qNoisyExpectedHypervolumeImprovement` (#747, #995, #996).
Bug Fixes
- Update error handling to new PyTorch linear algebra messages (#940).
- Avoid test failures on Ampere devices (#944).
- Fixes to the `Griewank` test function (#972).
- Handle empty `base_sample_shape` in `Posterior.rsample` (#986).
- Handle `NotPSDError` and hitting `maxiter` in `fit_gpytorch_model` (#1007).
- Use `TransformedPosterior` for subclasses of `GPyTorchPosterior` (#983).
- Propagate `best_f` argument to `qProbabilityOfImprovement` in input constructors (f5a5f8b)
Maintenance Release + New Tutorials
Compatibility
- Require GPyTorch >=1.5.1 (#928).
New Features
- Add `HigherOrderGP` composite Bayesian Optimization tutorial notebook (#864).
- Add Multi-Task Bayesian Optimization tutorial (#867).
- New multi-objective test problems (#876).
- Add `PenalizedMCObjective` and `L1PenaltyObjective` (#913).
- Add a `ProximalAcquisitionFunction` for regularizing new candidates towards previously generated ones (#919, #924); see the sketch after this list.
- Add a `Power` outcome transform (#925).
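A small sketch of wrapping a base acquisition function with `ProximalAcquisitionFunction`; the model, data, and proximal weights are illustrative assumptions:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import ExpectedImprovement
from botorch.acquisition.proximal import ProximalAcquisitionFunction

train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = train_X.norm(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# Down-weight candidates far from the most recently observed point, with a
# per-dimension proximal length scale of 0.5.
acqf = ProximalAcquisitionFunction(
    ExpectedImprovement(model=model, best_f=train_Y.max()),
    proximal_weights=torch.full((3,), 0.5, dtype=torch.double),
)
value = acqf(torch.rand(2, 1, 3, dtype=torch.double))  # shape: (2,)
```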
Bug Fixes
- Batch mode fix for `HigherOrderGP` initialization (#856).
- Improve `CategoricalKernel` precision (#857).
- Fix an issue with `qMultiFidelityKnowledgeGradient.evaluate` (#858).
- Fix an issue with transforms with `HigherOrderGP` (#889).
- Fix initial candidate generation when parameter constraints are on different device (#897).
- Fix bad in-place op in `_generate_unfixed_lin_constraints` (#901).
- Fix an input transform bug in `fantasize` call (#902).
- Fix outcome transform bug in `batched_to_model_list` (#917).
Other Changes
- Make variance optional for `TransformedPosterior.mean` (#855).
- Support transforms in `DeterministicModel` (#869).
- Support `batch_shape` in `RandomFourierFeatures` (#877).
- Add a `maximize` flag to `PosteriorMean` (#881).
- Ignore categorical dimensions when validating training inputs in `MixedSingleTaskGP` (#882).
- Refactor `HigherOrderGPPosterior` for memory efficiency (#883).
- Support negative weights for minimization objectives in `get_chebyshev_scalarization` (#884); see the sketch after this list.
- Move `train_inputs` transforms to `model.train`/`model.eval` calls (#894).
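A brief sketch of using `get_chebyshev_scalarization` with a negative weight to minimize one of two objectives; the outcome data here are illustrative:

```python
import torch
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization

# Observed outcomes for two objectives; maximize the first and minimize the
# second via a negative weight (supported per #884).
Y = torch.rand(20, 2, dtype=torch.double)
weights = torch.tensor([1.0, -1.0], dtype=torch.double)

scalarization = get_chebyshev_scalarization(weights=weights, Y=Y)
scalarized_Y = scalarization(Y)  # shape: (20,)
```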
Improved Multi-Objective Optimization, Support for categorical/mixed domains, robust/risk-aware optimization, efficient MTGP sampling
Compatibility
- Require PyTorch >=1.8.1 (#832).
- Require GPyTorch >=1.5 (#848).
- Changes to how input transforms are applied: `transform_inputs` is applied in `model.forward` if the model is in `train` mode, otherwise it is applied in the `posterior` call (#819, #835).
New Features
- Improved multi-objective optimization capabilities:
  - `qNoisyExpectedHypervolumeImprovement` acquisition function that improves on `qExpectedHypervolumeImprovement` in terms of tolerating observation noise and speeding up computation for large `q`-batches (#797, #822).
  - `qMultiObjectiveMaxValueEntropy` acquisition function (913aa0e, #760).
  - Heuristic for reference point selection (#830).
  - `FastNondominatedPartitioning` for Hypervolume computations (#699).
  - `DominatedPartitioning` for partitioning the dominated space (#726).
  - `BoxDecompositionList` for handling box decompositions of varying sizes (#712).
  - Direct, batched dominated partitioning for the two-outcome case (#739).
  - `get_default_partitioning_alpha` utility providing heuristic for selecting approximation level for partitioning algorithms (#793).
  - New method for computing Pareto Frontiers with less memory overhead (#842, #846).
- New `qLowerBoundMaxValueEntropy` acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749).
- Support for discrete and mixed input domains:
  - `CategoricalKernel` for categorical inputs (#771).
  - `MixedSingleTaskGP` for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847).
  - `optimize_acqf_discrete` for optimizing acquisition functions over fully discrete domains (#777); see the sketch after this list.
  - Extend `optimize_acqf_mixed` to allow batch optimization (#804).
- Support for robust / risk-aware optimization:
  - Risk measures for robust / risk-averse optimization (#821).
  - `AppendFeatures` transform (#820).
  - `InputPerturbation` input transform for risk-averse BO with implementation errors (#827).
  - Tutorial notebook for Bayesian Optimization of risk measures (#823).
  - Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (#828).
- More scalable multi-task modeling and sampling:
- Various changes to simplify and streamline integration with Ax:
- Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
- `DelaunayPolytopeSampler` for fast uniform sampling from (simple) polytopes (#741).
- Add `evaluate` method to `ScalarizedObjective` (#795).
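A minimal sketch of optimizing an acquisition function over a fully discrete domain with `optimize_acqf_discrete`; the model and candidate set are illustrative assumptions:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import ExpectedImprovement
from botorch.optim.optimize import optimize_acqf_discrete

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
acqf = ExpectedImprovement(model=model, best_f=train_Y.max())

# A fully discrete domain is just an explicit tensor of allowed points.
choices = torch.rand(50, 2, dtype=torch.double)
candidate, acq_value = optimize_acqf_discrete(acq_function=acqf, q=1, choices=choices)
```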
Bug Fixes
- Handle the case when all features are fixed in `optimize_acqf` (#770).
- Pass `fixed_features` to initial candidate generation functions (#806).
- Handle batch empty pareto frontier in `FastPartitioning` (#740).
- Handle empty pareto set in `is_non_dominated` (#743).
- Handle edge case of no or a single observation in `get_chebyshev_scalarization` (#762).
- Fix an issue in `gen_candidates_torch` that caused problems with acquisition functions using fantasy models (#766).
- Fix `HigherOrderGP` `dtype` bug (#728).
- Normalize before clamping in `Warp` input warping transform (#722).
- Fix bug in GP sampling (#764).
Other Changes
- Modify input transforms to support one-to-many transforms (#819, #835).
- Make initial conditions for acquisition function optimization honor parameter constraints (#752).
- Perform optimization only over unfixed features if `fixed_features` is passed (#839).
- Refactor Max Value Entropy Search Methods (#734).
- Use Linear Algebra functions from the `torch.linalg` module (#735).
- Use PyTorch's `Kumaraswamy` distribution (#746).
- Improved capabilities and some bugfixes for batched models (#723, #767).
- Pass `callback` argument to `scipy.optimize.minimize` in `gen_candidates_scipy` (#744).
- Modify behavior of `X_pending` in multi-objective acquisition functions (#747).
- Allow multi-dimensional batch shapes in test functions (#757).
- Utility for converting batched multi-output models into batched single-output models (#759).
- Explicitly raise `NotPSDError` in `_scipy_objective_and_grad` (#787).
- Make `raw_samples` optional if `batch_initial_conditions` is passed (#801).
- Use powers of 2 in qMC docstrings & examples (#812).
High Order GP model, multi-step look-ahead acquisition function
Compatibility
New Features
- `HigherOrderGP` - High-Order Gaussian Process (HOGP) model for high-dimensional output regression (#631, #646, #648, #680).
- `qMultiStepLookahead` acquisition function for general look-ahead optimization approaches (#611, #659).
- `ScalarizedPosteriorMean` and `project_to_sample_points` for more advanced MFKG functionality (#645).
- Large-scale Thompson sampling tutorial (#654, #713).
- Tutorial for optimizing mixed continuous/discrete domains (application to multi-fidelity KG with discrete fidelities) (#716).
- `GPDraw` utility for sampling from (exact) GP priors (#655); see the sketch after this list.
- Add `X` as optional arg to call signature of `MCAcquisitionObjective` (#487).
- `OSY` synthetic test problem (#679).
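A small sketch of drawing a consistent sample path from a GP with `GPDraw`; the model and query points are illustrative assumptions:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.utils.gp_sampling import GPDraw

train_X = torch.rand(10, 1, dtype=torch.double)
train_Y = torch.sin(6 * train_X)
model = SingleTaskGP(train_X, train_Y)

# GPDraw lazily materializes a single sample path; successive calls are
# conditioned on the values drawn so far, so they stay mutually consistent.
gp_sample = GPDraw(model, seed=0)
y1 = gp_sample(torch.rand(5, 1, dtype=torch.double))
y2 = gp_sample(torch.rand(3, 1, dtype=torch.double))
```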
Bug Fixes
- Fix matrix multiplication in `scalarize_posterior` (#638).
- Set `X_pending` in `get_acquisition_function` in `qEHVI` (#662).
- Make contextual kernel device-aware (#666).
- Do not use an `MCSampler` in `MaxPosteriorSampling` (#701).
- Add ability to subset outcome transforms (#711).
Performance Improvements
- Batchify box decomposition for 2d case (#642).
Other Changes
- Use scipy distribution in MES quantile bisect (#633).
- Use new closure definition for GPyTorch priors (#634).
- Allow enabling of approximate root decomposition in `posterior` calls (#652).
- Support for upcoming 21201-dimensional PyTorch `SobolEngine` (#672, #674).
- Refactored various MOO utilities to allow future additions (#656, #657, #658, #661).
- Support `input_transform` in `PairwiseGP` (#632).
- Output shape checks for `t_batch_mode_transform` (#577).
- Check for NaN in `gen_candidates_scipy` (#688).
- Introduce `base_sample_shape` property to `Posterior` objects (#718).
Contextual Bayesian Optimization, Input Warping, TuRBO, sampling from polytopes.
Compatibility
New Features
- Models (LCE-A, LCE-M and SAC) for Contextual Bayesian Optimization (#581).
  - Implements core models from: High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization. Q. Feng, B. Letham, H. Mao, E. Bakshy. NeurIPS 2020.
  - See Ax for usage of these models.
- Hit and run sampler for uniform sampling from a polytope (#592); see the sketch after this list.
- Input warping:
- TuRBO-1 tutorial (#598).
  - Implements the method from: Scalable Global Optimization via Local Bayesian Optimization. D. Eriksson, M. Pearce, J. Gardner, R. D. Turner, M. Poloczek. NeurIPS 2019.
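A rough sketch of uniform polytope sampling with `HitAndRunPolytopeSampler`; the `A @ x <= b` constraint convention and the tensor shapes below are assumptions for illustration:

```python
import torch
from botorch.utils.sampling import HitAndRunPolytopeSampler

# The 2-d simplex {x : x >= 0, x_1 + x_2 <= 1}, written as A @ x <= b.
A = torch.tensor([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]], dtype=torch.double)
b = torch.tensor([[0.0], [0.0], [1.0]], dtype=torch.double)

sampler = HitAndRunPolytopeSampler(inequality_constraints=(A, b))
samples = sampler.draw(64)  # approximately uniform points in the simplex
```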
Bug fixes
Other changes
- Add `train_inputs` option to `qMaxValueEntropy` (#593).
- Enable gpytorch settings to override BoTorch defaults for `fast_pred_var` and `debug` (#595).
- Rename `set_train_data_transform` -> `preprocess_transform` (#575).
- Modify `_expand_bounds()` shape checks to work with >2-dim bounds (#604).
- Add `batch_shape` property to models (#588).
- Modify `qMultiFidelityKnowledgeGradient.evaluate()` to work with `project`, `expand` and `cost_aware_utility` (#594).
- Add list of papers using BoTorch to website docs (#617).
Maintenance Release
New Features
- Add `PenalizedAcquisitionFunction` wrapper (#585)
- Input transforms
- Differentiable approximate rounding for integers (#561)
Bug fixes
- Fix sign error in UCB when `maximize=False` (a4bfacbfb2109d3b89107d171d2101e1995822bb)
- Fix batch_range sample shape logic (#574)
Other changes
- Better support for two stage sampling in preference learning (0cd13d0)
- Remove noise term in `PairwiseGP` and add `ScaleKernel` by default (#571); see the sketch after this list
- Rename `prior` to `task_covar_prior` in `MultiTaskGP` and `FixedNoiseMultiTaskGP` (16573fe)
- Support only transforming inputs on training or evaluation (#551)
- Add `equals` method for `InputTransform` (#552)
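A minimal sketch of constructing the preference model touched by these changes; the toy comparison data are illustrative assumptions:

```python
import torch
from botorch.models.pairwise_gp import PairwiseGP

# Pairwise preference data: row [a, b] means item a was preferred to item b.
datapoints = torch.rand(6, 2, dtype=torch.double)
comparisons = torch.tensor([[0, 1], [2, 3], [4, 5]])

model = PairwiseGP(datapoints, comparisons)  # uses a ScaleKernel by default per #571
posterior = model.posterior(torch.rand(4, 2, dtype=torch.double))
```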
Maintenance Release
New Features
- Constrained Multi-Objective tutorial (#493)
- Multi-fidelity Knowledge Gradient tutorial (#509)
- Support for batch qMC sampling (#510)
- New `evaluate` method for `qKnowledgeGradient` (#515)
Compatibility
- Require PyTorch >=1.6 (#535)
- Require GPyTorch >=1.2 (#535)
- Remove deprecated `botorch.gen` module (#532)
Bug fixes
- Fix bad backward-indexing of task_feature in `MultiTaskGP` (#485)
- Fix bounds in constrained Branin-Currin test function (#491)
- Fix max_hv for C2DTLZ2 and make Hypervolume always return a float (#494)
- Fix bug in `draw_sobol_samples` that did not use the proper effective dimension (#505)
- Fix constraints for `q>1` in `qExpectedHypervolumeImprovement` (c80c4fd)
- Only use feasible observations in partitioning for `qExpectedHypervolumeImprovement` in `get_acquisition_function` (#523)
- Improved GPU compatibility for `PairwiseGP` (#537)
Performance Improvements
- Reduce memory footprint in `qExpectedHypervolumeImprovement` (#522)
- Add `(q)ExpectedHypervolumeImprovement` to nonnegative functions [for better initialization] (#496)
Other changes
- Support batched `best_f` in `qExpectedImprovement` (#487)
- Allow returning the full tree of solutions in `OneShotAcquisitionFunction` (#488)
- Added `construct_inputs` class method to models to programmatically construct the inputs to the constructor from a standardized `TrainingData` representation (#477, #482, 3621198)
- Acquisition function constructors now accept catch-all `**kwargs` options (#478, e5b6935)
- Use `psd_safe_cholesky` in `qMaxValueEntropy` for better numerical stability (#518)
- Added `WeightedMCMultiOutputObjective` (81d91fd)
- Add ability to specify `outcomes` to all multi-output objectives (#524)
- Return optimization output in `info_dict` for `fit_gpytorch_scipy` (#534)
- Use `setuptools_scm` for versioning (#539)