
Releases: meta-pytorch/botorch

Bayesian Optimization with Preference Exploration, SAASBO for High-Dimensional Bayesian Optimization

28 Mar 00:30


New Features

  • Implement SAASBO - SaasFullyBayesianSingleTaskGP model for sample-efficient high-dimensional Bayesian optimization (#1123).
  • Add SAASBO tutorial (#1127).
  • Add LearnedObjective (#1131), AnalyticExpectedUtilityOfBestOption acquisition function (#1135), and a few auxiliary classes to support Bayesian optimization with preference exploration (BOPE).
  • Add BOPE tutorial (#1138).

Other Changes

  • Use qKG.evaluate in optimize_acqf_mixed (#1133).
  • Add construct_inputs to SAASBO (#1136).

Bug Fixes

  • Fix "Constraint Active Search" tutorial (#1124).
  • Update "Discrete Multi-Fidelity BO" tutorial (#1134).

Bug fix release

09 Mar 22:48


New Features

  • Use BOTORCH_MODULAR in tutorials with Ax (#1105).
  • Add optimize_acqf_discrete_local_search for discrete search spaces (#1111).

Bug Fixes

  • Fix missing posterior_transform in qNEI and get_acquisition_function (#1113).

Non-linear input constraints, new MOO problems, bug fixes, and performance improvements.

28 Feb 22:41


New Features

  • Add Standardize input transform (#1053).
  • Low-rank Cholesky updates for NEI (#1056).
  • Add support for non-linear input constraints (#1067).
  • New MOO problems: MW7 (#1077), disc brake (#1078), penicillin (#1079), RobustToy (#1082), GMM (#1083).

Other Changes

  • Add Dispatcher (#1009).
  • Modify qNEHVI to support deterministic models (#1026).
  • Store tensor attributes of input transforms as buffers (#1035).
  • Modify NEHVI to support MTGPs (#1037).
  • Make Normalize input transform input column-specific (#1047).
  • Improve find_interior_point (#1049).
  • Remove deprecated botorch.distributions module (#1061).
  • Avoid costly application of posterior transform in Kronecker & HOGP models (#1076).
  • Support heteroscedastic perturbations in InputPerturbation (#1088).

Performance Improvements

  • Make risk measures more memory efficient (#1034).

Bug Fixes

  • Properly handle empty fixed_features in optimization (#1029).
  • Fix missing weights in VaR risk measure (#1038).
  • Fix find_interior_point for negative variables & allow unbounded problems (#1045).
  • Filter out indefinite bounds in constraint utilities (#1048).
  • Make non-interleaved base samples use intuitive shape (#1057).
  • Pad small diagonalization with zeros for KroneckerMultitaskGP (#1071).
  • Disable learning of bounds in preprocess_transform (#1089).
  • Catch runtime errors with ill-conditioned covar (#1095).
  • Fix compare_mc_analytic_acquisition tutorial (#1099).

Approximate GP model, Multi-Output Risk Measures, Bug Fixes and Performance Improvements

09 Dec 00:16


Compatibility

  • Require PyTorch >=1.9 (#1011).
  • Require GPyTorch >=1.6 (#1011).

New Features

  • New ApproximateGPyTorchModel wrapper for various (variational) approximate GP models (#1012).
  • New SingleTaskVariationalGP stochastic variational Gaussian Process model (#1012).
  • Support for Multi-Output Risk Measures (#906, #965).
  • Introduce ModelList and PosteriorList (#829).
  • New Constraint Active Search tutorial (#1010).
  • Add additional multi-objective optimization test problems (#958).

Other Changes

  • Add covar_module as an optional input of MultiTaskGP models (#941).
  • Add min_range argument to Normalize transform to prevent division by zero (#931).
  • Add initialization heuristic for acquisition function optimization that samples around best points (#987).
  • Update initialization heuristic to perturb a subset of the dimensions of the best points if the dimension is > 20 (#988).
  • Modify apply_constraints utility to work with multi-output objectives (#994).
  • Short-cut t_batch_mode_transform decorator on non-tensor inputs (#991).

Performance Improvements

  • Use lazy covariance matrix in BatchedMultiOutputGPyTorchModel.posterior (#976).
  • Fast low-rank Cholesky updates for qNoisyExpectedHypervolumeImprovement (#747, #995, #996).

Bug Fixes

  • Update error handling to new PyTorch linear algebra messages (#940).
  • Avoid test failures on Ampere devices (#944).
  • Fixes to the Griewank test function (#972).
  • Handle empty base_sample_shape in Posterior.rsample (#986).
  • Handle NotPSDError and hitting maxiter in fit_gpytorch_model (#1007).
  • Use TransformedPosterior for subclasses of GPyTorchPosterior (#983).
  • Propagate best_f argument to qProbabilityOfImprovement in input constructors (f5a5f8b).

Maintenance Release + New Tutorials

02 Sep 20:44


Compatibility

  • Require GPyTorch >=1.5.1 (#928).

New Features

  • Add HigherOrderGP composite Bayesian Optimization tutorial notebook (#864).
  • Add Multi-Task Bayesian Optimization tutorial (#867).
  • New multi-objective test problems (#876).
  • Add PenalizedMCObjective and L1PenaltyObjective (#913).
  • Add a ProximalAcquisitionFunction for regularizing new candidates towards previously generated ones (#919, #924).
  • Add a Power outcome transform (#925).

Bug Fixes

  • Batch mode fix for HigherOrderGP initialization (#856).
  • Improve CategoricalKernel precision (#857).
  • Fix an issue with qMultiFidelityKnowledgeGradient.evaluate (#858).
  • Fix an issue with transforms in HigherOrderGP (#889).
  • Fix initial candidate generation when parameter constraints are on different device (#897).
  • Fix bad in-place op in _generate_unfixed_lin_constraints (#901).
  • Fix an input transform bug in fantasize call (#902).
  • Fix outcome transform bug in batched_to_model_list (#917).

Other Changes

  • Make variance optional for TransformedPosterior.mean (#855).
  • Support transforms in DeterministicModel (#869).
  • Support batch_shape in RandomFourierFeatures (#877).
  • Add a maximize flag to PosteriorMean (#881).
  • Ignore categorical dimensions when validating training inputs in MixedSingleTaskGP (#882).
  • Refactor HigherOrderGPPosterior for memory efficiency (#883).
  • Support negative weights for minimization objectives in get_chebyshev_scalarization (#884).
  • Move train_inputs transforms to model.train/eval calls (#894).

Improved Multi-Objective Optimization, Support for categorical/mixed domains, robust/risk-aware optimization, efficient MTGP sampling

29 Jun 19:31


Compatibility

  • Require PyTorch >=1.8.1 (#832).
  • Require GPyTorch >=1.5 (#848).
  • Changes to how input transforms are applied: transform_inputs is applied in model.forward if the model is in train mode, otherwise it is applied in the posterior call (#819, #835).

New Features

  • Improved multi-objective optimization capabilities:
    • qNoisyExpectedHypervolumeImprovement acquisition function that improves on qExpectedHypervolumeImprovement in terms of tolerating observation noise and speeding up computation for large q-batches (#797, #822).
    • qMultiObjectiveMaxValueEntropy acquisition function (913aa0e, #760).
    • Heuristic for reference point selection (#830).
    • FastNondominatedPartitioning for Hypervolume computations (#699).
    • DominatedPartitioning for partitioning the dominated space (#726).
    • BoxDecompositionList for handling box decompositions of varying sizes (#712).
    • Direct, batched dominated partitioning for the two-outcome case (#739).
    • get_default_partitioning_alpha utility providing heuristic for selecting approximation level for partitioning algorithms (#793).
    • New method for computing Pareto Frontiers with less memory overhead (#842, #846).
  • New qLowerBoundMaxValueEntropy acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (#724, #737, #749).
  • Support for discrete and mixed input domains:
    • CategoricalKernel for categorical inputs (#771).
    • MixedSingleTaskGP for mixed search spaces (containing both categorical and ordinal parameters) (#772, #847).
    • optimize_acqf_discrete for optimizing acquisition functions over fully discrete domains (#777).
    • Extend optimize_acqf_mixed to allow batch optimization (#804).
  • Support for robust / risk-aware optimization:
    • Risk measures for robust / risk-averse optimization (#821).
    • AppendFeatures transform (#820).
    • InputPerturbation input transform for risk-averse BO with implementation errors (#827).
    • Tutorial notebook for Bayesian Optimization of risk measures (#823).
    • Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (#828).
  • More scalable multi-task modeling and sampling:
    • KroneckerMultiTaskGP model for efficient multi-task modeling for block-design settings (all tasks observed at all inputs) (#637).
    • Support for transforms in Multi-Task GP models (#681).
    • Posterior sampling based on Matheron's rule for Multi-Task GP models (#841).
  • Various changes to simplify and streamline integration with Ax:
    • Handle non-block designs in TrainingData (#794).
    • Acquisition function input constructor registry (#788, #802, #845).
  • Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (#750).
  • DelaunayPolytopeSampler for fast uniform sampling from (simple) polytopes (#741).
  • Add evaluate method to ScalarizedObjective (#795).

Bug Fixes

  • Handle the case when all features are fixed in optimize_acqf (#770).
  • Pass fixed_features to initial candidate generation functions (#806).
  • Handle batch empty pareto frontier in FastPartitioning (#740).
  • Handle empty pareto set in is_non_dominated (#743).
  • Handle edge case of no or a single observation in get_chebyshev_scalarization (#762).
  • Fix an issue in gen_candidates_torch that caused problems with acquisition functions using fantasy models (#766).
  • Fix HigherOrderGP dtype bug (#728).
  • Normalize before clamping in Warp input warping transform (#722).
  • Fix bug in GP sampling (#764).

Other Changes

  • Modify input transforms to support one-to-many transforms (#819, #835).
  • Make initial conditions for acquisition function optimization honor parameter constraints (#752).
  • Perform optimization only over unfixed features if fixed_features is passed (#839).
  • Refactor Max Value Entropy Search Methods (#734).
  • Use Linear Algebra functions from the torch.linalg module (#735).
  • Use PyTorch's Kumaraswamy distribution (#746).
  • Improved capabilities and some bugfixes for batched models (#723, #767).
  • Pass callback argument to scipy.optimize.minimize in gen_candidates_scipy (#744).
  • Modify behavior of X_pending in multi-objective acquisition functions (#747).
  • Allow multi-dimensional batch shapes in test functions (#757).
  • Utility for converting batched multi-output models into batched single-output models (#759).
  • Explicitly raise NotPSDError in _scipy_objective_and_grad (#787).
  • Make raw_samples optional if batch_initial_conditions is passed (#801).
  • Use powers of 2 in qMC docstrings & examples (#812).

High Order GP model, multi-step look-ahead acquisition function

23 Feb 21:33


Compatibility

  • Require PyTorch >=1.7.1 (#714).
  • Require GPyTorch >=1.4 (#714).

New Features

  • HigherOrderGP - High-Order Gaussian Process (HOGP) model for high-dimensional output regression (#631, #646, #648, #680).
  • qMultiStepLookahead acquisition function for general look-ahead optimization approaches (#611, #659).
  • ScalarizedPosteriorMean and project_to_sample_points for more advanced MFKG functionality (#645).
  • Large-scale Thompson sampling tutorial (#654, #713).
  • Tutorial for optimizing mixed continuous/discrete domains (application to multi-fidelity KG with discrete fidelities) (#716).
  • GPDraw utility for sampling from (exact) GP priors (#655).
  • Add X as optional arg to call signature of MCAcquisitionObjective (#487).
  • OSY synthetic test problem (#679).

Bug Fixes

  • Fix matrix multiplication in scalarize_posterior (#638).
  • Set X_pending in get_acquisition_function in qEHVI (#662).
  • Make contextual kernel device-aware (#666).
  • Do not use an MCSampler in MaxPosteriorSampling (#701).
  • Add ability to subset outcome transforms (#711).

Performance Improvements

  • Batchify box decomposition for 2d case (#642).

Other Changes

  • Use scipy distribution in MES quantile bisect (#633).
  • Use new closure definition for GPyTorch priors (#634).
  • Allow enabling of approximate root decomposition in posterior calls (#652).
  • Support for upcoming 21201-dimensional PyTorch SobolEngine (#672, #674).
  • Refactored various MOO utilities to allow future additions (#656, #657, #658, #661).
  • Support input_transform in PairwiseGP (#632).
  • Output shape checks for t_batch_mode_transform (#577).
  • Check for NaN in gen_candidates_scipy (#688).
  • Introduce base_sample_shape property to Posterior objects (#718).

Contextual Bayesian Optimization, Input Warping, TuRBO, sampling from polytopes.

08 Dec 06:42


Compatibility

  • Require PyTorch >=1.7 (#614).
  • Require GPyTorch >=1.3 (#614).

New Features

Bug fixes

  • Fix bounds of HolderTable synthetic function (#596).
  • Fix device issue in MOO tutorial (#621).

Other changes

  • Add train_inputs option to qMaxValueEntropy (#593).
  • Enable gpytorch settings to override BoTorch defaults for fast_pred_var and debug (#595).
  • Rename set_train_data_transform -> preprocess_transform (#575).
  • Modify _expand_bounds() shape checks to work with >2-dim bounds (#604).
  • Add batch_shape property to models (#588).
  • Modify qMultiFidelityKnowledgeGradient.evaluate() to work with project, expand and cost_aware_utility (#594).
  • Add list of papers using BoTorch to website docs (#617).

Maintenance Release

26 Oct 04:28


New Features

  • Add PenalizedAcquisitionFunction wrapper (#585)
  • Input transforms
    • Reversible input transform (#550)
    • Rounding input transform (#562)
    • Log input transform (#563)
  • Differentiable approximate rounding for integers (#561)

Bug fixes

  • Fix sign error in UCB when maximize=False (a4bfacbfb2109d3b89107d171d2101e1995822bb)
  • Fix batch_range sample shape logic (#574)

Other changes

  • Better support for two-stage sampling in preference learning (0cd13d0)
  • Remove noise term in PairwiseGP and add ScaleKernel by default (#571)
  • Rename prior to task_covar_prior in MultiTaskGP and FixedNoiseMultiTaskGP (16573fe)
  • Support only transforming inputs on training or evaluation (#551)
  • Add equals method for InputTransform (#552)

Maintenance Release

16 Sep 01:58


New Features

  • Constrained Multi-Objective tutorial (#493)
  • Multi-fidelity Knowledge Gradient tutorial (#509)
  • Support for batch qMC sampling (#510)
  • New evaluate method for qKnowledgeGradient (#515)

Compatibility

  • Require PyTorch >=1.6 (#535)
  • Require GPyTorch >=1.2 (#535)
  • Remove deprecated botorch.gen module (#532)

Bug fixes

  • Fix bad backward-indexing of task_feature in MultiTaskGP (#485)
  • Fix bounds in constrained Branin-Currin test function (#491)
  • Fix max_hv for C2DTLZ2 and make Hypervolume always return a float (#494)
  • Fix bug in draw_sobol_samples that did not use the proper effective dimension (#505)
  • Fix constraints for q>1 in qExpectedHypervolumeImprovement (c80c4fd)
  • Only use feasible observations in partitioning for qExpectedHypervolumeImprovement in get_acquisition_function (#523)
  • Improved GPU compatibility for PairwiseGP (#537)

Performance Improvements

  • Reduce memory footprint in qExpectedHypervolumeImprovement (#522)
  • Add (q)ExpectedHypervolumeImprovement to nonnegative functions [for better initialization] (#496)

Other changes

  • Support batched best_f in qExpectedImprovement (#487)
  • Allow returning the full tree of solutions in OneShotAcquisitionFunction (#488)
  • Added construct_inputs class method to models to programmatically construct the inputs to the constructor from a standardized TrainingData representation (#477, #482, 3621198)
  • Acquisition function constructors now accept catch-all **kwargs options (#478, e5b6935)
  • Use psd_safe_cholesky in qMaxValueEntropy for better numerical stability (#518)
  • Added WeightedMCMultiOutputObjective (81d91fd)
  • Add ability to specify outcomes to all multi-output objectives (#524)
  • Return optimization output in info_dict for fit_gpytorch_scipy (#534)
  • Use setuptools_scm for versioning (#539)