Summary:
## Motivation
I want to update this documentation, but the changes would be hard to review due to very long line length.
This PR:
* Adds line breaks that have no effect on how the website renders.
Pull Request resolved: #2436
Test Plan:
Built the website locally. Screenshot:
<img width="854" alt="Screenshot 2024-07-21 at 10 40 27 AM" src="https://github.com/user-attachments/assets/98f2c927-0093-4ff6-94f9-d5280e7c858f">
Reviewed By: saitcakmak
Differential Revision: D60019275
Pulled By: esantorella
fbshipit-source-id: 82f3b245dc5a08608e2f390f2532631b3fd744c2
**docs/multi_objective.md** (66 additions, 10 deletions)

---
id: multi_objective
title: Multi-Objective Bayesian Optimization
---

BoTorch provides first-class support for Multi-Objective (MO) Bayesian
Optimization (BO), including implementations of
[`qNoisyExpectedHypervolumeImprovement`](../api/acquisition.html#botorch.acquisition.multi_objective.monte_carlo.qNoisyExpectedHypervolumeImprovement)
(qNEHVI)[^qNEHVI],
[`qExpectedHypervolumeImprovement`](../api/acquisition.html#botorch.acquisition.multi_objective.monte_carlo.qExpectedHypervolumeImprovement)
(qEHVI), qParEGO[^qEHVI], qNParEGO[^qNEHVI], and analytic
[`ExpectedHypervolumeImprovement`](../api/acquisition.html#botorch.acquisition.multi_objective.analytic.ExpectedHypervolumeImprovement)
(EHVI) acquisition functions, with gradients via auto-differentiation[^qEHVI].

The goal in MOBO is to learn the *Pareto front*: the set of optimal trade-offs,
where an improvement in one objective means deteriorating another objective.
BoTorch provides implementations of a number of acquisition functions
specifically for the multi-objective setting, as well as generic interfaces for
implementing new multi-objective acquisition functions.

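The Pareto front idea can be made concrete with a small stand-alone sketch
(pure Python, maximization convention; the helper names `dominates` and
`pareto_front` are illustrative and not part of the BoTorch API — BoTorch's own
utility for this is `is_non_dominated`, covered under Multi-Objective Utilities):

```python
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if `a` dominates `b`: at least as good in every objective and
    strictly better in at least one (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Return the non-dominated subset of `points` (O(n^2) reference version)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two objectives, both maximized: (1, 3) and (3, 1) trade off against each
# other, (2, 2) is dominated by neither, and (0, 0) is dominated by all.
pts = [(1.0, 3.0), (3.0, 1.0), (2.0, 2.0), (0.0, 0.0)]
print(pareto_front(pts))  # → [(1.0, 3.0), (3.0, 1.0), (2.0, 2.0)]
```
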
## Multi-Objective Acquisition Functions
MOBO leverages many advantages of BoTorch to provide practical algorithms for
computationally intensive and analytically intractable problems. For example,
analytic EHVI has no known analytical gradient when there are more than two
objectives, but BoTorch computes analytic gradients for free via
auto-differentiation, regardless of the number of objectives [^qEHVI].

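The "gradients for free" point is a general property of auto-differentiation.
As a minimal illustration, here is a forward-mode sketch with dual numbers in
pure Python — the `Dual` class is purely didactic; BoTorch itself relies on
PyTorch's reverse-mode autograd:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number a + b*eps with eps^2 = 0: `val` carries f(x), `der` f'(x)."""
    val: float
    der: float

    def __add__(self, other):
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # The product rule falls out of (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps.
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)

def f(x: Dual) -> Dual:
    # f(x) = x^2 + 3x, so f'(x) = 2x + 3; no derivative code was written.
    return x * x + Dual(3.0, 0.0) * x

y = f(Dual(2.0, 1.0))  # seed derivative 1.0 to differentiate w.r.t. x
print(y.val, y.der)    # 10.0 7.0  (f(2) = 10, f'(2) = 7)
```
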
For analytic and MC-based MOBO acquisition functions like qNEHVI, qEHVI, and
qParEGO, BoTorch leverages GPU acceleration and quasi-second order methods for
acquisition optimization for efficient computation and optimization in many
practical scenarios [^qNEHVI][^qEHVI]. The MC-based acquisition functions
support using the sample average approximation for rapid convergence [^BoTorch].

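The sample average approximation (SAA) idea is that the Monte Carlo base
samples are drawn once and held fixed, so the acquisition value becomes a
deterministic function of the candidate that quasi-Newton optimizers can
exploit. A sketch under a toy objective E[max(0, x + Z)], Z ~ N(0, 1) — pure
Python, standing in for a real acquisition function:

```python
import random

def mc_objective(x: float, samples: list) -> float:
    """MC estimate of E[max(0, x + Z)] using the given draws of Z."""
    return sum(max(0.0, x + z) for z in samples) / len(samples)

rng = random.Random(0)
fixed = [rng.gauss(0.0, 1.0) for _ in range(256)]  # base samples drawn once

# With the samples held fixed, repeated evaluations at the same x agree
# exactly, so the optimizer sees a smooth deterministic objective:
assert mc_objective(0.5, fixed) == mc_objective(0.5, fixed)

# Redrawing the samples on every call would instead give a noisy estimate:
a = mc_objective(0.5, [rng.gauss(0.0, 1.0) for _ in range(256)])
b = mc_objective(0.5, [rng.gauss(0.0, 1.0) for _ in range(256)])
print(a, b)  # two nearby but (almost surely) unequal estimates
```
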
All analytic MO acquisition functions derive from
[`MultiObjectiveAnalyticAcquisitionFunction`](../api/acquisition.html#botorch.acquisition.multi_objective.analytic.MultiObjectiveAnalyticAcquisitionFunction)
and all MC-based acquisition functions derive from
[`MultiObjectiveMCAcquisitionFunction`](../api/acquisition.html#botorch.acquisition.multi_objective.monte_carlo.MultiObjectiveMCAcquisitionFunction).
These abstract classes easily integrate with BoTorch's standard optimization
machinery.

Additionally, qParEGO and qNParEGO are trivially implemented using an augmented
Chebyshev scalarization as the objective with the
[`qExpectedImprovement`](../api/acquisition.html#qexpectedimprovement)
acquisition function or the
[`qNoisyExpectedImprovement`](../api/acquisition.html#qnoisyexpectedimprovement)
acquisition function, respectively. BoTorch provides a
[`get_chebyshev_scalarization`](../api/utils.html#botorch.utils.multi_objective.scalarization.get_chebyshev_scalarization)
convenience function for generating these scalarizations. In the batch setting,
qParEGO and qNParEGO both use a new random scalarization for each candidate
[^qEHVI]. Candidates are selected in a sequential greedy fashion, each with a
different scalarization, via the
[`optimize_acqf_list`](../api/optim.html#botorch.optim.optimize.optimize_acqf_list)
function.

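The scalarization itself is simple. One common form for maximization is
s(y) = min_i w_i y_i + ρ Σ_i w_i y_i, sketched below in pure Python; note that
`get_chebyshev_scalarization` additionally normalizes the objectives, so this
is an assumption-laden illustration rather than BoTorch's exact implementation:

```python
import random

def augmented_chebyshev(y, weights, rho=0.05):
    """Augmented Chebyshev scalarization for maximization:
    s(y) = min_i w_i * y_i + rho * sum_i w_i * y_i."""
    weighted = [w * yi for w, yi in zip(weights, y)]
    return min(weighted) + rho * sum(weighted)

def random_weights(m, rng):
    """A random weight vector on the simplex (normalized exponential draws)."""
    draws = [rng.expovariate(1.0) for _ in range(m)]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
# In the batch setting, each candidate gets its own random scalarization,
# encouraging diverse coverage of the Pareto front:
for _ in range(3):
    w = random_weights(2, rng)
    print(w, augmented_chebyshev([1.0, 2.0], w))
```
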
For a more in-depth example using these acquisition functions, check out the
[Multi-Objective Bayesian Optimization tutorial
notebook](../tutorials/multi_objective_bo).

## Multi-Objective Utilities
BoTorch provides several utility functions for evaluating performance in MOBO,
including a method for computing the Pareto front,
[`is_non_dominated`](../api/utils.html#botorch.utils.multi_objective.pareto.is_non_dominated),
and efficient box decomposition algorithms for partitioning the space dominated
([`DominatedPartitioning`](../api/utils.html#botorch.utils.multi_objective.box_decompositions.dominated.DominatedPartitioning))
or non-dominated
([`NonDominatedPartitioning`](../api/utils.html#botorch.utils.multi_objective.box_decompositions.non_dominated.NonDominatedPartitioning))
by the Pareto frontier into axis-aligned hyperrectangular boxes. For exact box
decompositions, BoTorch uses a two-step approach similar to that in
[^Yang2019], where (1) Algorithm 1 from [Lacour17]_ is used to find the local
lower bounds for the maximization problem and (2) the local lower bounds are
used as the Pareto frontier for the minimization problem, and [Lacour17]_ is
applied again to partition the space dominated by that Pareto frontier.
Approximate box decompositions are also supported using the algorithm from
[^Couckuyt2012]. See Appendix F.4 in [^qEHVI] for an analysis of approximate
vs. exact box decompositions with EHVI. These box decompositions (approximate
or exact) can also be used to efficiently compute hypervolumes.

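For intuition, in two dimensions the dominated hypervolume reduces to summing
rectangular strips after sorting the Pareto points — a degenerate case of the
box decompositions above. A pure-Python sketch (maximization with a reference
point; input points are assumed mutually non-dominated; this is not BoTorch's
implementation):

```python
def hypervolume_2d(pareto_pts, ref):
    """Hypervolume dominated by `pareto_pts` (maximization) above `ref`.

    Sweeps the points sorted by the first objective; along a 2-D Pareto
    front, the second objective then increases monotonically, so each point
    contributes one rectangular strip.
    """
    pts = sorted(pareto_pts, key=lambda p: p[0], reverse=True)
    hv, prev_y2 = 0.0, ref[1]
    for y1, y2 in pts:
        hv += (y1 - ref[0]) * (y2 - prev_y2)  # strip contributed by this point
        prev_y2 = y2
    return hv

# Front {(3, 1), (2, 2), (1, 3)} with reference (0, 0):
# strips are 3*1 + 2*1 + 1*1 = 6.
print(hypervolume_2d([(1.0, 3.0), (3.0, 1.0), (2.0, 2.0)], (0.0, 0.0)))  # 6.0
```
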
[^qNEHVI]: S. Daulton, M. Balandat, and E. Bakshy. Parallel Bayesian
Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement.
Advances in Neural Information Processing Systems 34, 2021.
[paper](https://arxiv.org/abs/2105.08195)

[^qEHVI]: S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume