
Commit b229bf8

esantorella authored and facebook-github-bot committed
Add line breaks to MOBO documentation (#2436)
Summary:

## Motivation

I want to update this documentation, but the changes would be hard to review due to very long line length. This PR:

* Adds line breaks that have no effect on how the website renders.

Pull Request resolved: #2436

Test Plan: Built the website locally. Screenshot:

<img width="854" alt="Screenshot 2024-07-21 at 10 40 27 AM" src="https://github.com/user-attachments/assets/98f2c927-0093-4ff6-94f9-d5280e7c858f">

Reviewed By: saitcakmak

Differential Revision: D60019275

Pulled By: esantorella

fbshipit-source-id: 82f3b245dc5a08608e2f390f2532631b3fd744c2
1 parent 53d8f88 commit b229bf8

File tree: 1 file changed (+66, −10 lines)


docs/multi_objective.md

Lines changed: 66 additions & 10 deletions
---
id: multi_objective
title: Multi-Objective Bayesian Optimization
---

BoTorch provides first-class support for Multi-Objective (MO) Bayesian Optimization (BO), including implementations of [`qNoisyExpectedHypervolumeImprovement`](../api/acquisition.html#botorch.acquisition.multi_objective.monte_carlo.qNoisyExpectedHypervolumeImprovement) (qNEHVI)[^qNEHVI], [`qExpectedHypervolumeImprovement`](../api/acquisition.html#botorch.acquisition.multi_objective.monte_carlo.qExpectedHypervolumeImprovement) (qEHVI), qParEGO[^qEHVI], qNParEGO[^qNEHVI], and analytic [`ExpectedHypervolumeImprovement`](../api/acquisition.html#botorch.acquisition.multi_objective.analytic.ExpectedHypervolumeImprovement) (EHVI) acquisition functions, with gradients via auto-differentiation[^qEHVI].

The goal in MOBO is to learn the *Pareto front*: the set of optimal trade-offs, where improving one objective means deteriorating another. BoTorch provides implementations of a number of acquisition functions specifically for the multi-objective scenario, as well as generic interfaces for implementing new multi-objective acquisition functions.
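To make the Pareto front concrete, here is a minimal pure-Python sketch that finds the non-dominated points of a finite set of observations under maximization; the function names are illustrative, not BoTorch's (BoTorch's batched, tensor-based equivalent is `is_non_dominated`):

```python
# Illustrative sketch: identify non-dominated points among a finite set of
# observed objective vectors, assuming all objectives are maximized.

def dominates(a, b):
    """True if `a` dominates `b`: a >= b in every objective and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two objectives in conflict: along the front, improving one objective
# necessarily worsens the other.
observations = [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0), (1.5, 2.5), (0.5, 0.5)]
front = pareto_front(observations)  # the first three points; the rest are dominated
```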
## Multi-Objective Acquisition Functions
MOBO leverages many advantages of BoTorch to provide practical algorithms for computationally intensive and analytically intractable problems. For example, analytic EHVI has no known analytical gradient when there are more than two objectives, but BoTorch computes analytic gradients for free via auto-differentiation, regardless of the number of objectives[^qEHVI].

For analytic and MC-based MOBO acquisition functions such as qNEHVI, qEHVI, and qParEGO, BoTorch leverages GPU acceleration and quasi-second-order methods for acquisition optimization, enabling efficient computation and optimization in many practical scenarios[^qNEHVI][^qEHVI]. The MC-based acquisition functions support using the sample average approximation for rapid convergence[^BoTorch].
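The sample average approximation (SAA) idea is that drawing the Monte Carlo base samples once and reusing them makes the MC acquisition value a deterministic function of the candidate, which deterministic quasi-second-order optimizers can then handle. A toy sketch under assumed names and a made-up one-dimensional "posterior" (illustrative only, not BoTorch's implementation):

```python
import random

# Sample average approximation (SAA): draw base samples once and reuse them,
# so the MC objective below is a deterministic function of x and could be
# handed to a deterministic (e.g. quasi-Newton) optimizer.
random.seed(0)
BASE_SAMPLES = [random.gauss(0.0, 1.0) for _ in range(1024)]  # fixed z ~ N(0, 1)

def mc_objective(x, best_f=0.0):
    """MC estimate of E[max(mean(x) + sigma * z - best_f, 0)] under fixed base samples."""
    mean, sigma = -x * x, 0.5  # toy posterior: mean -x^2, constant stddev
    return sum(max(mean + sigma * z - best_f, 0.0) for z in BASE_SAMPLES) / len(BASE_SAMPLES)

# Because the base samples are fixed, repeated evaluations at the same x
# agree exactly, unlike a fresh-resampling MC estimate.
```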

All analytic MO acquisition functions derive from [`MultiObjectiveAnalyticAcquisitionFunction`](../api/acquisition.html#botorch.acquisition.multi_objective.analytic.MultiObjectiveAnalyticAcquisitionFunction) and all MC-based acquisition functions derive from [`MultiObjectiveMCAcquisitionFunction`](../api/acquisition.html#botorch.acquisition.multi_objective.monte_carlo.MultiObjectiveMCAcquisitionFunction). These abstract classes easily integrate with BoTorch's standard optimization machinery.

Additionally, qParEGO and qNParEGO are trivially implemented using an augmented Chebyshev scalarization as the objective with the [`qExpectedImprovement`](../api/acquisition.html#qexpectedimprovement) acquisition function or the [`qNoisyExpectedImprovement`](../api/acquisition.html#qnoisyexpectedimprovement) acquisition function, respectively. BoTorch provides a [`get_chebyshev_scalarization`](../api/utils.html#botorch.utils.multi_objective.scalarization.get_chebyshev_scalarization) convenience function for generating these scalarizations. In the batch setting, qParEGO and qNParEGO both use a new random scalarization for each candidate[^qEHVI]. Candidates are selected in a sequential greedy fashion, each with a different scalarization, via the [`optimize_acqf_list`](../api/optim.html#botorch.optim.optimize.optimize_acqf_list) function.
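An augmented Chebyshev scalarization collapses a vector of objectives into a single scalar. A simplified sketch for maximization, with the function name and the omission of normalization being our own simplifications (BoTorch's `get_chebyshev_scalarization` first normalizes objectives to the unit cube):

```python
# Simplified sketch of an augmented Chebyshev scalarization for maximization:
#   s(y) = min_i(w_i * y_i) + alpha * sum_i(w_i * y_i)
# The min term drives the worst weighted objective up; the small alpha term
# breaks ties in favor of points that are better overall.

def chebyshev_scalarization(y, weights, alpha=0.05):
    terms = [w * yi for w, yi in zip(weights, y)]
    return min(terms) + alpha * sum(terms)

# Drawing a new random weight vector per candidate (as qParEGO does) turns one
# multi-objective problem into several single-objective subproblems.
value = chebyshev_scalarization([2.0, 4.0], weights=[0.5, 0.5])  # min(1, 2) + 0.05 * 3 = 1.15
```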

For a more in-depth example using these acquisition functions, check out the [Multi-Objective Bayesian Optimization tutorial notebook](../tutorials/multi_objective_bo).

## Multi-Objective Utilities

BoTorch provides several utility functions for evaluating performance in MOBO, including a method for computing the Pareto front, [`is_non_dominated`](../api/utils.html#botorch.utils.multi_objective.pareto.is_non_dominated), and box decomposition algorithms for efficiently partitioning the space dominated ([`DominatedPartitioning`](../api/utils.html#botorch.utils.multi_objective.box_decompositions.dominated.DominatedPartitioning)) or non-dominated ([`NonDominatedPartitioning`](../api/utils.html#botorch.utils.multi_objective.box_decompositions.non_dominated.NonDominatedPartitioning)) by the Pareto frontier into axis-aligned hyperrectangular boxes. For exact box decompositions, BoTorch uses a two-step approach similar to that in [^Yang2019], where (1) Algorithm 1 from [Lacour17] is used to find the local lower bounds for the maximization problem and (2) the local lower bounds are used as the Pareto frontier for the minimization problem, and [Lacour17] is applied again to partition the space dominated by that Pareto frontier. Approximate box decompositions are also supported using the algorithm from [^Couckuyt2012]. See Appendix F.4 in [^qEHVI] for an analysis of approximate vs. exact box decompositions with EHVI. These box decompositions (approximate or exact) can also be used to efficiently compute hypervolumes.
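In two objectives the box-decomposition idea is easy to see: the region dominated by a Pareto front is exactly a union of disjoint axis-aligned rectangles, whose areas sum to the hypervolume. A small pure-Python sketch for maximization (illustrative only; BoTorch's partitioning classes handle the general, batched case):

```python
def hypervolume_2d(front, ref_point):
    """Hypervolume dominated by a 2-objective Pareto front (maximization).

    Sorting by the first objective makes the dominated region a union of
    disjoint axis-aligned rectangles (the boxes an exact box decomposition
    would produce), whose areas are summed. Assumes `front` is non-dominated
    and every point dominates `ref_point`.
    """
    rx, ry = ref_point
    total, prev_x = 0.0, rx
    for x, y in sorted(front):  # ascending in objective 1, hence descending in objective 2
        total += (x - prev_x) * (y - ry)
        prev_x = x
    return total

# Front {(1, 4), (2, 3), (3, 1)} with reference point (0, 0):
# boxes of area 1*4 + 1*3 + 1*1 = 8.
```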

[^qNEHVI]: S. Daulton, M. Balandat, and E. Bakshy. Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement. Advances in Neural Information Processing Systems 34, 2021. [paper](https://arxiv.org/abs/2105.08195)

[^qEHVI]: S. Daulton, M. Balandat, and E. Bakshy. Differentiable Expected Hypervolume
