
Commit 5fbbf0e

Balandat authored and facebook-github-bot committed
Update polytope sampling code and add thinning capability (#2358)
Summary: This set of changes does the following:

* Adds an `n_thinning` argument to `sample_polytope` and `HitAndRunPolytopeSampler`, and changes the defaults for the `HitAndRunPolytopeSampler` args to `n_burnin=200` and `n_thinning=20`.
* Changes `HitAndRunPolytopeSampler` to take the `seed` arg in its constructor and removes the arg from the `draw()` method (the method on the base class is adjusted accordingly). As a result, if a `HitAndRunPolytopeSampler` is instantiated with the same args and seed, the sequence of `draw()`s will be deterministic. `DelaunayPolytopeSampler` is stateless and so retains its existing behavior.
* Normalizes the (inequality and equality) constraints in `HitAndRunPolytopeSampler` to avoid the same issue as #1225. If `bounds` are not provided, emits a warning that this normalization cannot be performed (doing it would require vertex enumeration of the constraint polytope, which is NP-hard and too costly).
* Introduces `normalize_dense_linear_constraints` to normalize constraints given in dense format to the unit cube.
* Removes `normalize_linear_constraint`; `normalize_sparse_linear_constraints` is to be used instead.
* Simplifies some of the testing code.

Note: This change is in preparation for fixing facebook/Ax#2373

Pull Request resolved: #2358

Test Plan: Ran a stress test to make sure this doesn't cause flaky tests: https://www.internalfb.com/intern/testinfra/testconsole/testrun/3940649908470083/

Reviewed By: saitcakmak

Differential Revision: D58068753

Pulled By: Balandat

fbshipit-source-id: 9a75c547a3493e393cd7e724edd984318b76e1f4
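To make the behavioral change concrete, here is a minimal usage sketch of the post-change sampler. It assumes `HitAndRunPolytopeSampler` is imported from `botorch.utils.sampling` and that dense inequality constraints are passed as a tuple `(A, b)` encoding `A @ x <= b`; the constraint values, bounds, and seed below are made up for illustration.

```python
import torch
from botorch.utils.sampling import HitAndRunPolytopeSampler

# Hypothetical dense inequality constraint A @ x <= b: here x_0 + x_1 <= 1.5.
A = torch.tensor([[1.0, 1.0]])
b = torch.tensor([[1.5]])
# Supplying bounds allows the constraints to be normalized to the unit cube;
# per the summary, omitting them emits a warning instead.
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# Post-change API: `seed` is a constructor arg, and `n_burnin=200` /
# `n_thinning=20` are the new defaults (spelled out here for clarity).
sampler = HitAndRunPolytopeSampler(
    inequality_constraints=(A, b),
    bounds=bounds,
    n_burnin=200,
    n_thinning=20,
    seed=1234,
)
X = sampler.draw(n=5)  # `draw()` no longer takes a `seed` argument

# A second sampler built with identical args and seed yields the same draws.
sampler2 = HitAndRunPolytopeSampler(
    inequality_constraints=(A, b),
    bounds=bounds,
    n_burnin=200,
    n_thinning=20,
    seed=1234,
)
assert torch.equal(X, sampler2.draw(n=5))
```

With these defaults, the chain discards the first 200 steps as burn-in and then keeps every 20th step, so consecutive returned samples are less correlated.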
1 parent e618743 · commit 5fbbf0e

File tree

4 files changed: +286 -182 lines changed


botorch/optim/initializers.py

Lines changed: 10 additions & 9 deletions
@@ -180,7 +180,7 @@ def sample_q_batches_from_polytope(
     q: int,
     bounds: Tensor,
     n_burnin: int,
-    thinning: int,
+    n_thinning: int,
     seed: int,
     inequality_constraints: Optional[List[Tuple[Tensor, Tensor, float]]] = None,
     equality_constraints: Optional[List[Tuple[Tensor, Tensor, float]]] = None,
@@ -192,8 +192,8 @@ def sample_q_batches_from_polytope(
         q: Number of samples per q-batch
         bounds: A `2 x d` tensor of lower and upper bounds for each column of `X`.
         n_burnin: The number of burn-in samples for the Markov chain sampler.
-        thinning: The amount of thinning (number of steps to take between
-            returning samples).
+        n_thinning: The amount of thinning. The sampler will return every
+            `n_thinning` sample (after burn-in).
         seed: The random seed.
         inequality_constraints: A list of tuples (indices, coefficients, rhs),
             with each tuple encoding an inequality constraint of the form
@@ -225,7 +225,7 @@ def sample_q_batches_from_polytope(
             ),
             seed=seed,
             n_burnin=n_burnin,
-            thinning=thinning * q,
+            n_thinning=n_thinning * q,
         )
     else:
         samples = get_polytope_samples(
@@ -235,7 +235,7 @@ def sample_q_batches_from_polytope(
             equality_constraints=equality_constraints,
             seed=seed,
             n_burnin=n_burnin,
-            thinning=thinning,
+            n_thinning=n_thinning,
         )
     return samples.view(n, q, -1).cpu()
@@ -250,7 +250,7 @@ def gen_batch_initial_conditions(
     options: Optional[Dict[str, Union[bool, float, int]]] = None,
     inequality_constraints: Optional[List[Tuple[Tensor, Tensor, float]]] = None,
     equality_constraints: Optional[List[Tuple[Tensor, Tensor, float]]] = None,
-    generator: Optional[Callable[[int, int, int], Tensor]] = None,
+    generator: Optional[Callable[[int, int, Optional[int]], Tensor]] = None,
     fixed_X_fantasies: Optional[Tensor] = None,
 ) -> Tensor:
     r"""Generate a batch of initial conditions for random-restart optimziation.
@@ -283,8 +283,8 @@ def gen_batch_initial_conditions(
             with each tuple encoding an inequality constraint of the form
             `\sum_i (X[indices[i]] * coefficients[i]) = rhs`.
         generator: Callable for generating samples that are then further
-            processed. It receives `n`, `q` and `seed` as arguments and
-            returns a tensor of shape `n x q x d`.
+            processed. It receives `n`, `q` and `seed` as arguments
+            and returns a tensor of shape `n x q x d`.
         fixed_X_fantasies: A fixed set of fantasy points to concatenate to
             the `q` candidates being initialized along the `-2` dimension. The
             shape should be `num_pseudo_points x d`. E.g., this should be
@@ -343,6 +343,7 @@ def gen_batch_initial_conditions(
             f"Sample dimension q*d={effective_dim} exceeding Sobol max dimension "
             f"({SobolEngine.MAXDIM}). Using iid samples instead.",
             SamplingWarning,
+            stacklevel=3,
         )
 
     while factor < max_factor:
@@ -367,7 +368,7 @@ def gen_batch_initial_conditions(
                 q=q,
                 bounds=bounds,
                 n_burnin=options.get("n_burnin", 10000),
-                thinning=options.get("thinning", 32),
+                n_thinning=options.get("n_thinning", 32),
                 seed=seed,
                 equality_constraints=equality_constraints,
                 inequality_constraints=inequality_constraints,
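Since the diff widens the `generator` type in `gen_batch_initial_conditions` to `Callable[[int, int, Optional[int]], Tensor]`, a conforming callable now has to accept `seed=None`. A minimal sketch of such a callable follows; the name `uniform_generator`, the fixed dimension `d = 3`, and the uniform distribution are illustrative assumptions, not part of the commit.

```python
from typing import Optional

import torch
from torch import Tensor


def uniform_generator(n: int, q: int, seed: Optional[int]) -> Tensor:
    """Toy generator matching the updated signature: receives `n`, `q`, and
    an optional `seed`, and returns an `n x q x d` tensor."""
    d = 3  # hypothetical input dimension, for illustration only
    rng = torch.Generator()
    if seed is not None:  # `seed` may now be None, so guard the seeding
        rng.manual_seed(seed)
    # Uniform samples on the unit cube; a real generator would respect bounds
    # and constraints before the samples are further processed.
    return torch.rand(n, q, d, generator=rng)
```

Such a callable could then be passed as `gen_batch_initial_conditions(..., generator=uniform_generator)`.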
