Replies: 1 comment 1 reply
-
Yeah, you would need to somehow tell the optimizer that the higher fidelity is more reliable. Would MOMF suit your use case?
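To make "the higher fidelity is more reliable" concrete, here is a toy, self-contained sketch (not BoTorch API; all names hypothetical): if each observation carries a fidelity-dependent noise variance, a precision-weighted combination automatically down-weights the noisy low-fidelity data, which is the kind of information a multi-fidelity model needs to prefer fidelity 1.

```python
def precision_weighted_mean(obs):
    """Combine (value, noise_variance) observations by inverse-variance weighting.

    Toy illustration only (not BoTorch API): the high-fidelity observation,
    with its smaller assumed noise variance, dominates the estimate.
    """
    num = sum(y / var for y, var in obs)
    den = sum(1.0 / var for _, var in obs)
    return num / den

# Low fidelity reports 0.0 with variance 1.0 (noisy);
# high fidelity reports 1.0 with variance 0.01 (nearly exact).
est = precision_weighted_mean([(0.0, 1.0), (1.0, 0.01)])
# est is close to the high-fidelity value, ~0.990
```

In BoTorch terms, a fidelity-aware GP (e.g. a `SingleTaskMultiFidelityGP`) plays this role implicitly; without some such mechanism the model has no reason to treat the two fidelities differently.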
-
Hi everybody,
I'm trying to build a BO loop for a problem with 15 input parameters, 2 objectives, and 2 constraints. I can evaluate a point in my design space at either a low (0) or high (1) fidelity (2 discrete levels; assume costs of 1 and 100 per evaluation). The constrained nature of my problem prevents me from using the "Multi-fidelity Bayesian optimization with discrete fidelities using KG" approach directly, as KG doesn't support constraints.
I would like to optimize qNEHVI per unit cost and compare against a single-fidelity BO that also uses qNEHVI, but I'm having trouble understanding how to tell the program that the high fidelity should be the target. This is my code for now. I initialize it with only low-fidelity data, and even if I set identical costs for both fidelities (or even a lower cost for the high fidelity!) the BO only ever evaluates at low fidelity. This suggests that the BO doesn't actually know what the target fidelity is.
Currently, my code is
and my per-unit acquisition function is
Should I add a cost-based utility to tell the optimizer to balance cost against information gain, and/or use project_to_target_fidelity as in qMultiFidelityKnowledgeGradient?
Any help is appreciated, many thanks :)
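A minimal sketch of the two ideas asked about above, as plain Python rather than real BoTorch code (all names here are hypothetical illustrations; in BoTorch the corresponding pieces are `project_to_target_fidelity` and `InverseCostWeightedUtility` with an `AffineFidelityCostModel`): score each candidate as if its information were obtained at the target fidelity, then divide by the cost of the fidelity actually being proposed.

```python
COSTS = {0: 1.0, 1: 100.0}   # assumed evaluation costs per fidelity level
TARGET_FIDELITY = 1          # the high fidelity is the one we ultimately care about

def project_to_target(x, fidelity_dim=-1):
    """Replace the fidelity feature of a candidate with the target fidelity."""
    x = list(x)
    x[fidelity_dim] = TARGET_FIDELITY
    return x

def cost_aware_acq(acq, x, fidelity):
    """Evaluate the acquisition at the target-fidelity projection of x,
    then divide by the cost of the fidelity being proposed."""
    return acq(project_to_target(x)) / COSTS[fidelity]

# Toy acquisition: first coordinate plus the fidelity feature (last position).
acq = lambda x: x[0] + x[-1]
x = [0.5, 0.0]  # candidate whose last entry is the fidelity feature

low = cost_aware_acq(acq, x, fidelity=0)   # 1.5 / 1   = 1.5
high = cost_aware_acq(acq, x, fidelity=1)  # 1.5 / 100 = 0.015
```

Note the projection alone does not make the optimizer prefer the high fidelity; in this toy both fidelities yield the same projected acquisition value, so the cheaper one always wins on a per-cost basis. What breaks the tie in a real multi-fidelity setup is the model: a high-fidelity evaluation must be modeled as more informative (e.g. lower noise, as in a multi-fidelity GP), so its acquisition value genuinely exceeds the low-fidelity one.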