It depends on which acquisition function you use. If you use the multi-fidelity hypervolume knowledge gradient (MF-HVKG) as described in the MF-HVKG tutorial, lowering the cost of the low-fidelity model will lead to evaluating more low-fidelity points. If you already have some high-fidelity points, I do expect that in the limit, as the cost of low-fidelity points falls toward zero, you would evaluate almost entirely low-fidelity points, but it's hard to say how this would play out in a realistic setting. As for your second question, I think following the MF-HVKG tutorial would be reasonable in this situation too; it would probably lead to generating a mix of low-fidelity and high-fidelity points. But I am not an expert on this kind of setup.
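For concreteness, here is a rough sketch of that setup, following the shape of the MF-HVKG tutorial (`qMultiFidelityHypervolumeKnowledgeGradient` with an inverse-cost-weighted utility). The data, fidelity column index, costs, and reference point below are all placeholders, and argument names can differ slightly across BoTorch versions:

```python
import torch
from botorch import fit_gpytorch_mll
from botorch.acquisition.cost_aware import InverseCostWeightedUtility
from botorch.acquisition.multi_objective.hypervolume_knowledge_gradient import (
    qMultiFidelityHypervolumeKnowledgeGradient,
)
from botorch.acquisition.utils import project_to_target_fidelity
from botorch.models.cost import AffineFidelityCostModel
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from botorch.models.model_list_gp_regression import ModelListGP
from gpytorch.mlls import SumMarginalLogLikelihood

# Toy training data: 3 design dims plus a fidelity indicator in the last column.
train_X = torch.rand(20, 4, dtype=torch.double)
train_X[..., -1] = torch.randint(0, 2, (20,)).double()  # 0 = low, 1 = high fidelity
train_Y = torch.randn(20, 2, dtype=torch.double)  # two objectives

# One multi-fidelity GP per objective, wrapped in a ModelListGP.
model = ModelListGP(
    *(
        SingleTaskMultiFidelityGP(train_X, train_Y[..., i : i + 1], data_fidelities=[3])
        for i in range(train_Y.shape[-1])
    )
)
fit_gpytorch_mll(SumMarginalLogLikelihood(model.likelihood, model))

target_fidelities = {3: 1.0}  # hypervolume is measured at the highest fidelity

# Nearly free low-fidelity evaluations: cost 0.01 at fidelity 0, 10.0 at fidelity 1.
cost_model = AffineFidelityCostModel(fidelity_weights={3: 9.99}, fixed_cost=0.01)
cost_aware_utility = InverseCostWeightedUtility(cost_model=cost_model)

acq_func = qMultiFidelityHypervolumeKnowledgeGradient(
    model=model,
    ref_point=torch.tensor([-2.0, -2.0], dtype=torch.double),  # placeholder
    target_fidelities=target_fidelities,
    num_fantasies=8,
    cost_aware_utility=cost_aware_utility,
    project=lambda X: project_to_target_fidelity(
        X=X, target_fidelities=target_fidelities
    ),
)
```

Because the acquisition value is the fantasized hypervolume gain weighted by the inverse evaluation cost, driving `fixed_cost` toward zero makes low-fidelity candidates look ever cheaper per unit of gain, which is why you would expect the candidate optimization (e.g. with `optimize_acqf_mixed` over the discrete fidelity column) to select mostly low-fidelity points.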
Hello everyone,
I'm working on multi-objective, multi-fidelity methods, and I have some questions about the use of low-fidelity simulations, particularly when the low-fidelity model is derived from a machine learning model:
1. Suppose we have n high-fidelity points from past experiments, and the cost of the low-fidelity model is almost zero. How does this influence the choice of the next high-fidelity point? Do we run low-fidelity simulations until the low-fidelity EI has been evaluated "everywhere"?
2. The low-fidelity model could be a machine learning model or an analytical model that approximates the objective over a larger search space than the one we want to solve, and our problem lies in a part of that space that has not been well explored. Which modeling strategy is better: (a) simply treat this model as a low-fidelity simulation and add the generated high-fidelity points to its training data (as sketched below), or (b) use a single-fidelity strategy but replace the Gaussian process model with this model?
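For concreteness, here is roughly how I picture option (a): label the ML model's predictions as fidelity 0 and the real experiments as fidelity 1, then fit a multi-fidelity GP on the union. This is only a sketch; `ml_model`, the shapes, and the data are placeholders, and argument names may vary with the BoTorch version:

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from gpytorch.mlls import ExactMarginalLogLikelihood


def ml_model(X: torch.Tensor) -> torch.Tensor:
    """Placeholder for the cheap pretrained approximation of one objective."""
    return X.sum(dim=-1, keepdim=True)


# n real high-fidelity experiments (design dimension d = 3).
hf_X = torch.rand(10, 3, dtype=torch.double)
hf_Y = torch.randn(10, 1, dtype=torch.double)

# Many nearly free low-fidelity "evaluations" from the ML model.
lf_X = torch.rand(200, 3, dtype=torch.double)
lf_Y = ml_model(lf_X)

# Append the fidelity indicator as the last input column: 0 = ML model, 1 = real.
train_X = torch.cat(
    [
        torch.cat([lf_X, torch.zeros(200, 1, dtype=torch.double)], dim=-1),
        torch.cat([hf_X, torch.ones(10, 1, dtype=torch.double)], dim=-1),
    ]
)
train_Y = torch.cat([lf_Y, hf_Y])

gp = SingleTaskMultiFidelityGP(train_X, train_Y, data_fidelities=[3])
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
```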
Thanks for your help, and for this incredible library, BoTorch.