Multi-Task Normalization problem #2387
Hi, I am trying to do a multi-fidelity optimization using the MultiTaskGP, but I run into issues when normalizing the training data. My training data (x and y) consists of exact observations from the multi-fidelity Forrester function defined in "Analytical Benchmark Problems for Multifidelity Optimization Methods". When I use the input transform Normalize and the outcome transform Standardize, the model fit is not correct (plot not shown), but when I omit the input and outcome transforms I get a much better fit (plot not shown). Why is the result so different? Does it have something to do with the strong priors? Thanks to anyone reading my post! I'm grateful for any help.
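Since the original snippets did not survive, here is a minimal sketch of the setup as I understand it. The Forrester definitions (the standard pair from Forrester et al., 2007), the sample counts, and the task encoding are assumptions; the point is the configuration in question, where `Normalize(d=2)` takes both input columns into account, including the task-index column:

```python
import torch
from botorch.models import MultiTaskGP
from botorch.models.transforms import Normalize, Standardize
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

# Forrester high-fidelity function and the standard low-fidelity variant:
# f_lo(x) = 0.5 * f_hi(x) + 10 * (x - 0.5) - 5.
def forrester_high(x):
    return (6.0 * x - 2.0) ** 2 * torch.sin(12.0 * x - 4.0)

def forrester_low(x):
    return 0.5 * forrester_high(x) + 10.0 * (x - 0.5) - 5.0

# Exact observations: a few high-fidelity points (task 0) and more
# low-fidelity points (task 1); the task index is the last input column.
x_hi = torch.tensor([[0.1], [0.3], [0.55], [0.8], [0.95]], dtype=torch.float64)
x_lo = torch.linspace(0.0, 1.0, 11, dtype=torch.float64).unsqueeze(-1)
train_X = torch.cat([
    torch.cat([x_hi, torch.zeros_like(x_hi)], dim=-1),
    torch.cat([x_lo, torch.ones_like(x_lo)], dim=-1),
])
train_Y = torch.cat([forrester_high(x_hi), forrester_low(x_lo)])

# The configuration described above: Normalize(d=2) rescales *all* input
# columns, which here includes the task-index column.
model = MultiTaskGP(
    train_X,
    train_Y,
    task_feature=-1,
    input_transform=Normalize(d=2),
    outcome_transform=Standardize(m=1),
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)
```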
Replies: 1 comment · 2 replies
I don't think you should be normalizing the task values; this will raise an exception in newer versions of BoTorch. When I tried to run this, it errored out at `pred2 = model.posterior(eval_x2)`. I'm not an expert on multi-task models, so I'll see if I can get back to you on that.
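If it helps, one way to keep the transforms while leaving the task values alone is Normalize's `indices` argument, which restricts normalization to the listed input columns. A minimal sketch, reusing the `train_X`/`train_Y` from the reconstruction above and assuming the task index is the last column:

```python
from botorch.models import MultiTaskGP
from botorch.models.transforms import Normalize, Standardize
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

# Normalize only column 0 (the design variable x); the task-index column
# is left untouched, so the model still sees the raw task values 0 and 1.
model = MultiTaskGP(
    train_X,
    train_Y,
    task_feature=-1,
    input_transform=Normalize(d=2, indices=[0]),
    outcome_transform=Standardize(m=1),
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)
```

Standardizing the outcomes with `Standardize(m=1)` should be unproblematic either way, since it only rescales the y values.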

