
Hi @radka-j

Yes, but I mean how to use the model in a different script at inference time. I think I found the answer:

import torch
from autoemulate.core.save import ModelSerialiser
from autoemulate.core.logging_config import get_configured_logger

# Load the model. Passing a logger from `get_configured_logger` is needed,
# otherwise exceptions are not surfaced properly.
model = ModelSerialiser(get_configured_logger("info")[0])._load_result("ae_best").model

# Test that it works.
print(model.predict(torch.tensor([0, 0, 0, 0, 0, 0])).mean)

I am not sure whether this is the best/intended usage, since the function name is prefixed with an underscore (_load_result), which usually marks it as private.

If I give ModelSe…
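
In case it helps anyone else, one way to contain the risk is a small wrapper that isolates the private call, so that if the internal API changes only one place needs fixing. This is just a sketch, reusing only the calls shown above:

import torch
from autoemulate.core.save import ModelSerialiser
from autoemulate.core.logging_config import get_configured_logger

def load_emulator(name):
    # Keep the private `_load_result` call in a single place, so a change
    # to autoemulate's internal API only needs to be fixed here.
    logger = get_configured_logger("info")[0]
    return ModelSerialiser(logger)._load_result(name).model

emulator = load_emulator("ae_best")
print(emulator.predict(torch.tensor([0, 0, 0, 0, 0, 0])).mean)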
