❓ How can I test the model and obtain its evaluation metrics after each epoch #2818
Your Question

When I was training the model, I found that testing was only conducted after all epochs had ended. I need to obtain the evaluation metrics of the model on the test set after each epoch, but I couldn't find the relevant configuration parameters in the Engine.
Replies: 2 comments
Thanks for submitting this issue! It has been added to our triage queue. A maintainer will review it shortly.
Okay, this is a little complicated because there are some models that do not train over multiple validation epochs. For example, Patchcore only does one training epoch. But for those that do, say Fastflow, we can set up an `Evaluator` and log the results with a logger like `AnomalibTensorBoardLogger`:

```python
from anomalib.data import MVTecAD
from anomalib.engine import Engine
from anomalib.models import Fastflow
from anomalib.metrics import Evaluator, AUROC
from anomalib.loggers import AnomalibTensorBoardLogger

# Image-level AUROC computed on the validation set at the end of every epoch
auroc = AUROC(fields=["pred_score", "gt_label"])
evaluator = Evaluator(val_metrics=[auroc])

model = Fastflow(evaluator=evaluator)
datamodule = MVTecAD()

engine = Engine(
    logger=AnomalibTensorBoardLogger(
        save_dir="logs",
        name="fastflow",
    ),
)

engine.fit(
    model=model,
    datamodule=datamodule,
)
```

Then you can visualise them in your TensorBoard.

Obviously you may want more than just AUROC; see https://anomalib.readthedocs.io/en/latest/markdown/guides/how_to/evaluation/evaluator.html