Replies: 1 comment
Hi @cugwu, you can use
Hi, I've implemented the tutorial https://github.com/Project-MONAI/tutorials/blob/main/modules/cross_validation_models_ensemble.ipynb
on the ensemble evaluator. Everything works properly and I get the following output:
INFO:ignite.engine.engine.EnsembleEvaluator:Engine run resuming from iteration 0, epoch 0 until 1 epochs
INFO:ignite.engine.engine.EnsembleEvaluator:Got new best metric of test_mean_dice: 0.9344435930252075
INFO:ignite.engine.engine.EnsembleEvaluator:Epoch[1] Complete. Time taken: 00:00:15
INFO:ignite.engine.engine.EnsembleEvaluator:Engine run complete. Time taken: 00:00:16
How can I also get the test accuracy for each segmentation class?
I know from https://docs.monai.io/en/stable/metrics.html#metric that the Mean Dice score metric should output "Dice scores per batch and per class". How can I get this extra information using the ensemble evaluator?
My setup is:
from monai.engines import EnsembleEvaluator
from monai.handlers import MeanDice, from_engine

def ensemble_evaluate(post_transforms, models, device, loader, inferer):
    evaluator = EnsembleEvaluator(
        device=device,
        val_data_loader=loader,
        pred_keys=["pred0", "pred1", "pred2", "pred3", "pred4"],
        networks=models,
        inferer=inferer,
        postprocessing=post_transforms,
        key_val_metric={
            "test_mean_dice": MeanDice(
                include_background=True,
                # reduction=MetricReduction.MEAN,
                output_transform=from_engine(["pred", "label"]),
            ),
        },
    )
    evaluator.run()
Thanks in advance!