Replies: 1 comment
-
Hi @anais2390, thanks for your feedback.
-
Hi everyone!
I am currently working on a 2D binary segmentation problem, and I am using the Dice coefficient to evaluate performance during training. To do so I use `MeanDice()` combined with a `SupervisedEvaluator`:

```python
val_metric = {'mean_dice': MeanDice()}
evaluator = create_supervised_evaluator(
    net, val_metric, device, non_blocking=True,
    output_transform=lambda x, y, y_pred: (
        [postproc_pred(i) for i in decollate_batch(y_pred)],
        [postproc_label(i) for i in decollate_batch(y)]),
    prepare_batch=prepare_batch)
```
As a double check, I also added the `DiceLoss` class to my evaluator:

```python
val_metric = {'diceloss': 1 - Loss(DiceLoss())}
```
However, the results returned by the two metrics are very different, for example:

```
INFO:trainer:Epoch[1]: mean_dice: 0.492 diceloss: 0.176
```

I suspect this difference comes from the reduction step, but I can't figure out why.
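To illustrate what I mean, here is a minimal NumPy sketch (hypothetical values, not my actual data) of how a Dice score computed on thresholded, post-processed predictions can differ from a soft Dice computed on the raw probabilities, as `DiceLoss` does:

```python
import numpy as np

def hard_dice(pred, target, thresh=0.5):
    # Binarize the prediction first, as a post-processing transform would.
    p = (pred >= thresh).astype(float)
    inter = (p * target).sum()
    return 2 * inter / (p.sum() + target.sum())

def soft_dice(pred, target, smooth=1e-5):
    # Use the raw probabilities directly (DiceLoss returns 1 - soft dice).
    inter = (pred * target).sum()
    return (2 * inter + smooth) / (pred.sum() + target.sum() + smooth)

pred = np.array([0.6, 0.7, 0.2, 0.9])    # soft network outputs
target = np.array([1.0, 1.0, 0.0, 1.0])  # binary ground truth

print(hard_dice(pred, target))  # 1.0 after thresholding at 0.5
print(soft_dice(pred, target))  # ~0.815 on the soft values
```

So even with identical reduction, the two values need not agree, since one sees discretized masks and the other sees probabilities plus a smoothing term.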
Could you give me some insight?
Thanks!