-
Hi, as I understand this topic, the soft concept (allowing floating-point values between 0 and 1) is meant for deep learning training. If the predictions were hard-thresholded to 0 or 1 before computing the loss, the gradient would be poorly behaved, and the loss would ignore the difference between, say, 0.01 and 0.4 (both would become 0). So for training, the soft loss makes sense, while the metric should be strict, since we need the exact Dice score of the current prediction.
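For example, here is a minimal sketch of that split, assuming a recent MONAI with DiceLoss/DiceMetric; the tensor shapes and variable names are illustrative, not taken from any specific tutorial:

import torch
from monai.losses import DiceLoss
from monai.metrics import DiceMetric

# Illustrative binary-segmentation tensors (batch, channel, H, W); shapes are made up.
logits = torch.randn(4, 1, 64, 64, requires_grad=True)   # raw network outputs
labels = torch.randint(0, 2, (4, 1, 64, 64)).float()     # ground-truth mask in {0, 1}

# Soft Dice loss for training: sigmoid is applied inside, probabilities stay continuous,
# so a prediction of 0.4 is penalised differently from 0.01 and gradients can flow.
loss_fn = DiceLoss(sigmoid=True)
loss = loss_fn(logits, labels)
loss.backward()  # would be part of the usual optimisation step

# Strict Dice metric for evaluation: threshold to {0, 1} first, then score.
binary_pred = (torch.sigmoid(logits) >= 0.5).float()
metric = DiceMetric(include_background=True, reduction="mean")
metric(y_pred=binary_pred, y=labels)
score = metric.aggregate().item()
metric.reset()

# Note: the loss is computed on soft probabilities and the metric on thresholded
# predictions, so in general loss + metric does not sum to 1.
print("soft Dice loss:", loss.item(), "hard Dice score:", score)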
-
From the tutorials, I can see that the pipeline is the following:
However, this confuses me.
I can see that in this case the Dice loss can be quite low, e.g. 0.1, while the Dice metric barely reaches 0.5.
Also, shouldn't Dice metric + Dice loss = 1?
I.e.:
loss = loss_function(outputs, labels)
is computed on softmax/sigmoid outputs and binary labels, while the Dice metric is computed on post-processed (binarized) outputs.
So, why not compute both the loss and the metric on binary outputs, as pointed out here: https://github.com/Project-MONAI/tutorials/blob/main/modules/dice_loss_metric_notes.ipynb ("By default the loss and metric will work on the binary case")?
Or compute both on soft outputs?
In general, why do all the tutorials (e.g. BraTS, Spleen) compute the Dice score and Dice loss this way (loss on softmax/sigmoid outputs, score on binarized outputs)?
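For reference, a minimal sketch of the pipeline being asked about (loss on softmax outputs, metric on binarized outputs), loosely following the Spleen-style tutorials; the shapes, class count, and post-processing below are illustrative assumptions, not the exact tutorial code:

import torch
import torch.nn.functional as F
from monai.losses import DiceLoss
from monai.metrics import DiceMetric

loss_function = DiceLoss(to_onehot_y=True, softmax=True)            # soft Dice on probabilities
dice_metric = DiceMetric(include_background=False, reduction="mean")

# Illustrative tensors: outputs are (B, C, H, W, D) logits, labels are (B, 1, H, W, D) integer masks.
outputs = torch.randn(1, 2, 32, 32, 32)
labels = torch.randint(0, 2, (1, 1, 32, 32, 32))

# Training: softmax is applied inside the loss, so it sees soft outputs.
loss = loss_function(outputs, labels)

# Validation: binarize first (argmax over channels, then one-hot), so the metric sees {0, 1}.
pred = torch.argmax(outputs, dim=1)                                   # (B, H, W, D) class indices
pred_onehot = F.one_hot(pred, num_classes=2).permute(0, 4, 1, 2, 3).float()
label_onehot = F.one_hot(labels.squeeze(1), num_classes=2).permute(0, 4, 1, 2, 3).float()
dice_metric(y_pred=pred_onehot, y=label_onehot)
score = dice_metric.aggregate().item()
dice_metric.reset()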