
Conversation

@mcihadarslanoglu

Since the model is not put in eval mode, dropout stays active and the normalization layers use (and update) batch statistics during inference. Neither should happen during evaluation. As a result, the script produces different outputs on each run unless evaluation mode is enabled.
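A minimal sketch of the difference, using a toy PyTorch model as a stand-in for the CLAM model (the module and tensor names are illustrative):

```python
import torch
import torch.nn as nn

# Toy model standing in for the CLAM classifier (illustrative only).
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Dropout(p=0.25), nn.Linear(8, 2))
x = torch.randn(4, 16)

# In train mode (the default), dropout randomly zeroes activations,
# so two forward passes on the same input differ.
out_a, out_b = model(x), model(x)
print(torch.allclose(out_a, out_b))  # usually False

# eval mode disables dropout and makes normalization layers use their
# stored running statistics, so inference becomes deterministic.
model.eval()
with torch.no_grad():
    out_c, out_d = model(x), model(x)
print(torch.allclose(out_c, out_d))  # True
```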

@ricardopizarro

That is a great catch. However, there is value in applying dropout during inference so that predictive probabilities and uncertainty measures can be estimated: https://proceedings.mlr.press/v48/gal16.pdf
With Monte Carlo (MC) dropout, you can estimate the predictive probability, variance, entropy, and mutual information for a single prediction!

We are interested in using CLAM with dropout turned on during inference to allow for MC dropout. Is it simply a matter of removing the model.eval() line, or do we need to replace it with something else? And what if we want to do this when running inference on an entire slide rather than generating a heatmap? Do we again remove model.eval()? Thank you in advance for your feedback.
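One common pattern in PyTorch (not something CLAM provides out of the box, as far as I know) is to keep model.eval() so that normalization layers still use their running statistics, and then switch only the dropout modules back to train mode before running several stochastic forward passes. A minimal sketch, with a toy model standing in for the CLAM classifier and all names illustrative:

```python
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Put the model in eval mode, then switch only the dropout layers
    back to train mode so they keep sampling at inference time."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()


def mc_dropout_predict(model: nn.Module, features: torch.Tensor, n_samples: int = 30):
    """Run several stochastic forward passes and return the mean prediction
    and its per-class variance as a simple uncertainty measure."""
    enable_mc_dropout(model)
    with torch.no_grad():
        samples = torch.stack(
            [torch.softmax(model(features), dim=-1) for _ in range(n_samples)]
        )
    return samples.mean(dim=0), samples.var(dim=0)


# Toy stand-in for a CLAM-style classifier (illustrative only).
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Dropout(p=0.25), nn.Linear(8, 2))
mean_probs, var_probs = mc_dropout_predict(model, torch.randn(4, 16))
print(mean_probs.shape, var_probs.shape)
```

The same pattern should apply whether the forward pass is part of heatmap generation or whole-slide inference: rather than deleting the model.eval() call, re-enable just the dropout layers after it.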
