Commit e4ad21c

committed
Line edits2
1 parent 1bc75ba commit e4ad21c

2 files changed: +2 −2 lines changed

learn-pr/azure/optimize-model-performance-roc-auc/includes/2-receiver-operator-characteristic-curve.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ Classification models must assign a sample to a category. For example, it must u
 
 We can improve classification models in many ways. For example, we can ensure our data are balanced, clean, and scaled. We can also alter our model architecture and use hyperparameters to squeeze as much performance as we possibly can out of our data and architecture. Eventually, we find no better way to improve performance on our test (or hold-out) set and declare our model ready.
 
-Model tuning to this point can be complex, but we can use a final simple step to further improve how well our model works. To understand this, though, we need to go back to basics.
+Model tuning to this point can be complex, but we can use a final step to further improve how well our model works. To understand this, though, we need to go back to basics.
 
 ## Probabilities and categories
 
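To make the hunk's "back to basics" point concrete: the basic step is the decision threshold that turns a model's predicted probabilities into categories. A minimal sketch with made-up probability values (illustrative only, not part of the lesson's files):

```python
# Hedged sketch: a decision threshold converts predicted probabilities
# into class labels. The probabilities below are invented example values.
probabilities = [0.1, 0.45, 0.5, 0.72, 0.9]  # example model outputs
threshold = 0.5                               # a common default choice

# Samples at or above the threshold are assigned to the positive class.
labels = [1 if p >= threshold else 0 for p in probabilities]
print(labels)  # [0, 0, 1, 1, 1]
```

Moving the threshold up or down reassigns borderline samples, which is the lever the later units tune.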

learn-pr/azure/optimize-model-performance-roc-auc/includes/4-compare-optimize-curves.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Usually there isn't a single threshold that gives both the best true positive ra
 
 ## Comparing models with AUC
 
-You can use ROC curves to compare models to each other, just like you can with cost functions. An ROC curve for a model shows how well it will work for a variety of decision thresholds. At the end of the day, what's most important in a model is how it will perform in the real world, where there's only one decision threshold. Why then would we want to compare models using thresholds we'll never use? There are two answers for this.
+You can use ROC curves to compare models to each other just like you can with cost functions. An ROC curve for a model shows how well it will work for a variety of decision thresholds. At the end of the day, what's most important in a model is how it will perform in the real world, where there's only one decision threshold. Why then would we want to compare models using thresholds we'll never use? There are two answers for this.
 
 Firstly, comparing ROC curves in particular ways is like performing a statistical test that tells us not just that one model did better on this particular test set, but whether it's likely to continue to perform better in the future. This is out of the scope of this learning material, but it's worth keeping in mind.
 
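The edited paragraph compares models across many decision thresholds at once. As an illustrative sketch of the underlying mechanics (not part of the lesson's files), an ROC curve and its area under the curve can be computed from a model's scores like this:

```python
# Hedged sketch: build an ROC curve by sweeping the decision threshold
# down through the sorted scores, then integrate it with the trapezoid
# rule to get the AUC. Labels/scores below are invented example values.
def roc_curve(labels, scores):
    """Return (fpr, tpr) points, one per candidate decision threshold."""
    # Sorting by score descending means each step admits one more sample
    # to the positive class, i.e. lowers the threshold one notch.
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    fpr, tpr = [0.0], [0.0]
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        tpr.append(tp / pos)
        fpr.append(fp / neg)
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve via the trapezoid rule."""
    return sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
               for i in range(len(fpr) - 1))

labels = [1, 1, 0, 1, 0, 0]              # example ground truth
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]  # example model probabilities
fpr, tpr = roc_curve(labels, scores)
print(auc(fpr, tpr))  # ≈ 0.89 for these example values
```

A perfect ranking gives AUC 1.0 and random scoring gives about 0.5, which is why a single AUC number lets two models be compared without committing to one threshold.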
