Commit c1c2b9c

Merge branch 'confusion-precision-recall' of github.com:UBC-DSCI/introduction-to-datascience into confusion-precision-recall
2 parents 8775121 + d09b578 commit c1c2b9c

File tree

1 file changed: +6 / -1 lines changed


source/classification2.Rmd

Lines changed: 6 additions & 1 deletion
@@ -160,7 +160,12 @@ classifier can make, corresponding to the four entries in the confusion matrix:
 - **True Negative:** A benign observation that was classified as benign (bottom right in Table \@ref(tab:confusion-matrix)).
 - **False Negative:** A malignant observation that was classified as benign (bottom left in Table \@ref(tab:confusion-matrix)).

-And then rather than just looking at accuracy, we can also evaluate the **precision** and **recall** of the classifier.
+A perfect classifier would have zero False Negatives and False Positives.
+However,
+in practice there will always be some error in the model,
+so it is important to think about what types of errors we think are more harmful,
+which we discuss in detail in Section \@ref(critically-analyze-performance).
+Two commonly used metrics that capture part of what the confusion matrix is telling us are the **precision** and **recall** of the classifier, which are often reported together with accuracy.
 *Precision* quantifies how many of the positive predictions the classifier made were actually positive. Intuitively,
 we would like a classifier to have a *high* precision: for a classifier with high precision, if the
 classifier reports that a new observation is positive, we can trust that that
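
A minimal sketch of how precision and recall follow from the four confusion matrix entries described in the diff above, using hypothetical counts (the numbers below are illustrative, not from the textbook's data):

```r
# Hypothetical confusion matrix counts for a malignant/benign classifier
true_positives  <- 90  # malignant observations classified as malignant
false_positives <- 10  # benign observations classified as malignant
false_negatives <- 5   # malignant observations classified as benign
true_negatives  <- 95  # benign observations classified as benign

# Precision: of the observations predicted positive, how many truly are positive?
precision <- true_positives / (true_positives + false_positives)

# Recall: of the truly positive observations, how many were predicted positive?
recall <- true_positives / (true_positives + false_negatives)

precision  # 0.9
recall     # about 0.95
```

With these counts the classifier's precision and recall differ, even though both are derived from the same confusion matrix, which is why they are typically reported together with accuracy.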
