
Commit e9dd467

add new sketch
1 parent d4e1742 commit e9dd467

File tree: 1 file changed, +6 -1 lines changed


notebooks/19_machine_learning_techniques.ipynb

Lines changed: 6 additions & 1 deletion
@@ -52,7 +52,6 @@
 "The confusion matrix we used earlier will do much better here because this will also reveal that our toy model does not discover any of the fraudulent calls (which makes it completely useless for any application...).\n",
 "\n",
 "\n",
-"\n",
 "### Detailed Model Evaluation Metrics (classification)\n",
 "Here are a few of the most commonly used metrics to evaluate classification models.\n",
 "\n",
@@ -69,6 +68,12 @@
 "\n",
 "In the fraud detection example, a false negative would be more dangerous and costly than a false positive. A high number of FN means many fraudulent calls are not being detected. Conversely, while FP might cause some inconvenience (e.g., blocking legitimate calls), it is preferable over missing actual frauds.\n",
 "\n",
+"```{figure} ../images/fig_classification_metrics_sketch.png\n",
+":name: fig_classification_metrics\n",
+"\n",
+"The confusion matrix is a good way to quickly display and assess the true positives (TP) and true negatives (TN), as well as the false positives (FP) and false negatives (FN). Based on these numbers, standard metrics for evaluating a model are computed, such as accuracy, recall, specificity, and precision.\n",
+"```\n",
+"\n",
 "#### 2. Precision and Recall\n",
 "\n",
 "Precision and recall are metrics that provide more insight into the accuracy of positive predictions and the classifier's ability to recover all relevant instances, respectively.\n",
