---
title: Understand automated ML results
titleSuffix: Azure Machine Learning
description: Learn how to view and understand charts and metrics for each of your automated machine learning runs.
services: machine-learning
author: RachelKellam
ms.author: rakellam
ms.reviewer: sgilley
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.date: 12/05/2019
---

# Understand automated machine learning results

recall_score_macro|Recall is the percent of correctly labeled elements of a certain class. Macro is the arithmetic mean of recall for each class.|[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html)|average="macro"|
recall_score_micro|Recall is the percent of correctly labeled elements of a certain class. Micro is computed globally by counting the total true positives, false negatives and false positives|[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html)|average="micro"|
recall_score_weighted|Recall is the percent of correctly labeled elements of a certain class. Weighted is the arithmetic mean of recall for each class, weighted by number of true instances in each class.|[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html)|average="weighted"|
weighted_accuracy|Weighted accuracy is accuracy where the weight given to each example is equal to the proportion of true instances in that example's true class.|[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html)|sample_weight is a vector equal to the proportion of that class for each element in the target|
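
These averaging modes come from scikit-learn, as the calculation links show. As a quick illustration only (the labels below are made up, not taken from any automated ML run), here is how the macro, micro, and weighted variants of recall, and the weighted accuracy described above, could be computed:

```python
import numpy as np
from sklearn.metrics import recall_score, accuracy_score

# Hypothetical true and predicted labels for a three-class problem.
y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 0, 1, 2, 2, 2, 0, 2])

# Macro: unweighted mean of per-class recall; micro: global TP / (TP + FN);
# weighted: mean of per-class recall weighted by class support.
print(recall_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="micro"))
print(recall_score(y_true, y_pred, average="weighted"))

# weighted_accuracy: each sample is weighted by the proportion of its true class.
class_proportions = np.bincount(y_true) / len(y_true)
sample_weight = class_proportions[y_true]
print(accuracy_score(y_true, y_pred, sample_weight=sample_weight))
```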

<a name="confusion-matrix"></a>
### Confusion matrix
#### What is a confusion matrix?
A confusion matrix describes the performance of a classification model. Each row displays the instances of the true, or actual, class in your dataset, and each column represents the instances of the class that the model predicted.

#### What does automated ML do with the confusion matrix?
For classification problems, Azure Machine Learning automatically provides a confusion matrix for each model that is built. For each confusion matrix, automated ML shows the frequency of each predicted label (column) compared against the true label (row). The darker the color, the higher the count in that particular part of the matrix.

#### What does a good model look like?
The confusion matrix compares the actual values in the dataset against the values the model predicted. Because of this, a model has higher accuracy when most of its values fall along the diagonal, meaning the model predicted the correct value. If a model has class imbalance, the confusion matrix helps to detect a biased model.

##### Example 1: A classification model with poor accuracy
![A classification model with poor accuracy](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-confusion-matrix1.png)

##### Example 2: A classification model with high accuracy
![A classification model with high accuracy](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-confusion-matrix2.png)

##### Example 3: A classification model with high accuracy and high bias in model predictions
![A classification model with high accuracy and high bias](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-confusion-matrix3.png)
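
The chart itself is generated in the studio, but if you want to reproduce the underlying counts for your own predictions, a minimal scikit-learn sketch (with made-up labels) looks like this:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted labels; rows = true class, columns = predicted class.
y_true = ["cat", "cat", "dog", "dog", "dog", "bird"]
y_pred = ["cat", "dog", "dog", "dog", "cat", "bird"]

labels = ["bird", "cat", "dog"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
print(labels)
print(cm)  # counts on the diagonal are correct predictions
```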

<a name="precision-recall-chart"></a>
### Precision-recall chart
#### What is a precision-recall chart?
The precision-recall curve shows the relationship between precision and recall for a model. Precision represents the ability of a model to label all instances correctly. Recall represents the ability of a classifier to find all instances of a particular label.

#### What does automated ML do with the precision-recall chart?
With this chart, you can compare the precision-recall curves for each model to determine which model has an acceptable relationship between precision and recall for your particular business problem. This chart shows Macro Average Precision-Recall, Micro Average Precision-Recall, and the precision-recall associated with all classes for a model.

Macro-average computes the metric independently for each class and then takes the average, treating all classes equally. Micro-average aggregates the contributions of all the classes to compute the average. Micro-average is preferable if there is class imbalance present in the dataset.

#### What does a good model look like?
Depending on the goal of the business problem, the ideal precision-recall curve could differ. Some examples are given below.

##### Example 1: A classification model with low precision and low recall
![A classification model with low precision and low recall](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-precision-recall1.png)

##### Example 2: A classification model with ~100% precision and ~100% recall
![A classification model with ~100% precision and recall](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-precision-recall2.png)
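
The studio draws these curves for you; the sketch below only illustrates where the points come from, using made-up binary labels and scores. For the multi-class macro and micro averages shown in the chart, scikit-learn's `average_precision_score` also accepts `average="macro"` or `average="micro"` on binarized labels.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

# Hypothetical binary labels and predicted probabilities for the positive class.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.5])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(list(zip(recall, precision)))              # points on the precision-recall curve
print(average_precision_score(y_true, y_score))  # single-number summary of the curve
```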

<a name="roc"></a>
### ROC chart
#### What is a ROC chart?
Receiver operating characteristic (ROC) is a plot of the correctly classified labels vs. the incorrectly classified labels for a particular model. The ROC curve can be less informative when training models on datasets with high bias, as it will not show the false positive labels.

#### What does automated ML do with the ROC chart?
Automated ML generates the macro-average, micro-average, and per-class ROC curves for a model.

Macro-average computes the metric independently for each class and then takes the average, treating all classes equally. Micro-average aggregates the contributions of all the classes to compute the average. Micro-average is preferable if there is class imbalance present in the dataset.

#### What does a good model look like?
Ideally, the model will have a true positive rate closer to 100% and a false positive rate closer to 0%.

##### Example 1: A classification model with low true labels and high false labels
![A classification model with low true labels and high false labels](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-roc-1.png)

##### Example 2: A classification model with high true labels and low false labels
![A classification model with high true labels and low false labels](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-roc-2.png)

<a name="lift-curve"></a>
### Lift chart
#### What is a lift chart?
Lift charts are used to evaluate the performance of a classification model. A lift chart shows how much better you can expect to do with the generated model compared to without a model, in terms of accuracy.

#### What does automated ML do with the lift chart?
You can compare the lift of the model built automatically with Azure Machine Learning to the baseline in order to view the value gain of that particular model.

#### What does a good model look like?
##### Example 1: A classification model that does worse than a random selection model
![Experiment results graph](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-lift1.png)

##### Example 2: A classification model that performs better than a random selection model
![Experiment results graph](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-lift2c.png)

<a name="gains-curve"></a>
### Gains chart
#### What is a gains chart?
A gains chart evaluates the performance of a classification model by each portion of the data. It shows, for each percentile of the data set, how much better you can expect to perform compared against a random selection model.

#### What does automated ML do with the gains chart?
Use the cumulative gains chart to help you choose the classification cutoff, using a percentage that corresponds to a desired gain from the model. This information provides another way of looking at the results in the accompanying lift chart.

#### What does a good model look like?
##### Example 1: A classification model with minimal gain
![Experiment results graph](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-gains1.png)

##### Example 2: A classification model with significant gain
![Experiment results graph](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-gains2c.png)

<a name="calibration-plot"></a>
### Calibration chart
#### What is a calibration chart?
A calibration plot is used to display the confidence of a predictive model. It does this by showing the relationship between the predicted probability and the actual probability, where “probability” represents the likelihood that a particular instance belongs under some label.

#### What does automated ML do with the calibration chart?
For all classification problems, you can review the calibration line for micro-average, macro-average, and each class in a given predictive model.

Macro-average computes the metric independently for each class and then takes the average, treating all classes equally. Micro-average aggregates the contributions of all the classes to compute the average.

#### What does a good model look like?
A well-calibrated model aligns with the y=x line, where it is reasonably confident in its predictions. An over-confident model aligns with the y=0 line, where the predicted probability is present but there is no actual probability.

##### Example 1: A well-calibrated model
![A well-calibrated model graph](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-calib-curve.png)

root_mean_squared_log_error|Root mean squared log error is the square root of the expected squared logarithmic error|[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_log_error.html)|None|
normalized_root_mean_squared_log_error|Normalized root mean squared log error is root mean squared log error divided by the range of the data|[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_log_error.html)|Divide by range of the data|

### <a name="pvt"></a> Predicted vs. True chart
#### What is a Predicted vs. True chart?
Predicted vs. True shows the relationship between a predicted value and its correlating true value for a regression problem. This graph can be used to measure the performance of a model: the closer the predicted values are to the y=x line, the better the accuracy of the predictive model.

#### What does automated ML do with the Predicted vs. True chart?
After each run, you can see a predicted vs. true graph for each regression model. To protect data privacy, values are binned together and the size of each bin is shown as a bar graph on the bottom portion of the chart area. You can compare the predictive model, with the lighter shade area showing error margins, against the ideal value of where the model should be.

#### What does a good model look like?
##### Example 1: A regression model with low accuracy
![A regression model with low accuracy](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-regression1.png)

##### Example 2: A regression model with high accuracy
[![Experiment results graph](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-regression2.png)](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-regression2-expanded.png)
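
The studio renders this chart for you, but the underlying comparison is easy to sketch with made-up regression values; the closer the points sit to the y=x reference line, the better the fit:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical true and predicted values from a regression model.
y_true = np.array([10.0, 12.5, 15.0, 20.0, 22.0, 30.0])
y_pred = np.array([11.0, 12.0, 16.5, 19.0, 24.0, 28.5])

plt.scatter(y_true, y_pred, label="predictions")
lims = [y_true.min(), y_true.max()]
plt.plot(lims, lims, linestyle="--", label="ideal (y = x)")  # perfect predictions fall on this line
plt.xlabel("True value")
plt.ylabel("Predicted value")
plt.legend()
plt.show()
```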

### <a name="histo"></a> Histogram of residuals chart
#### What is a residuals chart?
A residual represents an observed y minus the predicted y. To show a margin of error with low bias, the histogram of residuals should be shaped as a bell curve, centered around 0.

#### What does automated ML do with the residuals chart?
Automated ML automatically provides a residuals chart to show the distribution of errors in the predictions.

#### What does a good model look like?
A good model will typically have a bell curve of errors around zero.

##### Example 1: A regression model with bias in its errors
![A regression model with bias in its errors](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-histogram1.png)

##### Example 2: A regression model with a more even distribution of errors
![A regression model with a more even distribution of errors](./media/how-to-understand-automated-ml/azure-machine-learning-auto-ml-histogram2.png)
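
To reproduce the idea with your own predictions, a minimal sketch (with simulated values, purely for illustration) is just a histogram of `y_true - y_pred`; a roughly symmetric, bell-shaped pile around zero suggests low-bias errors:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical true and predicted values from a regression model.
rng = np.random.default_rng(0)
y_true = rng.normal(50, 10, size=500)
y_pred = y_true + rng.normal(0, 3, size=500)  # small, unbiased errors

residuals = y_true - y_pred  # residual = observed y minus predicted y
plt.hist(residuals, bins=30)
plt.xlabel("Residual (y_true - y_pred)")
plt.ylabel("Count")
plt.show()
```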

## <a name="explain-model"></a> Model interpretability and feature importance

Automated ML provides a machine learning interpretability dashboard for your runs.

For more information on enabling interpretability features, see the [how-to](how-to-machine-learning-interpretability-automl.md) on enabling interpretability in automated ML experiments.