In this how-to guide, you learn to use the interpretability package of the Azure Machine Learning Python SDK to perform the following tasks:

* Explain the entire model behavior or individual predictions on your personal machine locally.
* Enable interpretability techniques for engineered features.
* Explain the behavior of the entire model and individual predictions in Azure.
* Use a visualization dashboard to interact with your model explanations.
* Deploy a scoring explainer alongside your model to observe explanations during inferencing.

For more information on the supported interpretability techniques and machine learning models, see [Model interpretability in Azure Machine Learning](how-to-machine-learning-interpretability.md) and [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model).

## Generate feature importance values on your personal machine

The following example shows how to use the interpretability package on your personal machine without contacting Azure services.

1. Install the `azureml-interpret` and `azureml-interpret-contrib` packages.

    ```bash
    pip install azureml-interpret
    pip install azureml-interpret-contrib
    ```

2. Train a sample model in a local Jupyter notebook.

    ```python
    # load breast cancer dataset, a well-known small dataset that comes with scikit-learn
    from sklearn.datasets import load_breast_cancer
    from sklearn import svm
    from sklearn.model_selection import train_test_split

    breast_cancer_data = load_breast_cancer()
    classes = breast_cancer_data.target_names.tolist()
    feature_names = breast_cancer_data.feature_names

    # split data into train and test
    x_train, x_test, y_train, y_test = train_test_split(breast_cancer_data.data,
                                                        breast_cancer_data.target,
                                                        test_size=0.2,
                                                        random_state=0)

    # train a simple support vector machine classifier
    # (illustrative choice; any scikit-learn estimator works here)
    clf = svm.SVC(gamma=0.001, C=100., probability=True)
    model = clf.fit(x_train, y_train)
    ```

3. Call the explainer locally.

    * To initialize an explainer object, pass your model and some training data to the explainer's constructor.
    * To make your explanations and visualizations more informative, you can choose to pass in feature names and output class names if doing classification.

    ```python
    from interpret.ext.blackbox import TabularExplainer

    # "features" and "classes" fields are optional
    # (a sketch using TabularExplainer; MimicExplainer and PFIExplainer follow the same pattern)
    explainer = TabularExplainer(model,
                                 x_train,
                                 features=feature_names,
                                 classes=classes)
    ```

### Explain the entire model behavior (global explanation)

Refer to the following example to help you get the aggregate (global) feature importance values.
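
A minimal sketch of retrieving the global feature importance values, assuming the `explainer` and the train/test split from the steps above (`explain_global` and the ranked-value accessors are part of the explanation object returned by the interpretability package):

```python
# you can use the training data or the test data here
global_explanation = explainer.explain_global(x_test)

# sorted feature importance values and feature names
sorted_global_importance_values = global_explanation.get_ranked_global_values()
sorted_global_importance_names = global_explanation.get_ranked_global_names()

# dictionary of feature name -> aggregate importance value
dict(zip(sorted_global_importance_names, sorted_global_importance_values))
```
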
You can opt to get explanations in terms of raw, untransformed features rather than engineered features. For this option, you pass your feature transformation pipeline to the explainer in `train_explain.py`. Otherwise, the explainer provides explanations in terms of engineered features.
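
For example, a minimal sketch of passing a transformation pipeline to the explainer. The column names and the `ColumnTransformer` below are placeholders for your own feature engineering, and the `transformations` parameter is assumed to accept a scikit-learn transformation pipeline or a list of transformation tuples:

```python
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from interpret.ext.blackbox import TabularExplainer

# placeholder feature engineering pipeline -- replace with the one used to train your model
transformations = ColumnTransformer([
    ("scale", StandardScaler(), ["age", "income"]),
    ("encode", OneHotEncoder(), ["employment_type"]),
])

# when a transformation pipeline is supplied, pass the raw (untransformed) training data,
# so the resulting explanations are expressed in terms of the raw input features
explainer = TabularExplainer(model,
                             x_train,
                             features=feature_names,
                             classes=classes,
                             transformations=transformations)
```
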
## Generate feature importance values via remote runs

The following example shows how you can use the `ExplanationClient` class to enable model interpretability for remote runs. It is conceptually similar to the local process, except you:

* Use the `ExplanationClient` in the remote run to upload the interpretability context.
* Download the context later in a local environment.

1. Install the `azureml-interpret` and `azureml-interpret-contrib` packages.

    ```bash
    pip install azureml-interpret
    pip install azureml-interpret-contrib
    ```

1. Create a training script in a local Jupyter notebook. For example, `train_explain.py`.

    ```python
    from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient
    from azureml.core.run import Run
    from interpret.ext.blackbox import TabularExplainer

    run = Run.get_context()
    client = ExplanationClient.from_run(run)

    # write code to get and split your data into train and test sets here
    # write code to train your model here

    # explain predictions on your local machine
    # "features" and "classes" fields are optional
    explainer = TabularExplainer(model,
                                 x_train,
                                 features=feature_names,
                                 classes=classes)

    # explain overall model predictions (global explanation)
    global_explanation = explainer.explain_global(x_train)

    # uploading global model explanation data for storage or visualization in webUX
    # the explanation can then be downloaded on any compute
    # multiple explanations can be uploaded
    client.upload_model_explanation(global_explanation, comment='global explanation: all features')
    # or you can only upload the explanation object with the top k feature info
    #client.upload_model_explanation(global_explanation, top_k=2, comment='global explanation: Only top 2 features')
    ```

1. Set up an Azure Machine Learning Compute as your compute target and submit your training run. See [setting up compute targets for model training](how-to-set-up-training-targets.md#amlcompute) for instructions. You might also find the [example notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model/azure-integration/remote-explanation) helpful.
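
    For instance, a minimal submission sketch using `ScriptRunConfig`. The workspace configuration, experiment name, and compute target name are placeholders, the run environment must include the interpretability packages installed above, and a recent `azureml-core` is assumed where `ScriptRunConfig` accepts `compute_target` directly:

    ```python
    from azureml.core import Experiment, ScriptRunConfig, Workspace

    ws = Workspace.from_config()  # assumes a local config.json for your workspace

    # "cpu-cluster" is a placeholder name for your Azure Machine Learning Compute target
    src = ScriptRunConfig(source_directory='.',
                          script='train_explain.py',
                          compute_target='cpu-cluster')

    run = Experiment(ws, 'explain_remote').submit(src)
    run.wait_for_completion(show_output=True)
    ```
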
1. Download the explanation in your local Jupyter notebook.

    ```python
    from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient

    client = ExplanationClient.from_run(run)

    # get model explanation data
    explanation = client.download_model_explanation()

    # or only get the top k (e.g., 4) most important features with their importance values
    explanation = client.download_model_explanation(top_k=4)
    ```

After you download the explanations in your local Jupyter notebook, you can use the visualization dashboard to understand and interpret your model.
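
For example, a minimal sketch for launching the dashboard in a notebook. This assumes the `ExplanationDashboard` widget from the open-source `interpret-community` package; the import path and parameter names may differ across package versions:

```python
from interpret_community.widget import ExplanationDashboard

# "global_explanation" is the downloaded or locally computed explanation,
# "model" is the trained model, and "x_test" is the evaluation data;
# the "datasetX" parameter name is an assumption based on older interpret-community releases
ExplanationDashboard(global_explanation, model, datasetX=x_test)
```
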

### Understand entire model behavior (global explanation)

The following plots provide an overall view of the trained model along with its predictions and explanations.

|Plot|Description|
|----|-----------|
|Data Exploration|Displays an overview of the dataset along with prediction values.|
|Global Importance|Aggregates feature importance values of individual datapoints to show the model's overall top K (configurable K) important features. Helps understanding of the underlying model's overall behavior.|
|Explanation Exploration|Demonstrates how a feature affects a change in the model's prediction values, or the probability of prediction values. Shows the impact of feature interaction.|
|Summary Importance|Uses individual feature importance values across all data points to show the distribution of each feature's impact on the prediction value. Using this diagram, you can investigate in what direction the feature values affect the prediction values.|

### Understand individual predictions (local explanation)

You can load the individual feature importance plot for any data point by clicking on any of the individual data points in any of the overall plots.

|Plot|Description|
|----|-----------|
|Local Importance|Shows the top K (configurable K) important features for an individual prediction. Helps illustrate the local behavior of the underlying model on a specific data point.|
|Perturbation Exploration (what if analysis)|Allows you to change feature values of the selected data point and observe the resulting changes to the prediction value.|
|Individual Conditional Expectation (ICE)|Allows feature value changes from a minimum value to a maximum value. Helps illustrate how the data point's prediction changes when a feature changes.|

[![Local feature importance charts](./media/how-to-machine-learning-interpretability-aml/local-charts.png)](./media/how-to-machine-learning-interpretability-aml/local-charts.png#lightbox)

### Visualization in Azure Machine Learning studio

If you complete the [remote interpretability](how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs) steps (uploading the generated explanations to Azure Machine Learning Run History), you can view the visualization dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is a simpler version of the visualization dashboard explained above (the Explanation Exploration and ICE plots are disabled because there is no active compute in studio that can perform their real-time computations).

If the dataset, global, and local explanations are available, data populates all of the tabs (except Perturbation Exploration and ICE). If only a global explanation is available, the Summary Importance tab and all local explanation tabs are disabled.

Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:
## Interpretability at inference time

You can deploy the explainer along with the original model and use it at inference time to provide the individual feature importance values (local explanation) for any new data point. We also offer lighter-weight scoring explainers to improve interpretability performance at inference time. The process of deploying a lighter-weight scoring explainer is similar to deploying a model and includes the following steps:

1. Create an explanation object. For example, you can use `TabularExplainer`:
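
    A minimal sketch, assuming the same model, training data, feature names, and classes as in the earlier examples:

    ```python
    from interpret.ext.blackbox import TabularExplainer

    # "initialization_examples" is the data used to initialize the explainer
    explainer = TabularExplainer(model,
                                 initialization_examples=x_train,
                                 features=feature_names,
                                 classes=classes)
    ```
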
1. Create a scoring explainer with the explanation object.

    ```python
    from azureml.interpret.scoring.scoring_explainer import KernelScoringExplainer, save
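
    # a sketch of the remaining step, assuming the "explainer" created above:
    # wrap it in a lightweight KernelScoringExplainer for use at scoring time
    scoring_explainer = KernelScoringExplainer(explainer)

    # save the scoring explainer locally so it can be registered and deployed with the model
    save(scoring_explainer, exist_ok=True)
    ```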