Commit 036f4d0

Merge pull request #107173 from Aniththa/patch-43
Update how-to-machine-learning-interpretability-automl.md
2 parents ae0c234 + 215bd95 commit 036f4d0

1 file changed: +18 -37 lines changed
articles/machine-learning/how-to-machine-learning-interpretability-automl.md

Lines changed: 18 additions & 37 deletions
@@ -17,7 +17,9 @@ ms.date: 10/25/2019
 
 [!INCLUDE [applies-to-skus](../../includes/aml-applies-to-basic-enterprise-sku.md)]
 
-In this article, you learn how to enable the interpretability features for automated machine learning (ML) in Azure Machine Learning. Automated ML helps you understand both raw and engineered feature importance. In order to use model interpretability, set `model_explainability=True` in the `AutoMLConfig` object.
+In this article, you learn how to enable the interpretability features for automated machine learning (ML) in Azure Machine Learning. Automated ML helps you understand engineered feature importance.
+
+All SDK versions after 1.0.85 set `model_explainability=True` by default. In SDK version 1.0.85 and earlier, users need to set `model_explainability=True` in the `AutoMLConfig` object in order to use model interpretability.
 
 In this article, you learn how to:
 
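For SDK version 1.0.85 and earlier, the opt-in lives on the `AutoMLConfig` object. A minimal sketch of what that can look like; the task type, data variables, and primary metric below are illustrative placeholders rather than values from the article:

```python
from azureml.train.automl import AutoMLConfig

# Illustrative only: X_train/y_train, the task, and the metric are placeholder choices.
automl_config = AutoMLConfig(task='classification',
                             X=X_train,
                             y=y_train,
                             primary_metric='accuracy',
                             model_explainability=True)  # explicit opt-in on SDK 1.0.85 and earlier
```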
@@ -32,14 +34,14 @@ In this article, you learn how to:
 
 ## Interpretability during training for the best model
 
-Retrieve the explanation from the `best_run`, which includes explanations for engineered features and raw features.
+Retrieve the explanation from the `best_run`, which includes explanations for engineered features.
 
 ### Download engineered feature importance from artifact store
 
-You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`. To get the explanation for the raw features, set `raw=True`.
+You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`.
 
 ```python
-from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient
+from azureml.explain.model._internal.explanation_client import ExplanationClient
 
 client = ExplanationClient.from_run(best_run)
 engineered_explanations = client.download_model_explanation(raw=False)
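The downloaded explanation exposes its importance values as a plain dictionary, so ranking features takes only standard Python. A small illustrative follow-up, assuming the `engineered_explanations` object from the snippet above:

```python
# Rank engineered features by the magnitude of their global importance (illustrative).
importances = engineered_explanations.get_feature_importance_dict()
for name, value in sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]:
    print(f'{name}: {value:.4f}')
```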
@@ -48,26 +50,26 @@ print(engineered_explanations.get_feature_importance_dict())
 
 ## Interpretability during training for any model
 
-When you compute model explanations and visualize them, you're not limited to an existing model explanation for an automated ML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance and raw feature importance based on your test data.
+When you compute model explanations and visualize them, you're not limited to an existing model explanation for an automated ML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance based on your test data.
 
 ### Retrieve any other AutoML model from training
 
 ```python
-automl_run, fitted_model = local_run.get_output(metric='r2_score')
+automl_run, fitted_model = local_run.get_output(metric='accuracy')
 ```
 
 ### Set up the model explanations
 
-Use `automl_setup_model_explanations` to get the engineered and raw feature explanations. The `fitted_model` can generate the following items:
+Use `automl_setup_model_explanations` to get the engineered explanations. The `fitted_model` can generate the following items:
 
 - Featured data from trained or test samples
-- Engineered and raw feature name lists
+- Engineered feature name lists
 - Findable classes in your labeled column in classification scenarios
 
 The `automl_explainer_setup_obj` contains all the structures from the above list.
 
 ```python
-from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations
+from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations
 
 automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train,
                                                              X_test=X_test, y=y_train,
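The hunk ends before the setup call is complete. For context, a sketch of how the call is typically finished; the `task` argument is an assumption for a classification run and does not appear in the diff:

```python
from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations

# Sketch: 'classification' is an assumed task type for this walkthrough.
automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train,
                                                             X_test=X_test, y=y_train,
                                                             task='classification')
```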
@@ -82,16 +84,16 @@ To generate an explanation for AutoML models, use the `MimicWrapper` class. You
 - Your workspace
 - A LightGBM model, which acts as a surrogate to the `fitted_model` automated ML model
 
-The MimicWrapper also takes the `automl_run` object where the raw and engineered explanations will be uploaded.
+The MimicWrapper also takes the `automl_run` object where the engineered explanations will be uploaded.
 
 ```python
 from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel
 from azureml.explain.model.mimic_wrapper import MimicWrapper
 
 # Initialize the Mimic Explainer
-explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, 
+explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
                          init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run,
-                         features=automl_explainer_setup_obj.engineered_feature_names, 
+                         features=automl_explainer_setup_obj.engineered_feature_names,
                          feature_maps=[automl_explainer_setup_obj.feature_map],
                          classes=automl_explainer_setup_obj.classes)
 ```
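A note on the design here: a mimic (surrogate) explainer trains an inherently interpretable model, in this case the `LGBMExplainableModel`, to approximate the predictions of the fitted AutoML pipeline, and the surrogate is what gets explained. This keeps the workflow model-agnostic, because the AutoML model itself never has to expose its internals.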
@@ -101,27 +103,8 @@ explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMEx
 You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the generated engineered features. You can also use `ExplanationDashboard` to view the dashboard visualization of the feature importance values of the generated engineered features by automated ML featurizers.
 
 ```python
-from azureml.contrib.interpret.visualize import ExplanationDashboard
-engineered_explanations = explainer.explain(['local', 'global'],
-                                            eval_dataset=automl_explainer_setup_obj.X_test_transform)
-
+engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)
 print(engineered_explanations.get_feature_importance_dict())
-ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, automl_explainer_setup_obj.X_test_transform)
-```
-
-### Use Mimic Explainer for computing and visualizing raw feature importance
-
-You can call the `explain()` method in MimicWrapper again with the transformed test samples and set `get_raw=True` to get the feature importance for the raw features. You can also use `ExplanationDashboard` to view the dashboard visualization of the feature importance values of the raw features.
-
-```python
-from azureml.contrib.interpret.visualize import ExplanationDashboard
-
-raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
-                                     raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
-                                     eval_dataset=automl_explainer_setup_obj.X_test_transform)
-
-print(raw_explanations.get_feature_importance_dict())
-ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw)
 ```
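The updated snippet keeps the `ExplanationDashboard` mention in the prose above while dropping the call itself. A sketch of the visualization step, reusing the import path and arguments from the removed lines; whether that path still resolves depends on the installed SDK version:

```python
from azureml.contrib.interpret.visualize import ExplanationDashboard

# Visualize engineered feature importances; the import path and arguments
# come from the removed lines and may differ across SDK versions.
ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator,
                     automl_explainer_setup_obj.X_test_transform)
```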
 
 ### Interpretability during inference
@@ -130,7 +113,7 @@ In this section, you learn how to operationalize an automated ML model with the
 
 ### Register the model and the scoring explainer
 
-Use the `TreeScoringExplainer` to create the scoring explainer that'll compute the raw and engineered feature importance values at inference time. You initialize the scoring explainer with the `feature_map` that was computed previously. The scoring explainer uses the `feature_map` to return the raw feature importance.
+Use the `TreeScoringExplainer` to create the scoring explainer that'll compute the engineered feature importance values at inference time. You initialize the scoring explainer with the `feature_map` that was computed previously.
 
 Save the scoring explainer, and then register the model and the scoring explainer with the Model Management Service. Run the following code:
 
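The registration code itself falls outside this hunk. As a rough sketch of the step the sentence above describes, with the import path and the `explainer.explainer` attribute assumed to match the SDK flavor imported elsewhere in the article:

```python
import joblib
from azureml.explain.model.scoring.scoring_explainer import TreeScoringExplainer  # assumed path

# Wrap the mimic explainer for inference-time scoring; the feature_map computed
# earlier lets the scoring explainer report engineered feature importances.
scoring_explainer = TreeScoringExplainer(explainer.explainer,
                                         feature_maps=[automl_explainer_setup_obj.feature_map])

# Persist the scoring explainer, then register both artifacts with the workspace.
joblib.dump(scoring_explainer, 'scoring_explainer.pkl')
original_model = automl_run.register_model(model_name='automl_model', model_path='outputs/model.pkl')
scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='scoring_explainer.pkl')
```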
@@ -204,21 +187,19 @@ service.wait_for_deployment(show_output=True)
 
 ### Inference with test data
 
-Run inference with some test data to see the predicted value from the automated ML model. View the engineered feature importance and the raw feature importance for the predicted value.
+Run inference with some test data to see the predicted value from the automated ML model. View the engineered feature importance for the predicted value.
 
 ```python
 if service.state == 'Healthy':
     # Serialize the first row of the test data into json
     X_test_json = X_test[:1].to_json(orient='records')
     print(X_test_json)
-    # Call the service to get the predictions and the engineered and raw explanations
+    # Call the service to get the predictions and the engineered explanations
     output = service.run(X_test_json)
     # Print the predicted value
     print(output['predictions'])
     # Print the engineered feature importances for the predicted value
     print(output['engineered_local_importance_values'])
-    # Print the raw feature importances for the predicted value
-    print(output['raw_local_importance_values'])
 ```
 
 ### Visualize to discover patterns in data and explanations at training time
