In this article, you learn how to enable the interpretability features for automated machine learning (ML) in Azure Machine Learning. Automated ML helps you understand engineered feature importance.
All SDK versions after 1.0.85 set `model_explainability=True` by default. In SDK version 1.0.85 and earlier, users need to set `model_explainability=True` in the `AutoMLConfig` object in order to use model interpretability.
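For SDK 1.0.85 and earlier, a minimal sketch of opting in explicitly follows; the task type, dataset, and compute names are placeholders for objects defined elsewhere in your workflow:

```python
from azureml.train.automl import AutoMLConfig

# Hypothetical configuration; training_data, 'label', and compute_target
# are placeholders for objects and names defined in your own workflow.
automl_config = AutoMLConfig(task='classification',
                             training_data=training_data,
                             label_column_name='label',
                             compute_target=compute_target,
                             model_explainability=True)
```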
In this article, you learn how to:
## Interpretability during training for the best model
Retrieve the explanation from the `best_run`, which includes explanations for engineered features.
### Download engineered feature importance from artifact store
You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`.
```python
from azureml.explain.model._internal.explanation_client import ExplanationClient
```
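A minimal usage sketch follows, assuming `best_run` is the best run returned by your automated ML experiment; the client calls shown here reflect the interpretability API of SDK versions from this era:

```python
from azureml.explain.model._internal.explanation_client import ExplanationClient

# Connect an explanation client to the artifact store of the best run.
client = ExplanationClient.from_run(best_run)

# Download the explanation computed for the engineered features.
engineered_explanations = client.download_model_explanation(raw=False)

# Inspect the importance value of each engineered feature.
print(engineered_explanations.get_feature_importance_dict())
```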
When you compute model explanations and visualize them, you're not limited to an existing model explanation for an automated ML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance based on your test data.
You can call the `explain()` method in `MimicWrapper` with the transformed test samples to get the feature importance for the generated engineered features. You can also use `ExplanationDashboard` to view the dashboard visualization of the feature importance values of the engineered features generated by the automated ML featurizers.
```python
from azureml.contrib.interpret.visualize import ExplanationDashboard
```
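A sketch of that workflow follows, assuming `explainer` is the `MimicWrapper` configured in an earlier step and `automl_explainer_setup_obj` holds the transformed test samples and the automated ML estimator (both names are placeholders for objects created previously):

```python
from azureml.contrib.interpret.visualize import ExplanationDashboard

# Compute local and global importance values for the engineered features.
engineered_explanations = explainer.explain(
    ['local', 'global'],
    eval_dataset=automl_explainer_setup_obj.X_test_transform)
print(engineered_explanations.get_feature_importance_dict())

# View the dashboard visualization of the engineered feature importance.
ExplanationDashboard(engineered_explanations,
                     automl_explainer_setup_obj.automl_estimator,
                     datasetX=automl_explainer_setup_obj.X_test_transform)
```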
In this section, you learn how to operationalize an automated ML model with the scoring explainer.
### Register the model and the scoring explainer
Use the `TreeScoringExplainer` to create the scoring explainer that'll compute the engineered feature importance values at inference time. You initialize the scoring explainer with the `feature_map` that was computed previously.
Save the scoring explainer, and then register the model and the scoring explainer with the Model Management Service. Run the following code:
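A sketch of those steps follows. The `TreeScoringExplainer` import path differs across SDK versions, and `explainer`, `feature_map`, `ws`, and the model file names are placeholders for objects defined in earlier steps:

```python
import joblib
from azureml.core.model import Model
# Assumed import path; in some SDK versions the scoring explainer
# lives under a different module, such as azureml.explain.model.scoring.
from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer

# Build the scoring explainer from the trained explainer and the
# previously computed feature_map, so that engineered feature
# importance values can be returned at inference time.
scoring_explainer = TreeScoringExplainer(explainer.explainer,
                                         feature_maps=[feature_map])

# Save the scoring explainer locally.
joblib.dump(scoring_explainer, 'scoring_explainer.pkl')

# Register the automated ML model and the scoring explainer with the
# Model Management Service.
original_model = Model.register(workspace=ws,
                                model_path='model.pkl',
                                model_name='automl_model')
scoring_explainer_model = Model.register(workspace=ws,
                                         model_path='scoring_explainer.pkl',
                                         model_name='scoring_explainer')
```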
Run inference with some test data to see the predicted value from the automated ML model, and view the engineered feature importance for the predicted value.
```python
if service.state == 'Healthy':
    # Serialize the first row of the test data into JSON.
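    # Hypothetical continuation: X_test is assumed to be a pandas
    # DataFrame of test samples prepared in earlier steps.
    X_test_json = X_test[:1].to_json(orient='records')

    # Call the deployed service with the serialized sample.
    output = service.run(X_test_json)

    # View the predicted value and the engineered feature importance
    # for that prediction; the output keys assume the scoring script
    # returns results under these names.
    print(output['predictions'])
    print(output['engineered_local_importance_values'])
```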