In this article, you learn how to enable the interpretability features for automated machine learning (ML) in Azure Machine Learning. Automated ML helps you understand engineered feature importance.
All SDK versions after 1.0.85 set `model_explainability=True` by default. In SDK version 1.0.85 and earlier, users need to set `model_explainability=True` in the `AutoMLConfig` object in order to use model interpretability.
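A sketch of that configuration, assuming the AzureML SDK v1 `AutoMLConfig` API; the task, data, and label arguments here are placeholders:

```python
from azureml.train.automl import AutoMLConfig

# model_explainability defaults to True after SDK 1.0.85; it is shown
# explicitly here for SDK 1.0.85 and earlier.
automl_config = AutoMLConfig(task='classification',
                             training_data=train_data,
                             label_column_name='label',
                             model_explainability=True)
```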
In this article, you learn how to:
## Interpretability during training for the best model
Retrieve the explanation from the `best_run`, which includes explanations for engineered features.
### Download engineered feature importance from artifact store
You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`.
```python
from azureml.explain.model._internal.explanation_client import ExplanationClient

# Download the engineered feature explanations from the best run's artifact store
client = ExplanationClient.from_run(best_run)
engineered_explanations = client.download_model_explanation(raw=False)
```
When you compute model explanations and visualize them, you're not limited to an existing model explanation for an automated ML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance based on your test data.
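As a conceptual sketch of scoring feature importance against held-out test data (plain Python, no AzureML; the toy model and data are made up), a crude permutation-style check shows how shuffling an influential feature degrades test error more than shuffling a weak one:

```python
# Toy model: prediction depends strongly on feature 0, weakly on feature 1.
def predict(row):
    return 3.0 * row[0] + 0.5 * row[1]

X_test = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
y_test = [predict(r) for r in X_test]  # a perfect model, for illustration

def mse(model, X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature):
    # Error increase when one feature's column is cyclically shifted
    # (a crude stand-in for a random permutation).
    permuted = [list(r) for r in X]
    col = [r[feature] for r in X]
    col = col[1:] + col[:1]
    for r, v in zip(permuted, col):
        r[feature] = v
    return mse(model, permuted, y) - mse(model, X, y)

imp0 = permutation_importance(predict, X_test, y_test, 0)
imp1 = permutation_importance(predict, X_test, y_test, 1)
```

Feature 0, which dominates the prediction, gets the larger importance score.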
Use `automl_setup_model_explanations` to get the engineered feature explanations. The `fitted_model` can generate the following items:
- Featurized data from trained or test samples
- Engineered feature name lists
- Found classes in your labeled column in classification scenarios
The `automl_explainer_setup_obj` contains all the structures from the list above.
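A conceptual sketch of those structures using a plain dataclass; the attribute names and sample values here are hypothetical, while the real setup object is produced by `automl_setup_model_explanations`:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainerSetup:
    # Featurized data from trained or test samples
    X_test_transform: list = field(default_factory=list)
    # Engineered feature name list
    engineered_feature_names: list = field(default_factory=list)
    # Classes found in the labeled column (classification scenarios)
    classes: list = field(default_factory=list)

setup_obj = ExplainerSetup(
    X_test_transform=[[0.2, 1.0], [0.8, 0.0]],
    engineered_feature_names=["age_scaled", "city_onehot_0"],
    classes=["no", "yes"],
)
```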
### Register the model and the scoring explainer
Use the `TreeScoringExplainer` to create the scoring explainer that computes the engineered feature importance values at inference time. You initialize the scoring explainer with the `feature_map` that was computed previously.
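The `feature_map` is a matrix linking each raw input feature to the engineered features generated from it. A minimal illustration of how such a map aggregates importances, in plain Python with made-up values:

```python
# feature_map[i][j] == 1 when raw feature i produced engineered feature j.
# Here raw feature 0 expands into two engineered features (e.g. a one-hot
# encoding), and raw feature 1 maps to a single engineered feature.
feature_map = [
    [1, 1, 0],
    [0, 0, 1],
]

# Hypothetical engineered feature importance values at inference time.
engineered_importance = [0.4, 0.1, 0.5]

# Aggregate engineered importances back onto the raw features.
raw_importance = [
    sum(m * imp for m, imp in zip(row, engineered_importance))
    for row in feature_map
]
```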
Save the scoring explainer, and then register the model and the scoring explainer with the Model Management Service. Run the following code:
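A minimal sketch of the save-then-register flow, using plain `pickle` and a made-up explainer object; the registration call shown in comments assumes the AzureML SDK v1 `Model.register` pattern and is not executed here:

```python
import pickle

# Hypothetical stand-in for the scoring explainer created earlier;
# a real TreeScoringExplainer would be pickled the same way.
scoring_explainer = {"feature_map": [[1, 1, 0], [0, 0, 1]]}

# Serialize the scoring explainer so it can be registered as a model artifact.
explainer_bytes = pickle.dumps(scoring_explainer)
with open("scoring_explainer.pkl", "wb") as f:
    f.write(explainer_bytes)

# Registering both artifacts with the Model Management Service would then
# follow the AzureML SDK v1 pattern (assumption, requires a workspace `ws`):
#   from azureml.core.model import Model
#   Model.register(workspace=ws, model_path="scoring_explainer.pkl",
#                  model_name="scoring_explainer")

restored = pickle.loads(explainer_bytes)
```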
Run inference with some test data to see the predicted value from the automated ML model, and view the engineered feature importance for the predicted value.
```python
if service.state == 'Healthy':
    # Serialize the first row of the test data into json
    X_test_json = X_test[:1].to_json(orient='records')
    # Call the service to get the predictions and the engineered explanations
    output = service.run(X_test_json)
```