Commit 215bd95 (1 parent: 06e17f9)

Update how-to-machine-learning-interpretability-automl.md

1 file changed

articles/machine-learning/how-to-machine-learning-interpretability-automl.md

Lines changed: 11 additions & 11 deletions
@@ -17,7 +17,9 @@ ms.date: 10/25/2019
 
 [!INCLUDE [applies-to-skus](../../includes/aml-applies-to-basic-enterprise-sku.md)]
 
-In this article, you learn how to enable the interpretability features for automated machine learning (ML) in Azure Machine Learning. Automated ML helps you understand both raw and engineered feature importance. In order to use model interpretability, set `model_explainability=True` in the `AutoMLConfig` object.
+In this article, you learn how to enable the interpretability features for automated machine learning (ML) in Azure Machine Learning. Automated ML helps you understand engineered feature importance.
+
+All SDK versions after 1.0.85 set `model_explainability=True` by default. In SDK version 1.0.85 and earlier, users need to set `model_explainability=True` in the `AutoMLConfig` object in order to use model interpretability.
 
 In this article, you learn how to:
 
@@ -32,11 +34,11 @@ In this article, you learn how to:
 
 ## Interpretability during training for the best model
 
-Retrieve the explanation from the `best_run`, which includes explanations for engineered features and raw features.
+Retrieve the explanation from the `best_run`, which includes explanations for engineered features.
 
 ### Download engineered feature importance from artifact store
 
-You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`. To get the explanation for the raw features set `raw=True`.
+You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`.
 
 ```python
 from azureml.explain.model._internal.explanation_client import ExplanationClient
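The `get_feature_importance_dict()` call referenced in the next hunk returns a plain mapping of engineered feature names to global importance values. A self-contained sketch of working with such a result (the feature names and values below are invented for illustration, not actual output):

```python
# Hypothetical result of engineered_explanations.get_feature_importance_dict();
# names and values here are illustrative only.
importance = {
    'age_MeanImputer': 0.31,
    'income_Log': 0.42,
    'city_CharGramCountVec_ny': 0.08,
}

# Rank engineered features by global importance, highest first
ranked = sorted(importance.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0][0])  # name of the most important engineered feature
```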
@@ -48,7 +50,7 @@ print(engineered_explanations.get_feature_importance_dict())
 
 ## Interpretability during training for any model
 
-When you compute model explanations and visualize them, you're not limited to an existing model explanation for an automated ML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance and raw feature importance based on your test data.
+When you compute model explanations and visualize them, you're not limited to an existing model explanation for an automated ML model. You can also get an explanation for your model with different test data. The steps in this section show you how to compute and visualize engineered feature importance based on your test data.
 
 ### Retrieve any other AutoML model from training
 
@@ -58,10 +60,10 @@ automl_run, fitted_model = local_run.get_output(metric='accuracy')
 
 ### Set up the model explanations
 
-Use `automl_setup_model_explanations` to get the engineered and raw feature explanations. The `fitted_model` can generate the following items:
+Use `automl_setup_model_explanations` to get the engineered explanations. The `fitted_model` can generate the following items:
 
 - Featured data from trained or test samples
-- Engineered and raw feature name lists
+- Engineered feature name lists
 - Findable classes in your labeled column in classification scenarios
 
 The `automl_explainer_setup_obj` contains all the structures from above list.
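A hedged sketch of the setup call this hunk describes. The import path and keyword names follow the SDK documentation of this era and may differ across versions; `X_train`, `X_test`, and `y_train` are placeholders for your own data, and `fitted_model` comes from `local_run.get_output()` above.

```python
from azureml.train.automl.automl_explain_utilities import (
    automl_setup_model_explanations,
)

# Sketch only: X_train, X_test, y_train are placeholder datasets.
automl_explainer_setup_obj = automl_setup_model_explanations(
    fitted_model, X=X_train, X_test=X_test, y=y_train,
    task='classification',
)

# The setup object bundles the items listed above, for example the
# engineered feature name list.
print(automl_explainer_setup_obj.engineered_feature_names)
```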
@@ -111,7 +113,7 @@ In this section, you learn how to operationalize an automated ML model with the
 
 ### Register the model and the scoring explainer
 
-Use the `TreeScoringExplainer` to create the scoring explainer that'll compute the raw and engineered feature importance values at inference time. You initialize the scoring explainer with the `feature_map` that was computed previously. The scoring explainer uses the `feature_map` to return the raw feature importance.
+Use the `TreeScoringExplainer` to create the scoring explainer that'll compute the engineered feature importance values at inference time. You initialize the scoring explainer with the `feature_map` that was computed previously.
 
 Save the scoring explainer, and then register the model and the scoring explainer with the Model Management Service. Run the following code:
 
@@ -185,21 +187,19 @@ service.wait_for_deployment(show_output=True)
 
 ### Inference with test data
 
-Inference with some test data to see the predicted value from automated ML model. View the engineered feature importance for the predicted value and raw feature importance for the predicted value.
+Inference with some test data to see the predicted value from the automated ML model. View the engineered feature importance for the predicted value.
 
 ```python
 if service.state == 'Healthy':
     # Serialize the first row of the test data into json
     X_test_json = X_test[:1].to_json(orient='records')
     print(X_test_json)
-    # Call the service to get the predictions and the engineered and raw explanations
+    # Call the service to get the predictions and the engineered explanations
     output = service.run(X_test_json)
     # Print the predicted value
     print(output['predictions'])
     # Print the engineered feature importances for the predicted value
     print(output['engineered_local_importance_values'])
-    # Print the raw feature importances for the predicted value
-    print(output['raw_local_importance_values'])
 ```
 
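The shape of the service response handled in the snippet above can be sketched with dummy values. Only the key names mirror the snippet; the prediction, importances, and feature names below are invented for illustration:

```python
# Hypothetical response from service.run(), with invented values.
output = {
    'predictions': [1],
    'engineered_local_importance_values': [[0.21, 0.04, 0.40]],
}

# Pair each engineered feature with its local importance for the first
# (and only) scored row; the feature names are placeholders.
engineered_feature_names = ['age_MeanImputer', 'city_OneHot_ny', 'income_Log']
pairs = dict(zip(engineered_feature_names,
                 output['engineered_local_importance_values'][0]))
print(output['predictions'][0], pairs)
```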
### Visualize to discover patterns in data and explanations at training time
