Commit 06e17f9

Update how-to-machine-learning-interpretability-automl.md
1 parent bb3e7fa commit 06e17f9

File tree

1 file changed: +7 additions, −26 deletions


articles/machine-learning/how-to-machine-learning-interpretability-automl.md

Lines changed: 7 additions & 26 deletions
````diff
@@ -39,7 +39,7 @@ Retrieve the explanation from the `best_run`, which includes explanations for en
 You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`. To get the explanation for the raw features set `raw=True`.
 
 ```python
-from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient
+from azureml.explain.model._internal.explanation_client import ExplanationClient
 
 client = ExplanationClient.from_run(best_run)
 engineered_explanations = client.download_model_explanation(raw=False)
````
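For readers of this hunk: the raw-versus-engineered distinction rests on a feature map that records which engineered columns were derived from which raw columns, so engineered importances can be rolled up to raw features. The sketch below illustrates only that idea; it is not the SDK's implementation, and every name and number in it is made up.

```python
# Illustrative sketch (not azureml SDK internals): roll engineered-feature
# importances up to raw features via a feature map. Hypothetical data.

# feature_map[i][j] == 1 when engineered feature j was derived from raw feature i.
feature_map = [
    [1, 1, 0, 0],  # raw "age"  -> engineered columns 0 and 1
    [0, 0, 1, 1],  # raw "city" -> one-hot columns 2 and 3
]
engineered_importance = [0.30, 0.10, 0.25, 0.05]

# Each raw feature's importance is the sum over its engineered children.
raw_importance = [
    sum(m * imp for m, imp in zip(row, engineered_importance))
    for row in feature_map
]
print([round(v, 2) for v in raw_importance])  # [0.4, 0.3]
```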
````diff
@@ -53,7 +53,7 @@ When you compute model explanations and visualize them, you're not limited to an
 ### Retrieve any other AutoML model from training
 
 ```python
-automl_run, fitted_model = local_run.get_output(metric='r2_score')
+automl_run, fitted_model = local_run.get_output(metric='accuracy')
 ```
 
 ### Set up the model explanations
````
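Conceptually, the `metric` argument picks the child run whose model maximizes that metric. A toy stand-in for that selection (run IDs and metric values are invented; the SDK does this internally):

```python
# Toy sketch of selecting the best child run by a named metric.
# Runs and values are made up for illustration.
runs = [
    {"id": "AutoML_child_0", "metrics": {"accuracy": 0.91, "AUC_weighted": 0.95}},
    {"id": "AutoML_child_1", "metrics": {"accuracy": 0.94, "AUC_weighted": 0.93}},
    {"id": "AutoML_child_2", "metrics": {"accuracy": 0.89, "AUC_weighted": 0.97}},
]

def get_output(runs, metric):
    """Return the run with the highest value for `metric`."""
    return max(runs, key=lambda run: run["metrics"][metric])

print(get_output(runs, metric="accuracy")["id"])      # AutoML_child_1
print(get_output(runs, metric="AUC_weighted")["id"])  # AutoML_child_2
```

Note how different metrics can select different child runs, which is why the commit's switch from `r2_score` (a regression metric) to `accuracy` matters for a classification example.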
````diff
@@ -67,7 +67,7 @@ Use `automl_setup_model_explanations` to get the engineered and raw feature expl
 The `automl_explainer_setup_obj` contains all the structures from above list.
 
 ```python
-from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations
+from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations
 
 automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train,
                                                              X_test=X_test, y=y_train,
````
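The setup object is essentially a bundle of the featurized datasets plus featurization metadata. A minimal stand-in, with attribute names chosen only to mirror the ones used later in the article (this is not the SDK class):

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class ExplainerSetup:
    """Illustrative stand-in for the object returned by
    automl_setup_model_explanations; not the real SDK type."""
    X_transform: Any                     # training data after AutoML featurization
    X_test_transform: Any                # test data after the same featurization
    engineered_feature_names: List[str]  # names of the featurized columns
    feature_map: Any                     # maps engineered features back to raw ones
    classes: Any                         # class labels (classification tasks)

# Hypothetical contents, just to show the shape of the bundle.
setup = ExplainerSetup(
    X_transform=[[0.1, 0.9]],
    X_test_transform=[[0.2, 0.8]],
    engineered_feature_names=["age_scaled", "city_onehot_0"],
    feature_map=[[1, 0], [0, 1]],
    classes=["no", "yes"],
)
print(setup.engineered_feature_names)  # ['age_scaled', 'city_onehot_0']
```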
````diff
@@ -82,16 +82,16 @@ To generate an explanation for AutoML models, use the `MimicWrapper` class. You
 - Your workspace
 - A LightGBM model, which acts as a surrogate to the `fitted_model` automated ML model
 
-The MimicWrapper also takes the `automl_run` object where the raw and engineered explanations will be uploaded.
+The MimicWrapper also takes the `automl_run` object where the engineered explanations will be uploaded.
 
 ```python
 from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel
 from azureml.explain.model.mimic_wrapper import MimicWrapper
 
 # Initialize the Mimic Explainer
-explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
+explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
                          init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run,
-                         features=automl_explainer_setup_obj.engineered_feature_names,
+                         features=automl_explainer_setup_obj.engineered_feature_names,
                          feature_maps=[automl_explainer_setup_obj.feature_map],
                          classes=automl_explainer_setup_obj.classes)
```
````
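The surrogate idea behind a mimic explainer: fit an interpretable model to the *predictions* of the opaque model, then read the surrogate's own parameters as the explanation. A minimal, library-free illustration with a linear surrogate (the "black box" and the data grid are invented for the example):

```python
# Minimal sketch of the mimic/surrogate idea. The "black box" stands in
# for fitted_model.predict; everything here is made up for illustration.

def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1]

X = [[float(i), float(j)] for i in range(5) for j in range(5)]
y = [black_box(x) for x in X]  # the surrogate trains on predictions, not labels

def centered(vals):
    mean = sum(vals) / len(vals)
    return [v - mean for v in vals]

x0 = centered([x[0] for x in X])
x1 = centered([x[1] for x in X])
yc = centered(y)

# On this grid x0 and x1 are uncorrelated, so each least-squares slope
# reduces to cov(x_k, y) / var(x_k); the slopes ARE the explanation.
w0 = sum(a * b for a, b in zip(x0, yc)) / sum(a * a for a in x0)
w1 = sum(a * b for a, b in zip(x1, yc)) / sum(a * a for a in x1)
print(w0, w1)  # 3.0 0.5
```

The real `MimicWrapper` uses a LightGBM surrogate rather than a linear one, but the training target is the same: the opaque model's outputs.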
````diff
@@ -101,27 +101,8 @@ explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMEx
 You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the generated engineered features. You can also use `ExplanationDashboard` to view the dashboard visualization of the feature importance values of the generated engineered features by automated ML featurizers.
 
 ```python
-from azureml.contrib.interpret.visualize import ExplanationDashboard
-engineered_explanations = explainer.explain(['local', 'global'],
-                                            eval_dataset=automl_explainer_setup_obj.X_test_transform)
-
+engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)
 print(engineered_explanations.get_feature_importance_dict())
-ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, automl_explainer_setup_obj.X_test_transform)
-```
-
-### Use Mimic Explainer for computing and visualizing raw feature importance
-
-You can call the `explain()` method in MimicWrapper again with the transformed test samples and setting `get_raw=True` to get the feature importance for the raw features. You can also use `ExplanationDashboard` to view the dashboard visualization of the feature importance values of the raw features.
-
-```python
-from azureml.contrib.interpret.visualize import ExplanationDashboard
-
-raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
-                                     raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
-                                     eval_dataset=automl_explainer_setup_obj.X_test_transform)
-
-print(raw_explanations.get_feature_importance_dict())
-ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw)
 ```
 
 ### Interpretability during inference
````
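As background for `get_feature_importance_dict()` in the hunk above: global per-feature importances are commonly produced by aggregating local (per-sample) importances, typically by mean absolute value. A small sketch of that aggregation with invented numbers (this is the general technique, not the SDK's exact code):

```python
# Sketch: aggregate local (per-sample) importances into a global
# importance dict by mean absolute value. Numbers are made up.
feature_names = ["age", "income", "city"]
local_importances = [  # one row of signed importances per test sample
    [0.4, -0.1, 0.2],
    [0.6,  0.0, -0.2],
    [0.5,  0.1, 0.2],
]

global_importance = {
    name: sum(abs(row[j]) for row in local_importances) / len(local_importances)
    for j, name in enumerate(feature_names)
}
print({k: round(v, 3) for k, v in global_importance.items()})
# {'age': 0.5, 'income': 0.067, 'city': 0.2}
```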
