articles/azure-monitor/platform/itsmc-connections.md (+2 −1)
@@ -180,11 +180,12 @@ The following sections provide details about how to connect your ServiceNow product
 ### Prerequisites
 Ensure the following prerequisites are met:
 - ITSMC installed. More information: [Adding the IT Service Management Connector Solution](../../azure-monitor/platform/itsmc-overview.md#adding-the-it-service-management-connector-solution).
 - ServiceNow supported versions: London, Kingston, Jakarta, Istanbul, Helsinki, Geneva.
 
 **ServiceNow Admins must do the following in their ServiceNow instance**:
 
 - Generate client ID and client secret for the ServiceNow product. For information on how to generate client ID and secret, see the following information as required:
+- [Set up OAuth for Madrid](https://docs.servicenow.com/bundle/madrid-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
 - [Set up OAuth for London](https://docs.servicenow.com/bundle/london-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
 - [Set up OAuth for Kingston](https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
 - [Set up OAuth for Jakarta](https://docs.servicenow.com/bundle/jakarta-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
articles/machine-learning/service/how-to-configure-auto-train.md

+For more information on how model explanations and feature importance can be enabled in other areas of the SDK outside of automated machine learning, see the [concept](machine-learning-interpretability-explainability.md) article on interpretability.
+
 ## Next steps
 
 Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
articles/machine-learning/service/machine-learning-interpretability-explainability.md (+58 −30)
@@ -10,38 +10,42 @@ ms.topic: conceptual
 ms.author: mesameki
 author: mesameki
 ms.reviewer: larryfr
-ms.date: 04/29/2019
+ms.date: 05/30/2019
 ---
 
 # Model interpretability with Azure Machine Learning service
 
-In this article, you will learn how to explain why your model made the predictions it did with the interpretability package of the Azure Machine Learning Python SDK.
+In this article, you learn how to explain why your model made the predictions it did with the various interpretability packages of the Azure Machine Learning Python SDK.
 
-Using the classes and methods in this package, you can get:
-+ Interpretability on real-world datasets at scale, during training and inference.
+Using the classes and methods in the SDK, you can get:
++ Feature importance values for both raw and engineered features
++ Interpretability on real-world datasets at scale, during training and inference.
 + Interactive visualizations to aid you in the discovery of patterns in data and explanations at training time
-+ Feature importance values: both raw and engineered features
 
-During the training phase of the development cycle, model designers and evaluators can use to explain the output of a model to stakeholders to build trust. They also use the insights into the model for debugging, validating model behavior matches their objectives, and to check for bias.
+During the training phase of the development cycle, model designers and evaluators can use the interpretability output of a model to verify hypotheses and build trust with stakeholders. They also use the insights into the model for debugging, validating that model behavior matches their objectives, and checking for bias.
 
-Inference, or model scoring, is the phase where the deployed model is used for prediction, most commonly on production data. During this phase, data scientists can explain the resulting predictions to the people who use your model. For example, why did the model deny a mortgage loan, or predict that an investment portfolio carries a higher risk?
+In machine learning, **features** are the data fields used to predict a target data point. For example, to predict credit risk, data fields for age, account size, and account age might be used. In this case, age, account size, and account age are **features**. Feature importance tells you how each data field affected the model's predictions. For example, age may be heavily used in the prediction while account size and account age don't affect the prediction accuracy significantly. This process allows data scientists to explain resulting predictions, so that stakeholders have visibility into which data points are most important in the model.
 
-Using these offering, you can explain machine learning models **globally on all data**, or **locally on a specific data point** using the state-of-art technologies in an easy-to-use and scalable fashion.
+Using these tools, you can explain machine learning models **globally on all data**, or **locally on specific data points**, using state-of-the-art technologies in an easy-to-use and scalable fashion.
 
-The interpretability classes are made available through two Python packages. Learn how to [install SDK packages for Azure Machine Learning](https://docs.microsoft.com/python/api/overview/azure/ml/install?view=azure-ml-py).
+The interpretability classes are made available through multiple SDK packages. Learn how to [install SDK packages for Azure Machine Learning](https://docs.microsoft.com/python/api/overview/azure/ml/install?view=azure-ml-py).
 
 * [`azureml.explain.model`](https://docs.microsoft.com/python/api/azureml-explain-model/?view=azure-ml-py), the main package, containing functionalities supported by Microsoft.
 
 * `azureml.contrib.explain.model`, preview and experimental functionalities that you can try.
 
+* `azureml.train.automl.automlexplainer`, the package for interpreting automated machine learning models.
+
 > [!IMPORTANT]
-> Things in contrib are not fully supported. As the experimental functionalities become mature, they will gradually be moved to the main package.
+> Content in the `contrib` namespace is not fully supported. As the experimental functionalities become mature, they will gradually be moved to the main namespace.
 
 ## How to interpret your model
 
 You can apply the interpretability classes and methods to understand the model's global behavior or specific predictions. The former is called global explanation and the latter is called local explanation.
 
 The methods can also be categorized based on whether the method is model agnostic or model specific. Some methods target certain types of models. For example, SHAP's tree explainer only applies to tree-based models. Some methods treat the model as a black box, such as the mimic explainer or SHAP's kernel explainer. The `explain` package leverages these different approaches based on data sets, model types, and use cases.
 
 The output is a set of information on how a given model makes its prediction, such as:
 * Global/local relative feature importance
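
For reference, a minimal sketch of global versus local explanation with `TabularExplainer` from `azureml.explain.model`; the variables `model`, `x_train`, `x_test`, `feature_names`, and `classes` are assumed to come from your own training code, and method names follow this SDK release:

```python
from azureml.explain.model.tabular_explainer import TabularExplainer

# build an explainer from the trained model and the training data it saw
explainer = TabularExplainer(model, x_train, features=feature_names, classes=classes)

# global explanation: relative feature importance across the whole evaluation set
global_explanation = explainer.explain_global(x_test)
print(global_explanation.get_feature_importance_dict())

# local explanation: feature importance for a single prediction
local_explanation = explainer.explain_local(x_test[0:1])
print(local_explanation.local_importance_values)
```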
@@ -99,7 +103,7 @@ The `explain` package is designed to work with both local and remote compute targets
 
 ### Train and explain locally
 
 1. Train your model in a local Jupyter notebook.
 
    ```python
    # load breast cancer dataset, a well-known small dataset that comes with scikit-learn
    ```
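
The cell above is cut off in this view. A minimal sketch of how such a local training cell might continue, using plain scikit-learn (the dataset matches the comment; the estimator choice is illustrative, not necessarily the article's exact code):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# load breast cancer dataset, a well-known small dataset that comes with scikit-learn
data = load_breast_cancer()
classes = data.target_names.tolist()

# split data into train and test sets
x_train, x_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# train a simple classifier to be explained later
model = SVC(gamma=0.001, C=100.0, probability=True).fit(x_train, y_train)
```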
@@ -172,14 +176,14 @@ While you can train on the various compute targets supported by Azure Machine Learning
 
 1. Create a training script in a local Jupyter notebook (for example, run_explainer.py).
@@ -197,7 +201,7 @@ While you can train on the various compute targets supported by Azure Machine Learning
 
 2. Follow the instructions on [Set up compute targets for model training](how-to-set-up-training-targets.md#amlcompute) to learn about how to set up an Azure Machine Learning Compute as your compute target and submit your training run.
 
 3. Download the explanation in your local Jupyter notebook.
 
    ```python
    from azureml.contrib.explain.model.explanation.explanation_client import ExplanationClient
    ```
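
This download cell is also truncated here. A hedged sketch of retrieving a remotely computed explanation with `ExplanationClient`, assuming `run` is the submitted run object and the training script uploaded a global explanation (attribute names follow the preview API and may differ):

```python
from azureml.contrib.explain.model.explanation.explanation_client import ExplanationClient

# connect the client to the completed remote run
client = ExplanationClient.from_run(run)

# download the explanation that the training script uploaded
explanation = client.download_model_explanation()

# inspect global feature importance computed on the remote compute target
print(explanation.get_feature_importance_dict())
```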
 Optionally, you can pass your feature transformation pipeline to the explainer to receive explanations in terms of the raw features before the transformation (rather than engineered features). If you skip this, the explainer provides explanations in terms of engineered features.
 
 The format of supported transformations is the same as the one described in [sklearn-pandas](https://github.com/scikit-learn-contrib/sklearn-pandas). In general, any transformations are supported as long as they operate on a single column and are therefore clearly one to many.
 
@@ -299,7 +303,7 @@ The explainer can be deployed along with the original model and can be used at scoring time
    ```
 
 1. Create a scoring explainer using the explanation object:
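
The code for this step is not captured in the hunk. As a rough sketch only, the preview `contrib` scoring API at this time offered scoring explainers along the following lines (class and helper names are assumptions based on the preview namespace and may have changed):

```python
# preview API: names and signatures below are assumptions for this release
from azureml.contrib.explain.model.scoring.scoring_explainer import KernelScoringExplainer, save

# wrap the trained explainer into a lightweight object optimized for scoring time
scoring_explainer = KernelScoringExplainer(explainer, initialization_examples=x_train)

# persist it to disk so it can be registered and deployed alongside the model
save(scoring_explainer, exist_ok=True)
```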
@@ -428,7 +432,7 @@ The explainer can be deployed along with the original model and can be used at scoring time
                                    image_config=image_config)
 
    service.wait_for_deployment(show_output=True)
    ```
 
 1. Test the deployment
 
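
For the test step, a minimal hedged example of calling the deployed web service; the exact request payload depends on your scoring script, so the JSON shape below is an assumption:

```python
import json

# score one row; the service returns predictions plus the local explanation values
input_payload = json.dumps({"data": x_test[0:1].tolist()})  # payload shape is an assumption
output = service.run(input_payload)
print(output)
```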
@@ -452,6 +456,30 @@ The explainer can be deployed along with the original model and can be used at scoring time
 
 1. Clean up: To delete a deployed web service, use `service.delete()`.
 
+## Interpretability in automated ML
+
+Automated machine learning contains packages for interpreting feature importance in auto-trained models. Additionally, classification scenarios allow you to retrieve class-level feature importance. There are two methods to enable this behavior within automated machine learning:
+
+* To enable feature importance for a trained ensemble model, use the [`explain_model()`](https://docs.microsoft.com/en-us/python/api/azureml-train-automl/azureml.train.automl.automlexplainer?view=azure-ml-py) function, as shown in the sketch below.
+
+    ```python
+    from azureml.train.automl.automlexplainer import explain_model
+    ```
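
The block above is truncated in this view. A hedged sketch of a full call, assuming the `explain_model()` return tuple of this SDK release; `fitted_model`, `X_train`, and `X_test` come from your own AutoML run and data preparation:

```python
from azureml.train.automl.automlexplainer import explain_model

# assumed return tuple for this release: SHAP values plus summarized importance,
# overall and per class (classification only)
shap_values, expected_values, overall_summary, overall_imp, \
    per_class_summary, per_class_imp = explain_model(fitted_model, X_train, X_test)

# overall feature importance values and the matching feature names
print(overall_imp)
print(overall_summary)
```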
+* To enable feature importance for each individual run prior to training, set the `model_explainability` parameter to `True` in the `AutoMLConfig` object, along with providing validation data. Then use the [`retrieve_model_explanation()`](https://docs.microsoft.com/en-us/python/api/azureml-train-automl/azureml.train.automl.automlexplainer?view=azure-ml-py) function, as in the sketch below.
+
+    ```python
+    from azureml.train.automl.automlexplainer import retrieve_model_explanation
+    ```
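
Again truncated in this view; a hedged completion, assuming the same return tuple as `explain_model()` and that `best_run` was obtained from the completed AutoML experiment (for example via `run.get_output()`):

```python
from azureml.train.automl.automlexplainer import retrieve_model_explanation

# retrieve the explanation computed for the run (model_explainability=True was set)
shap_values, expected_values, overall_summary, overall_imp, \
    per_class_summary, per_class_imp = retrieve_model_explanation(best_run)

print(overall_imp)
print(overall_summary)
```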
+For more information, see the [how-to](how-to-configure-auto-train.md#explain-the-model-interpretability) on enabling interpretability features in automated machine learning.
+
 ## Next steps
 
 To see a collection of Jupyter notebooks that demonstrate the instructions above, see the [Azure Machine Learning Interpretability sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model).