articles/machine-learning/how-to-responsible-ai-vision-insights.md
The Responsible AI image dashboard provides several mature RAI tools in the areas of model performance, data exploration, and model interpretability. The dashboard supports holistic assessment and debugging of computer vision models, leading to informed mitigations for fairness issues and transparency across stakeholders to build trust.
This article describes the Responsible AI vision insights component and how to use it in a pipeline job to generate a Responsible AI image dashboard. The following sections provide specifications and requirements for the vision insights component and example code snippets in YAML and Python. To view the full code, see the [sample YAML and Python notebooks for Responsible AI](https://github.com/Azure/azureml-examples/tree/main/sdk/python/responsible-ai).
> [!IMPORTANT]
> The Responsible AI vision insights component is currently in public preview. This preview is provided without a service-level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- The dataset inputs must be in `mltable` format.
- The test dataset is restricted to 5,000 rows in the visualization UI, for performance reasons.
- Complex objects, such as lists of column names, must be supplied as single JSON-encoded strings to the RAI vision insights component.
- Hierarchical cohort naming, creating a new cohort from a subset of an existing cohort, and adding images to an existing cohort aren't supported.
- `Guided_gradcam` doesn't work with vision-transformer models.
<!-- - IOU threshold values can't be changed. The current default value is 50%. -->
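Because list-type inputs such as class names must arrive as a single JSON-encoded string, a convenient way to build that string in Python is `json.dumps`. This is a minimal sketch, not part of the component API; the variable names are illustrative:

```python
import json

# List-type inputs (for example, class names) must be passed to the
# RAI vision insights component as a single JSON-encoded string.
classes = ["cat", "dog"]
classes_arg = json.dumps(classes)

print(classes_arg)  # prints ["cat", "dog"] as a JSON string
```

The resulting string, `'["cat", "dog"]'`, matches the format used for the `classes` input in the YAML example later in this article.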
The RAI vision insights component also accepts the following optional parameters:

| Parameter name | Description | Type |
|---|---|---|
|`use_model_dependency`| The Responsible AI environment doesn't include the model dependencies by default. When set to `True`, installs the model dependency packages. | Boolean |
|`use_conda`| If `True`, installs the model dependency packages by using `conda`; otherwise, uses `pip`. | Boolean |
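For example, to have the component install your model's dependency packages with `pip`, the optional parameters might be set in the component's inputs as follows. This is a hedged sketch; the surrounding pipeline and component definition are omitted:

```yml
# Hypothetical fragment of the RAI vision insights component inputs.
# use_model_dependency installs the model's dependency packages;
# use_conda: False means they're installed with pip instead of conda.
use_model_dependency: True
use_conda: False
```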
### Ports
The Responsible AI vision insights component has three major input ports:
The component assembles the generated insights into a single Responsible AI image dashboard, with two output ports:
- The `insights_pipeline_job.outputs.dashboard` port contains the completed `RAIVisionInsights` object.
- The `insights_pipeline_job.outputs.ux_json` port contains the data required to display a minimal dashboard.
### Pipeline job
To create the Responsible AI image dashboard, you can define the RAI components in a pipeline and submit the pipeline job.
# [YAML](#tab/yaml)
You can specify the pipeline in a YAML file, as in the following example.
```yml
analyse_model:
  type: command
    # ... (other inputs omitted in this excerpt)
    classes: '["cat", "dog"]'
    precompute_explanation: True
    enable_error_analysis: True
```
# [Python SDK](#tab/python)
And assemble the output:
---
### Submit the Responsible AI vision insights pipeline
You can submit the RAI vision insights pipeline through one of the following methods:
- **Azure CLI:** You can submit the pipeline by using the Azure CLI `az ml job create` command.
- **Python SDK:** To learn how to submit the pipeline through Python, see the [AutoML Image Classification scenario with RAI Dashboard sample notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/responsible-ai).
- **Azure Machine Learning studio UI:** You can use the RAI vision insights component to [create and submit a pipeline from the Designer in Azure Machine Learning studio](how-to-create-component-pipelines-ui.md).
After you specify and submit the pipeline and it executes, the dashboard should appear in the Machine Learning studio in the registered model view.
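For example, the Azure CLI submission might look like the following. The file name, resource group, and workspace name here are placeholders, not values from this article:

```azurecli
# Submit the pipeline job defined in a local YAML file.
# insights_pipeline_job.yaml, my-resource-group, and my-workspace are hypothetical names.
az ml job create --file insights_pipeline_job.yaml --resource-group my-resource-group --workspace-name my-workspace
```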
## Integration with AutoML image classification
Automated ML in Azure Machine Learning supports model training for computer vision tasks like image classification and object detection. AutoML models for computer vision are integrated with the RAI image dashboard for debugging AutoML vision models and explaining model predictions.
To generate Responsible AI insights for AutoML computer vision models, register your AutoML model in the Azure Machine Learning workspace and run it through the RAI vision insights pipeline.
For notebooks related to AutoML supported computer vision tasks, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs).
### AutoML-specific RAI vision insights parameters
| Parameter name | Description | Type | Enum values |
|---|---|---|---|
|`model_type`| Flavor of the model. Select `pyfunc` for AutoML models. | Enum |• `pyfunc` <br> • `fastai`|
|`dataset_type`| Whether the images in the dataset are read from publicly available URLs or are stored in the user's datastore. <br> For AutoML models, images are always read from the user's workspace datastore, so the dataset type for AutoML models is `private`. For the `private` dataset type, the images are downloaded on the compute before the explanations are generated. | Enum |• `public` <br> • `private`|
|`xai_algorithm`| Type of XAI algorithm supported for AutoML models. <br> Note: SHAP isn't supported for AutoML models. | Enum |• `guided_backprop` <br> • `guided_gradCAM` <br> • `integrated_gradients` <br> • `xrai`|
|`xrai_fast`| Whether to use the faster version of `xrai`. If `True`, computation time for explanations is faster, but the explanations or attributions are less accurate. | Boolean ||
|`approximation_method`| This parameter is specific to `integrated_gradients`. <br> The method for approximating the integral. | Enum |• `riemann_middle` <br> • `gausslegendre`|
|`n_steps`| This parameter is specific to `integrated_gradients` and `xrai`. <br> The number of steps used by the approximation method. A larger number of steps leads to better approximations of attributions or explanations. The range of `n_steps` is [2, inf], but the performance of attributions starts to converge after 50 steps. | Integer ||
|`confidence_score_threshold_multilabel`| This parameter is specific to multilabel classification. The confidence score threshold above which labels are selected for generating explanations. | Float ||
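Put together, the AutoML-specific parameters might be supplied to the RAI vision insights component as in the following hypothetical YAML fragment. The values mirror the table above; the surrounding pipeline definition is omitted:

```yml
# Hypothetical inputs fragment for an AutoML image classification model.
model_type: pyfunc        # AutoML models use the pyfunc flavor
dataset_type: private     # AutoML images are read from the workspace datastore
xai_algorithm: xrai
xrai_fast: True           # faster, but less accurate, explanations
```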