`articles/machine-learning/how-to-responsible-ai-vision-insights.md`
This article describes the Responsible AI vision insights component and how to use it.
## Responsible AI vision insights component
The core component for constructing the Responsible AI image dashboard in Azure Machine Learning is the **RAI vision insights component**, which differs from how you construct the [Responsible AI dashboard for tabular data](how-to-responsible-ai-insights-sdk-cli.md#responsible-ai-components).
### Requirements and limitations
The Responsible AI vision insights component supports the following scenarios through the `task_type` parameter:
| Name | Description | Parameter name in RAI vision insights component |
|---|---|---|
| Image classification (binary and multiclass) | Predict a single class for the given image. | `task_type="image_classification"` |
| Image multilabel classification | Predict multiple labels for the given image. | `task_type="multilabel_image_classification"` |
| Object detection | Locate and identify classes of multiple objects for a given image, and define objects with a bounding box. | `task_type="object_detection"` |
The RAI vision insights component also accepts the following optional parameters:
The Responsible AI vision insights component has three major input ports:
- The machine learning model
- The training dataset
- The test dataset
To start, register your input model in Azure Machine Learning and reference the same model in the `model_input` port of the Responsible AI vision insights component.
To generate model-debugging insights like model performance, data explorer, and model interpretability, and to populate the dashboard visualizations, use the same training and test datasets that you used to train your model. Both datasets must be in `mltable` format. The training and test datasets can be the same dataset, but they don't have to be.
The following example shows the dataset schema for the object detection task type:
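The schema example itself isn't captured in this excerpt. As a hedged illustration only, one JSON Lines record for an object detection dataset might look like the following sketch. The field names (`image_url`, `label`, `topX`, `topY`, `bottomX`, `bottomY`, `isCrowd`) follow the AutoML-for-images labeling convention, and the datastore path is hypothetical; verify both against your own dataset.

```python
import json

# Hypothetical JSON Lines record for an object detection dataset.
# Bounding-box coordinates are normalized to [0, 1]; field names follow
# the AutoML-for-images convention and should be verified for your data.
record = {
    "image_url": "azureml://datastores/workspaceblobstore/paths/images/01.jpg",
    "label": [
        {
            "label": "can",
            "topX": 0.1,     # left edge
            "topY": 0.2,     # top edge
            "bottomX": 0.4,  # right edge
            "bottomY": 0.5,  # bottom edge
            "isCrowd": 0,
        }
    ],
}

# Each record occupies one line in the .jsonl file referenced by the MLTable.
line = json.dumps(record)
print(line)
```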
The component assembles the generated insights into a single Responsible AI image dashboard.
### Pipeline job
To create the Responsible AI image dashboard, you can define the RAI components in a pipeline and submit the pipeline job.
# [YAML](#tab/yaml)
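The YAML body of this tab isn't shown in the excerpt. As a rough sketch only, a pipeline job step that invokes the component might look like the following; the component reference, step name, input names, and asset paths here are assumptions to verify against the published RAI vision insights component specification:

```yaml
jobs:
  # Hypothetical step name and component reference; confirm both against
  # the RAI vision insights component spec available to your workspace.
  rai_vision_insights_job:
    type: command
    component: azureml://registries/azureml/components/rai_vision_insights/labels/latest
    inputs:
      task_type: image_classification
      model_input:
        type: mlflow_model
        path: azureml:my_image_model:1
      train_dataset:
        type: mltable
        path: azureml:my_train_mltable:1
      test_dataset:
        type: mltable
        path: azureml:my_test_mltable:1
      target_column_name: label
```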
After you specify and submit the pipeline and it executes, the dashboard should appear in your Azure Machine Learning workspace.
Automated ML in Azure Machine Learning supports model training for computer vision tasks like image classification and object detection. AutoML models for computer vision are integrated with the RAI image dashboard for debugging AutoML vision models and explaining model predictions.
To generate Responsible AI insights for AutoML computer vision models, register your best AutoML model in the Azure Machine Learning workspace and run it through the Responsible AI vision insights pipeline. For more information, see [Set up AutoML to train computer vision models](how-to-auto-train-image-models.md).
For notebooks related to AutoML-supported computer vision tasks, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs).
### AutoML-specific RAI vision insights parameters
| Parameter name | Description | Type | Values |
|---|---|---|---|
|`model_type`| Flavor of the model. Select `pyfunc` for AutoML models. | Enum | • `pyfunc` <br> • `fastai` |
|`dataset_type`| Whether the images in the dataset are read from publicly available URLs or are stored in the user's datastore. <br> For AutoML models, images are always read from the user's workspace datastore, so the dataset type for AutoML models is `private`. For the `private` dataset type, the images are downloaded on the compute before the explanations are generated. | Enum | • `public` <br> • `private` |
|`xai_algorithm`| Type of XAI algorithm supported for AutoML models. <br> Note: SHAP isn't supported for AutoML models. | Enum | • `guided_backprop` <br> • `guided_gradCAM` <br> • `integrated_gradients` <br> • `xrai` |
|`xrai_fast`| Whether to use the faster version of `xrai`. If `True`, explanations are computed faster, but the resulting explanations or attributions are less accurate. | Boolean ||
|`approximation_method`| This parameter is specific to `integrated_gradients`. The method for approximating the integral. | Enum | • `riemann_middle` <br> • `gausslegendre` |
|`n_steps`| This parameter is specific to `integrated_gradients` and `xrai`. The number of steps used by the approximation method. A larger number of steps leads to better approximations of attributions or explanations. The range of `n_steps` is [2, inf), but the quality of attributions starts to converge after about 50 steps. | Integer ||
|`confidence_score_threshold_multilabel`| This parameter is specific to multilabel classification. The confidence score threshold above which labels are selected for generating explanations. | Float ||
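To make the table concrete, the following sketch bundles one plausible combination of these AutoML-specific settings into a plain dictionary. The parameter names mirror the table above, but this is an illustrative assumption; confirm the exact names and accepted values against the component's signature before use.

```python
# One plausible combination of AutoML-specific RAI vision insights settings.
# Names mirror the parameter table; confirm against the component signature.
automl_rai_params = {
    "model_type": "pyfunc",     # AutoML models use the pyfunc flavor
    "dataset_type": "private",  # AutoML images are read from the workspace datastore
    "xai_algorithm": "xrai",
    "xrai_fast": True,          # faster, but less accurate attributions
    "n_steps": 50,              # attribution quality converges around 50 steps
}

# Guard against values the table says the component doesn't accept.
allowed_algorithms = {"guided_backprop", "guided_gradCAM", "integrated_gradients", "xrai"}
assert automl_rai_params["xai_algorithm"] in allowed_algorithms
assert automl_rai_params["n_steps"] >= 2  # n_steps range is [2, inf)
print(automl_rai_params["xai_algorithm"])
```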