Commit a01e904

touchups
1 parent 3a86227 commit a01e904

File tree

1 file changed: +12 −16 lines changed

articles/machine-learning/how-to-responsible-ai-vision-insights.md

Lines changed: 12 additions & 16 deletions
@@ -28,7 +28,7 @@ This article describes the Responsible AI vision insights component and how to u
 
 ## Responsible AI vision insights component
 
-The core component for constructing the Responsible AI image dashboard in Azure Machine Learning is the **RAI Vision Insights component**, which differs from how to construct the [Responsible AI dashboard for tabular data](how-to-responsible-ai-insights-sdk-cli.md#responsible-ai-components).
+The core component for constructing the Responsible AI image dashboard in Azure Machine Learning is the **RAI vision insights component**, which differs from how to construct the [Responsible AI dashboard for tabular data](how-to-responsible-ai-insights-sdk-cli.md#responsible-ai-components).
 
 ### Requirements and limitations
 

@@ -48,9 +48,9 @@ The Responsible AI vision insights component supports the following scenarios th
 
 | Name | Description | Parameter name in RAI Vision Insights component |
 |-----------------------------------------------|---------------------------------------------|------------------------------------------------------------|
-| Image Classification (Binary and Multiclass) | Predict a single class for the given image. | `task_type="image_classification"` |
-| Image Multilabel Classification | Predict multiple labels for the given image. | `task_type="multilabel_image_classification"` |
-| Object Detection | Locate and identify the classes of multiple objects for a given image, and define objects with a bounding box. |`task_type="object_detection"` |
+| Image classification (binary and multiclass) | Predict a single class for the given image. | `task_type="image_classification"` |
+| Image multilabel classification | Predict multiple labels for the given image. | `task_type="multilabel_image_classification"` |
+| Object detection | Locate and identify classes of multiple objects for a given image, and define objects with a bounding box. |`task_type="object_detection"` |
 
 The RAI vision insights component also accepts the following optional parameters:
 

@@ -72,9 +72,9 @@ The Responsible AI vision insights component has three major input ports:
 - The training dataset
 - The test dataset
 
-Register your input model in Azure Machine Learning and reference the same model in the `model_input` port of the Responsible AI vision insights component. To generate model-debugging insights like model performance, data explorer, model interpretability tools, and visualizations in your RAI dashboard, use the image dataset that you used to train your model.
+To start, register your input model in Azure Machine Learning and reference the same model in the `model_input` port of the Responsible AI vision insights component.
 
-The training and test datasets should be in `mltable` format. The two datasets don't have to be, but can be the same dataset.
+To generate RAI image dashboard model-debugging insights like model performance, data explorer, and model interpretability, and to populate visualizations, use the same training and test datasets that you used for training your model. The datasets should be in `mltable` format; they can be, but don't have to be, the same dataset.
 
 The following example shows the dataset schema for the object detection task type:
 

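The hunk above requires the training and test datasets to be in `mltable` format. As a hedged sketch only (the file name, annotation path, and JSON Lines layout are illustrative assumptions, not part of this commit), an `MLTable` definition for an image dataset might look like this:

```yaml
# Hypothetical MLTable file for an image dataset; paths are placeholders.
# It points Azure ML at a JSON Lines annotation file and describes how to read it.
paths:
  - file: ./train_annotations.jsonl
transformations:
  - read_json_lines:
      encoding: utf8
      invalid_lines: error
      include_path_column: false
```

Both the training and test inputs of the component would then reference folders containing an `MLTable` file of this shape.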
@@ -106,7 +106,6 @@ The component assembles the generated insights into a single Responsible AI imag
 ### Pipeline job
 
 To create the Responsible AI image dashboard, you can define the RAI components in a pipeline and submit the pipeline job.
-.
 
 # [YAML](#tab/yaml)
 
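The hunk above ends at the `# [YAML](#tab/yaml)` tab that introduces the pipeline definition. As a hedged sketch (the component reference, job name, port names other than `model_input` and `task_type`, and asset paths are assumptions, not taken from this diff), the RAI vision insights step might be wired into a pipeline job roughly like this:

```yaml
# Hypothetical pipeline job fragment; names, versions, and paths are placeholders.
jobs:
  rai_vision_insights_job:
    type: command
    component: azureml://registries/azureml/components/rai_vision_insights/versions/<version>
    inputs:
      task_type: image_classification     # or multilabel_image_classification / object_detection
      model_input:
        type: mlflow_model
        path: azureml:my_image_model:1    # the registered input model
      train_dataset:
        type: mltable
        path: azureml:my_train_mltable:1
      test_dataset:
        type: mltable
        path: azureml:my_test_mltable:1
```

After submitting a pipeline containing this step, the generated dashboard is assembled from the component's insights, as described earlier in the article.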

@@ -181,28 +180,25 @@ After you specify and submit the pipeline and it executes, the dashboard should
 
 Automated ML in Azure Machine Learning supports model training for computer vision tasks like image classification and object detection. AutoML models for computer vision are integrated with the RAI image dashboard for debugging AutoML vision models and explaining model predictions.
 
-To generate Responsible AI insights for AutoML computer vision models, register your best AutoML model in the Azure Machine Learning workspace and run it through the Responsible AI vision insights pipeline. For more information, see:
-
-- [AutoML Image Classification](component-reference-v2/image-classification.md)
-- [Set up AutoML to train computer vision models](how-to-auto-train-image-models.md)
+To generate Responsible AI insights for AutoML computer vision models, register your best AutoML model in the Azure Machine Learning workspace and run it through the Responsible AI vision insights pipeline. For more information, see [Set up AutoML to train computer vision models](how-to-auto-train-image-models.md).
 
 For notebooks related to AutoML supported computer vision tasks, see [azureml-examples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs).
 
 ### AutoML-specific RAI vision insights parameters
 <a name="responsible-ai-vision-insights-component-parameter-automl-specific"></a>
 
-In addition to the parameters in the preceding section, the following RAI vision component parameters apply specifically to AutoML models.
+In addition to the parameters in the preceding section, AutoML models can use the following AutoML-specific RAI vision component parameters.
 
 > [!NOTE]
 > A few parameters are specific to the Explainable AI (XAI) algorithm chosen and are optional for other algorithms.
 
 | Parameter name | Description | Type | Values |
 |----------------|-------------------------------------------------------|----------------------------------|--------|
-| `model_type` | Flavor of the model. Select `pyfunc` for AutoML models. | Enum | `Pyfunc` <br> `fastai` |
-| `dataset_type` | Whether the images in the dataset are read from publicly available URLs or are stored in the user's datastore. <br> For AutoML models, images are always read from the user's workspace datastore, so the dataset type for AutoML models is `private`. For `private` dataset type, you download the images on the compute before generating the explanations. | Enum | `public` <br> `private` |
-| `xai_algorithm` | Type of XAI algorithm supported for AutoML models. <br> Note: SHAP isn't supported for AutoML models. | Enum | `guided_backprop` <br> `guided_gradCAM` <br> `integrated_gradients` <br> `xrai` |
+| `model_type` | Flavor of the model. Select `pyfunc` for AutoML models. | Enum | `pyfunc` <br> `fastai` |
+| `dataset_type` | Whether the images in the dataset are read from publicly available URLs or are stored in the user's datastore. <br> For AutoML models, images are always read from the user's workspace datastore, so the dataset type for AutoML models is `private`. For `private` dataset type, you download the images on the compute before generating the explanations. | Enum | `public` <br> `private` |
+| `xai_algorithm` | Type of XAI algorithm supported for AutoML models. <br> Note: SHAP isn't supported for AutoML models. | Enum | `guided_backprop` <br> `guided_gradCAM` <br> `integrated_gradients` <br> `xrai` |
 | `xrai_fast` | Whether to use the faster version of `xrai`. If `True`, computation time for explanations is faster but leads to less accurate explanations or attributions. | Boolean | |
-| `approximation_method` | This parameter is specific to `integrated_gradients`. <br> Method for approximating the integral. | Enum | `riemann_middle` <br> `gausslegendre` |
+| `approximation_method` | This parameter is specific to `integrated_gradients`. <br> Method for approximating the integral. | Enum | `riemann_middle` <br> `gausslegendre` |
 | `n_steps` | This parameter is specific to `integrated_gradients` and `xrai`. <br> The number of steps used by the approximation method. A larger number of steps leads to better approximations of attributions or explanations. The range of `n_steps` is [2, inf), but the performance of attributions starts to converge after 50 steps. | Integer | |
 | `confidence_score_threshold_multilabel` | This parameter is specific to multilabel classification. The confidence score threshold above which labels are selected for generating explanations. | Float | |
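To make the AutoML-specific parameters in the table above concrete, here is a hedged sketch of how they might be set as component inputs; only the parameter names come from the table, and the values are chosen purely for illustration:

```yaml
# Hypothetical input block illustrating AutoML-specific parameter values.
inputs:
  model_type: pyfunc      # required flavor for AutoML models
  dataset_type: private   # AutoML images are always read from the workspace datastore
  xai_algorithm: xrai
  xrai_fast: True         # faster, but less accurate attributions
  n_steps: 50             # used by xrai and integrated_gradients; quality converges near 50
```

Because some parameters are specific to one XAI algorithm (for example, `approximation_method` applies only to `integrated_gradients`), omit any that don't match the chosen `xai_algorithm`.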

0 commit comments