
Commit 5eb17c2

minor updates
1 parent 1f86465 commit 5eb17c2

File tree

1 file changed: +0 -89 lines changed


articles/machine-learning/how-to-auto-train-image-models.md

Lines changed: 0 additions & 89 deletions
@@ -782,95 +782,6 @@ If you want to use tiling, and want to control tiling behavior, the following pa
### Test the deployment

See the [Test the deployment](./tutorial-auto-train-image-models.md#test-the-deployment) section to test the deployment and visualize the detections from the model.

## Generate explanations for predictions

> [!WARNING]
> **Model Explainability** is supported only for **multi-class classification** and **multi-label classification**.

Some of the advantages of using Explainable AI (XAI) with AutoML for images:

- Improves the transparency of complex vision model predictions
- Helps users understand the important features/pixels in the input image that contribute to the model predictions
- Helps in troubleshooting models
- Helps in discovering bias

### Explanations
Explanations are **feature attributions**, or weights, given to each pixel in the input image based on its contribution to the model's prediction. Each weight can be negative (negatively correlated with the prediction) or positive (positively correlated with the prediction). These attributions are calculated against the predicted class. For multi-class classification, exactly one attribution matrix of size `[3, valid_crop_size, valid_crop_size]` is generated per sample, whereas for multi-label classification, an attribution matrix of size `[3, valid_crop_size, valid_crop_size]` is generated for each predicted label/class of each sample.

Using Explainable AI in AutoML for Images on the deployed endpoint, users can get **visualizations** of explanations (attributions overlaid on an input image) and/or **attributions** (multi-dimensional arrays of size `[3, valid_crop_size, valid_crop_size]`) for each image. Apart from visualizations, users can also get attribution matrices to gain more control over the explanations (such as generating custom visualizations using attributions or scrutinizing segments of attributions). All the explanation algorithms use cropped square images of size `valid_crop_size` to generate attributions.

The following picture shows the visualization of explanations for a sample input image.

![Visualizations generated by XAI for AutoML for images.](./media/how-to-auto-train-image-models/$$$$$$$$$-1.png)

Explanations can be generated either from an online endpoint or a batch endpoint. Once the deployment is done, the endpoint can be used to generate explanations for predictions. For online deployments, make sure to pass the `request_settings = OnlineRequestSettings(request_timeout_ms=90000)` parameter to `ManagedOnlineDeployment` and set `request_timeout_ms` to its maximum value to avoid timeout issues while generating explanations (refer to the [register and deploy model section](#register-and-deploy-model)), as shown in the sketch below. Some of the explainability (XAI) methods like `xrai` consume more time (especially for multi-label classification, since attributions and/or visualizations need to be generated for each predicted label), so a GPU instance is recommended for faster explanations. For more details on the input and output schema for generating explanations, refer to the [schema docs](reference-automl-images-schema.md).
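As a minimal sketch of that deployment configuration (the deployment name, instance type, and model reference below are illustrative placeholders; use the values from your own [register and deploy model](#register-and-deploy-model) setup):

```python
from azure.ai.ml.entities import ManagedOnlineDeployment, OnlineRequestSettings

# Placeholder values; reuse the endpoint and registered model created in the
# register-and-deploy step earlier in this article.
deployment = ManagedOnlineDeployment(
    name="xai-deployment",  # hypothetical deployment name
    endpoint_name=online_endpoint_name,
    model=registered_model.id,  # the model registered earlier
    instance_type="Standard_NC6s_v3",  # a GPU SKU, recommended for faster explanations
    instance_count=1,
    # Raise the scoring timeout to its maximum so XAI requests don't time out.
    request_settings=OnlineRequestSettings(request_timeout_ms=90000),
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```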
We support the following state-of-the-art explainability algorithms in AutoML for images:

- [XRAI](https://arxiv.org/abs/1906.02825) (xrai)
- [Integrated Gradients](https://arxiv.org/pdf/1703.01365.pdf) (integrated_gradients)
- [Guided GradCAM](https://arxiv.org/pdf/1610.02391.pdf) (guided_gradcam)
- [Guided BackPropagation](https://arxiv.org/pdf/1412.6806.pdf) (guided_backprop)

The following table describes the algorithm-specific tuning parameters for XRAI and Integrated Gradients. Guided backpropagation and guided GradCAM don't require any tuning parameters.

| XAI algorithm | Algorithm-specific parameters | Default values |
|--------- |------------- | --------- |
| `xrai` | 1. `n_steps`: The number of steps used by the approximation method. A larger number of steps leads to better approximations of attributions (explanations). The range of `n_steps` is [1, inf), but the performance of attributions starts to converge after 50 steps. <br> `Optional, Int` <br><br> 2. `xrai_fast`: Whether to use the faster version of XRAI. If `True`, computation time for explanations is shorter, but the explanations (attributions) are less accurate. <br>`Optional, Bool` <br> | `n_steps = 50` <br> `xrai_fast = True` |
| `integrated_gradients` | 1. `n_steps`: The number of steps used by the approximation method. A larger number of steps leads to better attributions (explanations). <br> `Optional, Int` <br><br> 2. `approximation_method`: Method for approximating the integral.<br> `Optional, String` | `n_steps = 50` <br> `approximation_method = riemann_middle` |

Internally, the XRAI algorithm uses Integrated Gradients, so the `n_steps` parameter is required by both the Integrated Gradients and XRAI algorithms.

A sample request to the online endpoint looks like the following. This request generates explanations when `model_explainability` is set to `True`, producing visualizations and attributions with the faster version of the XRAI algorithm and 50 steps.
```python
import base64
import json


def read_image(image_path):
    # Read the raw bytes of the image file
    with open(image_path, "rb") as f:
        return f.read()


sample_image = "./test_image.jpg"

# Define explainability (XAI) parameters
model_explainability = True
xai_parameters = {
    "xai_algorithm": "xrai",
    "n_steps": 50,
    "xrai_fast": True,
    "visualizations": True,
    "attributions": True,
}

# Create the request JSON
request_json = {
    "input_data": {
        "columns": ["image_details"],
        "data": [
            [
                {
                    "image": base64.encodebytes(read_image(sample_image)).decode("utf-8"),
                    "model_explainability": model_explainability,
                    "xai_parameters": xai_parameters,
                }
            ]
        ],
    }
}

request_file_name = "sample_request_data.json"

with open(request_file_name, "w") as request_file:
    json.dump(request_json, request_file)

# ml_client, online_endpoint_name, and deployment come from the earlier
# register-and-deploy steps.
resp = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    deployment_name=deployment.name,
    request_file=request_file_name,
)
predictions = json.loads(resp)
```
For more details on generating explanations, refer to the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs).

### Interpreting visualizations

The deployed endpoint returns a base64-encoded image string if both `model_explainability` and `visualizations` are set to `True`. Decode the base64 string as described in the [notebooks](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs), or with a sketch like the one below.
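A minimal decoding sketch, assuming the base64 figure is returned under a `visualizations` key (confirm the exact response layout in the [schema docs](reference-automl-images-schema.md)):

```python
import base64
import io

from PIL import Image  # Pillow, assumed to be installed

# predictions comes from the invoke call above; the "visualizations" key name
# is an assumption here, so check the schema docs for the exact layout.
vis_b64 = predictions[0]["visualizations"]

figure = Image.open(io.BytesIO(base64.b64decode(vis_b64)))
figure.save("xai_visualization.jpg")
```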
The decoded base64 figure has four image sections:

- The image at the top-left corner (0, 0) is the cropped input image.
- The image at the top-right corner (0, 1) is the heatmap of attributions on the color scale bgyw (blue green yellow white), where the contribution of white pixels to the predicted class is the highest and that of blue pixels is the lowest.
- The image at the bottom-left corner (1, 0) is the blended heatmap of attributions on the cropped input image.
- The image at the bottom-right corner (1, 1) is the cropped input image with the top 30 percent of pixels based on attribution scores.

### Interpreting attributions

The deployed endpoint returns attributions if both `model_explainability` and `attributions` are set to `True`. For more details, refer to the [multi-class classification and multi-label classification notebooks](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs/automl-standalone-jobs).
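A minimal sketch of working with the raw attributions, again assuming an `attributions` key and a multi-class response (see the schema docs for the exact layout):

```python
import numpy as np

# predictions comes from the invoke call above. The "attributions" key name is
# an assumption; for multi-class classification the matrix has shape
# [3, valid_crop_size, valid_crop_size], while multi-label responses carry one
# such matrix per predicted label.
attributions = np.asarray(predictions[0]["attributions"])

# Collapse the three color channels into a single 2D heatmap of pixel weights.
heatmap = attributions.sum(axis=0)

# Positive weights support the predicted class; negative weights oppose it.
print("Strongest positive pixel:", np.unravel_index(heatmap.argmax(), heatmap.shape))
print("Strongest negative pixel:", np.unravel_index(heatmap.argmin(), heatmap.shape))
```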
## Example notebooks
Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/jobs/automl-standalone-jobs). Check the folders with the 'automl-image-' prefix for samples specific to building computer vision models.
