This article explains deep learning vs. machine learning and how they fit into the broader category of artificial intelligence. Learn about deep learning solutions you can build on Azure Machine Learning, such as fraud detection, voice and facial recognition, sentiment analysis, and time series forecasting.
-For guidance on choosing algorithms for your solutions, see the [Machine Learning Algorithm Cheat Sheet](./algorithm-cheat-sheet.md?WT.mc_id=docs-article-lazzeri).
+For guidance on choosing algorithms for your solutions, see the [Machine Learning Algorithm Cheat Sheet](./v1/algorithm-cheat-sheet.md?WT.mc_id=docs-article-lazzeri).
articles/machine-learning/how-to-auto-train-forecast.md (+2 -2)
@@ -186,7 +186,7 @@ forecasting_job.set_training(
To enable DNN for an AutoML experiment created in the Azure Machine Learning studio, see the [task type settings in the studio UI how-to](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
> [!NOTE]
-> * When you enable DNN for experiments created with the SDK, [best model explanations](how-to-machine-learning-interpretability-automl.md) are disabled.
+> * When you enable DNN for experiments created with the SDK, [best model explanations](./v1/how-to-machine-learning-interpretability-automl.md) are disabled.
> * DNN support for forecasting in automated ML isn't available for runs initiated in Databricks.
> * GPU compute types are recommended when DNN training is enabled.
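For orientation, the hunk header above references `forecasting_job.set_training(`; the following is a minimal sketch of what enabling DNN training looks like with the v2 Python SDK. The `my_training_data_input` object, the `"demand"` column name, and the metric value are illustrative assumptions, not code from this article:

```python
from azure.ai.ml import automl

# Sketch: create a forecasting job (inputs assumed to be defined as in the
# surrounding article) and opt in to DNN models such as TCNForecaster.
forecasting_job = automl.forecasting(
    training_data=my_training_data_input,  # assumed MLTable input
    target_column_name="demand",           # illustrative column name
    primary_metric="normalized_root_mean_squared_error",
    n_cross_validations=3,
)
forecasting_job.set_training(enable_dnn_training=True)  # GPU compute recommended
```

As the notes above say, turning this on for SDK experiments disables best model explanations.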
@@ -553,6 +553,6 @@ See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples
## Next steps
* Learn more about [How to deploy an AutoML model to an online endpoint](how-to-deploy-automl-endpoint.md).
-* Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md).
+* Learn about [Interpretability: model explanations in automated machine learning (preview)](./v1/how-to-machine-learning-interpretability-automl.md).
* Learn about [how AutoML builds forecasting models](./concept-automl-forecasting-methods.md).
* Learn how to [configure AutoML for various forecasting scenarios](./how-to-automl-forecasting-faq.md#what-modeling-configuration-should-i-use).
articles/machine-learning/how-to-configure-auto-train.md (+2 -2)
@@ -298,7 +298,7 @@ The following table shows the accepted settings for featurization.
|Featurization Configuration | Description |
| ------------- | ------------- |
-|`"mode": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
+|`"mode": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](./v1/how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
|`"mode": 'off'`| Indicates featurization step shouldn't be done automatically.|
303
303
|`"mode":` `'custom'`| Indicates customized featurization step should be used.|
304
304
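As a rough sketch of how these modes map to the v2 Python SDK: `classification_job` below is a hypothetical AutoML job object created elsewhere, and only the `mode` argument of `set_featurization` is shown:

```python
# Sketch: the table's three modes applied to a hypothetical AutoML job
# (azure-ai-ml), for example one created with automl.classification(...).
classification_job.set_featurization(mode="auto")      # default: guardrails + featurization
# classification_job.set_featurization(mode="off")     # skip automatic featurization
# classification_job.set_featurization(mode="custom")  # then supply custom transformer settings
```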
@@ -376,7 +376,7 @@ Automated ML offers options for you to monitor and evaluate your training result
* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
-* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency).
+* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](./v1/how-to-configure-auto-features.md#featurization-transparency).
From the model's page in the Azure Machine Learning UI, you can also view the hyperparameters used when training a particular model, and view and customize the model's training code.
articles/machine-learning/how-to-log-view-metrics.md (+1 -1)
@@ -37,7 +37,7 @@ Logs can help you diagnose errors and warnings, or track performance metrics lik
> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
> [!TIP]
-> For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](how-to-track-designer-experiments.md).
+> For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](./v1/how-to-track-designer-experiments.md).
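For context on the v2 logging path this tip sits in, here's a minimal MLflow sketch; the metric names and values are illustrative:

```python
import mlflow

# Inside an Azure Machine Learning job, metrics logged through MLflow show up
# in the studio; locally you'd first point MLflow at the workspace tracking URI.
with mlflow.start_run():
    mlflow.log_metric("train_loss", 0.23)                   # a single scalar
    for epoch, acc in enumerate([0.71, 0.78, 0.83]):
        mlflow.log_metric("val_accuracy", acc, step=epoch)  # a per-epoch series
```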
articles/machine-learning/how-to-machine-learning-interpretability.md (+8 -8)
@@ -22,25 +22,25 @@ This article describes methods you can use for model interpretability in Azure M
## Why model interpretability is important to model debugging
-When you're using machine learning models in ways that affect people’s lives, it's critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as:
+When you're using machine learning models in ways that affect people's lives, it's critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as:
* Model debugging: Why did my model make this mistake? How can I improve my model?
-* Human-AI collaboration: How can I understand and trust the model’s decisions?
+* Human-AI collaboration: How can I understand and trust the model's decisions?
* Regulatory compliance: Does my model satisfy legal requirements?
-The interpretability component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the “diagnose” stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a machine learning model. It provides multiple views into a model’s behavior:
+The interpretability component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the "diagnose" stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a machine learning model. It provides multiple views into a model's behavior:
* Global explanations: For example, what features affect the overall behavior of a loan allocation model?
-* Local explanations: For example, why was a customer’s loan application approved or rejected?
+* Local explanations: For example, why was a customer's loan application approved or rejected?
You can also observe model explanations for a selected cohort as a subgroup of data points. This approach is valuable when, for example, you're assessing fairness in model predictions for individuals in a particular demographic group. The **Local explanation** tab of this component also represents a full data visualization, which is great for general eyeballing of the data and looking at differences between correct and incorrect predictions of each cohort.
The capabilities of this component are founded on the [InterpretML](https://interpret.ml/) package, which generates model explanations.
Use interpretability when you need to:
-* Determine how trustworthy your AI system’s predictions are by understanding what features are most important for the predictions.
+* Determine how trustworthy your AI system's predictions are by understanding what features are most important for the predictions.
* Approach the debugging of your model by understanding it first and identifying whether the model is using healthy features or merely false correlations.
* Uncover potential sources of unfairness by understanding whether the model is basing predictions on sensitive features or on features that are highly correlated with them.
-* Build user trust in your model’s decisions by generating local explanations to illustrate their outcomes.
+* Build user trust in your model's decisions by generating local explanations to illustrate their outcomes.
* Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
## How to interpret your model
@@ -89,7 +89,7 @@ Interpret-Community serves as the host for the following supported explainers, a
| Guided GradCAM | Guided GradCAM is a popular explanation method for deep neural networks that provides insights into the learned representations of the model. It generates a visualization of the input features that contribute most to a particular output class, by combining the gradient-based approach of guided backpropagation with the localization approach of GradCAM. Specifically, it computes the gradients of the output class with respect to the feature maps of the last convolutional layer in the network, and then weights each feature map according to the importance of its activation for that class. This produces a high-resolution heatmap that highlights the most discriminative regions of the input image for the given output class. Guided GradCAM can be used to explain a wide range of deep learning models, including CNNs, RNNs, and transformers. Additionally, by incorporating guided backpropagation, it ensures that the visualization is meaningful and interpretable, avoiding spurious activations and negative contributions. | AutoML | Image Multi-class Classification, Image Multi-label Classification |
| Integrated Gradients | Integrated Gradients is a popular explanation method for deep neural networks that provides insights into the contribution of each input feature to a given prediction. It computes the integral of the gradient of the output class with respect to the input image, along a straight path between a baseline image and the actual input image. This path is typically chosen to be a linear interpolation between the two images, with the baseline being a neutral image that has no salient features. By integrating the gradient along this path, Integrated Gradients provides a measure of how each input feature contributes to the prediction, allowing for an attribution map to be generated. This map highlights the most influential input features, and can be used to gain insights into the model's decision-making process. Integrated Gradients can be used to explain a wide range of deep learning models, including CNNs, RNNs, and transformers. Additionally, it's a theoretically grounded technique that satisfies a set of desirable properties, such as sensitivity, implementation invariance, and completeness. (See the formula after this table.) | AutoML | Image Multi-class Classification, Image Multi-label Classification |
| XRAI |[XRAI](https://arxiv.org/pdf/1906.02825.pdf) is a novel region-based saliency method based on Integrated Gradients (IG). It over-segments the image and iteratively tests the importance of each region, coalescing smaller regions into larger segments based on attribution scores. This strategy yields high quality, tightly bounded saliency regions that outperform existing saliency techniques. XRAI can be used with any DNN-based model as long as there's a way to cluster the input features into segments through some similarity metric. | AutoML | Image Multi-class Classification, Image Multi-label Classification |
-| D-RISE | D-RISE is a model agnostic method for creating visual explanations for the predictions of object detection models. By accounting for both the localization and categorization aspects of object detection, D-RISE can produce saliency maps that highlight parts of an image that most contribute to the prediction of the detector. Unlike gradient-based methods, D-RISE is more general and doesn't need access to the inner workings of the object detector; it only requires access to the inputs and outputs of the model. The method can be applied to one-stage detectors (for example, YOLOv3), two-stage detectors (for example, Faster-RCNN), and Vision Transformers (for example, DETR, OWL-ViT). <br> D-Rise provides the saliency map by creating random masks of the input image and will send it to the object detector with the random masks of the input image. By assessing the change of the object detector’s score, it aggregates all the detections with each mask and produce a final saliency map. | Model Agnostic | Object Detection |
+| D-RISE | D-RISE is a model agnostic method for creating visual explanations for the predictions of object detection models. By accounting for both the localization and categorization aspects of object detection, D-RISE can produce saliency maps that highlight parts of an image that most contribute to the prediction of the detector. Unlike gradient-based methods, D-RISE is more general and doesn't need access to the inner workings of the object detector; it only requires access to the inputs and outputs of the model. The method can be applied to one-stage detectors (for example, YOLOv3), two-stage detectors (for example, Faster-RCNN), and Vision Transformers (for example, DETR, OWL-ViT). <br> D-RISE produces the saliency map by creating random masks of the input image and sending them to the object detector. By assessing the change in the object detector's score, it aggregates the detections across masks and produces a final saliency map. | Model Agnostic | Object Detection |
### Supported in Python SDK v1
@@ -143,4 +143,4 @@ You can run the explanation remotely on Azure Machine Learning Compute and log t
* Learn how to generate the Responsible AI dashboard via [CLI v2 and SDK v2](how-to-responsible-ai-dashboard-sdk-cli.md) or the [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
* Explore the [supported interpretability visualizations](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) of the Responsible AI dashboard.
* Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
-* Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).
+* Learn how to enable [interpretability for automated machine learning models](./v1/how-to-machine-learning-interpretability-automl.md).
articles/machine-learning/how-to-understand-automated-ml.md (+1 -1)
@@ -313,7 +313,7 @@ The mAP, precision and recall values are logged at an epoch-level for image obje
While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model explanations dashboard to measure and report the relative contributions of dataset features. See how to [view the explanations dashboard in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#model-explanations-preview).
-For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK](how-to-machine-learning-interpretability-automl.md).
+For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK](./v1/how-to-machine-learning-interpretability-automl.md).
> [!NOTE]
> Interpretability (best model explanation) is not available for automated ML forecasting experiments that recommend the following algorithms as the best model or ensemble:
articles/machine-learning/how-to-use-automated-ml-for-ml-models.md (+2 -2)
@@ -130,7 +130,7 @@ Otherwise, you'll see a list of your recent automated ML experiments, including
Additional configurations|Description
------|------
Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
-Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
+Explain best model | Select to enable or disable explanations for the recommended best model. <br> This functionality isn't currently available for [certain forecasting algorithms](./v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.
Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
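These studio settings map roughly onto the v2 SDK's limit settings; a sketch under that assumption, with `classification_job` again a hypothetical AutoML job and all values illustrative:

```python
# Sketch: the studio's exit criterion and concurrency options expressed
# as SDK limits on a hypothetical AutoML job (azure-ai-ml).
classification_job.set_limits(
    timeout_minutes=120,       # "Training job time (hours)" in the studio
    exit_score=0.92,           # "Metric score threshold" (illustrative value)
    max_trials=40,             # cap on pipelines (iterations) to test
    max_concurrent_trials=4,   # "Max concurrent iterations"
)
```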
@@ -266,7 +266,7 @@ After your experiment completes, you can test the model(s) that automated ML gen
To better understand your model, you can see which data features (raw or engineered) influenced the model's predictions with the model explanations dashboard.
-The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importance. [Learn more about the explanation dashboard visualizations](how-to-machine-learning-interpretability-aml.md#visualizations).
+The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importance. [Learn more about the explanation dashboard visualizations](./v1/how-to-machine-learning-interpretability-aml.md#visualizations).
Azure Machine Learning can also log information from other sources during training, such as automated machine learning runs, or Docker containers that run the jobs. These logs aren't documented, but if you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
-For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](../how-to-track-designer-experiments.md)
+For information on logging metrics in Azure Machine Learning designer, see [How to log metrics in the designer](how-to-track-designer-experiments.md)