articles/machine-learning/concept-responsible-ai-dashboard.md
11 additions & 11 deletions
@@ -42,12 +42,12 @@ The Responsible AI dashboard is accompanied by a [PDF scorecard](how-to-responsi
The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools. The dashboard integrates these tools with [Azure Machine Learning CLI v2, Azure Machine Learning Python SDK v2](concept-v2.md), and [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md). The tools include:
- - [Data explorer](concept-data-analysis.md) to understand and explore your dataset distributions and statistics.
- - [Model overview and fairness assessment](concept-fairness-ml.md) to evaluate the performance of your model and evaluate your model's group fairness issues (how your model's predictions affect diverse groups of people).
- - [Error analysis](concept-error-analysis.md) to view and understand how errors are distributed in your dataset.
- - [Model interpretability](how-to-machine-learning-interpretability.md) (importance values for aggregate and individual features) to understand your model's predictions and how those overall and individual predictions are made.
- - [Counterfactual what-if](concept-counterfactual-analysis.md) to observe how feature perturbations would affect your model predictions while providing the closest data points with opposing or different model predictions.
- - [Causal analysis](concept-causal-inference.md) to use historical data to view the causal effects of treatment features on real-world outcomes.
+ - [Data explorer](concept-data-analysis.md), to understand and explore your dataset distributions and statistics.
+ - [Model overview and fairness assessment](concept-fairness-ml.md), to evaluate the performance of your model and evaluate your model's group fairness issues (how your model's predictions affect diverse groups of people).
+ - [Error analysis](concept-error-analysis.md), to view and understand how errors are distributed in your dataset.
+ - [Model interpretability](how-to-machine-learning-interpretability.md) (importance values for aggregate and individual features), to understand your model's predictions and how those overall and individual predictions are made.
+ - [Counterfactual what-if](concept-counterfactual-analysis.md), to observe how feature perturbations would affect your model predictions while providing the closest data points with opposing or different model predictions.
+ - [Causal analysis](concept-causal-inference.md), to use historical data to view the causal effects of treatment features on real-world outcomes.
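The Azure Machine Learning integration builds on the open-source Responsible AI Toolbox, so the components listed above can also be assembled locally. A minimal sketch, assuming the `responsibleai` and `raiwidgets` packages, a scikit-learn classifier, and pandas DataFrames that include the label column:

```python
# Minimal sketch; assumes the open-source responsibleai and raiwidgets packages,
# a scikit-learn classifier, and pandas DataFrames that include the label column.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

df = load_breast_cancer(as_frame=True).frame  # features plus a 'target' column
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(
    train_df.drop(columns=["target"]), train_df["target"]
)

# One RAIInsights object gathers the components listed above.
rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="target",
    task_type="classification",
)
rai_insights.explainer.add()                                  # model interpretability
rai_insights.error_analysis.add()                             # error analysis
rai_insights.counterfactual.add(total_CFs=10,
                                desired_class="opposite")     # counterfactual what-if
rai_insights.causal.add(treatment_features=["mean radius"])   # causal analysis
rai_insights.compute()                                        # can take several minutes

ResponsibleAIDashboard(rai_insights)                          # serves the dashboard locally
```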
Together, these tools will help you debug machine learning models, while informing your data-driven and model-driven business decisions. The following diagram shows how you can incorporate them into your AI lifecycle to improve your models and get solid data insights.
@@ -92,7 +92,7 @@ Mitigation steps are available via standalone tools such as [Fairlearn](https://
Decision-making is one of the biggest promises of machine learning. The Responsible AI dashboard can help you make informed business decisions through:
- - Data-driven insights to further understand causal treatment effects on an outcome, by using historic data only. For example:
+ - Data-driven insights, to further understand causal treatment effects on an outcome, by using historical data only. For example:
"How would a medicine affect a patient's blood pressure?"
@@ -111,7 +111,7 @@ These components of the Responsible AI dashboard support responsible decision-ma
- **Causal inference**: The causal inference component estimates how a real-world outcome changes in the presence of an intervention. It also helps construct promising interventions by simulating feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow you to apply new policies and effect real-world change.
The capabilities of this component come from the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
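A standalone sketch of what that estimation looks like, assuming the `econml` package and synthetic data with made-up variable roles (the dashboard's causal component configures EconML for you):

```python
# Illustrative sketch only; the synthetic data and variable roles are assumptions.
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                          # features that drive effect heterogeneity
W = rng.normal(size=(n, 2))                          # confounders to control for
T = (X[:, 0] + rng.normal(size=n) > 0).astype(int)   # binary treatment (e.g., medicine given)
y = 2.0 * T * (1 + X[:, 0]) + W[:, 0] + rng.normal(size=n)  # outcome (e.g., blood-pressure change)

est = LinearDML(discrete_treatment=True, random_state=0)
est.fit(y, T, X=X, W=W)

print(est.ate(X))         # average treatment effect across the sample
print(est.effect(X[:5]))  # heterogeneous (per-row) treatment effects
```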
- - **Counterfactual analysis**: You can reuse the counterfactual analysis component here to generate minimum changes applied to a data point's features that lead to opposite model predictions. For example: Taylor would have gotten the loan approval from the AI if they earned $10,000 more in annual income and had two fewer credit cards open.
+ - **Counterfactual analysis**: You can reuse the counterfactual analysis component here to generate minimum changes applied to a data point's features that lead to opposite model predictions. For example, Taylor would have obtained the loan approval from the AI if they earned $10,000 more in annual income and had two fewer credit cards open.
Providing this information to users informs their perspective. It educates them on how they can take action to get the desired outcome from the AI in the future.
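A sketch of how such a "what would need to change" query might look with the open-source `dice-ml` package; the loan-style columns, approval rule, and model below are illustrative assumptions, not the dashboard's actual data:

```python
# Illustrative sketch with the open-source dice-ml package; the loan-style
# columns, approval rule, and model are assumptions made up for this example.
import numpy as np
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "annual_income": rng.integers(20_000, 150_000, size=1000),
    "open_credit_cards": rng.integers(0, 10, size=1000),
})
df["approved"] = ((df["annual_income"] > 60_000)
                  & (df["open_credit_cards"] < 6)).astype(int)

model = RandomForestClassifier(random_state=0).fit(
    df.drop(columns=["approved"]), df["approved"]
)

data = dice_ml.Data(
    dataframe=df,
    continuous_features=["annual_income", "open_credit_cards"],
    outcome_name="approved",
)
explainer = dice_ml.Dice(data, dice_ml.Model(model=model, backend="sklearn"),
                         method="random")

# "What minimal feature changes would flip this applicant's prediction?"
applicant = df.drop(columns=["approved"]).iloc[[0]]
cfs = explainer.generate_counterfactuals(applicant, total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```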
@@ -147,7 +147,7 @@ Need some inspiration? Here are some examples of how the dashboard's components
| Model overview > data explorer | To understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort |
| Model overview > interpretability | To diagnose model errors through understanding how the model has made its predictions |
| Data explorer > causal inference | To distinguish between correlations and causations in the data or decide the best treatments to apply to get a positive outcome |
- | Interpretability > causal inference | To learn whether the factors that the model has used for predictionmaking have any causal effect on the real-world outcome|
+ | Interpretability > causal inference | To learn whether the factors that the model has used for prediction-making have any causal effect on the real-world outcome|
| Data explorer > counterfactuals analysis and what-if | To address customers' questions about what they can do next time to get a different outcome from an AI system|
## People who should use the Responsible AI dashboard
@@ -164,10 +164,10 @@ The following people can use the Responsible AI dashboard, and its corresponding
## Supported scenarios and limitations
- The Responsible AI dashboard currently supports regression and classification (binary and multi-class) models trained on tabular structured data.
- - The Responsible AI dashboard currently supports MLFlow models that are registered in Azure Machine Learning with a sklearn flavor only. The scikit-learn models should implement `predict()/predict_proba()` methods, or the model should be wrapped within a class that implements `predict()/predict_proba()` methods. The models must be loadable in the component environment and must be pickleable.
+ - The Responsible AI dashboard currently supports MLflow models that are registered in Azure Machine Learning with a sklearn (scikit-learn) flavor only. The scikit-learn models should implement `predict()/predict_proba()` methods, or the model should be wrapped within a class that implements `predict()/predict_proba()` methods. The models must be loadable in the component environment and must be pickleable.
- The Responsible AI dashboard currently visualizes up to 5K of your data points in the dashboard UI. You should downsample your dataset to 5K or less before passing it to the dashboard.
- The dataset inputs to the Responsible AI dashboard must be pandas DataFrames in Parquet format. NumPy and SciPy sparse data is currently not supported.
- - The Responsible AI dashboard currently supports numeric or categorical features. For categorical features, the user has to explicitly specify the feature names.
+ - The Responsible AI dashboard currently supports numeric or categorical features. For categorical features, you have to explicitly specify the feature names.
- The Responsible AI dashboard currently doesn't support datasets with more than 10K columns.
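To make these limitations concrete, here is an illustrative sketch of preparing dashboard inputs: wrapping a model so it exposes `predict()`/`predict_proba()`, downsampling to 5K rows, and writing a pandas DataFrame to Parquet. The class name, file paths, and sampling choice are assumptions for illustration, not a prescribed API.

```python
# Sketch only; the wrapper class, sampling choice, and file names are assumptions,
# not a prescribed Azure Machine Learning API.
import pandas as pd


class ModelWrapper:
    """Exposes the predict()/predict_proba() interface the dashboard expects."""

    def __init__(self, model):
        self._model = model  # any pickleable model object

    def predict(self, X):
        return self._model.predict(X)

    def predict_proba(self, X):
        return self._model.predict_proba(X)


df = pd.read_parquet("full_dataset.parquet")  # hypothetical source file

# Keep the dashboard UI responsive: pass at most 5K rows.
if len(df) > 5000:
    df = df.sample(n=5000, random_state=0)

# Categorical feature names must be listed explicitly when building the dashboard inputs.
categorical_features = df.select_dtypes(include=["object", "category"]).columns.tolist()

df.to_parquet("dashboard_test_data.parquet", index=False)  # Parquet-format DataFrame input
```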