Decision-making is one of the biggest promises of machine learning. The Responsible AI dashboard can help you make informed business decisions through:

- Data-driven insights, to further understand causal treatment effects on an outcome by using historical data only. For example:

"How would a medicine affect a patient's blood pressure?"

These components of the Responsible AI dashboard support responsible decision-making:

- **Causal inference**: The causal inference component estimates how a real-world outcome changes in the presence of an intervention. It also helps construct promising interventions by simulating feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow you to apply new policies and effect real-world change.
The capabilities of this component come from the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
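To make "heterogeneous treatment effects" concrete, here is a minimal sketch of the idea using a simple T-learner on synthetic data. This is an illustrative stand-in, not EconML's API, and every variable name and number below is made up:

```python
# Illustrative T-learner sketch (NOT EconML's API): estimate per-unit
# treatment effects by fitting one outcome model per treatment arm.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                # hypothetical patient covariates
T = rng.integers(0, 2, size=n)             # 1 = received the medicine
# Synthetic outcome: the medicine lowers blood pressure, more so when X[:, 0] is high.
Y = 120 + 5 * X[:, 1] - T * (4 + 2 * X[:, 0]) + rng.normal(scale=1.0, size=n)

# Fit one model per arm, then subtract predictions to estimate the effect.
m0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])
m1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])
cate = m1.predict(X) - m0.predict(X)       # per-patient estimated treatment effect

print(round(float(cate.mean()), 1))        # should land near the true average effect of -4
```

EconML provides far more rigorous estimators (for example, double machine learning) that correct for confounding; this sketch only shows what a per-unit effect estimate is.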

- **Counterfactual analysis**: You can reuse the counterfactual analysis component here to generate minimum changes applied to a data point's features that lead to opposite model predictions. For example: Taylor would have obtained the loan approval from the AI if they earned $10,000 more in annual income and had two fewer credit cards open.

Providing this information to users informs their perspective. It educates them on how they can take action to get the desired outcome from the AI in the future.
The capabilities of this component come from the [DiCE](https://github.com/interpretml/DiCE) package.
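As a rough illustration of what a counterfactual search does, the sketch below scans for the smallest income increase that flips a toy loan model's prediction. This is not DiCE's API, and the features, thresholds, and data are all hypothetical:

```python
# Illustrative counterfactual search (NOT DiCE's API): find the smallest
# change to one feature that flips the model's prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical features: [annual_income_in_$10k, open_credit_cards].
X = rng.normal(loc=[5.0, 4.0], scale=[2.0, 2.0], size=(500, 2))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 3).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[3.0, 6.0]])          # currently denied by the toy model

# Scan income increases in $5,000 steps until the prediction flips.
for extra_income in np.arange(0.0, 10.1, 0.5):
    candidate = applicant + np.array([[extra_income, 0.0]])
    if model.predict(candidate)[0] == 1:
        break
print(f"Approved if income rises by about ${extra_income * 10_000:,.0f}")
```

DiCE does considerably more than this one-feature scan: it generates diverse sets of counterfactuals across many features while keeping them close to the original point.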
## Reasons for using the Responsible AI dashboard

Although progress has been made on individual tools for specific areas of Responsible AI, data scientists often need to use various tools to holistically evaluate their models and data. For example: they might have to use model interpretability and fairness assessment together.

If data scientists discover a fairness issue with one tool, they then need to jump to a different tool to understand what data or model factors lie at the root of the issue before taking any steps on mitigation. The following factors further complicate this challenging process:
- The Responsible AI dashboard currently supports regression and classification (binary and multi-class) models trained on tabular structured data.
- The Responsible AI dashboard currently supports MLflow models that are registered in Azure Machine Learning with a sklearn (scikit-learn) flavor only. The scikit-learn models should implement `predict()/predict_proba()` methods, or the model should be wrapped within a class that implements `predict()/predict_proba()` methods. The models must be loadable in the component environment and must be pickleable.
- The Responsible AI dashboard currently visualizes up to 5K of your data points on the dashboard UI. You should downsample your dataset to 5K or less before passing it to the dashboard.
- The dataset inputs to the Responsible AI dashboard must be pandas DataFrames in Parquet format. NumPy and SciPy sparse data is currently not supported.
- The Responsible AI dashboard currently supports numeric or categorical features. For categorical features, the user has to explicitly specify the feature names.
- The Responsible AI dashboard currently doesn't support datasets with more than 10K columns.
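Several of these requirements can be sketched together: a pickleable wrapper class that exposes `predict()`/`predict_proba()`, a dataset downsampled to the 5K-point limit, and explicitly named categorical features. Every column name and number below is illustrative, not from the article:

```python
# Illustrative preparation of dashboard inputs: a pickleable model wrapper
# with predict()/predict_proba(), plus a downsampled dataset with named
# categorical features. All data here is synthetic.
import pickle
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

class ModelWrapper:
    """Wraps a fitted model behind the predict()/predict_proba() contract."""
    def __init__(self, model):
        self.model = model
    def predict(self, X):
        return self.model.predict(X)
    def predict_proba(self, X):
        return self.model.predict_proba(X)

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "income": rng.normal(50_000, 10_000, size=20_000),
    "employment": rng.choice(["salaried", "self-employed"], size=20_000),
    "approved": rng.integers(0, 2, size=20_000),
})

# Downsample to the 5K-point limit before passing data to the dashboard.
sample = df.sample(n=5_000, random_state=0)
categorical_features = ["employment"]       # categorical feature names, listed explicitly

# One-hot encoding here is only so the toy model can fit the data.
features = pd.get_dummies(sample.drop(columns="approved"))
wrapped = ModelWrapper(
    RandomForestClassifier(n_estimators=10).fit(features, sample["approved"])
)

# The wrapper must round-trip through pickle, as the component environment requires.
restored = pickle.loads(pickle.dumps(wrapped))
```

In a real pipeline you would additionally log the wrapped model to Azure Machine Learning with MLflow's sklearn flavor and write the DataFrame to Parquet; those registration steps are omitted here.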
## Next steps
- Learn how to generate the Responsible AI dashboard via [CLI and SDK](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed on the Responsible AI dashboard.