articles/machine-learning/concept-responsible-ai-dashboard.md (3 additions, 3 deletions)
@@ -86,7 +86,7 @@ The following table describes when to use Responsible AI dashboard components to
| Diagnose | Model interpretability | The interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: <br> - Global explanations (for example, which features affect the overall behavior of a loan allocation model) <br> - Local explanations (for example, why an applicant's loan application was approved or rejected) <br><br> The capabilities of this component in the dashboard come from the [InterpretML](https://interpret.ml/) package. |
| Diagnose | Counterfactual analysis and what-if| This component consists of two functionalities for better error diagnosis: <br> - Generating a set of examples in which minimal changes to a particular point alter the model's prediction. That is, the examples show the closest data points with opposite model predictions. <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard come from the [DiCE](https://github.com/interpretml/DiCE) package. |
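The interpretability component's global explanations rely on model-agnostic techniques. As a rough illustration of the underlying idea only (not the InterpretML API), permutation importance measures how much a model's accuracy drops when one feature's values are shuffled:

```python
# Illustrative sketch of a "global explanation" technique (permutation
# importance); this is not the InterpretML API, just the concept.
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_index, seed=0):
    """Accuracy drop after shuffling one feature's column."""
    rng = random.Random(seed)
    baseline = accuracy(model, rows, labels)
    column = [r[feature_index] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:feature_index] + (v,) + r[feature_index + 1:]
                for r, v in zip(rows, column)]
    return baseline - accuracy(model, shuffled, labels)

# Toy loan model that only looks at feature 0 (income).
model = lambda row: row[0] >= 50
rows = [(60, 1), (40, 0), (70, 1), (30, 0)]
labels = [1, 0, 1, 0]
print(permutation_importance(model, rows, labels, 0))
print(permutation_importance(model, rows, labels, 1))  # 0.0
```

A feature the model ignores scores zero, while a feature the model depends on produces a measurable accuracy drop; ranking features by that drop gives a global view of the model's behavior.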
-Mitigation steps are available via standalone tools such as [Fairlearn](https://fairlearn.org/). For more information, see [unfairness mitigation algorithms](https://fairlearn.org/v0.7.0/user_guide/mitigation.html).
+Mitigation steps are available via standalone tools such as [Fairlearn](https://fairlearn.org/). For more information, see the [unfairness mitigation algorithms](https://fairlearn.org/v0.7.0/user_guide/mitigation.html).
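To make "unfairness mitigation" concrete, here is a minimal sketch (not the Fairlearn API) of the demographic parity difference, one disparity metric that Fairlearn's mitigation algorithms can constrain:

```python
# Minimal sketch (not the Fairlearn API): the demographic parity
# difference that unfairness-mitigation algorithms aim to reduce.
# A model satisfies demographic parity when selection rates are
# equal across sensitive groups.

def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(predictions, sensitive_features):
    """Largest gap in selection rate between any two groups."""
    groups = {}
    for pred, group in zip(predictions, sensitive_features):
        groups.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in groups.values()]
    return max(rates) - min(rates)

# Loan-approval predictions for two illustrative groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near zero means the groups are selected at similar rates; mitigation algorithms retrain or post-process the model to shrink this gap subject to accuracy constraints.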
### Responsible decision-making
@@ -103,7 +103,7 @@ Decision-making is one of the biggest promises of machine learning. The Responsi
:::image type="content" source="./media/concept-responsible-ai-dashboard/decision-making.png" alt-text="Diagram that shows responsible AI dashboard capabilities for responsible business decision-making.":::
-Exploratory data analysis, counterfactual analysis, and causal inference capabilities can help you make informed model-driven and data-driven decisions responsibly.
+Exploratory data analysis, causal inference, and counterfactual analysis capabilities can help you make informed model-driven and data-driven decisions responsibly.
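As a deliberately simplified sketch of the causal-inference idea, an average treatment effect (ATE) can be estimated as a difference in group means. This is valid only under strong assumptions such as randomized treatment; the dashboard's causal component uses far more robust estimators:

```python
# Simplified illustration of an average treatment effect (ATE) estimate.
# Assumes randomized treatment assignment; real causal-inference tooling
# handles confounding with much more sophisticated estimators.

def average(xs):
    return sum(xs) / len(xs)

def average_treatment_effect(outcomes, treated):
    """Mean outcome of treated units minus mean outcome of control units."""
    treated_outcomes = [y for y, t in zip(outcomes, treated) if t]
    control_outcomes = [y for y, t in zip(outcomes, treated) if not t]
    return average(treated_outcomes) - average(control_outcomes)

# Toy blood-pressure readings: did the medicine lower them on average?
bp = [130, 125, 150, 155, 128, 152]
on_med = [True, True, False, False, True, False]
print(average_treatment_effect(bp, on_med))  # ≈ -24.67 in this toy data
```

A negative estimate here suggests the treatment lowers the outcome on average, which is the kind of "how would a medicine affect a patient's blood pressure?" question the causal component addresses with historic data.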
These components of the Responsible AI dashboard support responsible decision-making:
@@ -137,7 +137,7 @@ When you're ready to share those insights with other stakeholders, you can extra
The Responsible AI dashboard's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs.
-Need some inspiration? Here are some examples of how its components can be put together to analyze scenarios in diverse ways:
+Need some inspiration? Here are some examples of how the dashboard's components can be put together to analyze scenarios in diverse ways:
@@ -40,7 +40,7 @@ To build trust, it's critical that AI systems operate reliably, safely, and cons
**Reliability and safety in Azure Machine Learning**: The [error analysis](./concept-error-analysis.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and developers to:
- Get a deep understanding of how failure is distributed for a model.
-- Identify cohorts of data with a higher error rate than the overall benchmark.
+- Identify cohorts (subsets) of data with a higher error rate than the overall benchmark.
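The cohort idea can be sketched in a few lines: compute the overall (benchmark) error rate, then flag any cohort whose error rate exceeds it. This illustrates the concept only, not the error analysis component's API:

```python
# Sketch of error analysis over cohorts: flag cohorts whose error rate
# exceeds the overall benchmark (concept illustration, not the real API).

def error_rate(pairs):
    """Fraction of (prediction, label) pairs that disagree."""
    return sum(p != y for p, y in pairs) / len(pairs)

def cohorts_above_benchmark(records):
    """records: list of (cohort_name, prediction, label) tuples.

    Returns cohorts whose error rate exceeds the overall error rate.
    """
    overall = error_rate([(p, y) for _, p, y in records])
    by_cohort = {}
    for cohort, p, y in records:
        by_cohort.setdefault(cohort, []).append((p, y))
    return {c: error_rate(pairs) for c, pairs in by_cohort.items()
            if error_rate(pairs) > overall}

data = [
    ("young", 1, 1), ("young", 0, 0), ("young", 1, 1), ("young", 1, 1),
    ("senior", 0, 1), ("senior", 1, 0), ("senior", 1, 1), ("senior", 0, 0),
]
print(cohorts_above_benchmark(data))  # {'senior': 0.5}; overall rate is 0.25
```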
These discrepancies might occur when the system or model underperforms for specific demographic groups or for infrequently observed input conditions in the training data.
@@ -64,7 +64,7 @@ Azure Machine Learning also supports a [Responsible AI scorecard](./how-to-respo
## Privacy and security
-As AI becomes more prevalent, protecting privacy and securing personal and business information is becoming more important and complex. With AI, privacy and data security require close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that:
+As AI becomes more prevalent, protecting privacy and securing personal and business information are becoming more important and complex. With AI, privacy and data security require close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that:
- Require transparency about the collection, use, and storage of data.
- Mandate that consumers have appropriate controls to choose how their data is used.
@@ -77,7 +77,7 @@ As AI becomes more prevalent, protecting privacy and securing personal and busin
- Scan for vulnerabilities.
- Apply and audit configuration policies.
-Microsoft has also created two open-source packages that could enable further implementation of privacy and security principles:
+Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles:
- [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core): Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building global differentially private systems.
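As a tiny illustration of the building block behind differential privacy (this is a sketch of the Laplace mechanism for a count query, not the SmartNoise API):

```python
# Sketch of the Laplace mechanism, the basic differential-privacy
# building block (concept illustration only, not the SmartNoise API).
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise.

    A count query has sensitivity 1 (adding or removing one person changes
    the result by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy. The difference of two independent
    Exponential(epsilon) draws is exactly Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 52, 67, 29, 48]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)  # 4 plus random noise
```

Smaller epsilon means more noise and stronger privacy; production systems such as SmartNoise also track the privacy budget spent across many queries.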
@@ -94,12 +94,12 @@ The people who design and deploy AI systems must be accountable for how their sy
- Notify and alert on events in the machine learning lifecycle. Examples include experiment completion, model registration, model deployment, and data drift detection.
- Monitor applications for operational issues and issues related to machine learning. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your machine learning infrastructure.
-Besides the MLOps capabilities, the [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) in Azure Machine Learning creates accountability by enabling cross-stakeholder communications. The scorecard also creates accountability by empowering developers to configure, download, and share their model health insights with their technical and non-technical stakeholders to educate them about their AI's data and model health (and to build trust).
+Besides the MLOps capabilities, the [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) in Azure Machine Learning creates accountability by enabling cross-stakeholder communications. The scorecard also creates accountability by empowering developers to configure, download, and share their model health insights with their technical and non-technical stakeholders about AI data and model health. Sharing these insights can help build trust.
-The machine learning platform also enables decision-making by informing model-driven and data-driven business decisions:
+The machine learning platform also enables decision-making by informing business decisions through:
-- Data-driven insights to help stakeholders understand causal treatment effects on an outcome, by using historic data only. For example: "How would a medicine affect a patient's blood pressure?" Such insights are provided through the [causal inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Model-driven insights, to answer user questions such as "What can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [counterfactual what-if](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Data-driven insights to help stakeholders understand causal treatment effects on an outcome, by using historic data only. For example: "How would a medicine affect a patient's blood pressure?" These insights are provided through the [causal inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Model-driven insights, to answer users' questions (such as "What can I do to get a different outcome from your AI next time?") so they can take action. Such insights are provided to data scientists through the [counterfactual what-if](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
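The counterfactual what-if idea can be sketched with a toy model: search for the smallest change to a feature that flips the model's decision. This is an illustration of the concept only, not the dashboard's counterfactual component:

```python
# Illustrative sketch only (not the DiCE API or the dashboard component):
# find the smallest change to one feature that flips a toy model's decision.

def approve(income, debt):
    """Toy loan model: approve when income minus debt clears a threshold."""
    return income - debt >= 50

def counterfactual_income(income, debt, step=1):
    """Smallest income increase that flips a rejection to an approval."""
    if approve(income, debt):
        return 0  # already approved; no change needed
    delta = 0
    while not approve(income + delta, debt):
        delta += step
    return delta

# "What can I do to get a different outcome next time?"
print(counterfactual_income(income=40, debt=10))  # 20, since 60 - 10 >= 50
```

Real counterfactual generators search over many features at once and return the closest data points with the opposite prediction, but the question they answer is the same.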