
Commit b980ef6
edit pass: concept-responsible
1 parent da70907 commit b980ef6

2 files changed: +11 −11 lines changed

articles/machine-learning/concept-responsible-ai-dashboard.md

Lines changed: 3 additions & 3 deletions
@@ -86,7 +86,7 @@ The following table describes when to use Responsible AI dashboard components to
 | Diagnose | Model interpretability | The interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: <br> - Global explanations (for example, which features affect the overall behavior of a loan allocation model) <br> - Local explanations (for example, why an applicant's loan application was approved or rejected) <br><br> The capabilities of this component in the dashboard come from the [InterpretML](https://interpret.ml/) package. |
 | Diagnose | Counterfactual analysis and what-if| This component consists of two functionalities for better error diagnosis: <br> - Generating a set of examples in which minimal changes to a particular point alter the model's prediction. That is, the examples show the closest data points with opposite model predictions. <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard come from the [DiCE](https://github.com/interpretml/DiCE) package. |
 
-Mitigation steps are available via standalone tools such as [Fairlearn](https://fairlearn.org/). For more information, see [unfairness mitigation algorithms](https://fairlearn.org/v0.7.0/user_guide/mitigation.html).
+Mitigation steps are available via standalone tools such as [Fairlearn](https://fairlearn.org/). For more information, see the [unfairness mitigation algorithms](https://fairlearn.org/v0.7.0/user_guide/mitigation.html).
 
 ### Responsible decision-making
 
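The "closest data points with opposite model predictions" idea behind the counterfactual component can be illustrated without the dashboard. The sketch below is conceptual only, not the DiCE API; the toy data, the loan-style label, and the nearest-neighbor search are invented for the example.

```python
# Conceptual counterfactual search (NOT the DiCE API): for a query point,
# find the nearest pool point that the model classifies with the opposite label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy "loan approved" label

model = LogisticRegression().fit(X, y)

def nearest_counterfactual(x_query, X_pool, model):
    """Return the closest pool point whose prediction differs from x_query's."""
    pred = model.predict(x_query.reshape(1, -1))[0]
    pool_preds = model.predict(X_pool)
    opposite = X_pool[pool_preds != pred]          # points with the other label
    dists = np.linalg.norm(opposite - x_query, axis=1)
    return opposite[np.argmin(dists)]

x = np.array([-1.0, -0.5])                # a "rejected" applicant
cf = nearest_counterfactual(x, X, model)
# cf shows the smallest observed change that flips the model's decision.
```

Real counterfactual generators such as DiCE optimize for minimal, feasible feature changes rather than only searching existing points, but the goal is the same.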
@@ -103,7 +103,7 @@ Decision-making is one of the biggest promises of machine learning. The Responsi
 
 :::image type="content" source="./media/concept-responsible-ai-dashboard/decision-making.png" alt-text="Diagram that shows responsible AI dashboard capabilities for responsible business decision-making.":::
 
-Exploratory data analysis, counterfactual analysis, and causal inference capabilities can help you make informed model-driven and data-driven decisions responsibly.
+Exploratory data analysis, causal inference, and counterfactual analysis capabilities can help you make informed model-driven and data-driven decisions responsibly.
 
 These components of the Responsible AI dashboard support responsible decision-making:
 
@@ -137,7 +137,7 @@ When you're ready to share those insights with other stakeholders, you can extra
 
 The Responsible AI dashboard's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs.
 
-Need some inspiration? Here are some examples of how its components can be put together to analyze scenarios in diverse ways:
+Need some inspiration? Here are some examples of how the dashboard's components can be put together to analyze scenarios in diverse ways:
 
 | Responsible AI dashboard flow | Use case |
 |-------------------------------|----------|

articles/machine-learning/concept-responsible-ml.md

Lines changed: 8 additions & 8 deletions
@@ -13,7 +13,7 @@ ms.custom: responsible-ai, event-tier1-build-2022
 #Customer intent: As a data scientist, I want to learn what Responsible AI is and how I can use it in Azure Machine Learning.
 ---
 
-# What is Responsible AI? (preview)
+# What is Responsible AI (preview)?
 
 [!INCLUDE [dev v1](../../includes/machine-learning-dev-v1.md)]
 
@@ -40,7 +40,7 @@ To build trust, it's critical that AI systems operate reliably, safely, and cons
 **Reliability and safety in Azure Machine Learning**: The [error analysis](./concept-error-analysis.md) component of the [Responsible AI dashboard](./concept-responsible-ai-dashboard.md) enables data scientists and developers to:
 
 - Get a deep understanding of how failure is distributed for a model.
-- Identify cohorts of data with a higher error rate than the overall benchmark.
+- Identify cohorts (subsets) of data with a higher error rate than the overall benchmark.
 
 These discrepancies might occur when the system or model underperforms for specific demographic groups or for infrequently observed input conditions in the training data.
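The "cohorts with a higher error rate than the overall benchmark" idea can be sketched directly. This is a minimal illustration of the concept, not the error analysis component's API; the data and the `age < 30` cohort definition are invented for the example.

```python
# Sketch of cohort-level error analysis: compare each cohort's error rate
# against the overall benchmark and flag the ones that underperform.
import pandas as pd

df = pd.DataFrame({
    "age":     [22, 25, 28, 41, 45, 52, 60, 63],
    "correct": [ 0,  0,  1,  1,  1,  1,  1,  0],   # 1 = prediction was right
})
df["error"] = 1 - df["correct"]

overall = df["error"].mean()                    # overall benchmark: 0.375
by_cohort = df.groupby(df["age"] < 30)["error"].mean()
flagged = by_cohort[by_cohort > overall]        # here: the age < 30 cohort
```

The real component builds a decision tree over feature splits to surface such cohorts automatically, but the comparison it makes per cohort is the one shown here.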

@@ -64,7 +64,7 @@ Azure Machine Learning also supports a [Responsible AI scorecard](./how-to-respo
 
 ## Privacy and security
 
-As AI becomes more prevalent, protecting privacy and securing personal and business information is becoming more important and complex. With AI, privacy and data security require close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that:
+As AI becomes more prevalent, protecting privacy and securing personal and business information are becoming more important and complex. With AI, privacy and data security require close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that:
 
 - Require transparency about the collection, use, and storage of data.
 - Mandate that consumers have appropriate controls to choose how their data is used.
@@ -77,7 +77,7 @@ As AI becomes more prevalent, protecting privacy and securing personal and busin
 - Scan for vulnerabilities.
 - Apply and audit configuration policies.
 
-Microsoft has also created two open-source packages that could enable further implementation of privacy and security principles:
+Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles:
 
 - [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core): Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building differentially private systems that are global.
 
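The core mechanism of differential privacy can be sketched in a few lines. This illustrates the concept only and is NOT the SmartNoise API: the Laplace mechanism below, the counting query, and the epsilon value are standard textbook elements chosen for the example.

```python
# Laplace mechanism sketch: release a count with noise scaled to
# sensitivity / epsilon, so no single individual's presence is revealed.
import numpy as np

def laplace_count(data, epsilon, rng):
    """Noisy count release; a counting query has sensitivity 1."""
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(data) + noise

rng = np.random.default_rng(42)
records = list(range(1000))                  # 1,000 individuals
noisy = laplace_count(records, epsilon=0.5, rng=rng)
# With epsilon = 0.5 the noise scale is 2, so the released count is
# typically within a few units of the true count of 1,000.
```

Smaller epsilon values give stronger privacy at the cost of noisier answers; production systems like SmartNoise additionally track the privacy budget across queries.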

@@ -94,12 +94,12 @@ The people who design and deploy AI systems must be accountable for how their sy
 - Notify and alert on events in the machine learning lifecycle. Examples include experiment completion, model registration, model deployment, and data drift detection.
 - Monitor applications for operational issues and issues related to machine learning. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your machine learning infrastructure.
 
-Besides the MLOps capabilities, the [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) in Azure Machine Learning creates accountability by enabling cross-stakeholder communications. The scorecard also creates accountability by empowering developers to configure, download, and share their model health insights with their technical and non-technical stakeholders to educate them about their AI's data and model health (and to build trust).
+Besides the MLOps capabilities, the [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) in Azure Machine Learning creates accountability by enabling cross-stakeholder communications. The scorecard also creates accountability by empowering developers to configure, download, and share their model health insights with their technical and non-technical stakeholders about AI data and model health. Sharing these insights can help build trust.
 
-The machine learning platform also enables decision-making by informing model-driven and data-driven business decisions:
+The machine learning platform also enables decision-making by informing business decisions through:
 
-- Data-driven insights to help stakeholders understand causal treatment effects on an outcome, by using historic data only. For example: "How would a medicine affect a patient's blood pressure?" Such insights are provided through the [causal inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Model-driven insights, to answer user questions such as "What can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [counterfactual what-if](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Data-driven insights to help stakeholders understand causal treatment effects on an outcome, by using historic data only. For example: "How would a medicine affect a patient's blood pressure?" These insights are provided through the [causal inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Model-driven insights, to answer users' questions (such as "What can I do to get a different outcome from your AI next time?") so they can take action. Such insights are provided to data scientists through the [counterfactual what-if](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
 
 ## Next steps
 
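The "causal treatment effect on an outcome" question from the medicine example can be sketched with historic data. This is a conceptual illustration only, not the causal inference component's API; the simulated blood-pressure data, the true effect of −8 mmHg, and the randomized-assignment assumption are all invented for the example.

```python
# Average treatment effect (ATE) sketch: with a RANDOMIZED treatment in
# historic data, the ATE is the difference in outcome means between groups.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
treated = rng.integers(0, 2, size=n).astype(bool)   # randomized "medicine"
baseline = rng.normal(130, 10, size=n)              # blood pressure, mmHg
outcome = baseline - 8.0 * treated                  # true effect: -8 mmHg

ate = outcome[treated].mean() - outcome[~treated].mean()
# ate recovers a value close to the true -8 mmHg effect.
```

With observational (non-randomized) data, a plain difference in means is biased by confounders; that adjustment is what dedicated causal inference tooling, such as the dashboard's causal inference component, is for.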
