---
title: Share Responsible AI insights and make data-driven decisions with Azure Machine Learning Responsible AI scorecard
titleSuffix: Azure Machine Learning
description: Learn how to use the Responsible AI scorecard to share responsible AI insights from your machine learning models and make data-driven decisions with nontechnical and technical stakeholders.
services: machine-learning
ms.service: azure-machine-learning
ms.subservice: responsible-ai
ms.topic: conceptual
ms.author: lagayhar
author: lgayhardt
ms.reviewer: mesameki
ms.date: 03/31/2025
ms.custom: responsible-ml, build-2023, build-2023-dataai
---

# Share Responsible AI insights using the Responsible AI scorecard (preview)

Our Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions. While it can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:

- The gap between the technical Responsible AI tools (designed for machine learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
- The need for effective multi-stakeholder alignment in an end-to-end machine learning lifecycle, ensuring technical experts receive timely feedback and direction from nontechnical stakeholders.
- The ability to share model and data insights with auditors and risk officers for auditability purposes, as required by AI regulations.

One of the biggest benefits of using the Azure Machine Learning ecosystem is the ability to archive model and data insights in the Azure Machine Learning Run History for quick reference in the future. As part of this infrastructure, and to complement machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard. This scorecard empowers machine learning professionals to easily generate and share their data and model health records.

[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
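
Because a scorecard is generated by a job and archived with it, you can pull the PDF back out of that job's outputs later. The following is a minimal sketch using the Azure Machine Learning Python SDK v2, not a documented procedure: the job name and the output name `scorecard` are placeholders assumed for illustration and depend on how the generating pipeline is configured.

```python
# Minimal sketch: retrieve a previously generated Responsible AI scorecard PDF
# from the outputs of the pipeline job that produced it (Azure ML Python SDK v2).
# The job name and output name below are illustrative placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Download the named output of the completed job to a local folder.
ml_client.jobs.download(
    name="<scorecard-pipeline-job-name>",  # placeholder: job that generated the scorecard
    output_name="scorecard",               # assumption: output name used by the generating step
    download_path="./rai_scorecard",       # local folder that receives the PDF
)
```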

## Who should use a Responsible AI scorecard?

- **Data scientists and machine learning professionals**: After training your model and generating its corresponding Responsible AI dashboard for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard. This allows you to easily share the report with your technical and nontechnical stakeholders, building trust and gaining their approval for deployment.
- **Product managers, business leaders, and accountable stakeholders on an AI product**: You can provide your desired model performance and fairness target values, such as target accuracy and target error rate, to your data science team. They can then generate the scorecard based on these target values to determine whether the model meets them. This helps guide decisions on whether the model should be deployed or further improved. (A sketch of what such a target configuration might look like follows this list.)
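
For example, the hand-off described above could be captured as a small configuration that the data science team feeds into scorecard generation. The snippet below is an illustrative sketch only; the field names (`Model`, `Metrics`, `threshold`, and the specific metric keys) are assumptions, and the actual scorecard configuration schema is defined in the scorecard how-to documentation.

```python
# Illustrative sketch: capture stakeholder-supplied target values for a scorecard.
# All field and metric names here are assumed for illustration, not a fixed schema.
import json

scorecard_config = {
    "Model": {
        "ModelName": "loan-approval-classifier",  # placeholder model name
        "ModelType": "Classification",
        "ModelSummary": "Predicts whether a loan application should be approved.",
    },
    "Metrics": {
        # Target values provided by accountable stakeholders:
        "accuracy_score": {"threshold": ">=0.85"},  # target accuracy
        "error_rate": {"threshold": "<=0.15"},      # target error rate
    },
}

# Persist the configuration so it can be passed to the scorecard generation step.
with open("scorecard_config.json", "w") as f:
    json.dump(scorecard_config, f, indent=2)
```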

## Next steps
