learn-pr/philanthropies/apply-responsible-ai-principles/3-explore-the-reliability-and-safety-principle.yml (1 addition, 1 deletion)

@@ -6,7 +6,7 @@ metadata:
  description: This content is a part of the 'Apply responsible AI principles in learning environments' module.
learn-pr/philanthropies/apply-responsible-ai-principles/8-knowledge-check.yml (3 additions, 3 deletions)

@@ -6,7 +6,7 @@ metadata:
  description: This content is a part of the 'Apply responsible AI principles in learning environments' module.
  author: yoosunee
  ms.author: anruss
- ms.date: 01/14/2025
+ ms.date: 03/25/2025
  ms.topic: unit
  ms.collection:
  - philanthropies-ai-copilot
@@ -20,12 +20,12 @@ quiz:
    - content: "To confirm, the AI system's performance is consistent with its training data."
      isCorrect: false
      explanation: "Incorrect. While consistency with training data is important, the key reason for evaluation with real-world data is to ensure the system performs fairly across all demographic groups."
-   - content: "To ensure the AI system does not require any updates or improvements."
+   - content: "To ensure the AI system doesn't require any updates or improvements."
      isCorrect: false
      explanation: "Incorrect. Evaluation with real-world data is crucial precisely because it helps identify areas where the AI system may need updates or improvements to ensure fairness."
    - content: "To verify the AI system performs equitably for different demographic groups."
      isCorrect: true
-     explanation: "Correct. Evaluating real-world data is essential to ensure that the AI system's error rates are fair and do not disproportionately affect any demographic group."
+     explanation: "Correct. Evaluating real-world data is essential to ensure that the AI system's error rates are fair and don't disproportionately affect any demographic group."
  - content: "How does Microsoft's responsible AI standard promote accessibility?"
    choices:
    - content: "By requiring AI systems to be accessible through traditional interaction methods."
- When users define an AI system's fairness, one aspect to consider is how well the system performs for different groups of people. You can look at the performance of the complete system, and it's sometimes useful to examine the performance of one or more components of the system on their own. Research shows that without conscious effort focused on parity, the performance of an AI system can often vary. Parity variations can be based on differences between groups, such as race, ethnicity, language, gender, and age, and intersectional groups.
+ When users define an AI system's fairness, one aspect to consider is how well the system performs for different groups of people. You can look at the performance of the complete system, and it's sometimes useful to examine the performance of one or more components of the system on their own. Research shows that without conscious effort focused on parity, the performance of an AI system can often vary. Parity variations can be based on differences between groups, such as race, ethnicity, language, gender, age, and intersectional groups.
Each service and feature of the AI system is different, and the system may not perfectly match your context or cover all scenarios required for your use-case. So, you must evaluate error rates for the AI system with real-world data that reflects your use-case, including testing with users from different demographic groups. Through evaluation, you can ensure the AI system is fair and not causing any harm. Three common types of AI-caused harms are:
- **Harms of allocation**: When an AI system is used to extend or withhold opportunities, or resources to people, it causes harm through allocation. In addition, it makes different recommendations to some people based on which demographic group they belong to. For example, educational institutions are increasingly using AI systems to screen applications, resumes, and other data. When educational institutions train an AI system using biased historical data, they can inadvertently perpetuate existing discriminatory selection practices. If an institution with predominantly male learners trains its system based on historical data, it may end up favoring male applicants over female applicants, with the consequence of perpetuating gender disparities.
+ **Harms of allocation:** When an AI system is used to extend or withhold opportunities, or resources to people, it causes harm through allocation. In addition, it makes different recommendations to some people based on which demographic group they belong to. For example, educational institutions are increasingly using AI systems to screen applications, resumes, and other data. When educational institutions train an AI system using biased historical data, they can inadvertently perpetuate existing discriminatory selection practices. If an institution with predominantly male learners trains its system based on historical data, it may end up favoring male applicants over female applicants, with the consequence of perpetuating gender disparities.
- **Harms of quality of service**: When an AI system's effectiveness fluctuates, it causes harm to quality of service. For example, facial recognition systems can misidentify or fail to recognize individuals from certain demographic groups. Such misidentification can happen when the data set used to train the system wasn't diverse, causing it to behave differently based on race or gender. Poor training can result in serious consequences, such as misidentification in law enforcement or security applications.
+ **Harms of quality of service:** When an AI system's effectiveness fluctuates, it causes harm to quality of service. For example, facial recognition systems can misidentify or fail to recognize individuals from certain demographic groups. Such misidentification can happen when the data set used to train the system wasn't diverse, causing it to behave differently based on race or gender. Poor training can result in serious consequences, such as misidentification in law enforcement or security applications.
**Harm of representation:** When an AI system's output contains stereotyping or demeaning content against some groups of people, or the outputs fail to depict a group of people sufficiently, it causes harm to representation. Such stereotyping can happen when AI harmfully associates certain genders, races, or ethnicities with certain traits, roles, or behaviors. For example, an automated ad recommendation system provided more criminal background check ad recommendations to people of color. Such reporting can result in higher levels of discrimination against certain groups of people.
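The evaluation step described earlier, comparing error rates across demographic groups on real-world data, can be sketched in a few lines of Python. This is a minimal illustration, not part of the module; the group labels, ground truth, and predictions are hypothetical.

```python
# Illustrative sketch only (not part of the module): computing per-group
# error rates on real-world test data. Groups, labels, and predictions
# below are hypothetical.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the fraction of wrong predictions for each demographic group."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        if truth != pred:
            errors[group] += 1
    return {group: errors[group] / counts[group] for group in counts}

# Hypothetical screening results for two demographic groups, A and B:
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = error_rates_by_group(y_true, y_pred, groups)
# A large gap between groups (here 0.25 for A versus 0.50 for B) is a
# signal to investigate potential harms of allocation or quality of service.
```

In practice the same comparison would be run with your own use-case data and real group definitions; dedicated libraries (for example, Fairlearn) offer richer per-group metrics, but the underlying check is the one shown here.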
@@ -16,18 +16,18 @@ AI systems should be fair and equitable to everyone, regardless of background, i
To ensure fairness in the AI system you're implementing, you should:
- **Understand the purpose, scope, and intended uses of the AI system**. Ask yourself questions such as: What problem is the system trying to solve? Who benefits from the system? Who might the system harm? How can the system be used in ways that aren't intended or responsible?
+ **Understand the purpose, scope, and intended uses of the AI system:** Ask yourself questions such as: What problem is the system trying to solve? Who benefits from the system? Who might the system harm? How can the system be used in ways that aren't intended or responsible?
- **Work with a diverse pool of people to implement the system**. Make sure that your implementation team reflects the diversity of your learner community and includes people with different backgrounds, experiences, education, and perspectives. Such an implementation helps you avoid unguarded spots and performance issues that could affect the system's performance and impact.
+ **Work with a diverse pool of people to implement the system:** Make sure that your implementation team reflects the diversity of your learner community and includes people with different backgrounds, experiences, education, and perspectives. A diverse team helps you avoid blind spots and issues that could affect the system's performance and impact.
- **Detect and eliminate bias in datasets by examining the sources, structure, and representativeness of your data**. As data is the foundation of AI systems, it can reflect existing social and systemic inequalities at every stage in creation, from data collection to data modeling to operation. You should ensure that your data is relevant, accurate, complete, and representative of your learner community.
+ **Detect and eliminate bias in datasets by examining the sources, structure, and representativeness of your data:** As data is the foundation of AI systems, it can reflect existing social and systemic inequalities at every stage in creation, from data collection to data modeling to operation. You should ensure that your data is relevant, accurate, complete, and representative of your learner community.
- **Identify societal bias in machine learning algorithms** by applying tools and techniques that improve the transparency and intelligibility of the AI systems you're implementing. When you use prebuilt models, such as those delivered by Azure OpenAI Servicecan help avoid biases, you should also be cautious with the results provided by the Al system, and continue to meticulously monitor AI systems for performance issues.
+ **Identify societal bias in machine learning algorithms:** Apply tools and techniques that improve the transparency and intelligibility of the AI systems you're implementing. Prebuilt models, such as those delivered by Azure OpenAI Service, can help you avoid biases, but you should still be cautious with the results the AI system provides and continue to meticulously monitor it for performance issues.
- **Leverage human review and domain expertise**. Train your team to understand the meaning and implications of AI results, especially when AI is used to make consequential decisions about people. Decisions that use AI should be paired with review performed by subject matter experts. You should use AI as a copilot; it's a technology that can help you to do your job better and faster but still requires a degree of supervision.
+ **Leverage human review and domain expertise:** Train your team to understand the meaning and implications of AI results, especially when AI is used to make consequential decisions about people. Decisions that use AI should be paired with review performed by subject matter experts. You should use AI as a copilot. It's a technology that can help you do your job better and faster, but it still requires a degree of supervision.
- **Research and employ best practices**. Consider learning from the best practices used by other educational organizations and enterprises to help detect, prevent, and address societal and systemic biases in AI systems.
+ **Research and employ best practices:** Consider learning from the best practices used by other educational organizations and enterprises to help detect, prevent, and address societal and systemic biases in AI systems.
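One concrete way to begin the dataset examination recommended in the list above is to compare each group's share of the training data with its share of the learner community. The sketch below is illustrative only; the group names and community shares are hypothetical.

```python
# Illustrative sketch only (not part of the module): checking whether a
# training dataset's demographic make-up matches the learner community
# it serves. Group names and community shares are hypothetical.
from collections import Counter

def representation_gaps(dataset_groups, community_shares):
    """Compare each group's share of the dataset with its community share.

    Positive gap = over-represented; negative gap = under-represented.
    """
    counts = Counter(dataset_groups)
    total = len(dataset_groups)
    return {
        group: counts[group] / total - expected
        for group, expected in community_shares.items()
    }

# Hypothetical: learners are 50% group X and 50% group Y, but historical
# training data over-represents group X (8 of 10 records).
dataset_groups = ["X"] * 8 + ["Y"] * 2
gaps = representation_gaps(dataset_groups, {"X": 0.5, "Y": 0.5})
# gaps shows X over-represented (about +0.3) and Y under-represented
# (about -0.3), a cue to rebalance or re-collect data before training.
```

A check like this only surfaces representation imbalance; relevance, accuracy, and completeness of the data still need separate review, as the practice above notes.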
- Understanding the fairness of an AI system is just the beginning. As we transition from the concept of fairness, it's crucial to recognize that fairness goes together with reliability and safety. These aspects aren't isolated; they're interdependent components that form the backbone of trustworthy AI. In Unit 2, we move to the principle of reliability & safety, exploring how this principle ensures that AI systems not only perform equitably but also operate with integrity and resilience in all circumstances.
+ Understanding the fairness of an AI system is just the beginning. As we transition from the concept of fairness, it's crucial to recognize that fairness goes together with reliability and safety. These aspects aren't isolated; they're interdependent components that form the backbone of trustworthy AI. In the next unit, we move to the principle of reliability & safety, exploring how this principle ensures that AI systems not only perform equitably but also operate with integrity and resilience in all circumstances.