---
title: Ethics and responsible use - Personalizer
titleSuffix: Azure Cognitive Services
description: These guidelines are aimed at helping you implement personalization in a way that builds trust in your company and service. Be sure to pause to research, learn, and deliberate on the impact of personalization on people's lives. When in doubt, seek guidance.
services: cognitive-services
author: diberry
manager: nitinme
ms.topic: conceptual
ms.date: 06/12/2019
ms.author: diberry
---
# Guidelines for responsible implementation of Personalizer
For people and society to realize the full potential of AI, implementations need to be designed in such a way that they earn the trust of those adding AI to their applications and of the users of applications built with AI. These guidelines are aimed at helping you implement Personalizer in a way that builds trust in your company and service. Be sure to pause to research, learn, and deliberate on the impact of personalization on people's lives. When in doubt, seek guidance.

These guidelines are not intended as legal advice, and you should separately ensure that your application complies with the fast-paced developments in the law in this area and in your sector.

Also, in designing your application using Personalizer, you should consider a broad set of responsibilities you have when developing any data-centric AI system, including ethics, privacy, security, safety, inclusion, transparency, and accountability. You can read more about these in the [Recommended reading](#recommended-reading) section.

You can use the following content as a starter checklist, and customize and refine it to your scenario. This document has two main sections: the first highlights responsible use considerations when choosing scenarios, features, and rewards for Personalizer; the second takes a set of values Microsoft believes should be considered when building AI systems and provides actionable suggestions and risks for how your use of Personalizer influences them.

## Your responsibility
Implementing Personalizer can be of great value to your users and your business. To implement Personalizer responsibly, start by considering the following guidelines when:

* Choosing use cases to apply personalization.
* Building [reward functions](concept-rewards.md).
* Choosing which [features](concepts-features.md) of the context and possible actions you will use for personalization.

## Choosing use cases for Personalizer
Using a service that learns to personalize content and user interfaces is useful. It can also be misapplied if the personalization creates negative side effects in the real world, including when users are unaware of content personalization.

Examples of uses of Personalizer with heightened potential for negative side effects or a lack of transparency include scenarios where the "reward" depends on many long-term complex factors that, when over-simplified into an immediate reward, can have unfavorable results for individuals. These tend to be considered "consequential" choices, or choices that involve a risk of harm. For example:

* **Finance**: Personalizing offers on loan, financial, and insurance products, where risk factors are based on data the individuals don't know about, can't obtain, or can't dispute.
* **Education**: Personalizing ranks for school courses and education institutions, where recommendations may propagate biases and reduce users' awareness of other options.
* **Democracy and Civic Participation**: Personalizing content for users with the goal of influencing opinions is consequential and manipulative.
* **Third-party reward evaluation**: Personalizing items where the reward is based on a later third-party evaluation of the user, instead of a reward generated by the user's own behavior.
## Choosing features for Personalizer

Consider the effect of these features:

* **User demographics**: Features regarding sex, gender, age, race, and religion may not be allowed in certain applications for regulatory reasons, and it may not be ethical to personalize around them, because the personalization would propagate generalizations and bias. An example of this bias propagation is a job posting for engineering not being shown to elderly or gender-based audiences.
* **Locale information**: In many places of the world, location information (such as a zip code, postal code, or neighborhood name) can be highly correlated with income, race, and religion.
* **User Perception of Fairness**: Even in cases where your application is making sound decisions, consider the effect of users perceiving that content displayed in your application changes in a way that appears to be correlated with features that would be discriminatory.
* **Unintended Bias in Features**: There are types of biases that may be introduced by using features that only affect a subset of the population. This requires extra attention if features are being generated algorithmically, such as when using image analysis to extract items in a picture or text analytics to discover entities in text. Make yourself aware of the characteristics of the services you use to create these features.

Apply the following practices when choosing features to send in contexts and actions to Personalizer:
* Consider the legality and ethics of using certain features for some applications, and whether innocent-looking features may be proxies for others you want to or should avoid.
* Be transparent to users that algorithms and data analysis are being used to personalize the options they see.
* Ask yourself: Would my users care and be happy if I used this information to personalize the content for them? Would I feel comfortable showing them how the decision was made to highlight or hide certain items?
* Use behavioral data rather than classification or segmentation data based on other characteristics. Demographic information was traditionally used by retailers for historical reasons (demographic attributes seemed simple to collect and act upon before the digital era), but question how relevant demographic information is when you have actual interaction, contextual, and historical data that relates more closely to the preferences and identity of users.
* Consider how to prevent features from being 'spoofed' by malicious users, which, if exploited in large numbers, can lead to training Personalizer in misleading ways to purposefully disrupt, embarrass, and harass certain classes of users.
* When appropriate and feasible, design your application so that your users can opt in or opt out of having certain personal features used. These could be grouped, such as "Location information", "Device Information", "Past Purchase History", etc.; a sketch of such consent-based filtering follows this list.
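As a rough sketch of how such opt-outs might be honored before each Rank call, assuming hypothetical feature-group names and a per-user consent record kept by your application (none of these names are part of the Personalizer API):

```python
# Hypothetical feature groups a user can opt out of; the group and feature
# names here are illustrative assumptions, not part of the Personalizer API.
CONSENT_GROUPS = {
    "location": {"zipCode", "city"},
    "device": {"deviceType", "os"},
    "purchase_history": {"recentCategories", "orderCount"},
}

def filter_context_features(raw_features: dict, consent: dict) -> dict:
    """Drop every feature belonging to a group the user has not opted into."""
    blocked = set()
    for group, features in CONSENT_GROUPS.items():
        if not consent.get(group, False):
            blocked |= features
    return {name: value for name, value in raw_features.items() if name not in blocked}

# Example: this user allows device data but not location or purchase history.
context = filter_context_features(
    {"zipCode": "98052", "deviceType": "mobile", "orderCount": 12},
    {"device": True},
)
# context == {"deviceType": "mobile"}; send only these features in the Rank request.
```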
For example, rewarding on clicks will make the Personalizer Service seek clicks.

As a contrasting example, a news site may want to set rewards tied to something more meaningful than clicks, such as "Did the user spend enough time to read the content?" or "Did the user click on relevant articles or references?". With Personalizer, it is easy to tie metrics closely to rewards. But be careful not to confound short-term user engagement with good outcomes.
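As a minimal sketch of that idea, the reward function below favors reading time over raw clicks. The thresholds, weights, and event fields are illustrative assumptions; Personalizer only receives the final score your application computes for each event:

```python
# A sketch of a reward function that values reading time over raw clicks.
# The thresholds and weights are illustrative assumptions, not Personalizer
# requirements; only the final score (typically in [0, 1]) is sent for the event.
def compute_reward(seconds_on_page: float, clicked_related_article: bool) -> float:
    """Return a reward score in [0, 1] for a single Rank event."""
    reward = 0.0
    if seconds_on_page >= 30:          # long enough to have read the content
        reward += 0.7
    elif seconds_on_page >= 10:        # skimmed the content
        reward += 0.3
    if clicked_related_article:        # followed a relevant article or reference
        reward += 0.3
    return min(reward, 1.0)

print(compute_reward(45.0, True))    # 1.0: read the article and explored further
print(compute_reward(3.0, False))    # 0.0: a click that bounced immediately
```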
### Unintended consequences from reward scores
Reward scores may be built with the best of intentions, but can still create unexpected consequences or unintended results in how Personalizer ranks content.

The following are areas of design for responsible implementations of AI.
### Accountability
*People who design and deploy AI Systems must be accountable for how their systems operate*.
* Create internal guidelines on how to implement Personalizer, document them, and communicate them to your team, executives, and suppliers.
* Perform periodic reviews of how reward scores are computed, run offline evaluations to see which features are affecting Personalizer, and use the results to eliminate unneeded features.
### Security and privacy
*AI Systems should be secure and respect privacy*. When using Personalizer:
* *Inform users up front about the data that is collected and how it is used, and obtain their consent beforehand*, following your local and industry regulations.
* *Provide privacy-protecting user controls.* For applications that store personal information, consider providing an easy-to-find button for functions such as:
  * `Show me all you know about me`
  * `Forget my last interaction`
  * `Delete all you know about me`

In some cases, these may be legally required. Consider the tradeoffs in retraining models periodically so they don't contain traces of deleted data.
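As a rough sketch of how those three controls might map to handlers, assuming a hypothetical per-user store kept by your application (this bookkeeping is not part of any Personalizer API):

```python
# Hypothetical per-user store kept by the application; an illustrative
# assumption, not a Personalizer API.
user_store: dict = {}

def show_all_data(user_id: str) -> dict:
    """'Show me all you know about me': return everything stored for this user."""
    return user_store.get(user_id, {})

def forget_last_interaction(user_id: str) -> None:
    """'Forget my last interaction': drop the most recent interaction record."""
    interactions = user_store.get(user_id, {}).get("interactions", [])
    if interactions:
        interactions.pop()

def delete_all_data(user_id: str) -> None:
    """'Delete all you know about me': remove the user's record entirely.
    Traces may persist in trained models until you retrain, as noted above."""
    user_store.pop(user_id, None)
```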
### Inclusiveness
*Address a broad range of human needs and experiences*.
* *Provide personalized experiences for accessibility-enabled interfaces.* The efficiency that comes from good personalization (applied to reduce the amount of effort, movement, and needless repetition in interactions) can be especially beneficial to people with disabilities.
* *Adjust application behavior to context.* You can use Personalizer to disambiguate between intents in a chat bot, for example, as the right interpretation may be contextual and one size may not fit all; see the sketch after this list.
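As a minimal sketch of that chat bot scenario, the payload below mirrors the shape of a Rank request (an event ID, context features, and candidate actions); the intents, feature values, and helper function are illustrative assumptions, not a verified SDK call:

```python
# Illustrative sketch: let Personalizer pick among plausible interpretations of
# an ambiguous utterance. The intents and feature values are assumptions; the
# payload shape mirrors the Rank request (eventId, contextFeatures, actions).
import uuid

def build_rank_request(utterance: str, time_of_day: str) -> dict:
    return {
        "eventId": str(uuid.uuid4()),
        "contextFeatures": [
            {"timeOfDay": time_of_day},
            {"utteranceLength": len(utterance)},
        ],
        "actions": [
            {"id": "check_order_status", "features": [{"domain": "commerce"}]},
            {"id": "track_shipment", "features": [{"domain": "logistics"}]},
            {"id": "talk_to_agent", "features": [{"domain": "support"}]},
        ],
    }

request = build_rank_request("where is my stuff", "evening")
# Send `request` to the Rank endpoint; later, reward the event based on whether
# the chosen interpretation actually satisfied the user.
```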
## Proactive readiness for increased data protection and governance
Any person thinking about the side effects of any technology is limited by their own perspective and life experience. Expand the range of opinions available by bringing more diverse voices into your teams, user groups, or advisory boards, and make it possible and encouraged for them to speak up. Consider training and learning materials to further expand team knowledge in this domain and to build capability for discussing complex and sensitive topics.

Consider treating tasks regarding responsible use just like other cross-cutting tasks in the application lifecycle, such as tasks related to user experience, security, or DevOps. These tasks and their requirements can't be an afterthought. Responsible use should be discussed and verified throughout the application lifecycle.
## Questions and feedback
Microsoft is continuously putting effort into tools and documents to help you act on these responsibilities. Our team invites you to [provide feedback to Microsoft](mailto:[email protected]?subject%3DPersonalizer%20Responsible%20Use%20Feedback&body%3D%5BPlease%20share%20any%20question%2C%20idea%20or%20concern%5D) if you believe additional tools, product features, and documents would help you implement these guidelines for using Personalizer.
## Recommended reading
* See Microsoft's six principles for the responsible development of AI published in the January 2018 book, [The Future Computed](https://news.microsoft.com/futurecomputed/).
* [Who Owns the Future?](https://www.goodreads.com/book/show/15802693-who-owns-the-future) by Jaron Lanier.
* [Weapons of Math Destruction](https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction) by Cathy O'Neil.
* [Ethics and Data Science](https://www.oreilly.com/library/view/ethics-and-data/9781492043898/) by DJ Patil, Hilary Mason, and Mike Loukides.