
Commit 15fdc5c

Merge pull request #109737 from diberry/0331-per-bk-lnks-2
[CogSvcs] Personalizer - brkn links
2 parents 2ab438b + 7b4266d


articles/cognitive-services/personalizer/ethics-responsible-use.md

Lines changed: 19 additions & 19 deletions
@@ -1,7 +1,7 @@
---
title: Ethics and responsible use - Personalizer
titleSuffix: Azure Cognitive Services
description: These guidelines are aimed at helping you to implement personalization in a way that helps you build trust in your company and service. Be sure to pause to research, learn and deliberate on the impact of the personalization on people's lives. When in doubt, seek guidance.
services: cognitive-services
author: diberry
manager: nitinme
@@ -11,16 +11,16 @@ ms.topic: conceptual
ms.date: 06/12/2019
ms.author: diberry
---

# Guidelines for responsible implementation of Personalizer

For people and society to realize the full potential of AI, implementations need to be designed in such a way that they earn the trust of those adding AI to their applications and the users of applications built with AI. These guidelines are aimed at helping you to implement Personalizer in a way that helps you build trust in your company and service. Be sure to pause to research, learn and deliberate on the impact of the personalization on people's lives. When in doubt, seek guidance.

These guidelines are not intended as legal advice, and you should separately ensure that your application complies with the fast-paced developments in the law in this area and in your sector.

Also, in designing your application using Personalizer, you should consider a broad set of responsibilities you have when developing any data-centric AI system, including ethics, privacy, security, safety, inclusion, transparency, and accountability. You can read more about these in the [Recommended reading](#recommended-reading) section.

You can use the following content as a starter checklist, and customize and refine it to your scenario. This document has two main sections: the first highlights responsible use considerations when choosing scenarios, features, and rewards for Personalizer; the second takes a set of values Microsoft believes should be considered when building AI systems, and provides actionable suggestions and risks on how your use of Personalizer influences them.

## Your responsibility
@@ -37,18 +37,18 @@ Microsoft is continuously putting effort into its tools and documents to help yo
Implementing Personalizer can be of great value to your users and your business. To implement Personalizer responsibly, start by considering the following guidelines when:

* Choosing use cases to apply Personalization.
* Building [reward functions](concept-rewards.md).
* Choosing which [features](concepts-features.md) about the context and possible actions you will use for personalization.

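A minimal sketch of the loop these choices feed into follows, assuming a hypothetical Personalizer resource endpoint and key and the service's v1.0 REST paths (adapt it to your SDK of choice): the application calls Rank to choose an action, observes the outcome, and then sends a reward score for that event.

```python
import requests

# Hypothetical values -- replace with your own Personalizer resource settings.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}

def rank(context_features, actions):
    """Ask Personalizer to pick the best action for this context."""
    body = {"contextFeatures": context_features, "actions": actions}
    response = requests.post(
        f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=body
    )
    response.raise_for_status()
    return response.json()  # includes "eventId" and "rewardActionId"

def send_reward(event_id, score):
    """Report how well the chosen action worked out, as a score in [0, 1]."""
    response = requests.post(
        f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
        headers=HEADERS,
        json={"value": score},
    )
    response.raise_for_status()

# Example: choose a layout for a reader, then report the observed outcome.
result = rank(
    context_features=[{"timeOfDay": "morning"}, {"device": "mobile"}],
    actions=[
        {"id": "article-list", "features": [{"style": "list"}]},
        {"id": "article-grid", "features": [{"style": "grid"}]},
    ],
)
send_reward(result["eventId"], score=1.0)  # e.g., the user read the content
```

Most of the responsible-use decisions discussed in the sections that follow surface in what you send as features and in how you compute that reward score.
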
## Choosing use cases for Personalizer
Using a service that learns to personalize content and user interfaces is useful. It can also be misapplied if the personalization creates negative side effects in the real world, including when users are unaware of content personalization.

Examples of uses of Personalizer with heightened potential for negative side effects or a lack of transparency include scenarios where the "reward" depends on many long-term, complex factors that, when over-simplified into an immediate reward, can have unfavorable results for individuals. These tend to be considered "consequential" choices, or choices that involve a risk of harm. For example:

* **Finance**: Personalizing offers on loan, financial, and insurance products, where risk factors are based on data the individuals don't know about, can't obtain, or can't dispute.
* **Education**: Personalizing ranks for school courses and education institutions, where recommendations may propagate biases and reduce users' awareness of other options.
* **Democracy and Civic Participation**: Personalizing content for users with the goal of influencing opinions is consequential and manipulative.
* **Third-party reward evaluation**: Personalizing items where the reward is based on a later third-party evaluation of the user, instead of a reward generated by the user's own behavior.
@@ -73,15 +73,15 @@ Consider the effect of these features:
* **User demographics**: Features regarding sex, gender, age, race, and religion may not be allowed in certain applications for regulatory reasons, and it may not be ethical to personalize around them, because the personalization would propagate generalizations and bias. An example of this bias propagation is a job posting for engineering not being shown to elderly or gender-based audiences.
* **Locale information**: In many places of the world, location information (such as a zip code, postal code, or neighborhood name) can be highly correlated with income, race, and religion.
* **User Perception of Fairness**: Even in cases where your application is making sound decisions, consider the effect of users perceiving that content displayed in your application changes in a way that appears to be correlated with features that would be discriminatory.
* **Unintended Bias in Features**: There are types of biases that may be introduced by using features that only affect a subset of the population. This requires extra attention if features are being generated algorithmically, such as when using image analysis to extract items in a picture or text analytics to discover entities in text. Make yourself aware of the characteristics of the services you use to create these features.

Apply the following practices when choosing features to send in contexts and actions to Personalizer:

* Consider the legality and ethics of using certain features for some applications, and whether innocent-looking features may be proxies for others you want to or should avoid.
* Be transparent to users that algorithms and data analysis are being used to personalize the options they see.
* Ask yourself: Would my users care and be happy if I used this information to personalize the content for them? Would I feel comfortable showing them how the decision was made to highlight or hide certain items?
* Use behavioral data rather than classification or segmentation data based on other characteristics. Demographic information was traditionally used by retailers for historical reasons (demographic attributes seemed simple to collect and act on before the digital era), but question how relevant demographic information is when you have actual interaction, contextual, and historical data that relates more closely to the preferences and identity of users.
* Consider how to prevent features from being 'spoofed' by malicious users, which, if exploited in large numbers, can lead to training Personalizer in misleading ways to purposefully disrupt, embarrass, and harass certain classes of users.
* When appropriate and feasible, design your application to allow your users to opt in or opt out of having certain personal features used. These could be grouped, such as "Location information", "Device information", "Past purchase history", and so on; a sketch of this kind of consent filtering follows this list.
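One way to honor those opt-in and opt-out choices is to filter feature groups against the user's stored consent before every Rank call. The sketch below is illustrative only; the group names and consent store are hypothetical:

```python
def filter_context_features(feature_groups, consent):
    """Keep only the feature groups the user has consented to share."""
    return [
        {group: features}
        for group, features in feature_groups.items()
        if consent.get(group, False)  # exclude unknown groups by default
    ]

context_features = filter_context_features(
    feature_groups={
        "locationInformation": {"city": "Seattle"},
        "deviceInformation": {"type": "mobile"},
        "pastPurchaseHistory": {"recentCategory": "books"},
    },
    consent={
        "locationInformation": False,  # user opted out
        "deviceInformation": True,
        "pastPurchaseHistory": True,
    },
)
# context_features now omits locationInformation, honoring the opt-out.
```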

@@ -96,7 +96,7 @@ For example, rewarding on clicks will make the Personalizer Service seek clicks
As a contrasting example, a news site may want to set rewards tied to something more meaningful than clicks, such as "Did the user spend enough time to read the content?" or "Did they click on relevant articles or references?" With Personalizer, it is easy to tie metrics closely to rewards, but be careful not to confound short-term user engagement with good outcomes.

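For example, a dwell-time based reward for that news scenario might look like the sketch below; the weights and thresholds are illustrative assumptions, not recommendations:

```python
def article_reward(seconds_on_page, expected_read_seconds, clicked_related):
    """Blend reading depth with follow-on engagement into a reward in [0, 1],
    instead of rewarding the initial click alone. Weights are illustrative."""
    reading_depth = min(seconds_on_page / expected_read_seconds, 1.0)
    follow_on_bonus = 0.2 if clicked_related else 0.0
    return min(0.8 * reading_depth + follow_on_bonus, 1.0)

print(article_reward(30, 180, False))  # quick skim: ~0.13
print(article_reward(200, 180, True))  # full read plus a related click: 1.0
```
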
### Unintended consequences from reward scores
Reward scores may be built with the best of intentions, but can still create unexpected consequences or unintended effects on how Personalizer ranks content.

Consider the following examples:

@@ -117,7 +117,7 @@ The following are areas of design for responsible implementations of AI. Learn m
![AI Values from Future Computed](media/ethics-and-responsible-use/ai-values-future-computed.png)

### Accountability
*People who design and deploy AI Systems must be accountable for how their systems operate*.

* Create internal guidelines on how to implement Personalizer, document them, and communicate them to your team, executives, and suppliers.
* Perform periodic reviews of how reward scores are computed, run offline evaluations to see which features are affecting Personalizer, and use the results to eliminate unneeded features.
@@ -150,17 +150,17 @@ The following are areas of design for responsible implementations of AI. Learn m
*AI Systems should be secure and respect privacy*. When using Personalizer:

* *Inform users up front about the data that is collected and how it is used, and obtain their consent beforehand*, following your local and industry regulations.
* *Provide privacy-protecting user controls.* For applications that store personal information, consider providing an easy-to-find button for functions such as:
    * `Show me all you know about me`
    * `Forget my last interaction`
    * `Delete all you know about me`

In some cases, these may be legally required. Consider the tradeoffs in retraining models periodically so they don't contain traces of deleted data.
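As a rough sketch of how those controls might map onto an application backend: the store and method names below are hypothetical, and this covers only the data your application keeps. Interaction data logged by the Personalizer service itself must be handled through the service's own retention and deletion mechanisms.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Hypothetical application-side store backing the privacy buttons above."""

    interactions: list = field(default_factory=list)

    def show_all(self):
        # "Show me all you know about me"
        return list(self.interactions)

    def forget_last(self):
        # "Forget my last interaction"
        if self.interactions:
            self.interactions.pop()

    def delete_all(self):
        # "Delete all you know about me"
        self.interactions.clear()
```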

### Inclusiveness
*Address a broad range of human needs and experiences*.

* *Provide personalized experiences for accessibility-enabled interfaces.* The efficiency that comes from good personalization, applied to reduce the amount of effort, movement, and needless repetition in interactions, can be especially beneficial to people with disabilities.
* *Adjust application behavior to context*. You can use Personalizer to disambiguate between intents in a chat bot, for example, as the right interpretation may be contextual and one size may not fit all.

## Proactive readiness for increased data protection and governance
@@ -180,14 +180,14 @@ Consider creating methods for team members, users and business owners to report
Any person thinking about the side effects of any technology is limited by their own perspective and life experience. Expand the range of opinions available by bringing more diverse voices into your teams, user groups, or advisory boards, and make sure it is possible and encouraged for them to speak up. Consider training and learning materials to further expand the team's knowledge in this domain and to add capability to discuss complex and sensitive topics.

Consider treating tasks regarding responsible use just like other crosscutting tasks in the application lifecycle, such as tasks related to user experience, security, or DevOps. These tasks and their requirements can't be an afterthought. Responsible use should be discussed and verified throughout the application lifecycle.

## Questions and feedback

Microsoft is continuously putting effort into tools and documents to help you act on these responsibilities. Our team invites you to [provide feedback to Microsoft](mailto:[email protected]?subject=Personalizer%20Responsible%20Use%20Feedback&body=%5BPlease%20share%20any%20question%2C%20idea%20or%20concern%5D) if you believe additional tools, product features, and documents would help you implement these guidelines for using Personalizer.

## Recommended reading

* See Microsoft's six principles for the responsible development of AI published in the January 2018 book, [The Future Computed](https://news.microsoft.com/futurecomputed/).
* [Who Owns the Future?](https://www.goodreads.com/book/show/15802693-who-owns-the-future) by Jaron Lanier.
* [Weapons of Math Destruction](https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction) by Cathy O'Neil.
* [Ethics and Data Science](https://www.oreilly.com/library/view/ethics-and-data/9781492043898/) by DJ Patil, Hilary Mason, and Mike Loukides.
