Commit 21a8989

committed
requested changes
1 parent 7302af8 commit 21a8989

4 files changed: +7 −7 lines changed

4 files changed

+7
-7
lines changed

articles/cognitive-services/personalizer/concept-active-learning.md

Lines changed: 4 additions & 4 deletions

@@ -11,13 +11,13 @@ Learning settings determine the *hyperparameters* of the model training. Two mod
 
 [Learning policy and settings](how-to-settings.md#configure-rewards-for-the-feedback-loop) are set on your Personalizer resource in the Azure portal.
 
-### Import and export learning policies
+## Import and export learning policies
 
 You can import and export learning-policy files from the Azure portal. Use this method to save existing policies, test them, replace them, and archive them in your source code control as artifacts for future reference and audit.
 
 Learn [how to](how-to-manage-model.md#import-a-new-learning-policy) import and export a learning policy in the Azure portal for your Personalizer resource.
 
-### Understand learning policy settings
+## Understand learning policy settings
 
 The settings in the learning policy aren't intended to be changed. Change settings only if you understand how they affect Personalizer. Without this knowledge, you could cause problems, including invalidating Personalizer models.
 
@@ -32,13 +32,13 @@ The following `.json` is an example of a learning policy.
 }
 ```
 
-### Compare learning policies
+## Compare learning policies
 
 You can compare how different learning policies perform against past data in Personalizer logs by doing [offline evaluations](concepts-offline-evaluation.md).
 
 [Upload your own learning policies](how-to-manage-model.md) to compare them with the current learning policy.
 
-### Optimize learning policies
+## Optimize learning policies
 
 Personalizer can create an optimized learning policy in an [offline evaluation](how-to-offline-evaluation.md). An optimized learning policy that has better rewards in an offline evaluation will yield better results when it's used online in Personalizer.
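The second hunk above shows only the tail (`}` and closing fence) of the learning-policy JSON that the article refers to. For context when reviewing this diff: a learning policy file is a small JSON document whose `arguments` field holds Vowpal Wabbit command-line options. The values below are illustrative, not taken from this commit:

```json
{
  "name": "new learning settings",
  "arguments": "--cb_explore_adf --epsilon 0.2 --power_t 0 --l1 1e-08 --cb_type mtr -q ::"
}
```

This is the kind of file the "Import and export learning policies" section says you can archive in source control and compare in offline evaluations.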

articles/cognitive-services/personalizer/concepts-offline-evaluation.md

Lines changed: 1 addition & 1 deletion

@@ -97,7 +97,7 @@ We recommend looking at feature evaluations and asking:
 
 * What other, additional, features could your application or system provide along the lines of those that are more effective?
 * What features can be removed due to low effectiveness? Low effectiveness features add _noise_ into the machine learning.
-* Are there any features that are accidentally included? Examples of these are: personally identifiable information (PII), duplicate IDs, etc.
+* Are there any features that are accidentally included? Examples of these are: user identifiable information, duplicate IDs, etc.
 * Are there any undesirable features that shouldn't be used to personalize due to regulatory or responsible use considerations? Are there features that could proxy (that is, closely mirror or correlate with) undesirable features?
 
articles/cognitive-services/personalizer/how-personalizer-works.md

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ ms.date: 02/18/2020
 
 # How Personalizer works
 
-The Personalizer reource, your _learning loop_, uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on your data that you sent to it with the **Rank** and **Reward** calls. Every loop is completely independent of each other.
+The Personalizer resource, your _learning loop_, uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on your data that you sent to it with the **Rank** and **Reward** calls. Every loop is completely independent of each other.
 
 ## Rank and Reward APIs impact the model

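Because every loop learns only from the Rank and Reward traffic your application sends it, the calling pattern matters. The following minimal Python sketch shows that loop against the Personalizer REST API; the resource URL, key, event ID, and feature names are placeholders, and the `v1.0` paths reflect the API version current around this commit's date (an assumption, not confirmed by this diff):

```python
# Sketch of the Rank -> Reward loop: Rank picks an action for a context,
# Reward reports how well it worked. Endpoint and key are placeholders.
import json
import urllib.request

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"  # placeholder


def build_rank_request(event_id, context_features, actions):
    """Assemble the JSON body for a Rank call: the context the decision is
    made in, plus the candidate actions with their features."""
    return {
        "eventId": event_id,
        "contextFeatures": context_features,
        "actions": [{"id": aid, "features": feats} for aid, feats in actions],
        "excludedActions": [],
    }


def post(path, body):
    """POST a JSON body to the Personalizer resource (live network call)."""
    req = urllib.request.Request(
        f"{ENDPOINT}/personalizer/v1.0/{path}",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
    )
    return json.load(urllib.request.urlopen(req))


# Usage (requires a live Personalizer resource):
#   rank = post("rank", build_rank_request(
#       "event-1",
#       [{"timeOfDay": "morning"}],
#       [("article-a", [{"topic": "sports"}])]))
#   # ...show rank["rewardActionId"] to the user, observe behavior...
#   post("events/event-1/reward", {"value": 1.0})
```

Note that the reward is keyed to the `eventId` from the Rank call, which is how the loop ties an outcome back to the decision it trained on.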
articles/cognitive-services/personalizer/where-can-you-use-personalizer.md

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ You can apply Personalizer in situations where you meet or can implement the fol
 |Best single option|The contextual decision can be expressed as ranking the best option (action) from a limited set of choices.|
 |Scored result|How well the ranked choice worked for your application can be determined by measuring some aspect of user behavior, and expressing it in a _[reward score](concept-rewards.md)_.|
 |Relevant timing|The reward score doesn't bring in too many confounding or external factors. The experiment duration is low enough that the reward score can be computed while it's still relevant.|
-|Sufficient context features|You can express the context for the rank as a list of at least 5 [features](concepts-features.md) that you think would help make the right choice, and that doesn't include personally identifiable information. (PII).|
+|Sufficient context features|You can express the context for the rank as a list of at least 5 [features](concepts-features.md) that you think would help make the right choice, and that doesn't include user-specific identifiable information.|
 |Sufficient action features|You have information about each content choice, _action_, as a list of at least 5 [features](concepts-features.md) that you think will help Personalizer make the right choice.|
 |Daily data|There's enough events to stay on top of optimal personalization if the problem drifts over time (such as preferences in news or fashion). Personalizer will adapt to continuous change in the real world, but results won't be optimal if there's not enough events and data to learn from to discover and settle on new patterns. You should choose a use case that happens often enough. Consider looking for use cases that happen at least 500 times per day.|
 |Historical data|Your application can retain data for long enough to accumulate a history of at least 100,000 interactions. This allows Personalizer to collect enough data to perform offline evaluations and policy optimization.|
