Commit 2deb00e (2 parents: cfefdfe + 677e805)
Merge pull request #100627 from diberry/diberry/0109-personalizer-troubleshooting-2
[Cogsvcs] Personalizer - import learning policy

File tree: 5 files changed (+65, -21 lines)

articles/cognitive-services/personalizer/concept-active-learning.md

Lines changed: 18 additions & 5 deletions
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: personalizer
 ms.topic: conceptual
-ms.date: 05/30/2019
+ms.date: 01/09/2019
 ms.author: diberry
 ---
 
@@ -20,10 +20,10 @@ In some scenarios, the application might need to call Rank before it even knows
 
 Typically, these scenarios happen when:
 
-* You're prerendering UI that the user might or might not get to see.
-* Your application is doing predictive personalization in which Rank calls are made with little real-time context and the application might or might not use the output.
+* You're prerendering UI that the user might or might not get to see.
+* Your application is doing predictive personalization in which Rank calls are made with little real-time context and the application might or might not use the output.
 
-In these cases, use Personalizer to call Rank, requesting the event to be _inactive_. Personalizer won't expect a reward for this event, and it won't apply a default reward.
+In these cases, use Personalizer to call Rank, requesting the event to be _inactive_. Personalizer won't expect a reward for this event, and it won't apply a default reward.
 Later in your business logic, if the application uses the information from the Rank call, just _activate_ the event. As soon as the event is active, Personalizer expects an event reward. If no explicit call is made to the Reward API, Personalizer applies a default reward.
 
 ## Inactive events
@@ -38,15 +38,28 @@ Learning settings determine the *hyperparameters* of the model training. Two mod
 
 You can import and export learning-policy files from the Azure portal. Use this method to save existing policies, test them, replace them, and archive them in your source code control as artifacts for future reference and audit.
 
+Learn [how to](how-to-learning-policy.md) import and export a learning policy.
+
 ### Understand learning policy settings
 
 The settings in the learning policy aren't intended to be changed. Change settings only if you understand how they affect Personalizer. Without this knowledge, you could cause problems, including invalidating Personalizer models.
 
+Personalizer uses [vowpalwabbit](https://github.com/VowpalWabbit) to train and score events. Refer to the [vowpalwabbit documentation](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-line-arguments) to learn how to edit the learning settings. After you have the correct command-line arguments, save the command to a file in the following format (replace the `arguments` property value with your command), then import the file on the **Model and Learning Settings** pane in the Azure portal for your Personalizer resource.
+
+The following `.json` is an example of a learning policy.
+
+```json
+{
+  "name": "new learning settings",
+  "arguments": " --cb_explore_adf --epsilon 0.2 --power_t 0 -l 0.001 --cb_type mtr -q ::"
+}
+```
+
 ### Compare learning policies
 
 You can compare how different learning policies perform against past data in Personalizer logs by doing [offline evaluations](concepts-offline-evaluation.md).
 
-[Upload your own learning policies](how-to-offline-evaluation.md) to compare them with the current learning policy.
+[Upload your own learning policies](how-to-learning-policy.md) to compare them with the current learning policy.
 
 ### Optimize learning policies
 
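The learning-settings file added above is small enough to generate and sanity-check with a script before uploading it in the portal. The following is a minimal sketch, not an official tool; the helper names are invented for illustration, and the only structure it relies on is the two-key `name`/`arguments` JSON format shown in the diff.

```python
import json

def write_learning_settings(path, name, arguments):
    """Write a learning-settings file in the two-key JSON format
    the article shows: {"name": ..., "arguments": ...}."""
    settings = {"name": name, "arguments": arguments}
    with open(path, "w") as f:
        json.dump(settings, f, indent=2)
    return settings

def validate_learning_settings(path):
    """Basic sanity checks before uploading in the Azure portal:
    exactly the expected keys, and arguments that look like
    vowpalwabbit command-line flags."""
    with open(path) as f:
        settings = json.load(f)
    assert set(settings) == {"name", "arguments"}, "unexpected keys"
    assert settings["arguments"].strip().startswith("--"), "expected vw flags"
    return settings

settings = write_learning_settings(
    "new-learning-settings.json",
    "new learning settings",
    " --cb_explore_adf --epsilon 0.2 --power_t 0 -l 0.001 --cb_type mtr -q ::",
)
checked = validate_learning_settings("new-learning-settings.json")
```

Catching a malformed file locally is cheaper than a failed upload, since the portal gives little feedback on why an import is rejected.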

articles/cognitive-services/personalizer/how-to-learning-policy.md

Lines changed: 30 additions & 0 deletions

@@ -0,0 +1,30 @@
+---
+title: Manage learning policy - Personalizer
+description: This article shows you how to import and export a Personalizer learning policy in the Azure portal.
+ms.topic: conceptual
+ms.date: 01/08/2019
+---
+
+# Manage learning policy
+
+Learning policy settings determine the _hyperparameters_ of the model training. The learning policy is defined in a `.json` file.
+
+## Import a new learning policy
+
+1. Open the [Azure portal](https://portal.azure.com), and select your Personalizer resource.
+1. Select **Model and learning settings** in the **Resource Management** section.
+1. For **Import learning settings**, select the file you created in the JSON format specified above, then select the **Upload** button.
+
+Wait for the notification that the learning policy was uploaded successfully.
+
+## Export a learning policy
+
+1. Open the [Azure portal](https://portal.azure.com), and select your Personalizer resource.
+1. Select **Model and learning settings** in the **Resource Management** section.
+1. Select the **Export learning settings** button. This saves the `.json` file to your local computer.
+
+## Next steps
+
+Learn about learning policy [concepts](concept-active-learning.md#learning-settings).
+
+[Learn about region availability](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
articles/cognitive-services/personalizer/how-to-settings.md

Lines changed: 10 additions & 16 deletions
@@ -10,7 +10,7 @@ ms.subservice: personalizer
 ms.topic: conceptual
 ms.date: 10/23/2019
 ms.author: diberry
-#Customer intent:
+#Customer intent:
 ---
 
 # Configure Personalizer
@@ -19,9 +19,9 @@ Service configuration includes how the service treats rewards, how often the ser
 
 ## Create Personalizer resource
 
-Create a Personalizer resource for each feedback loop.
+Create a Personalizer resource for each feedback loop.
 
-1. Sign in to [Azure portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer). The previous link takes you to the **Create** page for the Personalizer service.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer). This link takes you to the **Create** page for the Personalizer service.
 1. Enter your service name, select a subscription, location, pricing tier, and resource group.
 1. Select the confirmation and select **Create**.
 
@@ -30,7 +30,7 @@ Create a Personalizer resource for each feedback loop.
 ## Configure service in the Azure portal
 
 1. Sign in to the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer).
-1. Find your Personalizer resource.
+1. Find your Personalizer resource.
 1. In the **Resource management** section, select **Configuration**.
 
 Before leaving the Azure portal, copy one of your resource keys from the **Keys** page. You will need this to use the [Personalizer SDK](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.personalizer).
@@ -51,9 +51,9 @@ Configure the service for your feedback loop's use of rewards. Changes to the fo
 
 After changing these values, make sure to select **Save**.
 
-### Configure exploration
+### Configure exploration
 
-Personalization is able to discover new patterns and adapt to user behavior changes over time by exploring alternatives. The **Exploration** value determines what percentage of Rank calls are answered with exploration.
+Personalizer can discover new patterns and adapt to user behavior changes over time by exploring alternatives. The **Exploration** value determines what percentage of Rank calls are answered with exploration.
 
 Changes to this value will reset the current Personalizer model and retrain it with the last 2 days of data.
 
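The exploration percentage in the hunk above pairs with the `--epsilon 0.2` flag in the example learning policy. A minimal epsilon-greedy sketch makes the mechanics concrete; the action names and scores are invented for illustration, and Personalizer's actual exploration algorithm is internal to the service.

```python
import random

def rank(actions, scores, epsilon=0.2, rng=random):
    """Epsilon-greedy illustration of the Exploration setting:
    with probability `epsilon` return a random action (explore),
    otherwise return the highest-scoring action (exploit)."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: scores[a])

rng = random.Random(0)  # fixed seed so the simulation is repeatable
scores = {"article-a": 0.9, "article-b": 0.4, "article-c": 0.1}
picks = [rank(list(scores), scores, epsilon=0.2, rng=rng) for _ in range(10_000)]

# Fraction of calls answered with a non-best action. With 3 actions,
# 2/3 of the 20% exploration lands on a non-best choice, so this is
# roughly 0.13, not 0.2 exactly.
explored = sum(p != "article-a" for p in picks) / len(picks)
```

This also shows why exploration is a trade-off: every explored call knowingly serves a lower-scoring action in exchange for fresh training signal.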
@@ -63,7 +63,7 @@ After changing this value, make sure to select **Save**.
 
 ### Model update frequency
 
-The latest model, trained from Reward API calls from every active event, isn't automatically used by Personalizer Rank call. The **Model update frequency** sets how often the model used by the Rank call up updated.
+The latest model, trained from Reward API calls for every active event, isn't automatically used by the Personalizer Rank call. The **Model update frequency** sets how often the model used by the Rank call is updated.
 
 High model update frequencies are useful for situations where you want to closely track changes in user behaviors. Examples include sites that run on live news, viral content, or live product bidding. You could use a 15-minute frequency in these scenarios. For most use cases, a lower update frequency is effective. One-minute update frequencies are useful when debugging an application's code using Personalizer, doing demos, or interactively testing machine learning aspects.
 
@@ -79,15 +79,10 @@ After changing this value, make sure to select **Save**.
 
 ## Export the Personalizer model
 
-From the Resource management's section for **Model and learning settings**, review model creation and last updated date and export the current model. You can use the Azure portal or the Personalizer APIs to export a model file for archival purposes.
+From the **Model and learning settings** page in the Resource management section, review the model's creation and last-updated dates, and export the current model. You can use the Azure portal or the Personalizer APIs to export a model file for archival purposes.
 
 ![Export current Personalizer model](media/settings/export-current-personalizer-model.png)
 
-## Import and export learning policy
-
-From the Resource management's section for **Model and learning settings**, import a new learning policy or export the current learning policy.
-You can get learning policy files from previous exports, or downloading the optimized policies discovered during Offline Evaluations. Making manual changes to these files will affect machine learning performance and accuracy of offline evaluations, and Microsoft cannot vouch for the accuracy of machine learning and evaluations, or service exceptions resulting from manually edited policies.
-
 ## Clear data for your learning loop
 
 1. In the Azure portal, for your Personalizer resource, on the **Model and learning settings** page, select **Clear data**.
@@ -101,9 +96,8 @@ You can get learning policy files from previous exports, or downloading the opti
 |Reset the Personalizer model.|This model changes on every retraining. This frequency of training is specified in **upload model frequency** on the **Configuration** page. |
 |Set the learning policy to default.|If you have changed the learning policy as part of an offline evaluation, this resets to the original learning policy.|
 
-1. Select **Clear selected data** to begin the clearing process. Status is reported in Azure notifications, in the top-right navigation.
+1. Select **Clear selected data** to begin the clearing process. Status is reported in Azure notifications, in the top-right navigation.
 
 ## Next steps
 
-
-[Learn about region availability](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
+[Learn how to manage a learning policy](how-to-learning-policy.md)

articles/cognitive-services/personalizer/toc.yml

Lines changed: 2 additions & 0 deletions
@@ -66,6 +66,8 @@
 - name: Create and configure Personalizer
   href: how-to-settings.md
   displayName: azure, portal, settings, evaluation, offline, policy, export, model, configure
+- name: Manage your learning policy
+  href: how-to-learning-policy.md
 - name: Analyze
   items:
   - name: Offline evaluation

articles/cognitive-services/personalizer/troubleshooting.md

Lines changed: 5 additions & 0 deletions
@@ -67,6 +67,11 @@ This is typically due to timestamps, user IDs or some other fine grained feature
 
 The offline evaluation uses the trained model data from the events in that time period. If you did not send any data in the time period between start and end time of the evaluation, it will complete without any results. Submit a new offline evaluation by selecting a time range with events you know were sent to Personalizer.
 
+## Learning policy
+
+### How do I import a learning policy?
+
+Learn more about [learning policy concepts](concept-active-learning.md#understand-learning-policy-settings) and [how to apply](how-to-learning-policy.md) a new learning policy. If you don't want to select a learning policy yourself, you can use an [offline evaluation](how-to-offline-evaluation.md) to suggest a learning policy, based on your current events.
 
 ## Security
