articles/cognitive-services/personalizer/concept-active-learning.md (18 additions & 5 deletions)
@@ -8,7 +8,7 @@ manager: nitinme
  ms.service: cognitive-services
  ms.subservice: personalizer
  ms.topic: conceptual
- ms.date: 05/30/2019
+ ms.date: 01/09/2019
  ms.author: diberry
  ---
@@ -20,10 +20,10 @@ In some scenarios, the application might need to call Rank before it even knows
  Typically, these scenarios happen when:
- * You're prerendering UI that the user might or might not get to see.
- * Your application is doing predictive personalization in which Rank calls are made with little real-time context and the application might or might not use the output.
+ * You're prerendering UI that the user might or might not get to see.
+ * Your application is doing predictive personalization in which Rank calls are made with little real-time context and the application might or might not use the output.
- In these cases, use Personalizer to call Rank, requesting the event to be _inactive_. Personalizer won't expect a reward for this event, and it won't apply a default reward.
+ In these cases, use Personalizer to call Rank, requesting the event to be _inactive_. Personalizer won't expect a reward for this event, and it won't apply a default reward.
  Later in your business logic, if the application uses the information from the Rank call, just _activate_ the event. As soon as the event is active, Personalizer expects an event reward. If no explicit call is made to the Reward API, Personalizer applies a default reward.
  ## Inactive events
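The inactive-event flow described in this hunk can be exercised against the Personalizer REST API. The following is a minimal sketch, assuming the v1.0 routes (`/personalizer/v1.0/rank`, `/personalizer/v1.0/events/{eventId}/activate`, `/personalizer/v1.0/events/{eventId}/reward`); the endpoint, key, event ID, actions, and features are illustrative placeholders, not values from the article.

```python
# Minimal sketch of the inactive-event flow: defer activation, then activate and reward.
# Endpoint, key, event ID, actions, and features below are illustrative placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
EVENT_ID = "event-123"

# 1. Rank with deferActivation so the event starts out inactive:
#    no reward is expected and no default reward is applied.
rank_request = {
    "eventId": EVENT_ID,
    "deferActivation": True,
    "contextFeatures": [{"timeOfDay": "evening", "device": "mobile"}],
    "actions": [
        {"id": "article-a", "features": [{"topic": "sports"}]},
        {"id": "article-b", "features": [{"topic": "politics"}]},
    ],
}
rank = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank",
                     json=rank_request, headers=HEADERS).json()
print("Prerendering action:", rank["rewardActionId"])

# 2. Only if the prerendered UI is actually shown, activate the event...
requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{EVENT_ID}/activate", headers=HEADERS)

# 3. ...and send an explicit reward; if the UI is never shown, skip both calls
#    and Personalizer discards the event without applying a default reward.
requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{EVENT_ID}/reward",
              json={"value": 1.0}, headers=HEADERS)
```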
@@ -38,15 +38,28 @@ Learning settings determine the *hyperparameters* of the model training. Two mod
  You can import and export learning-policy files from the Azure portal. Use this method to save existing policies, test them, replace them, and archive them in your source code control as artifacts for future reference and audit.
+ Learn [how to](how-to-learning-policy.md) import and export a learning policy.
+
  ### Understand learning policy settings
  The settings in the learning policy aren't intended to be changed. Change settings only if you understand how they affect Personalizer. Without this knowledge, you could cause problems, including invalidating Personalizer models.
+ Personalizer uses [vowpalwabbit](https://github.com/VowpalWabbit) to train and score the events. Refer to the [vowpalwabbit documentation](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-line-arguments) on how to edit the learning settings using vowpalwabbit. Once you have the correct command line arguments, save the command to a file with the following format (replace the arguments property value with the desired command) and upload the file to import learning settings in the **Model and Learning Settings** pane in the Azure portal for your Personalizer resource.
+
+ The following `.json` is an example of a learning policy.
  You can compare how different learning policies perform against past data in Personalizer logs by doing [offline evaluations](concepts-offline-evaluation.md).
- [Upload your own learning policies](how-to-offline-evaluation.md) to compare them with the current learning policy.
+ [Upload your own learning policies](how-to-learning-policy.md) to compare them with the current learning policy.
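The `.json` body that the added line refers to is not reproduced in this diff excerpt. Based on the paragraph above, which describes a file whose `arguments` property holds the vowpalwabbit command line, an illustrative learning-settings file might look like the following; the `name` field and the specific vowpalwabbit arguments are assumptions, not values taken from the article.

```json
{
  "name": "example-learning-settings",
  "arguments": "--cb_explore_adf --epsilon 0.2 --power_t 0 -l 0.001 --cb_type mtr -q ::"
}
```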
articles/cognitive-services/personalizer/how-to-settings.md (10 additions & 16 deletions)
@@ -10,7 +10,7 @@ ms.subservice: personalizer
  ms.topic: conceptual
  ms.date: 10/23/2019
  ms.author: diberry
- #Customer intent:
+ #Customer intent:
  ---
  # Configure Personalizer
@@ -19,9 +19,9 @@ Service configuration includes how the service treats rewards, how often the ser
  ## Create Personalizer resource
- Create a Personalizer resource for each feedback loop.
+ Create a Personalizer resource for each feedback loop.
- 1. Sign in to [Azure portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer). The previous link takes you to the **Create** page for the Personalizer service.
+ 1. Sign in to [Azure portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer). The previous link takes you to the **Create** page for the Personalizer service.
  1. Enter your service name, select a subscription, location, pricing tier, and resource group.
  1. Select the confirmation and select **Create**.
@@ -30,7 +30,7 @@ Create a Personalizer resource for each feedback loop.
  ## Configure service in the Azure portal
  1. Sign in to the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer).
- 1. Find your Personalizer resource.
+ 1. Find your Personalizer resource.
  1. In the **Resource management** section, select **Configuration**.
  Before leaving the Azure portal, copy one of your resource keys from the **Keys** page. You will need this to use the [Personalizer SDK](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.personalizer).
@@ -51,9 +51,9 @@ Configure the service for your feedback loop's use of rewards. Changes to the fo
  After changing these values, make sure to select **Save**.
- ### Configure exploration
+ ### Configure exploration
- Personalization is able to discover new patterns and adapt to user behavior changes over time by exploring alternatives. The **Exploration** value determines what percentage of Rank calls are answered with exploration.
+ Personalization is able to discover new patterns and adapt to user behavior changes over time by exploring alternatives. The **Exploration** value determines what percentage of Rank calls are answered with exploration.
  Changes to this value will reset the current Personalizer model and retrain it with the last 2 days of data.
@@ -63,7 +63,7 @@ After changing this value, make sure to select **Save**.
  ### Model update frequency
- The latest model, trained from Reward API calls from every active event, isn't automatically used by the Personalizer Rank call. The **Model update frequency** sets how often the model used by the Rank call is updated.
+ The latest model, trained from Reward API calls from every active event, isn't automatically used by the Personalizer Rank call. The **Model update frequency** sets how often the model used by the Rank call is updated.
  High model update frequencies are useful for situations where you want to closely track changes in user behaviors. Examples include sites that run on live news, viral content, or live product bidding. You could use a 15-minute frequency in these scenarios. For most use cases, a lower update frequency is effective. One-minute update frequencies are useful when debugging an application's code using Personalizer, doing demos, or interactively testing machine learning aspects.
@@ -79,15 +79,10 @@ After changing this value, make sure to select **Save**.
  ## Export the Personalizer model
- From the Resource management's section for **Model and learning settings**, review model creation and last updated date and export the current model. You can use the Azure portal or the Personalizer APIs to export a model file for archival purposes.
+ From the Resource management's section for **Model and learning settings**, review model creation and last updated date and export the current model. You can use the Azure portal or the Personalizer APIs to export a model file for archival purposes.
- ## Import and export learning policy
- From the Resource management's section for **Model and learning settings**, import a new learning policy or export the current learning policy.
- You can get learning policy files from previous exports, or downloading the optimized policies discovered during Offline Evaluations. Making manual changes to these files will affect machine learning performance and accuracy of offline evaluations, and Microsoft cannot vouch for the accuracy of machine learning and evaluations, or service exceptions resulting from manually edited policies.
  ## Clear data for your learning loop
  1. In the Azure portal, for your Personalizer resource, on the **Model and learning settings** page, select **Clear data**.
@@ -101,9 +96,8 @@ You can get learning policy files from previous exports, or downloading the opti
  |Reset the Personalizer model.|This model changes on every retraining. This frequency of training is specified in **upload model frequency** on the **Configuration** page. |
  |Set the learning policy to default.|If you have changed the learning policy as part of an offline evaluation, this resets to the original learning policy.|
- 1. Select **Clear selected data** to begin the clearing process. Status is reported in Azure notifications, in the top-right navigation.
+ 1. Select **Clear selected data** to begin the clearing process. Status is reported in Azure notifications, in the top-right navigation.
  ## Next steps
- [Learn about region availability](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
+ [Learn how to manage a learning policy](how-to-learning-policy.md)
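The export paragraph in this file notes that the current model can be pulled through the Personalizer APIs as well as from the portal. Below is a minimal sketch, assuming the v1.0 `GET /personalizer/v1.0/model` route; the endpoint, key, and output file name are placeholders, not values from the article.

```python
# Minimal sketch: export the current Personalizer model for archival.
# Endpoint, key, and file name are placeholders; the route GET /personalizer/v1.0/model is assumed.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

response = requests.get(f"{ENDPOINT}/personalizer/v1.0/model", headers=HEADERS)
response.raise_for_status()

# Store the raw model bytes next to your exported learning policies in source control.
with open("personalizer-model.bin", "wb") as f:
    f.write(response.content)
```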
articles/cognitive-services/personalizer/troubleshooting.md (5 additions & 0 deletions)
@@ -67,6 +67,11 @@ This is typically due to timestamps, user IDs or some other fine grained feature
  The offline evaluation uses the trained model data from the events in that time period. If you did not send any data in the time period between start and end time of the evaluation, it will complete without any results. Submit a new offline evaluation by selecting a time range with events you know were sent to Personalizer.
+ ## Learning policy
+
+ ### How do I import a learning policy?
+
+ Learn more about [learning policy concepts](concept-active-learning.md#understand-learning-policy-settings) and [how to apply](how-to-learning-policy.md) a new learning policy. If you do not want to select a learning policy, you can use the [offline evaluation](how-to-offline-evaluation.md) to suggest a learning policy, based on your current events.