description: This article discusses the use of active and inactive events, learning settings, and learning policies within the Personalizer service.
services: cognitive-services
author: diberry
manager: nitinme
# Active and inactive events
When your application calls the Rank API, you receive the action the application should show in the **rewardActionId** field. From that moment, Personalizer expects a Reward call that has the same eventId. The reward score will be used to train the model for future Rank calls. If no Reward call is received for the eventId, a default reward is applied. Default rewards are set in the Azure portal.
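
To make the flow concrete, here's a minimal sketch of a Rank call followed by a Reward call against the Personalizer REST endpoints. The endpoint, key, actions, and features are placeholders, and the request shapes are simplified; check the Personalizer API reference for the exact schema of the API version you target.

```python
import uuid
import requests

# Placeholder values -- substitute your own Personalizer resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

event_id = str(uuid.uuid4())

# Rank: send candidate actions plus context features; the response names the
# chosen action in rewardActionId.
rank_body = {
    "eventId": event_id,
    "contextFeatures": [{"timeOfDay": "morning", "device": "mobile"}],
    "actions": [
        {"id": "article-sports", "features": [{"topic": "sports"}]},
        {"id": "article-finance", "features": [{"topic": "finance"}]},
    ],
}
rank_response = requests.post(
    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body
)
reward_action_id = rank_response.json()["rewardActionId"]

# ...show the action identified by reward_action_id to the user...

# Reward: report how well the chosen action worked, using the same eventId.
# If this call never happens, Personalizer applies the default reward instead.
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 1.0},
)
```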
In some scenarios, the application might need to call Rank before it even knows if the result will be used or displayed to the user. This might happen in situations where, for example, the page rendering of promoted content is overwritten by a marketing campaign. If the result of the Rank call was never used and the user never saw it, don't send a corresponding Reward call.

Typically, these scenarios happen when:
* You're prerendering UI that the user might or might not get to see.
* Your application is doing predictive personalization in which Rank calls are made with little real-time context and the application might or might not use the output.

In these cases, call Rank and request that the event be _inactive_. Personalizer won't expect a reward for this event, and it won't apply a default reward.

Later in your business logic, if the application uses the information from the Rank call, just _activate_ the event. As soon as the event is active, Personalizer expects an event reward. If no explicit call is made to the Reward API, Personalizer applies a default reward.
## Inactive events
To disable training for an event, call Rank by using `learningEnabled = False`. For an inactive event, learning is implicitly activated if you send a reward for the eventId or call the `activate` API for that eventId.
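
As a hedged sketch of the inactive-to-active flow, under the same placeholder assumptions as the earlier example, the snippet below asks Rank to leave the event inactive and activates it only when the application actually uses the result. The exact name of the flag on the Rank request varies by API version (this article calls it `learningEnabled`; some REST versions expose it as `deferActivation`), so confirm it against the API reference.

```python
import uuid
import requests

# Placeholder values -- substitute your own Personalizer resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

event_id = str(uuid.uuid4())

rank_body = {
    "eventId": event_id,
    # Assumption: keep the event inactive at Rank time. Depending on the API
    # version, this flag is learningEnabled = False or deferActivation = True.
    "deferActivation": True,
    "contextFeatures": [{"pageType": "home"}],
    "actions": [
        {"id": "promo-spring", "features": [{"season": "spring"}]},
        {"id": "promo-clearance", "features": [{"discount": "high"}]},
    ],
}
requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body)

# Only if the ranked content is actually shown to the user, activate the event...
requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/activate", headers=HEADERS)

# ...and send a reward as usual. Sending a reward for an inactive eventId also
# activates it implicitly.
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 0.5},
)
```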
## Learning settings

Learning settings determine the *hyperparameters* of model training. Two models trained on the same data but with different learning settings will end up different.
### Import and export learning policies

You can import and export learning-policy files from the Azure portal. Use this method to save existing policies, test them, replace them, and archive them in your source code control as artifacts for future reference and audit.
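
For orientation only, an exported learning-settings file is a small JSON document whose arguments string drives model training. The example below is illustrative; the exact field names and argument values your resource exports may differ.

```json
{
  "name": "current learning settings",
  "arguments": "--cb_explore_adf --epsilon 0.2 --power_t 0 -l 0.001 --cb_type mtr -q ::"
}
```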
### Understand learning policy settings

The settings in the learning policy aren't intended to be changed. Change settings only if you understand how they affect Personalizer. Without this knowledge, you could cause problems, including invalidating Personalizer models.
### Compare learning policies

You can compare how different learning policies perform against past data in Personalizer logs by doing [offline evaluations](concepts-offline-evaluation.md).

[Upload your own learning policies](how-to-offline-evaluation.md) to compare them with the current learning policy.
### Optimize learning policies

Personalizer can create an optimized learning policy in an [offline evaluation](how-to-offline-evaluation.md). An optimized learning policy that has better rewards in an offline evaluation will yield better results when it's used online in Personalizer.

After you optimize a learning policy, you can apply it directly to Personalizer so it immediately replaces the current policy. Or you can save the optimized policy for further evaluation and later decide whether to discard, save, or apply it.