
Commit 42e31d5

Merge pull request #92739 from ShannonLeavitt/concept-active-learning
edit pass: concept-active-learning
2 parents 6fa3b2b + 2391f07 commit 42e31d5

1 file changed, +20 -24 lines changed

@@ -1,7 +1,7 @@
 ---
 title: Active and inactive events - Personalizer
 titleSuffix: Azure Cognitive Services
-description: This article discusses the use of active and inactive events, learning settings and learning policies within the Personalizer service.
+description: This article discusses the use of active and inactive events, learning settings, and learning policies within the Personalizer service.
 services: cognitive-services
 author: diberry
 manager: nitinme
@@ -14,46 +14,42 @@ ms.author: diberry

 # Active and inactive events

-When your application calls the Rank API, you receive which Action the application should show in the rewardActionId field. From that moment, Personalizer will be expecting a Reward call with the same eventId. The reward score will be used to train the model that will be used for future Rank calls. If no Reward call is received for the eventId, a default reward will be applied. Default rewards are established in the Azure Portal.
+When your application calls the Rank API, you receive the action the application should show in the **rewardActionId** field. From that moment, Personalizer expects a Reward call that has the same eventId. The reward score will be used to train the model for future Rank calls. If no Reward call is received for the eventId, a default reward is applied. Default rewards are set in the Azure portal.
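
A minimal Python sketch of this Rank-then-Reward flow against the Personalizer REST API follows; the endpoint path, API version, header, request fields, and feature values are illustrative assumptions rather than details taken from this article.

```python
# Hypothetical sketch: the Rank-then-Reward flow described above.
# Endpoint path, API version, header names, and payload fields are
# assumptions for illustration; substitute your own resource values.
import uuid

import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}

event_id = str(uuid.uuid4())

# 1. Call Rank with the candidate actions and the current context features.
rank_body = {
    "eventId": event_id,
    "contextFeatures": [{"timeOfDay": "morning"}, {"device": "mobile"}],
    "actions": [
        {"id": "article-a", "features": [{"topic": "sports"}]},
        {"id": "article-b", "features": [{"topic": "politics"}]},
    ],
}
rank_response = requests.post(
    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body
).json()

# 2. Show the action Personalizer chose; it's returned in rewardActionId.
chosen_action_id = rank_response["rewardActionId"]

# 3. Send a reward score for the same eventId. If this call never happens,
#    the default reward configured in the Azure portal is applied instead.
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 1.0},  # for example, the user clicked the chosen action
)
```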

-In some cases, the application may need to call Rank before it even knows if the result will be used or displayed to the user. This may happen in situations where, for example, the page render of promoted content gets overwritten with a marketing campaign. If the result of the Rank call was never used and the user never got to see it, it would be incorrect to train it with any reward at all, zero or otherwise.
+In some scenarios, the application might need to call Rank before it even knows if the result will be used or displayed to the user. This might happen in situations where, for example, the page rendering of promoted content is overwritten by a marketing campaign. If the result of the Rank call was never used and the user never saw it, don't send a corresponding Reward call.

-Typically this happens when:
+Typically, these scenarios happen when:

-* You may be pre-rendering some UI that the user may or may not get to see.
-* Your application may be doing predictive personalization in which Rank calls are made with less real-time context and their output may or may not be used by the application.
+* You're prerendering UI that the user might or might not get to see.
+* Your application is doing predictive personalization in which Rank calls are made with little real-time context and the application might or might not use the output.

-In these cases, the correct way to use Personalizer is by calling Rank requesting the event to be _inactive_. Personalizer will not expect a reward for this event, and will not apply a default reward either.
+In these cases, use Personalizer to call Rank, requesting the event to be _inactive_. Personalizer won't expect a reward for this event, and it won't apply a default reward.
+Later in your business logic, if the application uses the information from the Rank call, just _activate_ the event. As soon as the event is active, Personalizer expects an event reward. If no explicit call is made to the Reward API, Personalizer applies a default reward.

-Later in your business logic, if the application uses the information from the rank call, all you need to do is _activate_ the event. From the moment the event is active, Personalizer will expect a reward for the event or apply a default reward if no explicit call gets made to the Reward API.
+## Inactive events

-## Get inactive events
-
-To disable training for an event, call Rank with `learningEnabled = False`.
-
-Learning for an inactive event is implicitly activated if you send a reward for the eventId, or call the `activate` API for that eventId.
+To disable training for an event, call Rank by using `learningEnabled = False`. For an inactive event, learning is implicitly activated if you send a reward for the eventId or call the `activate` API for that eventId.
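
A minimal sketch of this inactive-event flow follows, under the same assumptions as the earlier example about endpoint paths and request shape; the article names the setting `learningEnabled = False`, and the exact wire-level field name used here is also an assumption.

```python
# Hypothetical sketch: an inactive Rank event that is activated later.
# The article calls the setting `learningEnabled = False`; the request
# field name, endpoint paths, and API version here are assumptions.
import uuid

import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}

event_id = str(uuid.uuid4())

# 1. Call Rank, marking the event inactive because the result might never
#    be shown (for example, prerendered or predictively personalized UI).
rank_body = {
    "eventId": event_id,
    "contextFeatures": [{"device": "mobile"}],
    "actions": [{"id": "article-a", "features": [{"topic": "sports"}]}],
    "learningEnabled": False,  # assumed field name, mirroring the article
}
requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body)

# 2. Only if the application actually uses or displays the result, activate
#    the event. From this point Personalizer expects a reward for eventId
#    (or applies the default reward if none arrives).
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/activate", headers=HEADERS
)

# 3. Sending a reward for the eventId also implicitly activates the event.
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 1.0},
)
```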

 ## Learning settings

-Learning settings determines the specific *hyperparameters* of the model training. Two models of the same data, trained on different learning settings, will end up being different.
+Learning settings determine the *hyperparameters* of the model training. Two models of the same data that are trained on different learning settings will end up different.

 ### Import and export learning policies

-You can import and export learning policy files from the Azure portal. This allows you to save existing policies, test them, replace them, and archive them in your source code control as artifacts for future reference and audit.
+You can import and export learning-policy files from the Azure portal. Use this method to save existing policies, test them, replace them, and archive them in your source code control as artifacts for future reference and audit.

-### Learning policy settings
+### Understand learning policy settings

-The settings in the **Learning Policy** are not intended to be changed. Only change the settings when you understand how they impact Personalizer. Changing settings without this knowledge will cause side effects, including invalidating Personalizer models.
+The settings in the learning policy aren't intended to be changed. Change settings only if you understand how they affect Personalizer. Without this knowledge, you could cause problems, including invalidating Personalizer models.

-### Comparing effectiveness of learning policies
+### Compare learning policies

-You can compare how different Learning Policies would have performed against past data in Personalizer logs by doing [offline evaluations](concepts-offline-evaluation.md).
+You can compare how different learning policies perform against past data in Personalizer logs by doing [offline evaluations](concepts-offline-evaluation.md).

-[Upload your own Learning Policies](how-to-offline-evaluation.md) to compare with the current learning policy.
+[Upload your own learning policies](how-to-offline-evaluation.md) to compare them with the current learning policy.

-### Discovery of optimized learning policies
+### Optimize learning policies

-Personalizer can create a more optimized learning policy when doing an [offline evaluation](how-to-offline-evaluation.md).
-A more optimized learning policy, which is shown to have better rewards in an offline evaluation, will yield better results when used online in Personalizer.
+Personalizer can create an optimized learning policy in an [offline evaluation](how-to-offline-evaluation.md). An optimized learning policy that has better rewards in an offline evaluation will yield better results when it's used online in Personalizer.

-After an optimized learning policy has been created, you can apply it directly to Personalizer so it replaces the current policy immediately, or you can save it for further evaluation and decide in the future whether to discard, save, or apply it later.
+After you optimize a learning policy, you can apply it directly to Personalizer so it immediately replaces the current policy. Or you can save the optimized policy for further evaluation and later decide whether to discard, save, or apply it.
