Commit 3fd6794

Merge pull request #211900 from jcodella/patch-5
Added definition for best action
2 parents 94260e3 + 729e70a commit 3fd6794

File tree: 1 file changed (+14 -14 lines)


articles/cognitive-services/personalizer/concepts-features.md

Lines changed: 14 additions & 14 deletions
@@ -348,15 +348,16 @@ Setting the service configuration flag IsInferenceExplainabilityEnabled in your
 "learningMode": "Online",
 "isAutoOptimizationEnabled": true,
 "autoOptimizationFrequency": "P7D",
-"autoOptimizationStartDate": "2019-01-19T00:00:00Z,
-isInferenceExplainabilityEnabled: true
+"autoOptimizationStartDate": "2019-01-19T00:00:00Z",
+"isInferenceExplainabilityEnabled": true
 }
 ```
 
 ### How to interpret feature scores?
 Enabling inference explainability will add a collection to the JSON response from the Rank API called *inferenceExplanation*. This contains a list of feature names and values that were submitted in the Rank request, along with feature scores learned by Personalizer’s underlying model. The feature scores provide you with insight on how influential each feature was in the model choosing the action.
 
 ```JSON
+
 {
 "ranking": [
 {
@@ -374,29 +375,28 @@ Enabling inference explainability will add a collection to the JSON response fro
 ],
 "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
 "rewardActionId": "EntertainmentArticle",
-inferenceExplanation: [
+"inferenceExplanation": [
 {
-id”: EntertainmentArticle,
-features: [
+"id": "EntertainmentArticle",
+"features": [
 {
-name”: ”user.profileType,
-score 3.0
+"name": "user.profileType",
+"score": 3.0
 },
 {
-“namespace”: “user”,
-“name”: ”user.latLong”,
-“score”: -4.3
+"name": "user.latLong",
+"score": -4.3
 },
 {
-name”: ”user.profileType^user.latLong,
-score : 12.1
+"name": "user.profileType^user.latLong",
+"score": 12.1
 },
 ]
 ]
 }
 ```
 
-Recall that Personalizer will either return the best action as determined by the model or an exploratory action chosen by the exploration policy. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](https://docs.microsoft.com/azure/cognitive-services/personalizer/concepts-exploration).
+Recall that Personalizer will either return the _best action_ as determined by the model or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take; therefore, **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](https://docs.microsoft.com/azure/cognitive-services/personalizer/concepts-exploration).
 
 For the best actions returned by Personalizer, the feature scores can provide general insight where:
 * Larger positive scores provide more support for the model choosing the best action.
@@ -405,7 +405,7 @@ For the best actions returned by Personalizer, the feature scores can provide ge
 
 ### Important considerations for Inference Explainability
 * **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application’s latency requirements. Future versions of Inference Explainability will mitigate this issue.
-* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A’s score is a large positive value while Feature B’s score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when enabling _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated
+* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A’s score is a large positive value while Feature B’s score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
 
 
 ## Next steps
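
As a companion to the *How to interpret feature scores?* section touched by this diff, here is a minimal sketch of how the *inferenceExplanation* collection might be read once the Rank response has been parsed. The Python code, the `rank_response` variable, and the truncated sample payload are illustrative assumptions based on the JSON example above, not part of the documented API.

```python
# Illustrative sketch only: reads feature scores for the returned best action
# from a Rank API response shaped like the JSON example in this diff.
import json

# Truncated sample payload, copied from the example above (assumption: the
# full response also contains "ranking" and "eventId" fields, omitted here).
rank_response = json.loads("""
{
  "rewardActionId": "EntertainmentArticle",
  "inferenceExplanation": [
    {
      "id": "EntertainmentArticle",
      "features": [
        {"name": "user.profileType", "score": 3.0},
        {"name": "user.latLong", "score": -4.3},
        {"name": "user.profileType^user.latLong", "score": 12.1}
      ]
    }
  ]
}
""")

best_action = rank_response["rewardActionId"]
for explanation in rank_response.get("inferenceExplanation", []):
    if explanation["id"] != best_action:
        continue
    # Larger positive scores supported choosing this action; large negative
    # scores argued against it. Per the article, skip this reading entirely
    # when the action was chosen by exploration rather than by the model.
    for feature in sorted(explanation["features"],
                          key=lambda f: f["score"], reverse=True):
        print(f'{feature["name"]}: {feature["score"]:+.1f}')
```

Sorting by score simply surfaces the strongest positive and negative contributors, matching the interpretation guidance in the bullets above.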

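The *Increased latency* consideration above recommends running experiments and measuring Rank latency in your own scenario. A rough, non-authoritative harness for that is sketched below; `call_rank` is a hypothetical stand-in for however your application invokes the Rank API (SDK or REST), and the call count and percentiles are arbitrary.

```python
# Illustrative sketch only: time repeated Rank calls so that latency with and
# without Inference Explainability can be compared in your own environment.
import statistics
import time

def call_rank():
    """Hypothetical placeholder: replace with your real Rank API request."""
    time.sleep(0.05)  # simulate a 50 ms call

def measure(n=100):
    latencies_ms = []
    for _ in range(n):
        start = time.perf_counter()
        call_rank()
        latencies_ms.append((time.perf_counter() - start) * 1000)
    latencies_ms.sort()
    return {
        "p50_ms": round(statistics.median(latencies_ms), 1),
        "p95_ms": round(latencies_ms[int(0.95 * (n - 1))], 1),
    }

print(measure())  # run once with explainability off, once with it on, and compare
```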
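Finally, the *Correlated Features* bullet advises keeping highly correlated features out of the request when relying on feature scores. One way to screen numeric features offline is sketched below; the feature names, sample values, and the 0.9 cutoff are made-up assumptions for illustration.

```python
# Illustrative sketch only: flag pairs of numeric features whose sample
# correlation is high enough that their feature scores may cancel out.
import numpy as np

feature_names = ["sessionLengthMin", "pagesViewed", "daysSinceLastVisit"]  # hypothetical
samples = np.array([
    [12.0,  9, 30],
    [ 3.5,  2,  2],
    [25.0, 21, 45],
    [ 8.0,  6, 10],
    [17.5, 14, 28],
])  # rows = observations, columns = features (made-up numbers)

corr = np.corrcoef(samples, rowvar=False)  # pairwise Pearson correlation matrix
threshold = 0.9  # arbitrary cutoff for "highly correlated"

for i in range(len(feature_names)):
    for j in range(i + 1, len(feature_names)):
        if abs(corr[i, j]) >= threshold:
            print(f"Review: {feature_names[i]} vs {feature_names[j]} "
                  f"(correlation {corr[i, j]:+.2f})")
```

Pairs flagged this way are candidates for dropping or combining before leaning on Inference Explainability output.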