
Commit b115e0c

Merge pull request #211890 from jcodella/patch-3: Added wording for inference explainability
2 parents 91afe8b + 79b8ece

File tree: 1 file changed, +80 -0 lines

articles/cognitive-services/personalizer/concepts-features.md

@@ -328,6 +328,86 @@ JSON objects can include nested JSON objects and simple property/values. An arra
}
```

## Inference Explainability
Personalizer can help you understand which features are the most and least influential when the model determines the best action. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.

Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to give your end users insight into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.

Setting the flag `IsInferenceExplainabilityEnabled` in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the entry `"IsInferenceExplainabilityEnabled": true`. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](https://docs.microsoft.com/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
```JSON
{
  "rewardWaitTime": "PT10M",
  "defaultReward": 0,
  "rewardAggregation": "earliest",
  "explorationPercentage": 0.2,
  "modelExportFrequency": "PT5M",
  "logMirrorEnabled": true,
  "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature",
  "logRetentionDays": 7,
  "lastConfigurationEditDate": "0001-01-01T00:00:00Z",
  "learningMode": "Online",
  "isAutoOptimizationEnabled": true,
  "autoOptimizationFrequency": "P7D",
  "autoOptimizationStartDate": "2019-01-19T00:00:00Z",
  "isInferenceExplainabilityEnabled": true
}
```
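If you're scripting the update, the request body can be assembled from the configuration returned by the Get API. The sketch below is illustrative only: the helper name is hypothetical, and it uses the camelCase property name shown in the configuration example above.

```python
import json

def enable_inference_explainability(current_config: dict) -> dict:
    """Return a copy of the service configuration with the inference
    explainability flag added; all other settings are preserved."""
    updated = dict(current_config)  # don't mutate the fetched config
    updated["isInferenceExplainabilityEnabled"] = True
    return updated

# Abbreviated configuration, as returned by the Service Configuration - Get API.
current = {
    "rewardWaitTime": "PT10M",
    "defaultReward": 0,
    "explorationPercentage": 0.2,
    "learningMode": "Online",
}

# JSON request body for the Service Configuration - Update API.
body = json.dumps(enable_inference_explainability(current), indent=2)
print(body)
```

Send this body with your usual authentication headers; everything except the added flag comes straight from your current configuration.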

### How to interpret feature scores

Enabling inference explainability adds a collection called *inferenceExplanation* to the JSON response from the Rank API. It contains a list of the feature names and values that were submitted in the Rank request, along with the feature scores learned by Personalizer's underlying model. The feature scores give you insight into how influential each feature was in the model's choice of action.

```JSON
{
  "ranking": [
    {
      "id": "EntertainmentArticle",
      "probability": 0.8
    },
    {
      "id": "SportsArticle",
      "probability": 0
    },
    {
      "id": "NewsArticle",
      "probability": 0.2
    }
  ],
  "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
  "rewardActionId": "EntertainmentArticle",
  "inferenceExplanation": [
    {
      "id": "EntertainmentArticle",
      "features": [
        {
          "name": "user.profileType",
          "score": 3.0
        },
        {
          "namespace": "user",
          "name": "user.latLong",
          "score": -4.3
        },
        {
          "name": "user.profileType^user.latLong",
          "score": 12.1
        }
      ]
    }
  ]
}
```
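To see how the scores might be consumed, here's a small sketch that pulls the feature scores for the returned best action out of a Rank response and orders them by influence. The `features_for_action` helper is hypothetical, and the `response` dict is an abbreviated version of the payload above.

```python
def features_for_action(response: dict, action_id: str) -> list:
    """Return the scored features for one action from the
    inferenceExplanation collection, sorted from the most
    positive score to the most negative."""
    for entry in response.get("inferenceExplanation", []):
        if entry["id"] == action_id:
            return sorted(entry["features"], key=lambda f: f["score"], reverse=True)
    return []  # action not present (e.g., explainability disabled)

# Abbreviated Rank response from the example above.
response = {
    "rewardActionId": "EntertainmentArticle",
    "inferenceExplanation": [
        {
            "id": "EntertainmentArticle",
            "features": [
                {"name": "user.profileType", "score": 3.0},
                {"name": "user.latLong", "score": -4.3},
                {"name": "user.profileType^user.latLong", "score": 12.1},
            ],
        }
    ],
}

for feature in features_for_action(response, response["rewardActionId"]):
    print(f'{feature["name"]}: {feature["score"]}')
```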

Recall that Personalizer will either return the best action as determined by the model or an exploratory action chosen by the exploration policy. Actions taken during exploration do not leverage the feature scores in determining which action to take, so **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](https://docs.microsoft.com/azure/cognitive-services/personalizer/concepts-exploration).

401+
For the best actions returned by Personalizer, the feature scores can provide general insight where:
402+
* Larger positive scores provide more support for the model choosing the best action.
403+
* Larger negative scores provide more support for the model not choosing the best action.
404+
* Scores close to zero have a small effect on the decision to choose the best action.
405+
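These rules of thumb can be mechanized as a simple bucketing step, for example when building an end-user explanation. The helper and the near-zero cutoff below are illustrative assumptions, not part of the API:

```python
def bucket_features(features: list, near_zero: float = 1.0):
    """Split scored features into supporting, opposing, and
    negligible groups. The near_zero cutoff is illustrative;
    choose one appropriate to your model's score range."""
    supporting = [f for f in features if f["score"] >= near_zero]
    opposing = [f for f in features if f["score"] <= -near_zero]
    negligible = [f for f in features if abs(f["score"]) < near_zero]
    return supporting, opposing, negligible

# Scores from the example response above.
features = [
    {"name": "user.profileType", "score": 3.0},
    {"name": "user.latLong", "score": -4.3},
    {"name": "user.profileType^user.latLong", "score": 12.1},
]
supporting, opposing, negligible = bucket_features(features)
```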

### Important considerations for Inference Explainability

* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements. Future versions of Inference Explainability will mitigate this issue.
* **Correlated features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when enabling _Inference Explainability_, ensure that the features sent to Personalizer are not highly correlated.
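One way to spot correlated features before enabling explainability is a quick offline check over logged numeric feature values. Everything below (the feature names, the sample values, and the 0.9 threshold) is an illustrative assumption:

```python
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical numeric feature columns collected from past Rank requests.
columns = {
    "user.age": [23, 35, 31, 52, 44],
    "user.tenureYears": [1, 8, 6, 20, 15],  # nearly a linear function of age here
    "ctx.hourOfDay": [9, 14, 20, 11, 23],
}

for (a, xs), (b, ys) in combinations(columns.items(), 2):
    r = pearson(xs, ys)
    if abs(r) > 0.9:  # illustrative threshold for "highly correlated"
        print(f"{a} and {b} are highly correlated (r={r:.2f})")
```

Dropping or merging one feature of each flagged pair keeps the remaining scores easier to interpret.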

## Next steps

[Reinforcement learning](concepts-reinforcement-learning.md)
