Commit b93d184

Merge pull request #212373 from jcodella/patch-9
Update concepts-features.md
2 parents cac9be1 + d020e28 commit b93d184

File tree: 1 file changed (+5, -5 lines changed)


articles/cognitive-services/personalizer/concepts-features.md

Lines changed: 5 additions & 5 deletions
@@ -330,7 +330,7 @@ JSON objects can include nested JSON objects and simple property/values. An arra
 
 ## Inference Explainability
 Personalizer can help you understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
-Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
+Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to further analyze how the data is being used by the underlying model.
 
 Setting the flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the entry "IsInferenceExplainabilityEnabled": true. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
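The configuration change described above amounts to merging one boolean flag into the existing service-configuration JSON before sending it back through the Update API. A minimal sketch in Python; the sample configuration keys other than `IsInferenceExplainabilityEnabled` are hypothetical placeholders, not values from this commit:

```python
# Sketch: enable inference explainability by merging the flag into the
# current service configuration, then send the result to the Update API.
import json


def enable_inference_explainability(current_config: dict) -> dict:
    """Return a copy of the service configuration with the flag added."""
    updated = dict(current_config)  # keep all existing settings intact
    updated["IsInferenceExplainabilityEnabled"] = True
    return updated


# Hypothetical configuration, as it might come back from the Get API.
current_config = {
    "rewardWaitTime": "PT10M",
    "defaultReward": 0.0,
}

# This JSON string would be the request body for the Update API call.
body = json.dumps(enable_inference_explainability(current_config))
print(body)
```

Merging into the fetched configuration (rather than sending only the new flag) matches the guidance above to include your current service configuration in the request body.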

@@ -366,11 +366,11 @@ Enabling inference explainability will add a collection to the JSON response fro
     },
     {
       "id": "SportsArticle",
-      "probability": 0.15
+      "probability": 0.10
     },
     {
       "id": "NewsArticle",
-      "probability": 0.05
+      "probability": 0.10
     }
   ],
   "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
@@ -406,9 +406,9 @@ For the best actions returned by Personalizer, the feature scores can provide ge
 * Scores close to zero have a small effect on the decision to choose this action.
 
 ### Important considerations for Inference Explainability
-* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements. Future versions of Inference Explainability will mitigate this issue.
+* **Increased latency.** Currently, enabling _Inference Explainability_ may significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements.
 * **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
-* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm. Future releases will enable the use of this capability with additional exploration algorithms.
+* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm.
 
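The latency bullet above asks you to measure Rank call latency in your own scenario. One way to sketch that experiment in Python; `rank_call` is a hypothetical stand-in for your actual Rank API invocation:

```python
# Sketch: measure the latency of repeated Rank calls so the cost of
# enabling Inference Explainability can be compared before and after.
import statistics
import time


def measure_latency_ms(rank_call, trials: int = 100) -> float:
    """Median wall-clock latency of rank_call in milliseconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        rank_call()  # stand-in for the real Rank API request
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)


# Example with a stand-in that simulates a short call.
print(measure_latency_ms(lambda: time.sleep(0.001), trials=20))
```

Using the median rather than the mean keeps a few slow outlier calls from dominating the comparison.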
## Next steps
