Commit 8b11d37

Merge pull request #100599 from diberry/diberyr/0108-personalizer-troubleshooting
[Cogsvcs] Personalizer - Troubleshooting
2 parents 9f9523a + 6880f01 commit 8b11d37

File tree

1 file changed (+56, −3)

articles/cognitive-services/personalizer/troubleshooting.md

Lines changed: 56 additions & 3 deletions
@@ -8,20 +8,73 @@ services: cognitive-services
 ms.service: cognitive-services
 ms.subservice: personalizer
 ms.topic: conceptual
-ms.date: 06/15/2019
+ms.date: 01/08/2019
 ms.author: diberry
 ---
 # Personalizer Troubleshooting
 
 This article contains answers to frequently asked troubleshooting questions about Personalizer.
 
+## Transaction errors
+
+### I get an HTTP 429 (Too many requests) response from the service. What can I do?
+
+If you picked the free pricing tier when you created the Personalizer instance, there is a quota limit on the number of Rank requests allowed. Review your API call rate for the Rank API (in the Metrics pane of the Azure portal for your Personalizer resource) and adjust the pricing tier (in the Pricing Tier pane) if you expect your call volume to exceed the threshold for the chosen pricing tier.
+
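While you adjust the tier, a common client-side mitigation is to back off and retry throttled calls. The sketch below is illustrative, not an official SDK pattern; `send_rank_request` is a placeholder for your own HTTP call.

```python
import time

# Illustrative sketch (not an official SDK pattern): retry a throttled call
# with exponential backoff. `send_rank_request` is a placeholder for your
# own HTTP call and must return an (http_status, body) tuple.

def call_with_backoff(send_rank_request, max_attempts=5, base_delay=1.0):
    """Retry on HTTP 429 and transient 5xx responses with exponential backoff."""
    for attempt in range(max_attempts):
        status, body = send_rank_request()
        if status == 429 or 500 <= status < 600:
            if attempt == max_attempts - 1:
                raise RuntimeError(f"giving up after {max_attempts} attempts (HTTP {status})")
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
            continue
        return status, body

# Stubbed example: throttled twice, then a successful Rank response.
responses = iter([(429, None), (429, None), (200, {"rewardActionId": "a1"})])
status, body = call_with_backoff(lambda: next(responses), base_delay=0.01)
print(status)  # 200
```

Retrying alone does not raise your quota; it only smooths over brief bursts while you move to a higher tier.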
+### I'm getting a 5xx error on Rank or Reward APIs. What should I do?
+
+These issues should be transient. If they continue, contact support by selecting **New support request** in the **Support + troubleshooting** section of the Azure portal for your Personalizer resource.
+
+
 ## Learning loop
 
+<!--
+
+### How do I import a learning policy?
+
+
+-->
+
 ### The learning loop doesn't seem to learn. How do I fix this?
 
-The learning loop needs a few thousand Reward calls before Rank calls prioritize effectively.
+The learning loop needs a few thousand Reward calls before Rank calls prioritize effectively.
+
+If you are unsure about how your learning loop is currently behaving, run an [offline evaluation](concepts-offline-evaluation.md), and apply the corrected learning policy.
+
+### I keep getting rank results with all the same probabilities for all items. How do I know Personalizer is learning?
+
+Personalizer returns the same probabilities in a Rank API result when it has just started and has an _empty_ model, or when you have reset the Personalizer loop and your model is still within your **Model update frequency** period.
+
+When the new update period begins, the updated model is used, and you'll see the probabilities change.
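To check for this from client code, you can compare the probabilities in the Rank response. The response shape below (a `ranking` list with `id` and `probability`) follows the Rank REST API, but treat the snippet as an illustrative sketch rather than official tooling.

```python
# Sketch: detect a still-uniform model from a Rank response. The response
# shape (a "ranking" list with "id" and "probability") follows the Rank
# REST API, but this helper is illustrative, not part of any SDK.

def looks_uniform(rank_response, tolerance=1e-6):
    """True if every ranked action carries (almost) the same probability."""
    probs = [item["probability"] for item in rank_response["ranking"]]
    return max(probs) - min(probs) <= tolerance

fresh_loop = {"ranking": [
    {"id": "a", "probability": 0.25},
    {"id": "b", "probability": 0.25},
    {"id": "c", "probability": 0.25},
    {"id": "d", "probability": 0.25},
]}
trained_loop = {"ranking": [
    {"id": "a", "probability": 0.85},
    {"id": "b", "probability": 0.05},
    {"id": "c", "probability": 0.05},
    {"id": "d", "probability": 0.05},
]}
print(looks_uniform(fresh_loop), looks_uniform(trained_loop))  # True False
```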
+
+### The learning loop was learning but no longer seems to learn, and the quality of the Rank results isn't good. What should I do?
+
+* Make sure you've completed and applied one evaluation in the Azure portal for that Personalizer resource (learning loop).
+* Make sure all rewards are sent, via the Reward API, and processed.
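When verifying that rewards are sent, it can help to log the exact request. This sketch builds (but does not send) a Reward call; the URL path follows the Personalizer REST API, while the endpoint and key are placeholders for your own resource.

```python
import json
from urllib import request

# Sketch of a Reward API call. ENDPOINT and API_KEY are placeholders for
# your own Personalizer resource; the path follows the REST API's
# POST /personalizer/v1.0/events/{eventId}/reward shape.

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def build_reward_request(event_id, reward_value):
    """Build (but don't send) a POST request for the Reward API."""
    url = f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward"
    body = json.dumps({"value": reward_value}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
    )

req = build_reward_request("event-123", 1.0)
print(req.get_method(), req.full_url)
```

Sending the request (for example with `urllib.request.urlopen`) should return HTTP 204 on success; anything else is worth logging and investigating.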
+
+### How do I know that the learning loop is being updated regularly and is used to score my data?
+
+You can find the time when the model was last updated on the **Model and Learning Settings** page of the Azure portal. If you see an old timestamp, it is likely because you are not sending Rank and Reward calls; if the service has no incoming data, it does not update the model. If the learning loop is not updating frequently enough, you can edit the loop's **Model update frequency**.
+
+
+## Offline evaluations
+
+### An offline evaluation's feature importance returns a long list with hundreds or thousands of items. What happened?
+
+This is typically due to timestamps, user IDs, or other fine-grained features being sent in.
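A common fix is to coarsen such features before sending them in Rank calls, so each feature has a small set of possible values. The sketch below is illustrative (the feature names are made up): it replaces an exact timestamp with a day-of-week plus a coarse hour bucket, and drops the raw user ID in favor of a broad trait.

```python
from datetime import datetime

# Sketch: coarsen fine-grained context features (exact timestamps, raw
# user IDs) before sending them to Rank. Feature names are illustrative.

def coarsen(features):
    ts = datetime.fromisoformat(features["timestamp"])
    return {
        "dayOfWeek": ts.strftime("%A"),  # e.g. "Monday" instead of a unique instant
        "hourBucket": ts.hour // 6,      # 4 coarse day parts instead of 86,400 seconds
        # Drop the raw user ID entirely; use a broad trait such as a
        # membership tier instead.
        "tier": features.get("tier", "standard"),
    }

print(coarsen({"timestamp": "2019-06-15T09:30:00", "userId": "u-48121", "tier": "gold"}))
```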
+
+### I created an offline evaluation and it succeeded almost instantly, but I don't see any results. Why is that?
+
+The offline evaluation uses the trained model data from the events in that time period. If you did not send any data between the start and end times of the evaluation, it will complete without any results. Submit a new offline evaluation over a time range with events you know were sent to Personalizer.
+
+
+
+## Security
+
+### The API key for my loop has been compromised. What can I do?
+
+You can regenerate one key after swapping your clients over to the other key. Having two keys lets you propagate the new key gradually, without any downtime. We recommend rotating keys on a regular cycle as a security measure.
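The client-side part of that rotation can be as simple as keeping both keys configured and flipping which one is active before you regenerate the other in the portal. A minimal sketch, with an illustrative class (not an SDK API):

```python
# Sketch of zero-downtime key rotation on the client side. The KeyRing
# class is illustrative, not part of any Azure SDK; only the
# Ocp-Apim-Subscription-Key header name comes from the service.

class KeyRing:
    def __init__(self, primary, secondary):
        self.keys = {"primary": primary, "secondary": secondary}
        self.active = "primary"

    def swap(self):
        """Flip traffic to the other key before regenerating the old one."""
        self.active = "secondary" if self.active == "primary" else "primary"

    def header(self):
        return {"Ocp-Apim-Subscription-Key": self.keys[self.active]}

ring = KeyRing("old-key", "new-key")
ring.swap()           # move traffic to the secondary key...
print(ring.header())  # ...then regenerate the primary in the Azure portal
```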
 
-If you are unsure about how your learning loop is currently behaving, run an [offline evaluation](concepts-offline-evaluation.md), and apply the corrected learning policy.
 
 ## Next steps
 