services: cognitive-services
ms.service: cognitive-services
ms.subservice: personalizer
ms.topic: conceptual
ms.date: 01/08/2019
ms.author: diberry
---
# Personalizer Troubleshooting
This article contains answers to frequently asked troubleshooting questions about Personalizer.
## Transaction errors
### I get an HTTP 429 (Too many requests) response from the service. What can I do?
If you picked a free pricing tier when you created the Personalizer instance, there is a quota limit on the number of Rank requests that are allowed. Review your API call rate for the Rank API (in the **Metrics** pane in the Azure portal for your Personalizer resource) and adjust the pricing tier (in the **Pricing Tier** pane) if you expect your call volume to exceed the threshold for the chosen pricing tier.
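While you adjust the pricing tier, a client can also absorb short 429 bursts by backing off before retrying. The sketch below is a generic retry wrapper, not part of any Personalizer SDK; the `send_request` callable and its `(status, retry_after, body)` return shape are assumptions made for illustration.

```python
import time


def call_rank_with_backoff(send_request, max_retries=4, base_delay=1.0):
    """Retry a Rank call when the service answers HTTP 429.

    `send_request` is any callable returning
    (status_code, retry_after_seconds_or_None, body).
    Illustrative sketch only -- not part of a Personalizer SDK.
    """
    delay = base_delay
    for attempt in range(max_retries + 1):
        status, retry_after, body = send_request()
        if status != 429 or attempt == max_retries:
            return status, body
        # Honor a Retry-After hint when present; otherwise back off
        # exponentially between attempts.
        time.sleep(retry_after if retry_after is not None else delay)
        delay *= 2
```

A stubbed caller shows the behavior: two throttled responses followed by a success are retried transparently.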
### I'm getting a 5xx error on Rank or Reward APIs. What should I do?
These errors should be transient. If they persist, contact support by selecting **New support request** in the **Support + troubleshooting** section of the Azure portal for your Personalizer resource.
## Learning loop
<!--
### How do I import a learning policy?
-->
### The learning loop doesn't seem to learn. How do I fix this?
The learning loop needs a few thousand Reward calls before Rank calls prioritize effectively.
If you are unsure about how your learning loop is currently behaving, run an [offline evaluation](concepts-offline-evaluation.md), and apply the corrected learning policy.
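Each Reward call must reference the `eventId` of the Rank call it scores; a loop only learns when these pairs line up. A minimal sketch of that pairing, assuming the request shapes of the public Personalizer REST API (verify the field names and the `/events/{eventId}/reward` path against the current API reference):

```python
import uuid


def build_rank_request(context_features, actions):
    """Minimal Rank request body. Field names follow the public
    Personalizer REST API, but verify them against the current
    API reference before relying on them."""
    return {
        "eventId": str(uuid.uuid4()),
        "contextFeatures": context_features,
        "actions": actions,
    }


def build_reward_call(rank_request, reward_value):
    """A Reward is posted with a numeric value; the eventId ties it
    back to the Rank call it scores."""
    return {
        "path": f"/events/{rank_request['eventId']}/reward",
        "body": {"value": reward_value},
    }
```

Sending rewards that reuse the Rank call's `eventId` is what lets the loop attribute the outcome to the right decision.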
### I keep getting rank results with all the same probabilities for all items. How do I know Personalizer is learning?
Personalizer returns the same probabilities in a Rank API result when it has just started with an _empty_ model, or when you have reset the Personalizer loop and the model is still within its **Model update frequency** period.
When the new update period begins, the updated model is used, and you’ll see the probabilities change.
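One way to spot this state from client code is to check whether a Rank response's probabilities are (near) uniform. The helper below is a heuristic sketch; the `probability` field name follows the Rank response shape, and the tolerance is an arbitrary assumption:

```python
def looks_untrained(ranking, tolerance=1e-6):
    """Heuristic check on a Rank response's ranking array: if every
    action carries (nearly) the same probability, the loop is likely
    still serving an empty or not-yet-updated model."""
    probabilities = [action["probability"] for action in ranking]
    return max(probabilities) - min(probabilities) < tolerance
```

Once the updated model is in use, the spread between probabilities grows and the check stops firing.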
### The learning loop was learning but doesn't seem to learn anymore, and the quality of the Rank results isn't good. What should I do?
* Make sure you've completed and applied one evaluation in the Azure portal for that Personalizer resource (learning loop).
* Make sure all rewards are sent via the Reward API and processed.
### How do I know that the learning loop is getting updated regularly and is used to score my data?
You can find the time when the model was last updated on the **Model and Learning Settings** page of the Azure portal. If the timestamp is old, it is likely because you are not sending Rank and Reward calls; if the service receives no incoming data, it does not update the model. If the learning loop is not updating frequently enough, edit the loop's **Model update frequency**.
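As a rough client-side monitor, you can compare that last-update timestamp against the loop's update frequency. The two-period grace window below is an assumption chosen for illustration, not a service rule:

```python
from datetime import datetime, timedelta, timezone


def model_looks_stale(last_updated, update_frequency, now=None):
    """Flag a model whose last update is more than two update periods
    old -- a sign the loop isn't receiving Rank/Reward traffic.
    The two-period threshold is an illustrative assumption."""
    now = now or datetime.now(timezone.utc)
    return now - last_updated > 2 * update_frequency
```

Run against fixed timestamps, a three-minute-old model with a five-minute update frequency is fresh, while a two-day-old one is stale.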
## Offline evaluations
### An offline evaluation's feature importance returns a long list with hundreds or thousands of items. What happened?
This is typically caused by timestamps, user IDs, or other fine-grained features sent in.
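A quick way to catch such features before they reach the service is to measure their cardinality across your logged events. This is an illustrative sketch; the 90% distinct-value ratio is an arbitrary assumption:

```python
from collections import defaultdict


def high_cardinality_features(feature_dicts, ratio=0.9):
    """Flag features whose number of distinct values approaches the
    number of events -- the signature of timestamps, user IDs, and
    other identifiers that bloat feature importance lists."""
    seen = defaultdict(set)
    for features in feature_dicts:
        for name, value in features.items():
            seen[name].add(value)
    total = len(feature_dicts)
    return sorted(name for name, values in seen.items()
                  if len(values) >= ratio * total)
```

Features it flags are candidates for dropping or bucketing (for example, truncating timestamps to the hour) before sending them to Personalizer.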
### I created an offline evaluation and it succeeded almost instantly. Why is that? I don’t see any results?
The offline evaluation uses the trained model data from the events in that time period. If you did not send any data between the start and end times of the evaluation, it will complete without any results. Submit a new offline evaluation, selecting a time range that includes events you know were sent to Personalizer.
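Before submitting an evaluation, a simple sanity check is to count how many logged event timestamps fall inside the candidate window (timestamps here are plain numbers purely for illustration):

```python
def events_in_range(event_timestamps, start, end):
    """Count logged events inside a candidate evaluation window; if
    the count is zero, the offline evaluation will finish almost
    instantly with no results."""
    return sum(1 for t in event_timestamps if start <= t <= end)
```

If the count is zero, widen or shift the window before starting the evaluation.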
## Security
### The API key for my loop has been compromised. What can I do?
You can regenerate one key after swapping your clients to use the other key. Having two keys lets you roll the key over lazily, without any downtime. We recommend doing this on a regular cycle as a security measure.
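The two-key rotation described above can be sketched as a small state machine. This is illustrative only; the actual key regeneration happens in the Azure portal or CLI:

```python
class ApiKeyRing:
    """Lazy two-key rotation: point clients at the standby key first,
    then regenerate the old (possibly compromised) active key, so
    there is never any downtime. Illustrative sketch only."""

    def __init__(self, primary, secondary):
        self.active, self.standby = primary, secondary

    def rotate(self, regenerated_key):
        # Clients have already moved to the standby key; the old
        # active key is replaced by the freshly regenerated one.
        self.active, self.standby = self.standby, regenerated_key
```

After a rotation, the previous standby key is the one in active use, and the regenerated key waits as the next standby.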