The scorecard tab can be found in the Internet Analyzer resource menu.
## Filters
* **Test:** Select the test that you'd like to view results for; each test has its own scorecard. Test data appears once there's enough data to complete the analysis, in most cases within 24 hours.
* **Time period & end date:** Three scorecards are generated daily, each reflecting a different aggregation period: the prior 24 hours (day), the prior seven days (week), and the prior 30 days (month). Use the "End Date" filter to select the last day of the time period you want to see.
* **Country:** A scorecard is generated for each country where you have end users. The global filter contains all end users.
## Measurement count
The number of measurements affects the confidence of the analysis: the higher the count, the more accurate the result. Tests should aim for at least 100 measurements per endpoint per day. If measurement counts are too low, configure the JavaScript client to execute more frequently in your application. The measurement counts for endpoints A and B should be very similar; small differences are expected and okay, but if the differences are large, the results shouldn't be trusted.
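As a rough illustration, this kind of sanity check could be scripted before trusting a scorecard. The 100-measurement floor comes from the guidance above; the 1.2× ratio threshold is a hypothetical stand-in for "very similar," not a documented product value:

```python
MIN_DAILY_MEASUREMENTS = 100   # minimum recommended per endpoint per day
MAX_COUNT_RATIO = 1.2          # hypothetical threshold for "very similar" counts

def counts_look_trustworthy(count_a: int, count_b: int) -> bool:
    """Sanity-check daily measurement counts for endpoints A and B."""
    # Too few measurements means low confidence in the analysis.
    if count_a < MIN_DAILY_MEASUREMENTS or count_b < MIN_DAILY_MEASUREMENTS:
        return False
    # A large imbalance between A and B means the results shouldn't be trusted.
    ratio = max(count_a, count_b) / min(count_a, count_b)
    return ratio <= MAX_COUNT_RATIO

# Example: similar, healthy counts pass; low or lopsided counts don't.
print(counts_look_trustworthy(520, 500))  # similar counts above the floor
print(counts_look_trustworthy(50, 500))   # endpoint A is below the floor
print(counts_look_trustworthy(500, 900))  # counts differ too much
```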
## Percentiles
Latency, measured in milliseconds, is a popular metric for measuring speed between a source and destination on the Internet. Latency data isn't normally distributed (that is, it doesn't follow a bell curve) because a long tail of large latency values skews statistics such as the arithmetic mean. Percentiles provide a distribution-free alternative for analyzing the data. For example, the median, or 50th percentile, summarizes the middle of the distribution: half the values are above it and half are below it. A 75th percentile value is larger than 75% of all values in the distribution. Internet Analyzer refers to percentiles in shorthand as P50, P75, and P95.
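To make the long-tail point concrete, here's a minimal sketch of the nearest-rank percentile method in Python. The latency samples are made up, and Internet Analyzer's exact percentile algorithm isn't documented here; the sketch just shows why the median resists a single outlier while the mean doesn't:

```python
import math
import statistics

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that
    at least p% of the samples are less than or equal to it."""
    ordered = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[k - 1]

# Hypothetical latency samples in milliseconds, with one long-tail outlier.
latencies = [22, 23, 24, 25, 25, 26, 27, 28, 30, 400]

p50 = percentile(latencies, 50)   # median: 25 ms, unaffected by the outlier
p95 = percentile(latencies, 95)   # worst cases: 400 ms, dominated by the tail
mean = statistics.mean(latencies)  # 63 ms, inflated far above the median

print(p50, p95, mean)
```

The single 400 ms sample drags the mean to 63 ms, more than double any "typical" request, while P50 stays at 25 ms, which is why percentiles are preferred for latency data.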
Internet Analyzer percentiles are _sample metrics_, in contrast to true _population metrics_. For example, the daily true population median latency between students at the University of Southern California and Microsoft is the median latency value of all requests during that day. In practice, measuring the value of all requests is impractical, so we assume that a reasonably large sample is representative of the true population.
For analysis purposes, P50 (median) is useful as the expected value of a latency distribution. Higher percentiles, such as P95, identify how high latency gets in the worst cases. If you're interested in customer latency in general, focus on P50; if you're concerned with performance for the worst-performing customers, focus on P95. P75 is a balance between the two.
## Deltas
A delta is the difference in metric values between endpoints A and B, computed to show the benefit of B over A. Positive values indicate that B performed better than A; negative values indicate that B performed worse. Deltas can be absolute (for example, 10 milliseconds) or relative (for example, 5%).
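Assuming lower latency is better, so that a positive delta means B is faster, the two flavors of delta could be computed as follows. The sign convention and the sample values are illustrative assumptions, not taken from the product:

```python
# Hypothetical P50 latencies (in milliseconds) for endpoints A and B.
p50_a = 80.0
p50_b = 72.0

# Assumed convention: positive delta = B improves on A (B has lower latency).
absolute_delta_ms = p50_a - p50_b          # benefit of B in milliseconds
relative_delta = absolute_delta_ms / p50_a  # benefit as a fraction of A

print(f"B is faster by {absolute_delta_ms} ms ({relative_delta:.0%})")
```

With these sample values, B shows an 8 ms absolute improvement, or 10% relative to A; a negative result would mean B performed worse.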