import { Callout } from "nextra/components";
This document introduces a quality ranking system to ensure high-quality pricing data on Pythnet. The ranking system will use **three** metrics to evaluate each publisher's performance:
- Uptime (40% weight),
- Price Deviation (40% weight),
- Lack of Stalled Prices (20% weight).
The ranking will be calculated **monthly** to ensure that only the top-performing publishers remain permissioned for each price feed on Pythnet.
**Important: Publishers with an uptime of at least 50% from the previous week will be included in the ranking. If a publisher's uptime is less than 50%, then the deviation and the stalled score of the publisher will be 0 to reflect their ineligibility.**
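The eligibility rule above can be sketched as follows. This is a hypothetical helper for illustration, not the production implementation; the function and argument names are assumptions:

```python
def eligible_scores(uptime, deviation_score, stalled_score):
    """Apply the eligibility rule: publishers with less than 50% uptime
    over the previous week get 0 for the deviation and stalled metrics,
    reflecting their ineligibility for the ranking."""
    if uptime < 0.5:
        return uptime, 0.0, 0.0
    return uptime, deviation_score, stalled_score
```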
Publishers in Pythtest conformance who are not publishing on Pythnet and pass this uptime condition will also be ranked together with the Pythnet publishers for each symbol.
[Publisher Rankings](https://www.pyth.network/publishers/ranking) on the main website shows the updated ranks of the publishers.
## Metrics Used for Ranking
The three metrics used for ranking are:

- Uptime
- Price Deviation
- Lack of Stalled Prices

Each metric is assigned a weight as mentioned above, and the score for each metric ranges from 0 to 1.
The scores from each metric are aggregated according to their weights to produce the final score for each publisher.
The weight distribution is as follows:
- Uptime: 40%
- Price Deviation: 40%
- Lack of Stalled Prices: 20%
Publishers are then sorted by their final scores, with the highest score indicating the best performance.
As mentioned earlier, the score for each metric ranges from 0 to 1, where 1 represents the best performance.
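Concretely, the weighted aggregation described above can be sketched as follows (the function and metric names are hypothetical; only the weights and the 0-to-1 score range come from this document):

```python
# Weights stated in the document: Uptime 40%, Price Deviation 40%,
# Lack of Stalled Prices 20%.
WEIGHTS = {"uptime": 0.40, "deviation": 0.40, "stalled": 0.20}

def final_score(scores):
    """Combine per-metric scores (each in [0, 1]) into a final score
    using the stated weights."""
    return sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

# Example: perfect uptime and stalled scores, middling deviation score.
final_score({"uptime": 1.0, "deviation": 0.5, "stalled": 1.0})  # ≈ 0.8
```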
Where:
- $A_i$ is the aggregate price at instance $i$
- ${CI}_i$ is the aggregate confidence interval at instance $i$
It is calculated similarly to a z-score, where the aggregate price $A_i$ is the mean and the aggregate confidence interval $CI_i$ is the standard deviation. A higher deviation score indicates that the publisher's prices are significantly inconsistent with the aggregate prices, especially when these deviations exceed the confidence interval. The $\text{Score}_{\text{Deviation}}$ is calculated by ranking the $\text{Penalty}_{\text{Deviation}}$ among all publishers and expressing this rank as a percentage of the total number of publishers.
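As an illustrative sketch: the z-score analogy and the rank-to-percentage step come from the text above, but the exact way the per-instance deviations are aggregated into a single penalty is an assumption here (mean squared normalized deviation), as are the function names:

```python
def deviation_penalty(prices, agg_prices, agg_cis):
    """Hypothetical penalty: mean squared z-like deviation, treating the
    aggregate price as the mean and the aggregate confidence interval as
    the standard deviation."""
    zs = [(p - a) / ci for p, a, ci in zip(prices, agg_prices, agg_cis)]
    return sum(z * z for z in zs) / len(zs)

def deviation_scores(penalties):
    """Rank penalties across all publishers and express each rank as a
    fraction of the publisher count: the lowest penalty maps to 1.0."""
    n = len(penalties)
    order = sorted(range(n), key=lambda i: penalties[i])  # lowest penalty first
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = 1.0 - rank / n
    return scores
```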
Price deviation is given **40% weight** in the final ranking.
**Reason for Weight:** To maintain trust in the data published on a feed, it is crucial to ensure that reported prices are within a reasonable range of the aggregate price when adjusted for confidence intervals. Significant inconsistencies can undermine confidence in the published data, hence the weight of 40%.
### Lack of Stalled Prices
This metric checks if the publisher is reporting the same price continuously for a specified duration. Repeated prices over an extended period can indicate data staleness or a problem with the data feed.
- $N$ is the total number of slots for the aggregate price.
- $T$ is the duration threshold for staleness in slots. The threshold is $100$ slots (about $40$ seconds) for all symbols but can change in the future on a per-symbol basis.
- $P_i$ is the price reported by the publisher at time $i$.
- $\mathbf{1}(\cdot)$ is an indicator function that returns 1 if the condition inside it is true and 0 otherwise.
- $\text{Penalty}_{\text{Stalled}}$ is the fraction of time periods in which the price remains unchanged for $T$ consecutive intervals.
- $\text{Score}_{\text{Stalled}}$ converts the raw stalled penalty, multiplied by $10$, into a score out of 1, penalizing higher staleness rates. This means that a publisher whose prices are stalled more than 10% of the time receives a score of 0.
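The definitions above can be sketched as follows. The multiply-by-10 clamp comes from the text; the exact windowing used to decide which slots count as stalled is an assumption here, as is the function name:

```python
def stalled_score(prices, T=100):
    """Sketch of the stalled-price metric: count a slot as stalled when the
    reported price has been unchanged for more than T consecutive slots,
    then scale the stalled fraction by 10 and clamp the score at 0."""
    N = len(prices)
    run = 1        # length of the current run of identical prices
    stalled = 0    # number of slots counted as stalled
    for i in range(1, N):
        run = run + 1 if prices[i] == prices[i - 1] else 1
        if run > T:
            stalled += 1
    penalty = stalled / N
    # More than 10% stalled slots yields a score of 0.
    return max(0.0, 1.0 - 10.0 * penalty)
```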
This metric is given **20% weight** in the final ranking.
**Reason for Weight:** Staleness in data can mislead users into thinking the market conditions are unchanged, which can be detrimental in volatile market conditions. While important, measuring data staleness is deemed relatively less critical than evaluating uptime and price accuracy, hence a weight of 20%.