import { Callout } from "nextra/components";

# Publisher Quality Ranking

This document introduces a quality ranking system to ensure high-quality pricing data on Pythnet. The ranking system will use **three** metrics to evaluate each publisher's performance:
- Uptime (40% weight),
- Price deviation (40% weight),
- Lack of stalled prices (20% weight).

The ranking will be calculated **monthly** to ensure that only the top-performing publishers remain permissioned for each price feed on Pythnet.

**Important: Only publishers with an uptime of at least 50% over the previous week are included in the ranking. If a publisher's uptime is below 50%, their deviation and stalled scores are set to 0 to reflect their ineligibility.**

Publishers in Pythtest conformance who are not publishing on Pythnet but pass this uptime condition will also be ranked alongside the Pythnet publishers for each symbol.

## Metrics Used for Ranking

The three metrics used for ranking are:

### Uptime

This metric measures the percentage of time a publisher is available and actively publishing data. A higher numerical score indicates a higher uptime maintained for the price feed. Score range: 0-1.

### Price Deviation

This metric measures the deviations between a publisher's price and the aggregate price, normalized by the aggregate confidence interval. A higher numerical score indicates a lower degree of price deviation. Score range: 0-1.

### Lack of Stalled Prices

This metric penalizes publishers that report the same price for an asset for at least 100 consecutive slots, because such instances indicate potential data staleness. Publishers with fewer stalled prices receive a higher score. Score range: 0-1.
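The stall check described above can be sketched as a simple run-length scan. This is an illustrative helper, not code from the Pyth codebase; the function names and the 100-slot threshold constant are assumptions based on the description above.

```python
STALL_THRESHOLD = 100  # consecutive identical prices that count as a stall


def max_stall_run(prices):
    """Length of the longest run of identical consecutive prices."""
    if not prices:
        return 0
    longest = run = 1
    for prev, cur in zip(prices, prices[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest


def is_stalled(prices):
    """True if any price value repeats for >= STALL_THRESHOLD slots."""
    return max_stall_run(prices) >= STALL_THRESHOLD
```

For example, a series of 100 identical prices would be flagged as stalled, while 99 repeats followed by a new value would not.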

## Ranking Algorithm

Each metric is assigned a weight as mentioned above, and the score for each metric is calculated based on the publisher's performance.
The scores from each metric are then aggregated with respect to their weights to produce the final score for each publisher.
The weight distribution is as follows:
- Uptime: 40%
- Price deviation: 40%
- Lack of stalled prices: 20%

Publishers are then sorted by their final scores, with the highest score indicating the best performance.
As mentioned earlier, the score for each metric ranges from 0 to 1, where 1 represents the best performance.
Each publisher is also assigned a rank based on their final score, where a lower rank (closer to 1) corresponds to a higher score and therefore better performance.
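The weighted aggregation and ranking can be sketched as follows. This is a minimal illustration of the rules above, including the 50% uptime eligibility gate; the function and dictionary names are assumptions, not Pyth code.

```python
# Weights from the document: uptime 40%, deviation 40%, stalled 20%.
WEIGHTS = {"uptime": 0.40, "deviation": 0.40, "stalled": 0.20}


def final_score(uptime, deviation, stalled):
    """Weighted sum of the three per-metric scores (each in [0, 1])."""
    # Publishers below 50% uptime are ineligible: their deviation and
    # stalled scores are zeroed, leaving only the uptime component.
    if uptime < 0.50:
        deviation = stalled = 0.0
    return (WEIGHTS["uptime"] * uptime
            + WEIGHTS["deviation"] * deviation
            + WEIGHTS["stalled"] * stalled)


def rank_publishers(metrics):
    """metrics: {publisher: (uptime, deviation, stalled)} -> {publisher: rank},
    where rank 1 is the best-performing publisher."""
    finals = {p: final_score(*m) for p, m in metrics.items()}
    ordered = sorted(finals, key=finals.get, reverse=True)
    return {p: i + 1 for i, p in enumerate(ordered)}
```

For instance, a publisher with 40% uptime keeps only its uptime contribution (0.40 × 0.40 = 0.16), regardless of how well it scores on the other two metrics.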

## Metric Calculations

This section provides a detailed breakdown of how each metric is calculated.

### Uptime

Uptime measures a publisher's reliability and availability. If a publisher consistently provides data without interruptions, it indicates a high level of reliability. This is aligned with the current conformance testing/PRP, which checks the publisher's availability based on price publication within 10 slots.

$$
\text{Score}_{\text{Uptime}} = \frac{\text{Publisher Slot Count}}{\text{Aggregate Slot Count}}
$$

Uptime is given **40% weight** in the final ranking.

**Reason for Weight:** Uptime is the most critical metric because consistent data availability is fundamental to a data feed's overall quality and reliability. A publisher that is frequently unavailable or fails to publish within the expected number of slots would significantly disrupt the service, hence the high weight of 40%.
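A minimal sketch of the uptime formula, assuming slot numbers are available as sets; the inputs are hypothetical, not an actual Pythnet API.

```python
def uptime_score(publisher_slots: set, aggregate_slots: set) -> float:
    """Publisher Slot Count / Aggregate Slot Count, in [0, 1]."""
    if not aggregate_slots:
        return 0.0
    # Only slots where an aggregate existed count toward the denominator.
    return len(publisher_slots & aggregate_slots) / len(aggregate_slots)
```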

### Price Deviation

This metric evaluates the deviations between the publisher's price and the aggregate price, normalized by the aggregate's confidence interval.

$$
\text{Penalty}_{\text{Deviation}} =
\frac{1}{N} \sum_{i=1}^{N} \left( \frac{|P_i - A_i|}{{CI}_i} \right)^2
$$

$$
\text{Score}_{\text{Deviation}} = \frac{{\text{NumPublishers}} - {\text{Rank}}(\text{Penalty}_\text{Deviation}) + 1}{\text{NumPublishers}}
$$

Where:
- $N$ is the total number of prices
- $P_i$ is the publisher's price at instance $i$
- $A_i$ is the aggregate price at instance $i$
- ${CI}_i$ is the aggregate confidence interval at instance $i$

Price deviation is given **40% weight** in the final ranking.
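The two formulas above can be sketched as follows: the penalty is a mean squared normalized deviation, and the score converts each publisher's 1-based penalty rank into a value in (0, 1]. Function names are illustrative assumptions, not Pyth code.

```python
def deviation_penalty(pub_prices, agg_prices, agg_cis):
    """(1/N) * sum(((P_i - A_i) / CI_i) ** 2) over the N price instances."""
    n = len(pub_prices)
    return sum(((p - a) / ci) ** 2
               for p, a, ci in zip(pub_prices, agg_prices, agg_cis)) / n


def deviation_scores(penalties):
    """penalties: {publisher: penalty} -> {publisher: score}; the lowest
    penalty gets rank 1 and hence score 1.0."""
    num = len(penalties)
    ordered = sorted(penalties, key=penalties.get)  # lowest penalty first
    # Score = (NumPublishers - Rank + 1) / NumPublishers, with Rank 1-based,
    # i.e. (num - i) / num for 0-based index i.
    return {p: (num - i) / num for i, p in enumerate(ordered)}
```

For example, with three publishers the lowest penalty scores 1.0, the middle 2/3, and the highest 1/3.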