Commit 8234e38

committed: requested changes
1 parent 386793b

2 files changed: +7 −7 lines changed

pages/home/metrics/publisher-metrics.mdx

Lines changed: 6 additions & 6 deletions

@@ -1,30 +1,30 @@
 # Pyth Publishers Metrics

-Pyth Publishers Metrics is a feature that provides insights that will empower developers, publishers, and delegators by providing the historical performance of the network's data sources. This powerful tool reflects the commitment to transparency and delivering timely, accurate, and valuable first-party data for everyone.
+Pyth Publishers Metrics is a feature that provides insights that will empower developers, publishers, and delegators by providing the historical performance of the network's data sources. This powerful tool reflects a commitment to transparency and delivering timely, accurate, and valuable first-party data for everyone.

 ---

 **Price Series**

-The **price graph** shows how a publishers price compares to the aggregate price, illustrating how closely the two prices track each other, and whether there were any periods where the publisher deviated significantly from the rest of the market.
+The **price graph** shows how a publisher's price compares to the aggregate price, illustrating how closely the two prices track each other, and whether there were any periods where the publisher deviated significantly from the rest of the market.

 ![](../../../images/publisher-metrics/Price_Series.jpg)

 **Uptime**

-The **uptime graph** shows when the publisher was actively contributing prices. The x-axis subdivides the time interval into bins, and the y-axis is the % of slots in that bin where the publishers price was recent enough to be included in the aggregate. This graph lets you determine the regularity and reliability of a publisher.
+The **uptime graph** shows when the publisher was actively contributing prices. The x-axis subdivides the time interval into bins, and the y-axis is the % of slots in that bin where the publisher's price was recent enough to be included in the aggregate. This graph lets you determine the regularity and reliability of a publisher.

 ![](../../../images/publisher-metrics/Uptime.jpg)

 **Quality**

-The quality graph shows the dataset used in the regression model for computing the quality score described in section 4.1.1 of the [whitepaper](https://pyth.network/whitepaper.pdf). The quality score measures how well a publishers price series predicts future changes in the aggregate price. A smooth color gradient (from blue on the bottom left to pink on the top right) indicates a high-quality score.
+The quality graph shows the dataset used in the regression model for computing the quality score described in section 4.1.1 of the [whitepaper](https://pyth.network/whitepaper.pdf). The quality score measures how well a publisher's price series predicts future changes in the aggregate price. A smooth color gradient (from blue on the bottom left to pink on the top right) indicates a high-quality score.

 ![](../../../images/publisher-metrics/Quality.jpg)

 **Calibration**

-The **calibration graph** shows how closely the publisher's prices and confidences match the expected Laplace distribution. The closer the fit between the two distributions, the higher the calibration score (described in section 4.1.2 of the whitepaper). In other words, a perfect publisher should produce a uniform histogram. As a reminder, the calibration score does not reward publishers for producing tighter confidence intervals; rather, the score captures whether the reported confidence interval corresponds to the publishers “true” confidence.
+The **calibration graph** shows how closely the publisher's prices and confidences match the expected Laplace distribution. The closer the fit between the two distributions, the higher the calibration score (described in section 4.1.2 of the whitepaper). In other words, a perfect publisher should produce a uniform histogram. As a reminder, the calibration score does not reward publishers for producing tighter confidence intervals; rather, the score captures whether the reported confidence interval corresponds to the publisher's “true” confidence.

 ![](../../../images/publisher-metrics/Calibration.jpg)
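The calibration idea in the hunk above can be illustrated with a short sketch. Assuming (for illustration only, not as the documented scoring method) that a publisher's 95% interval `(p-c, p+c)` fixes the scale of a Laplace distribution at `b = c / ln(20)`, mapping each observation through that Laplace CDF should yield roughly uniform values for a well-calibrated publisher, i.e. a flat histogram:

```python
import math

def laplace_cdf_at(p_agg, p_pub, conf):
    """CDF value of the aggregate price under the publisher's implied
    Laplace distribution (hypothetical helper, for illustration).

    A 95% interval (p - c, p + c) implies P(|X| <= c) = 1 - exp(-c/b)
    = 0.95, hence scale b = c / ln(20).
    """
    b = conf / math.log(20)
    x = p_agg - p_pub
    return 0.5 + 0.5 * math.copysign(1.0, x) * (1 - math.exp(-abs(x) / b))

# A perfectly centered observation maps to 0.5; an observation at the
# edge of the 95% interval maps to 0.975.
print(laplace_cdf_at(100.0, 100.0, 1.0))
print(laplace_cdf_at(101.0, 100.0, 1.0))
```

Collecting these CDF values over many observations and histogramming them gives the uniform-histogram check the docs describe.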

@@ -44,6 +44,6 @@ To open the metrics for another publisher (of that same price feed), you can cli
 ![](../../../images/publisher-metrics/Publisher_metrics_image_2.jpg)

-If you want to review the Publisher Metrics of another price feed (e.g. ETH/USD), you will need to access the relevant asset. As mentioned, the [Pyth Price Feeds page](https://pyth.network/price-feeds/) has the full list of price feeds.
+If you want to review the Publisher Metrics of another price feed (e.g. ETH/USD), you will need to access the relevant asset. As mentioned, the [Pyth Price Feeds page](https://pyth.network/price-feeds/) has the full list of price feeds.

 For more details on the Pyth Publishers Metrics, please visit this [blog post](https://pythnetwork.medium.com/introducing-pyth-publishers-metrics-3b20de6f1bf3).
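The per-bin uptime definition in this file (percentage of slots in each bin where the publisher's price was fresh enough to count toward the aggregate) can be sketched as follows. The function name and the boolean-per-slot input shape are hypothetical; the real metric is computed from on-chain slot data:

```python
from statistics import mean

def uptime_by_bin(active_slots, bin_size):
    """Illustrative sketch: percentage of slots per bin in which the
    publisher's price was recent enough to be included in the aggregate.

    active_slots: one boolean per slot (True = price counted).
    """
    bins = [active_slots[i:i + bin_size]
            for i in range(0, len(active_slots), bin_size)]
    return [100.0 * mean(b) for b in bins]

# 8 slots, bins of 4 -> [75.0, 50.0]
print(uptime_by_bin([True, True, True, False, True, False, False, True], 4))
```

Each returned value corresponds to one bar on the uptime graph's x-axis bins.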

pages/price-feeds/best-practices.mdx

Lines changed: 1 addition & 1 deletion

@@ -75,7 +75,7 @@ At every point in time, Pyth publishes both a price and a confidence interval fo
 In a Pyth feed, each publisher specifies an interval `(p_i-c_i, p_i+c_i)` in the form of their price and confidence submission. This interval is intended to achieve 95% coverage, i.e. the publisher expresses the belief that this interval contains the “true” price with 95% probability. The resulting aggregate interval `(μ-σ, μ+σ)`, where `μ` represents the aggregate price and `σ` represents the aggregate confidence, is a good estimate of a range in which the true price lies.

-To explain this, let's consider two cases of publisher estimates. In the first case, there is 100% overlap of all the publishers’ intervals, i.e. each publisher submits the same interval `(p-c, p+c)`. In this case, the aggregate confidence interval is exactly that interval, so the aggregate confidence interval provides 100% coverage of the publishers’ intervals. This first case represents normal operating conditions, where most publishers agree about the price of an asset.
+To explain this, consider two cases of publisher estimates. In the first case, there is 100% overlap of all the publishers’ intervals, i.e. each publisher submits the same interval `(p-c, p+c)`. In this case, the aggregate confidence interval is exactly that interval, so the aggregate confidence interval provides 100% coverage of the publishers’ intervals. This first case represents normal operating conditions, where most publishers agree about the price of an asset.

 In the second case, each publisher specifies an interval that is disjoint from each of the other publishers’ intervals. In this case, the aggregate confidence interval can be seen to contain at least the 25th percentile and at least the 75th percentile of the set of points consisting of each publisher’s price, price minus confidence, and price plus confidence. As a result, the aggregate confidence interval is somewhat analogous to an interquartile range of the data, which is a reasonable measure of the spread of a set of points. Note that this is not an IQR of the prices alone of the publishers but rather of the set composed of the 3 points that each publisher submits. Moreover, note that the IQR does not include the most extreme publishers’ prices on either side; this property is necessary to ensure that a small group of publishers cannot manipulate the aggregate confidence interval. This second case represents an atypical scenario where publishers all disagree. Such circumstances are rare but can occur during market volatility or unusual events.
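The interquartile analogy in the disjoint-interval case can be sketched with a few lines of code. This is illustrative only: it pools each publisher's three points `(p - c, p, p + c)` and takes simple nearest-rank 25th/75th percentiles, which is not the actual Pyth aggregation algorithm described in the whitepaper:

```python
def iqr_of_submissions(submissions):
    """Illustrative sketch of the IQR analogy: pool the 3 points each
    publisher submits and return the 25th and 75th percentile points.

    submissions: list of (price, confidence) pairs (hypothetical input).
    """
    points = sorted(x for p, c in submissions for x in (p - c, p, p + c))

    def pct(q):
        # nearest-rank percentile over the pooled, sorted points
        return points[round(q * (len(points) - 1))]

    return pct(0.25), pct(0.75)

# Four publishers with fully disjoint intervals: the result excludes
# the most extreme submitted points on either side.
print(iqr_of_submissions([(10, 1), (20, 1), (30, 1), (40, 1)]))
```

Note how the outermost points (9 and 41 here) never enter the returned range, matching the paragraph's observation that extreme publishers cannot single-handedly move the interval.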
