Commit 60ec9ee

Author: mgoelzer
Commit message: Initial Deals Dashboard proposal
Parent: 69d4ffa

File tree: 1 file changed, +162 / -0 lines changed


proposals/deals-dashboard.md

Lines changed: 162 additions & 0 deletions
# Deals Dashboard

Authors: @mgoelzer

Initial PR: #85 <!-- Reference the PR first proposing this document. Oooh, self-reference! -->

<!--
For ease of discussion in PRs, consider breaking lines after every sentence or long phrase.
-->

## Purpose & impact
#### Background & intent
_What is the desired state of the world after this project? Why does that matter?_

To better understand real-world deal-making performance on the Filecoin mainnet, we need a set of [Observable](https://observablehq.com/) (or similar) dashboards that stakeholders can review to see data on various facets of deals, updated in real time. Broadly, these dashboards need to cover three areas (a rough sketch of the kind of calculation involved follows this list):

- **Key KPIs.** These include key decision-maker metrics like overall storage and retrieval deal success rates (DSRs), speed of retrievals, and the rate of deal acceptance by miners.
- **Deal Testing.** [Elsewhere](https://github.com/protocol/web3-dev-team/pull/84) we have proposed to develop a dealbot that randomly selects miners on the network and makes storage and retrieval deals with them. Therefore, a dashboard is needed to visualize the data produced by these bots.

  This would take the form of a filterable list of all deals attempted and their outcomes. Filters would include characteristics like storage or retrieval deal, verified or unverified deal, fast retrieval flag set or not set, date range, and so on.
- **Other Dashboards.** A few miscellaneous dashboards that would help us understand the network, such as a view of storage and retrieval DSRs over time, a histogram of deal failures by stage, and the average time spent in each stage of the deal process.
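
As a rough illustration of the kind of calculation behind these dashboards, the sketch below computes a deal success rate (DSR) from an array of deal records. This is a minimal sketch only: the record shape (`type`, `outcome`) and the sample data are assumptions for illustration, not the Dealbots schema.

```js
// Minimal sketch, assuming each dealbot record looks roughly like
// { type: "storage" | "retrieval", outcome: "success" | "failure" }.
// Field names are illustrative; the real schema comes from the Dealbots project.

const sampleDeals = [
  { type: "storage", outcome: "success" },
  { type: "storage", outcome: "failure" },
  { type: "retrieval", outcome: "success" },
];

// DSR = successful deals of a given type / all deals of that type.
function dealSuccessRate(deals, type) {
  const ofType = deals.filter((d) => d.type === type);
  if (ofType.length === 0) return null; // no data yet
  const successes = ofType.filter((d) => d.outcome === "success").length;
  return successes / ofType.length;
}

console.log("storage DSR:", dealSuccessRate(sampleDeals, "storage"));     // 0.5
console.log("retrieval DSR:", dealSuccessRate(sampleDeals, "retrieval")); // 1
```

In an Observable notebook, each of these values would simply be a cell fed by live dealbot data rather than a hard-coded sample.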
#### Assumptions & hypotheses
_What must be true for this project to matter?_

* The [Dealbots project](https://github.com/protocol/web3-dev-team/pull/84) needs to be completed and producing a feed of data in order to power these dashboards.
* Stakeholders need to be aligned on which dashboard views are useful.

#### User workflow example
_How would a developer or user use this new capability?_

* A stakeholder would browse to an Observable URL and view a real-time updated set of dashboard visualizations.
#### Impact
_How would this directly contribute to web3 dev stack product-market fit?_

Key to PMF is proper functioning of the Filecoin network under all "normal" usage conditions. These dashboards would answer the question of whether the network is performing as expected.

#### Leverage
_How much would nailing this project improve our knowledge and ability to execute future projects?_

The impact of these dashboards would be significant.

Right now, the Filecoin Project is "flying blind" in terms of:

- knowing where bugs may exist in the network
- whether miners (and clients) are well incentivized to use the network to its full potential
- unexpected deviations from normal operation on mainnet (e.g., today we have no quick way of detecting a sudden spike in deal failures that might indicate a regression in a recent network upgrade)
- what kinds of problems users may be encountering that we are not hearing about through other channels
50+
51+
#### Confidence
52+
_How sure are we that this impact would be realized? Label from [this scale](https://medium.com/@nimay/inside-product-introduction-to-feature-priority-using-ice-impact-confidence-ease-and-gist-5180434e5b15)_.
53+
54+
Confidence = 10
55+
56+
Management decision making based on automated dashboards is a very common practice in software companies. I have seen this technique surface information to decision makers effectively in multiple contexts.
57+
58+
Dashboards also enable decision makers to self-serve, discovering important information on their own without having to convene a meeting or wait for a report from others in their organization.
59+
60+
61+
## Project definition
62+
#### Brief plan of attack
63+
64+
<!--Briefly describe the milestones/steps/work needed for this project-->
#### What does done look like?
_What specific deliverables should be completed to consider this project done?_

A set of Observable dashboards covering the items listed below.
(A minimal sketch of the filtering and aggregation these dashboards would perform follows the list.)

- **Key KPIs.**

  - **P1** Storage deal success rate (DSR) across all golden path deals
  - **P1** Retrieval DSR across all golden path deals
  - **P2** Time to acquire DataCap (target: <1m)
  - **P2** Time to return QueryAsk (target: <1s)
  - **P2** % of dialable miners
  - **P2** % of proposed storage deals that are accepted
  - **P2** % of proposed retrieval deals that are accepted
  - **P1** % of initiated storage data transfers that are successful
  - **P1** % of initiated retrieval data transfers that are successful
  - **P1** % of accepted storage deals that are ultimately active on-chain
  - **P1** Time to first byte on retrievals
  - **P1** Time to last byte on retrievals for each different deal size

- **Deal Testing.**

  - A list of all bot deals with filters for:
    - **P1** Storage or retrieval deal
    - **P1** Using DataCap or not
    - **P1** Using fast retrieval flag or not
    - **P2** Deal size (various sizes from 4 GiB to 32 GiB)
    - **P2** Version numbers for lotus, graphsync, go-fil-markets, go-data-transfer
    - **P1** Deal stage where failure occurred
    - **P1** Specific error codes
    - **P2** Datetime range
  - A top-level metrics dashboard showing all bot deal stages with:
    - **P2** Avg time in stage
    - **P2** Success rate in advancing out of that stage
  - Two top-level dashboards showing:
    - **P1** Overall bot storage deal DSR
    - **P1** Overall bot retrieval deal DSR
  - **P1** Top-level dashboard with a table that contains each attempted bot deal and all info that we have about that deal

- **Other Dashboards.**
  - **P1** View of storage DSR over time
  - **P1** View of retrieval DSR over time
  - **P1** Histogram of failures by deal stage
  - **P2** Time in each stage (i.e., how long each stage takes), with trend lines showing how the time in each stage has changed over time
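
To make the list and histogram requirements above a bit more concrete, here is a minimal sketch of the filtering and grouping an Observable cell might do over bot deal records. All field names (`type`, `usedDataCap`, `fastRetrieval`, `failureStage`, `startedAt`), the sample stage name, and the `filters` shape are assumptions for illustration; the real fields depend on what the dealbots emit.

```js
// Minimal sketch; field names and sample data are illustrative only.
const sampleBotDeals = [
  { type: "storage", usedDataCap: true, fastRetrieval: false, failureStage: "StorageDealTransferring", startedAt: "2021-02-10" },
  { type: "storage", usedDataCap: false, fastRetrieval: true, failureStage: null, startedAt: "2021-02-11" },
  { type: "retrieval", usedDataCap: false, fastRetrieval: true, failureStage: null, startedAt: "2021-02-12" },
];

// Filterable bot-deal list: any filter left undefined is ignored.
function filterBotDeals(deals, filters) {
  return deals.filter((d) =>
    (filters.type === undefined || d.type === filters.type) &&
    (filters.usedDataCap === undefined || d.usedDataCap === filters.usedDataCap) &&
    (filters.fastRetrieval === undefined || d.fastRetrieval === filters.fastRetrieval) &&
    (filters.failureStage === undefined || d.failureStage === filters.failureStage) &&
    (filters.from === undefined || new Date(d.startedAt) >= filters.from) &&
    (filters.to === undefined || new Date(d.startedAt) <= filters.to)
  );
}

// Histogram of failures by deal stage (for the "Other Dashboards" area).
function failuresByStage(deals) {
  const counts = {};
  for (const d of deals) {
    if (d.failureStage) counts[d.failureStage] = (counts[d.failureStage] || 0) + 1;
  }
  return counts;
}

console.log(filterBotDeals(sampleBotDeals, { type: "storage", usedDataCap: true }));
console.log(failuresByStage(sampleBotDeals)); // { StorageDealTransferring: 1 }
```

Each dashboard in the list above is essentially one such filtered or aggregated view rendered as a table or chart.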
#### What does success look like?
_Success means impact. How will we know we did the right thing?_

Success means:

- Having a set of Observable dashboards that cover all of the performance indicators described in the previous section
- Stakeholders actually using these metrics to understand the network and make decisions
#### Counterpoints & pre-mortem
_Why might this project be lower impact than expected? How could this project fail to complete, or fail to be successful?_

- We could be focused on the wrong dashboard metrics
- There could be dashboard metrics that are important but have not yet occurred to us
- Stakeholders could decline to make use of the dashboards, either because they do not answer urgent questions, are too complicated to understand, or otherwise fail to match stakeholder needs
- The data stops being updated due to lack of maintenance bandwidth
#### Alternatives
_How might this project’s intent be realized in other ways (other than this project proposal)? What other potential solutions can address the same need?_

- It's very hard to get the benefits of dashboards without building some type of dashboard
- However, other tools (besides Observable) could be used for visualization
- Non-automated methods of achieving the goals of these dashboards are possible: regularly interviewing users, drawing inferences from other known metrics like number of active users, increase/decrease in high-value users, etc.
#### Dependencies/prerequisites
<!--List any other projects that are dependencies/prerequisites for this project that is being pitched.-->

- The dashboards will be powered by data from the [Dealbots project](https://github.com/protocol/web3-dev-team/pull/84), as sketched below
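
The exact interface the Dealbots project will expose is not defined in this proposal, so the snippet below only illustrates one plausible way an Observable notebook could pull a JSON feed of bot deal records; the function name and URL are placeholders, not an agreed API.

```js
// Hypothetical: fetch a JSON feed of dealbot results.
// The URL is a placeholder; the real data source depends on the Dealbots project.
async function fetchDealbotFeed(url = "https://example.com/dealbot/deals.json") {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`dealbot feed request failed: ${response.status}`);
  return response.json(); // expected: an array of deal records
}
```

In Observable, a cell like this could re-run on an interval so the dashboards stay close to real time.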
#### Future opportunities

- Add additional metrics that we realize would be of use
- Remove metrics that prove not to be useful and/or complicate the UX of the dashboards
## Required resources

#### Effort estimate
<!--T-shirt size rating of the size of the project. If the project might require external collaborators/teams, please note in the roles/skills section below).
For a team of 3-5 people with the appropriate skills:
- Small, 1-2 weeks
- Medium, 3-5 weeks
- Large, 6-10 weeks
- XLarge, >10 weeks
Describe any choices and uncertainty in this scope estimate. (E.g. Uncertainty in the scope until design work is complete, low uncertainty in execution thereafter.)
-->

(Engineering should make this estimate based on the requirements described above.)

#### Roles / skills needed
<!--Describe the knowledge/skill-sets and team that are needed for this project (e.g. PM, docs, protocol or library expertise, design expertise, etc.). If this project could be externalized to the community or a team outside PL's direct employment, please note that here.-->

- PM for requirements and requirements changes
- Experience with Observable (i.e., rudimentary JavaScript coding)
