File: `website/src/pages/en/subgraphs/querying/distributed-systems-guide.mdx`
By following these steps, you can avoid data inconsistencies that arise from block re-organizations.

When you need to fetch the newest information from The Graph without stepping back to an older block:

1. **Initialize a minimal block target:** Start by setting `minBlock` to 0 (or a known block number). This ensures your query will be served from the most recent block.
2. **Set up a periodic polling cycle:** Choose a delay that matches the block production interval (e.g., 14 seconds). This ensures you wait until a new block is likely available.
3. **Use the `block: { number_gte: $minBlock }` argument:** This ensures the fetched data is from a block at or above the specified block number, preventing time from moving backward.
4. **Handle logic inside the loop:** Update `minBlock` to the most recent block returned in each iteration.
5. **Process the fetched data:** Implement the necessary actions (e.g., updating internal state) with the newly polled data.

```javascript
/// Example: Polling for updated data
async function updateProtocolPaused() {
  let minBlock = 0

  for (;;) {
    // Wait for the next block.
    const nextBlock = new Promise((f) => {
      setTimeout(f, 14000)
    })

    const query = `
      query GetProtocol($minBlock: Int!) {
        # The entity id "0" is illustrative; adjust it for your subgraph.
        protocol(id: "0", block: { number_gte: $minBlock }) {
          paused
        }
        _meta {
          block {
            number
          }
        }
      }
    `

    const variables = { minBlock }
    const response = await graphql(query, variables)
    minBlock = response._meta.block.number

    // TODO: Replace this placeholder with handling of 'response.protocol.paused'.
    console.log(response.protocol.paused)

    // Wait to poll again.
    await nextBlock
  }
}
```
If you must retrieve multiple related items or a large set of data from the same point in time:

1. **Fetch the initial page:** Use a query that includes `_meta { block { hash } }` to capture the block hash. This ensures subsequent queries stay pinned to that same block.
2. **Store the block hash:** Keep the hash from the first response. This becomes your reference point for the rest of the items.
3. **Paginate the results:** Make additional requests using the same block hash and a pagination strategy (e.g., `id_gt` or other filtering) until you have fetched all relevant items.
4. **Handle re-orgs:** If the block hash becomes invalid due to a re-org, retry from the first request to obtain a non-uncle block.

```javascript
/// Example: Fetching a large set of related items
async function getDomainNames() {
  // Cap the number of pages to pull.
  let pages = 5
  const perPage = 1000

  // First request captures the block hash.
  const listDomainsQuery = `
    query ListDomains($perPage: Int!) {
      domains(first: $perPage) {
        name
        id
      }
      _meta {
        block {
          hash
        }
      }
    }
  `

  let data = await graphql(listDomainsQuery, { perPage })
  let result = data.domains.map((d) => d.name)
  let blockHash = data._meta.block.hash

  // Paginate until fewer than 'perPage' results are returned or you reach the page limit.
  while (data.domains.length === perPage && --pages) {
    let lastID = data.domains[data.domains.length - 1].id
    const query = `
      query ListDomains($perPage: Int!, $lastID: ID!, $blockHash: Bytes!) {
        domains(first: $perPage, where: { id_gt: $lastID }, block: { hash: $blockHash }) {
          name
          id
        }
      }
    `

    data = await graphql(query, { perPage, lastID, blockHash })

    for (const domain of data.domains) {
      result.push(domain.name)
    }
  }

  // TODO: Do something with the full result.
  return result
}
```
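Step 4 above (handling re-orgs) is not shown inside `getDomainNames` itself. One way to sketch it is a wrapper that restarts the whole block-hash-pinned fetch when the endpoint rejects the stored hash. This is a sketch under stated assumptions: `fetchWithReorgRetry`, its `maxAttempts` parameter, and the error-matching predicate are not part of the original example, and the exact error shape varies by GraphQL client, so adapt the predicate to what your client actually throws.

```javascript
// Sketch: retry a block-hash-pinned fetch from scratch after a re-org.
// Assumes a stale/uncle block hash surfaces as an error whose message
// mentions the block or its hash; check your client's real error shape.
async function fetchWithReorgRetry(fetchAllPages, maxAttempts = 3) {
  let lastError
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // Each attempt restarts from the first request, capturing a fresh hash.
      return await fetchAllPages()
    } catch (err) {
      const staleHash = /unknown block|block.*hash/i.test(String(err && err.message))
      if (!staleHash) throw err // unrelated failure: do not retry
      lastError = err
    }
  }
  throw lastError
}
```

Usage would then be `await fetchWithReorgRetry(getDomainNames)`, so a single mid-pagination re-org costs one restarted fetch rather than an inconsistent result set.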
By using the `number_gte` parameter in a polling loop, you ensure time moves forward.

- If you encounter re-orgs, plan to retry from the beginning or adjust your logic accordingly.
- Explore other filtering and block arguments (see [placeholder for reference location]) to handle additional use-cases.

[Placeholder for additional references or external resources if available]
File: `website/src/pages/en/subgraphs/querying/distributed-systems.mdx`
A perfect example of this is the so-called "block wobble" phenomenon, where a client observes query responses that appear to move backward in time.

To understand the impact, consider a scenario where a client continuously fetches the latest block from an Indexer:

1. Indexer ingests block 8
2. Request served to the client for block 8
3. Indexer ingests block 9
4. Indexer ingests block 10A
5. Request served to the client for block 10A
6. Indexer detects re-org to 10B and rolls back 10A
7. Request served to the client for block 9
8. Indexer ingests block 10B
9. Indexer ingests block 11
10. Request served to the client for block 11
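The server-side `block: { number_gte: $minBlock }` argument is the proper remedy, but the client-observed ordering in the scenario above can be reproduced with a purely client-side monotonic guard. This is illustrative only: `makeMonotonicFilter` is a hypothetical helper for this sketch, not part of any Graph API.

```javascript
// Sketch: discard any response whose block number is older than one
// already seen, so application state never moves backward in time.
function makeMonotonicFilter() {
  let minBlock = 0
  return (blockNumber) => {
    if (blockNumber < minBlock) return false // stale response from a lagging Indexer
    minBlock = blockNumber
    return true
  }
}

const served = [8, 10, 9, 11] // block numbers as served in the scenario above
const seen = served.filter(makeMonotonicFilter())
// seen is [8, 10, 11]: the out-of-order block 9 never reaches application state
```

Note that this guard only drops stale responses; unlike `number_gte`, it cannot make the Indexer serve a sufficiently fresh block in the first place.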
From the **Indexer's viewpoint**, it sees a forward-moving progression with a brief need to roll back an invalid block. But from the **client's viewpoint**, responses seem to arrive in a puzzling order: block 8, block 10, then suddenly block 9, and finally block 11.

By pinning queries to a single block hash, all responses relate to the same point in time.

### Final Thoughts on Distributed Consistency

Distributed systems can seem unpredictable, but understanding the root causes of events like out-of-order responses and block reorganizations helps clarify why the data may appear contradictory. The patterns described above, moving forward across block boundaries in time and pinning all queries to a single consistent block, are practical strategies for managing this inherent complexity.