
Commit 4771d95

Fixes
1 parent d435e1e commit 4771d95

2 files changed: 38 additions and 38 deletions

website/src/pages/en/subgraphs/querying/distributed-systems-guide.mdx

Lines changed: 28 additions & 28 deletions
@@ -10,22 +10,22 @@ By following these steps, you can avoid data inconsistencies that arise from blo

When you need to fetch the newest information from The Graph without stepping back to an older block:

-1. **Initialize a minimal block target:** Start by setting `minBlock` to 0 (or a known block number). This ensures your query will be served from the most recent block.
-2. **Set up a periodic polling cycle:** Choose a delay that matches the block production interval (e.g., 14 seconds). This ensures you wait until a new block is likely available.
-3. **Use the `block: { number_gte: $minBlock }` argument:** This ensures the fetched data is from a block at or above the specified block number, preventing time from moving backward.
-4. **Handle logic inside the loop:** Update `minBlock` to the most recent block returned in each iteration.
+1. **Initialize a minimal block target:** Start by setting `minBlock` to 0 (or a known block number). This ensures your query will be served from the most recent block.
+2. **Set up a periodic polling cycle:** Choose a delay that matches the block production interval (e.g., 14 seconds). This ensures you wait until a new block is likely available.
+3. **Use the `block: { number_gte: $minBlock }` argument:** This ensures the fetched data is from a block at or above the specified block number, preventing time from moving backward.
+4. **Handle logic inside the loop:** Update `minBlock` to the most recent block returned in each iteration.
5. **Process the fetched data:** Implement the necessary actions (e.g., updating internal state) with the newly polled data.

```javascript
/// Example: Polling for updated data
async function updateProtocolPaused() {
-  let minBlock = 0;
+  let minBlock = 0

  for (;;) {
    // Wait for the next block.
    const nextBlock = new Promise((f) => {
-      setTimeout(f, 14000);
-    });
+      setTimeout(f, 14000)
+    })

    const query = `
    query GetProtocol($minBlock: Int!) {
@@ -38,17 +38,17 @@ async function updateProtocolPaused() {
        }
      }
    }
-    `;
+    `

-    const variables = { minBlock };
-    const response = await graphql(query, variables);
-    minBlock = response._meta.block.number;
+    const variables = { minBlock }
+    const response = await graphql(query, variables)
+    minBlock = response._meta.block.number

    // TODO: Replace this placeholder with handling of 'response.protocol.paused'.
-    console.log(response.protocol.paused);
+    console.log(response.protocol.paused)

    // Wait to poll again.
-    await nextBlock;
+    await nextBlock
  }
}
```
@@ -57,16 +57,16 @@ async function updateProtocolPaused() {

If you must retrieve multiple related items or a large set of data from the same point in time:

-1. **Fetch the initial page:** Use a query that includes `_meta { block { hash } }` to capture the block hash. This ensures subsequent queries stay pinned to that same block.
-2. **Store the block hash:** Keep the hash from the first response. This becomes your reference point for the rest of the items.
-3. **Paginate the results:** Make additional requests using the same block hash and a pagination strategy (e.g., `id_gt` or other filtering) until you have fetched all relevant items.
+1. **Fetch the initial page:** Use a query that includes `_meta { block { hash } }` to capture the block hash. This ensures subsequent queries stay pinned to that same block.
+2. **Store the block hash:** Keep the hash from the first response. This becomes your reference point for the rest of the items.
+3. **Paginate the results:** Make additional requests using the same block hash and a pagination strategy (e.g., `id_gt` or other filtering) until you have fetched all relevant items.
4. **Handle re-orgs:** If the block hash becomes invalid due to a re-org, retry from the first request to obtain a non-uncle block.

```javascript
/// Example: Fetching a large set of related items
async function getDomainNames() {
-  let pages = 5;
-  const perPage = 1000;
+  let pages = 5
+  const perPage = 1000

  // First request captures the block hash.
  const listDomainsQuery = `
@@ -81,15 +81,15 @@ async function getDomainNames() {
      }
    }
  }
-  `;
+  `

-  let data = await graphql(listDomainsQuery, { perPage });
-  let result = data.domains.map((d) => d.name);
-  let blockHash = data._meta.block.hash;
+  let data = await graphql(listDomainsQuery, { perPage })
+  let result = data.domains.map((d) => d.name)
+  let blockHash = data._meta.block.hash

  // Paginate until fewer than 'perPage' results are returned or you reach the page limit.
  while (data.domains.length === perPage && --pages) {
-    let lastID = data.domains[data.domains.length - 1].id;
+    let lastID = data.domains[data.domains.length - 1].id
    let query = `
    query ListDomains($perPage: Int!, $lastID: ID!, $blockHash: Bytes!) {
      domains(
@@ -101,17 +101,17 @@ async function getDomainNames() {
        id
      }
    }
-    `;
+    `

-    data = await graphql(query, { perPage, lastID, blockHash });
+    data = await graphql(query, { perPage, lastID, blockHash })

    for (const domain of data.domains) {
-      result.push(domain.name);
+      result.push(domain.name)
    }
  }

  // TODO: Do something with the full result.
-  return result;
+  return result
}
```

@@ -121,4 +121,4 @@ By using the `number_gte` parameter in a polling loop, you ensure time moves for

• If you encounter re-orgs, plan to retry from the beginning or adjust your logic accordingly. • Explore other filtering and block arguments (see \[placeholder for reference location\]) to handle additional use-cases.

-\[Placeholder for additional references or external resources if available\]
+\[Placeholder for additional references or external resources if available\]
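
The hunks above elide the arguments of the paginated `ListDomains` query. As a rough sketch only, reusing the `$perPage`, `$lastID`, and `$blockHash` variables from the surrounding example and assuming The Graph's standard `first`, `where: { id_gt: ... }`, and `block: { hash: ... }` query arguments, a pinned page query typically takes this shape:

```javascript
// Illustrative sketch: a block-hash-pinned, id_gt-paginated query.
// Variable and field names follow the example above; the filter arguments
// assume The Graph's standard `first`, `where`, and `block` arguments.
const pagedQuery = `
query ListDomains($perPage: Int!, $lastID: ID!, $blockHash: Bytes!) {
  domains(first: $perPage, where: { id_gt: $lastID }, block: { hash: $blockHash }) {
    id
    name
  }
}
`
```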

website/src/pages/en/subgraphs/querying/distributed-systems.mdx

Lines changed: 10 additions & 10 deletions
@@ -16,15 +16,15 @@ A perfect example of this is the so-called "block wobble" phenomenon, where a cl

To understand the impact, consider a scenario where a client continuously fetches the latest block from an Indexer:

-1. Indexer ingests block 8
-2. Request served to the client for block 8
-3. Indexer ingests block 9
-4. Indexer ingests block 10A
-5. Request served to the client for block 10A
-6. Indexer detects re-org to 10B and rolls back 10A
-7. Request served to the client for block 9
-8. Indexer ingests block 10B
-9. Indexer ingests block 11
+1. Indexer ingests block 8
+2. Request served to the client for block 8
+3. Indexer ingests block 9
+4. Indexer ingests block 10A
+5. Request served to the client for block 10A
+6. Indexer detects re-org to 10B and rolls back 10A
+7. Request served to the client for block 9
+8. Indexer ingests block 10B
+9. Indexer ingests block 11
10. Request served to the client for block 11

From the **Indexer's viewpoint**, it sees a forward-moving progression with a brief need to roll back an invalid block. But from the **client's viewpoint**, responses seem to arrive in a puzzling order: block 8, block 10, then suddenly block 9, and finally block 11.
@@ -136,4 +136,4 @@ By pinning queries to a single block hash, all responses relate to the same poin

### Final Thoughts on Distributed Consistency

-Distributed systems can seem unpredictable, but understanding the root causes of events like out-of-order requests and block reorganizations helps clarify why the data may appear contradictory. By using the patterns described above, both crossing block boundaries forward in time and maintaining a single consistent block snapshot become possible strategies in managing this inherent complexity.
+Distributed systems can seem unpredictable, but understanding the root causes of events like out-of-order requests and block reorganizations helps clarify why the data may appear contradictory. By using the patterns described above, both crossing block boundaries forward in time and maintaining a single consistent block snapshot become possible strategies in managing this inherent complexity.
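
Both patterns referenced in the closing paragraph hinge on reading the block a response was served from via the `_meta` field. A minimal sketch of that step, reusing the placeholder `graphql` request helper from the guide's examples:

```javascript
// Sketch: read the block a response was served from via the `_meta` field.
// `graphql` is the same placeholder request helper used in the guide's examples.
async function getLatestBlockRef() {
  const data = await graphql(`
    query {
      _meta {
        block {
          number
          hash
        }
      }
    }
  `)

  // `number` drives the number_gte polling pattern;
  // `hash` pins follow-up pages to a single snapshot.
  return { minBlock: data._meta.block.number, blockHash: data._meta.block.hash }
}
```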
