Commit e040e96

Author: Dimitri POSTOLOV
fixes for remark-lint-heading-increment (#354)
1 parent 726a4b7 commit e040e96

File tree: 8 files changed (+45 -27 lines changed)

.remarkrc.cjs

Lines changed: 2 additions & 1 deletion
@@ -3,6 +3,7 @@ module.exports = {
     'frontmatter', // should be defined
     ['remark-lint-first-heading-level', 2],
     ['remark-lint-restrict-elements', { type: 'heading', depth: 1 }],
-    // 'remark-lint-heading-increment',
+    'remark-lint-heading-increment',
+    ['remark-lint-no-heading-punctuation', '\\.,;:'],
   ],
 }
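The newly enabled `remark-lint-heading-increment` rule warns when a heading skips a level (for example `#` followed by `###`), and `remark-lint-no-heading-punctuation` with the `'\\.,;:'` option warns when a heading ends in one of those characters; that is what drives the `####` → `###` and trailing-colon edits in the documentation files below. A minimal sketch of the two rules in isolation, assuming an ESM script with `remark`, the two lint packages, and `vfile-reporter` installed (the file name and sample document are illustrative):

```js
// lint-demo.mjs — run the two rules enabled in this commit against a sample document.
import { remark } from 'remark'
import remarkLintHeadingIncrement from 'remark-lint-heading-increment'
import remarkLintNoHeadingPunctuation from 'remark-lint-no-heading-punctuation'
import { reporter } from 'vfile-reporter'

const doc = [
  '# Deploying a subgraph',
  '',
  '### Hosted Service:', // jumps from h1 to h3 and ends with a colon
  '',
].join('\n')

const file = await remark()
  .use(remarkLintHeadingIncrement)
  .use(remarkLintNoHeadingPunctuation, '\\.,;:') // same option string as in .remarkrc.cjs
  .process(doc)

// Expect one heading-increment warning and one no-heading-punctuation warning.
console.error(reporter(file))
```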

package.json

Lines changed: 1 addition & 0 deletions
@@ -32,6 +32,7 @@
     "remark-frontmatter": "^4.0.1",
     "remark-lint-first-heading-level": "^3.1.1",
     "remark-lint-heading-increment": "^3.1.1",
+    "remark-lint-no-heading-punctuation": "^3.1.1",
     "remark-lint-restrict-elements": "workspace:*",
     "typescript": "5.0.4"
   },

pnpm-lock.yaml

Lines changed: 14 additions & 0 deletions
Some generated files are not rendered by default.

website/pages/en/cookbook/near.mdx

Lines changed: 9 additions & 7 deletions
@@ -186,31 +186,33 @@ As a quick primer - the first step is to "create" your subgraph - this only need
 Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command:

-```
+```sh
 $ graph create --node <graph-node-url> subgraph/name # creates a subgraph on a local Graph Node (on the Hosted Service, this is done via the UI)
 $ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash
 ```

 The node configuration will depend on where the subgraph is being deployed.

-#### Hosted Service:
+### Hosted Service

-```
+```sh
 graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph.com/ipfs/ --access-token <your-access-token>
 ```

-#### Local Graph Node (based on default configuration):
+### Local Graph Node (based on default configuration)

-```
+```sh
 graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
 ```

 Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself:

-```
+```graphql
 {
   _meta {
-    block { number }
+    block {
+      number
+    }
   }
 }
 ```
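The `_meta` progress check in the hunk above can also be run from a script. A minimal sketch, assuming Node 18+ (global `fetch`, top-level await in an ES module) and a subgraph deployed as `subgraph/name` to a local Graph Node serving GraphQL queries on its default port 8000; the endpoint and subgraph name are placeholders:

```js
// check-progress.mjs — query the subgraph's _meta field for the latest indexed block.
const endpoint = 'http://localhost:8000/subgraphs/name/subgraph/name' // placeholder endpoint

const query = /* GraphQL */ `
  {
    _meta {
      block {
        number
      }
    }
  }
`

const response = await fetch(endpoint, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
})

const { data, errors } = await response.json()
if (errors) throw new Error(JSON.stringify(errors))
console.log('Latest indexed block:', data._meta.block.number)
```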

website/pages/en/developing/developer-faqs.mdx

Lines changed: 1 addition & 1 deletion
@@ -89,7 +89,7 @@ Not currently, as mappings are written in AssemblyScript. One possible alternati
 Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: Start blocks

-## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync.
+## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync

 Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks)

website/pages/en/network/developing.mdx

Lines changed: 8 additions & 8 deletions
@@ -8,46 +8,46 @@ Developers are the demand side of The Graph ecosystem. Developers build subgraph
 Subgraphs deployed to the network have a defined lifecycle.

-#### Build locally
+### Build locally

 As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs.

 > There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible.

-#### Deploy to the Subgraph Studio
+### Deploy to the Subgraph Studio

 Once defined, the subgraph can be built and deployed to the [Subgraph Studio](https://thegraph.com/docs/en/deploying/subgraph-studio-faqs/). The Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.

-#### Publish to the Network
+### Publish to the Network

 When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information.

-#### Signal to Encourage Indexing
+### Signal to Encourage Indexing

 Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.

-#### Querying & Application Development
+### Querying & Application Development

 Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT.

 In order to make queries, developers must generate an API key, which can be done in the Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. The Subgraph Studio provides developers with data on their API key usage over time.

 Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in the Subgraph Studio.

-#### Upgrading Subgraphs
+### Upgrading Subgraphs

 After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to the Subgraph Studio for rate-limited development and testing.

 Once the Subgraph Developer is ready to upgrade, they can initiate a transaction to point their subgraph at the new version. Upgrading the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.

-#### Deprecating Subgraphs
+### Deprecating Subgraphs

 At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators.

 ### Diverse Developer Roles

 Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others.

-## Developers and Network Economics
+### Developers and Network Economics

 Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is upgraded.

website/pages/en/operating-graph-node.mdx

Lines changed: 6 additions & 6 deletions
@@ -12,23 +12,23 @@ This provides a contextual overview of Graph Node, and some of the more advanced
 Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).

-#### PostgreSQL database
+### PostgreSQL database

 The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache.

-#### Network clients
+### Network clients

 In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple.

 While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).

 **Upcoming: Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).

-#### IPFS Nodes
+### IPFS Nodes

 Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

-#### Prometheus metrics server
+### Prometheus metrics server

 To enable monitoring and reporting, Graph Node can optionally log metrics to a Prometheus metrics server.

@@ -320,7 +320,7 @@ In some cases a failure might be resolvable by the indexer (for example if the e
 Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.

-However in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
+However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.

 If a block cache inconsistency is suspected, such as a tx receipt missing event:

@@ -333,7 +333,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event:
 Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.

-However even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.
+However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.

 There is not one "silver bullet", but a range of tools for preventing, diagnosing and dealing with slow queries.

website/pages/en/querying/graphql-api.mdx

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@ This guide explains the GraphQL Query API that is used for the Graph Protocol.
 In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph.

-#### Examples
+### Examples

 Query for a single `Token` entity defined in your schema:

@@ -21,7 +21,7 @@ Query for a single `Token` entity defined in your schema:
 }
 ```

-**Note:** When querying for a single entity, the `id` field is required and it must be a string.
+> **Note:** When querying for a single entity, the `id` field is required, and it must be a string.

 Query all `Token` entities:

@@ -66,7 +66,7 @@ In the following example, we sort the tokens by the name of their owner:
 }
 ```

-> Currently you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.

 ### Pagination

@@ -410,7 +410,7 @@ If a block is provided, the metadata is as of that block, if not the latest inde
 `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.

-`block` provides information about the latest block (taking into account any block constraints passed to \_meta):
+`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):

 - hash: the hash of the block
 - number: the block number
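The last hunk documents the `_meta` fields; together with the single-entity note earlier in the file, a query combining both patterns looks roughly like the sketch below. Assumptions: Node 18+ with global `fetch`, a placeholder hosted-service-style endpoint, and a schema with a `Token` entity as in the doc's own example; the `id` value is illustrative.

```js
// token-and-meta.mjs — fetch a single Token by its string id plus subgraph metadata.
const endpoint = 'https://api.thegraph.com/subgraphs/name/example/placeholder' // placeholder

const query = /* GraphQL */ `
  {
    token(id: "0x0000000000000000000000000000000000000000") {
      id
    }
    _meta {
      deployment
      block {
        hash
        number
      }
    }
  }
`

const res = await fetch(endpoint, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
})
console.log(JSON.stringify(await res.json(), null, 2))
```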
