docs/1.getting-started/2.core-concepts.md (+6 -6)
Before proceeding to development, let's take a look at basic DipDup concepts.
## Big picture
DipDup is an SDK for building custom backends for decentralized applications, or, for short, indexers. DipDup indexers are off-chain services that aggregate blockchain data from various sources and store it in a database.
Each indexer consists of a YAML configuration file and a Python package with models, handlers, and other code. The configuration file describes the inventory of the indexer, i.e., what contracts to index and what data to extract from them. It supports templates and environment variables, and uses a syntax similar to Subgraph manifest files. The Python package contains models, callbacks, and queries. Models describe the domain-specific data structures you want to store in the database. Callbacks implement the business logic, i.e., how to convert blockchain data to your models. Other files in the package are optional and can be used to extend DipDup functionality.
As a result, you get a service responsible for filling the database with indexed data. Then you can use it to build a custom API backend or integrate with existing ones. DipDup provides Hasura GraphQL Engine integration to expose indexed data via REST and GraphQL with little to no effort, but you can also use other API engines like PostgREST or develop one in-house.
<!-- TODO: SVG include doesn't work -->
## Storage layer
DipDup uses PostgreSQL or SQLite as a database backend. All the data is stored in a single database schema created on the first run. Make sure it's used by DipDup exclusively, since changes in index configuration or models require DipDup to drop the whole database schema and start indexing from scratch. You can, however, mark specific tables as immune to preserve them from being dropped.
DipDup does not support database schema migration. Any change in models or index definitions will trigger reindexing. Migrations introduce complexity and do not guarantee data consistency. DipDup stores a hash of the SQL version of the DB schema and checks for changes each time you run indexing.
DipDup applies all updates atomically block by block, ensuring data integrity. If indexing is interrupted, the next time DipDup starts, it will check the database state and continue from the last block processed. DipDup state is stored in the database per index and can be used by API consumers to determine the current indexer head.
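The resume logic can be sketched roughly like this (function and argument names are assumptions, not DipDup's API):

```python
def next_block_to_process(
    index_states: dict[str, int],
    index_name: str,
    first_block: int = 0,
) -> int:
    """Pick the block to start from after a restart.

    `index_states` stands in for the per-index state DipDup keeps
    in the database: index name -> last processed level.
    """
    last_processed = index_states.get(index_name)
    if last_processed is None:
        # Fresh index: start from the configured first block
        return first_block
    # Interrupted run: continue right after the last processed block
    return last_processed + 1
```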
An index is a set of contracts and rules for processing them as a single entity. Your config can contain more than one index, but they are processed in parallel and cannot share data as execution order is not guaranteed.
docs/1.getting-started/3.config.md (+2 -2)
## Environment variables
DipDup supports compose-style variable expansion with an optional default value. Use this feature to store sensitive data outside of the configuration file and make your app fully declarative. If a required variable is not set, DipDup will fail with an error. You can use these placeholders anywhere throughout the configuration file.
```yaml [dipdup.yaml]
database:
  kind: postgres
  host: ${POSTGRES_HOST:-localhost}  # illustrative; falls back to `localhost` if unset
```
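The substitution rule itself can be sketched in pure Python (a simplification, not DipDup's actual parser):

```python
import re

# Matches ${VAR} and ${VAR:-default} placeholders
_PLACEHOLDER = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")


def expand(text: str, env: dict[str, str]) -> str:
    """Replace placeholders; fail loudly if a required variable is missing."""

    def repl(m: re.Match) -> str:
        name, default = m.group(1), m.group(2)
        value = env.get(name, default)
        if value is None:
            raise ValueError(f"Environment variable `{name}` is not set")
        return value

    return _PLACEHOLDER.sub(repl, text)
```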
There are multiple ways to pass environment variables to DipDup:
- Export them in the shell before running DipDup
- Create an env file and pass it to DipDup with the `-e` CLI option
For every config file in the `deploy` project directory, DipDup will create a corresponding `.env.default` file with all the variables used in the config. Copy it, remove the `.default` suffix, and fill in the values.
docs/1.getting-started/4.package.md (+4 -4)
# Package structure
Each DipDup project consists of a YAML config and a Python package of a specific structure. It could be placed anywhere, but needs to be importable. The package name is defined in the config file.
To generate all necessary directories and files according to the config, run the `init` command. You should run it every time you significantly change the config file.
The structure of the resulting package is the following:
## ABIs and typeclasses
DipDup uses contract type information to generate dataclasses for developers to work with strictly typed data. These dataclasses are generated automatically from contract ABIs. In most cases, you don't need to modify them manually. The process is roughly the following:
1. Contract ABIs are placed in the `abi` directory, either manually or during init.
2. DipDup converts these ABIs to intermediate JSONSchemas.
The same applies to handler callbacks. The callback alias still needs to be a valid Python module path: lowercase letters, underscores, and dots.
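As a rough illustration (not DipDup's actual validation code), a valid alias is just a dotted path of lowercase identifiers:

```python
import re


def is_valid_callback_alias(alias: str) -> bool:
    # A valid module path: dot-separated parts, each a lowercase
    # identifier made of letters, digits, and underscores.
    part = re.compile(r"[a-z_][a-z0-9_]*")
    return bool(alias) and all(part.fullmatch(p) for p in alias.split("."))
```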
docs/1.getting-started/5.models.md (+7 -7)
# Models
To store indexed data in the database, you need to define models. Our storage layer is based on [Tortoise ORM](https://tortoise.github.io/index.html). It's fast, flexible, and has a syntax similar to Django ORM. We have extended it with some useful features like a copy-on-write rollback mechanism, caching, and more.
We plan to make things official and fork this library under a new name, but it's not ready yet. For now, let's call our implementation **DipDup ORM**.
Before we dive into the details, here's an important note:
::banner{type="warning"}
Please, don't report DipDup issues to the Tortoise ORM bug tracker! We patch it heavily to better suit our needs, so it's not the same library anymore.
::
You can use [Tortoise ORM docs](https://tortoise.github.io/examples.html) as a reference. We will describe only DipDup-specific features here.
## Defining models
Project models should be placed in the `models` directory in the project root. By default, the `__init__.py` module is created on project initialization, but you can use any structure you want. Models from nested packages will be discovered automatically.
Here's an example containing all available fields:
Some limitations are applied to model names and fields to avoid ambiguity in GraphQL:
- Table names must be in snake_case
- Model fields must be in snake_case
- Model fields must differ from the table name
## Basic usage
See the `demo_uniswap` project for real-life examples.
## Differences from Tortoise ORM
This section describes the differences between DipDup and Tortoise ORM. You most likely won't notice them, but it's better to be aware of them.
### Fields
### Transactions
DipDup manages transactions automatically for indexes, opening one for each level. You can't open another one. Entering a transaction context manually with `in_transaction()` will return the same active transaction. For hooks, there's an `atomic` flag in the configuration.
docs/1.getting-started/6.datasources.md (+2 -2)
All datasources now share the same code under the hood to communicate with underlying APIs via HTTP. Their configs have an optional section `http` to configure connection settings. You can use it to set timeouts, retry policies, and other parameters.
Each datasource kind has its defaults. Usually, there's no reason to alter these settings unless you use self-hosted instances. In the example below, default values are shown:
```yaml [dipdup.yaml]
datasources:
```
## Ratelimiting
Ratelimiting is implemented using the "leaky bucket" algorithm. The number of consumed "drops" can be set with each request (defaults to 1), and the bucket is refilled at a constant rate. If the bucket is empty, the request is delayed until it's refilled.
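A minimal leaky-bucket sketch, illustrative rather than DipDup's actual ratelimiter, looks like this:

```python
import time


class LeakyBucket:
    """Minimal leaky-bucket ratelimiter sketch."""

    def __init__(self, size: float, rate: float) -> None:
        self.size = size  # bucket capacity in "drops"
        self.rate = rate  # refill rate, drops per second
        self.level = size  # start full
        self.updated = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.level = min(self.size, self.level + (now - self.updated) * self.rate)
        self.updated = now

    def acquire(self, drops: float = 1.0) -> float:
        # Returns how long the caller should wait before sending the request
        self._refill()
        self.level -= drops
        if self.level >= 0:
            return 0.0
        return -self.level / self.rate
```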
Indexes can join multiple contracts considered as a single application. Also, contracts can be used by multiple indexes of any kind, but make sure that they are independent of each other and that indexed data don't overlap.
<!-- TODO: here was a link to a place that doesnt exist now -- Make sure to visit {{ #summary getting-started/core-concepts.md#atomicity-and-persistency }}. -->
A handler is a callback function called when new data arrives from the datasource. It receives the data item as an argument and can perform arbitrary actions, e.g., store data in the database, send a notification, or call an external API.
## Using templates
Index definitions can be templated to reduce the amount of boilerplate code. To create an index from a template, use the following syntax:
docs/3.datasources/2.abi_etherscan.md (+1 -1)
[Etherscan](https://etherscan.io/) is a popular Ethereum blockchain explorer. It provides a public API to fetch ABIs of verified contracts. DipDup can use its API to fetch ABIs for contracts being indexed.
To use this datasource, add the following section to the config:
docs/3.datasources/3.coinbase.md (+1 -1)
## Authorization
If you have a Coinbase API key, you can set it in the config and, optionally, increase the ratelimit according to your subscription plan. Otherwise, you will be limited to 10 requests per second.
docs/3.datasources/4.evm_node.md (+2 -2)
DipDup can connect to any EVM-compatible node via JSON-RPC. It can be used as a "last mile" datasource for EVM indexes (data that is not in Subsquid Archives yet) or as a standalone datasource for handlers and hooks.
Examples below show how to connect to Infura and Alchemy nodes for Ethereum mainnet indexes. You can also use your own node, but make sure it has all the necessary data (e.g., an archive node).
::banner{type="warning"}
Don't initialize web3 clients manually! It will break the connection pooling, lock the event loop, and kill your dog.
::
To access the client, use the `web3` property of the datasource. The underlying web3 client is asynchronous, so you should use the `await` keyword to call its methods.