Commit 3659c2e

Update README.md

1 parent 037e265 commit 3659c2e

File tree

1 file changed: +7 −269 lines

README.md

Lines changed: 7 additions & 269 deletions
@@ -1,285 +1,22 @@
-# dipdup
+# DipDup
 
 [![PyPI version](https://badge.fury.io/py/dipdup.svg?)](https://badge.fury.io/py/dipdup)
 [![Tests](https://github.com/dipdup-net/dipdup-py/workflows/Tests/badge.svg?)](https://github.com/baking-bad/dipdup/actions?query=workflow%3ATests)
-[![Docker Build Status](https://img.shields.io/docker/cloud/build/bakingbad/dipdup)](https://hub.docker.com/r/bakingbad/dipdup)
 [![Made With](https://img.shields.io/badge/made%20with-python-blue.svg?)](https://www.python.org)
 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
 Python SDK for developing indexers of [Tezos](https://tezos.com/) smart contracts inspired by [The Graph](https://thegraph.com/).
 
-## Installation
+## Quickstart
 
-Python 3.8+ is required for dipdup to run.
+Python 3.9+ is required for dipdup to run.
 
 ```shell
 $ pip install dipdup
 ```
 
-## Creating indexer
-
-If you want to see dipdup in action before diving into details, you can run a demo project and use it as a reference. Clone this repo and run the following command in its root directory:
-
-```shell
-$ dipdup -c src/demo_hic_et_nunc/dipdup.yml run
-```
-
-Examples in this guide are based on a simplified Hic Et Nunc demo.
-
-### Write configuration file
-
-Create a new YAML file and adapt the following example to your needs:
-
-```yaml
-spec_version: 0.1
-package: demo_hic_et_nunc
-
-database:
-  kind: sqlite
-  path: db.sqlite3
-
-contracts:
-  HEN_objkts:
-    address: ${HEN_OBJKTS:-KT1RJ6PbjHpwc3M5rw5s2Nbmefwbuwbdxton}
-    typename: hen_objkts
-  HEN_minter:
-    address: ${HEN_MINTER:-KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9}
-    typename: hen_minter
-
-datasources:
-  tzkt_mainnet:
-    kind: tzkt
-    url: ${TZKT_URL:-https://staging.api.tzkt.io}
-
-indexes:
-  hen_mainnet:
-    kind: operation
-    datasource: tzkt_mainnet
-    contracts:
-      - HEN_minter
-    handlers:
-      - callback: on_mint
-        pattern:
-          - type: transaction
-            destination: HEN_minter
-            entrypoint: mint_OBJKT
-          - type: transaction
-            destination: HEN_objkts
-            entrypoint: mint
-```
-
-Each handler in the index config matches an operation group based on the operations' entrypoints and destination addresses in its pattern. Matched operation groups will be passed to the handlers you define.
-
-### Initialize project structure
-
-Run the following command, replacing `config.yml` with the path to the YAML file you just created:
-
-```shell
-$ dipdup -c config.yml init
-```
-
-This command will create a new package with the following structure (some lines were omitted for readability):
-
-```
-demo_hic_et_nunc/
-├── handlers
-│   ├── on_mint.py
-│   └── on_rollback.py
-├── hasura-metadata.json
-├── models.py
-└── types
-    ├── hen_minter
-    │   ├── storage.py
-    │   └── parameter
-    │       └── mint_OBJKT.py
-    └── hen_objkts
-        ├── storage.py
-        └── parameter
-            └── mint.py
-```
-
-The `types` directory contains Pydantic dataclasses for contract storage and parameters. This directory is autogenerated; you shouldn't modify any files in it. The `models` and `handlers` modules are placeholders for your future code and will be discussed later.
-
-You can invoke the `init` command on an existing project (it must be in your `PYTHONPATH`). Do it each time you update contract addresses or models. Code you've written won't be overwritten.
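For reference, a generated parameter type is a plain Pydantic model. The sketch below is hypothetical: it shows roughly what a file such as `types/hen_objkts/parameter/mint.py` might contain, with field names taken from the handler example later in this guide and field types assumed.

```python
# Hypothetical sketch of an autogenerated parameter type, e.g.
# types/hen_objkts/parameter/mint.py. The real file is produced by
# `dipdup init` from the contract's schema; field types here are assumed.
from pydantic import BaseModel


class MintParameter(BaseModel):
    address: str   # recipient of the minted tokens
    amount: str    # Michelson numeric values are often rendered as strings
    token_id: str
```

Because these files are regenerated, any manual edits to them would be lost, which is why the guide says not to modify them.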
-
-### Define models
-
-Dipdup uses [Tortoise](https://tortoise-orm.readthedocs.io/en/latest/) under the hood, a fast asynchronous ORM supporting all major database engines. Check out the [examples](https://tortoise-orm.readthedocs.io/en/latest/examples.html) to learn how to use it.
-
-Now open the `models.py` file in your project and define some models:
-```python
-from tortoise import Model, fields
-
-
-class Holder(Model):
-    address = fields.CharField(58, pk=True)
-
-
-class Token(Model):
-    id = fields.BigIntField(pk=True)
-    creator = fields.ForeignKeyField('models.Holder', 'tokens')
-    supply = fields.IntField()
-    level = fields.BigIntField()
-    timestamp = fields.DatetimeField()
-```
-
-### Write event handlers
-
-Now take a look at the `handlers` module generated by the `init` command. When an operation group matching the `pattern` block of the corresponding handler in the config arrives, the callback will be fired. This example simply saves minted Hic Et Nunc tokens and their owners to the database:
-
-```python
-import demo_hic_et_nunc.models as models
-from demo_hic_et_nunc.types.hen_minter.parameter.mint_objkt import MintOBJKTParameter
-from demo_hic_et_nunc.types.hen_minter.storage import HenMinterStorage
-from demo_hic_et_nunc.types.hen_objkts.parameter.mint import MintParameter
-from demo_hic_et_nunc.types.hen_objkts.storage import HenObjktsStorage
-from dipdup.models import TransactionContext, OperationHandlerContext
-
-
-async def on_mint(
-    ctx: OperationHandlerContext,
-    mint_objkt: TransactionContext[MintOBJKTParameter, HenMinterStorage],
-    mint: TransactionContext[MintParameter, HenObjktsStorage],
-) -> None:
-    holder, _ = await models.Holder.get_or_create(address=mint.parameter.address)
-    token = models.Token(
-        id=mint.parameter.token_id,
-        creator=holder,
-        supply=mint.parameter.amount,
-        level=mint.data.level,
-        timestamp=mint.data.timestamp,
-    )
-    await token.save()
-```
-
-The handler name `on_rollback` is reserved by dipdup; this special handler will be discussed later.
-
-### Atomicity and persistency
-
-Here are a few important things to know before running your indexer:
-
-* __WARNING!__ Make sure that the database you're connecting to is used by dipdup exclusively. When the index configuration or models change, the whole database will be dropped and indexing will start from scratch.
-* Do not rename existing indexes in the config file without cleaning up the database first; dipdup won't handle this renaming automatically and will consider the renamed index a new one.
-* Multiple indexes pointing to different contracts must not reuse the same models, because synchronization is performed by index first and then by block.
-* Reorg messages signal chain reorganizations, when some blocks, including all their operations, are rolled back in favor of blocks with higher weight. Chain reorgs happen quite often, so it's not something you can ignore. You have to handle such messages correctly, otherwise you will likely accumulate duplicate or, worse, invalid data. By default, dipdup will start indexing from scratch on such messages. To implement your own rollback logic, edit the generated `on_rollback` handler.
-
-### Run your dapp
-
-Now everything is ready to run your indexer:
-
-```shell
-$ dipdup -c config.yml run
-```
-
-Parameters wrapped in `${VARIABLE:-default_value}` in the config can be set from the corresponding environment variables. For example, if you want to use another TzKT instance:
-
-```shell
-$ TZKT_URL=https://api.tzkt.io dipdup -c config.yml run
-```
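The `${VARIABLE:-default_value}` syntax mirrors the shell's default-value expansion: use the environment variable if it is set, otherwise fall back to the default. A minimal sketch of that substitution rule (not dipdup's actual implementation):

```python
import os
import re

# ${VAR:-default}: use the environment variable VAR if set, else the default.
_VAR_PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")


def expand_variables(value: str) -> str:
    """Expand ${VAR:-default} placeholders using os.environ."""
    def _replace(match: re.Match) -> str:
        name, default = match.group(1), match.group(2) or ""
        return os.environ.get(name, default)
    return _VAR_PATTERN.sub(_replace, value)
```

With `TZKT_URL` unset, `expand_variables("${TZKT_URL:-https://staging.api.tzkt.io}")` yields the staging URL; exporting the variable overrides it, as in the shell example above.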
-
-You can interrupt indexing at any moment; it will resume from the last processed block the next time you run your app.
-
-Use the `docker-compose.yml` included in this repo if you prefer to run dipdup in Docker:
-
-```shell
-$ docker-compose build
-$ # example target, edit volumes section to change dipdup config
-$ docker-compose up hic_et_nunc
-```
-
-For debugging purposes you can index a specific block range only and skip realtime indexing. To do this, set the `first_block` and `last_block` fields in the index config.
-
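A sketch of what that might look like in the index config from earlier (the placement of the fields is assumed, and the block numbers are placeholder example values):

```yaml
indexes:
  hen_mainnet:
    kind: operation
    datasource: tzkt_mainnet
    first_block: 1000000   # example values, not real block boundaries
    last_block: 1001000
    contracts:
      - HEN_minter
```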
-### Index templates
-
-Sometimes you need to run multiple indexes with similar configs whose only difference is the contract addresses. In this case you can use index templates like this:
-
-```yaml
-templates:
-  trades:
-    kind: operation
-    datasource: tzkt_staging
-    contracts:
-      - <dex>
-    handlers:
-      - callback: on_fa12_token_to_tez
-        pattern:
-          - type: transaction
-            destination: <dex>
-            entrypoint: tokenToTezPayment
-          - type: transaction
-            destination: <token>
-            entrypoint: transfer
-      - callback: on_fa20_tez_to_token
-        pattern:
-          - type: transaction
-            destination: <dex>
-            entrypoint: tezToTokenPayment
-          - type: transaction
-            destination: <token>
-            entrypoint: transfer
-
-indexes:
-  trades_fa12:
-    template: trades
-    values:
-      dex: FA12_dex
-      token: FA12_token
-
-  trades_fa20:
-    template: trades
-    values:
-      dex: FA20_dex
-      token: FA20_token
-```
-
-The template values mapping can be accessed from within handlers via `ctx.template_values`.
-
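As an illustration, a callback shared by both template instances could branch on those values. In this sketch `ctx` is faked with a plain namespace object and the helper function is hypothetical; only the `template_values` attribute comes from the source:

```python
from types import SimpleNamespace

# Fake stand-in for the handler context dipdup passes to callbacks;
# only the documented `template_values` mapping is modeled here.
ctx = SimpleNamespace(template_values={"dex": "FA12_dex", "token": "FA12_token"})


def trade_label(ctx) -> str:
    """Hypothetical helper: tag records with the template instance's values."""
    values = ctx.template_values
    return f"{values['dex']}:{values['token']}"
```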
-### Optional: configure Hasura GraphQL Engine
-
-When using PostgreSQL as a storage solution, you can use the Hasura integration to get a GraphQL API out of the box. Add the following section to your config; Hasura will be configured automatically when you run your indexer.
-
-```yaml
-hasura:
-  url: http://hasura:8080
-  admin_secret: changeme
-```
-
-When using the included docker-compose example, make sure you run Hasura first:
-
-```shell
-$ docker-compose up -d hasura
-```
-
-Then run your indexer and navigate to `127.0.0.1:8080`.
-
-### Optional: configure logging
-
-You may want to tune logging to get notified of errors or to enable debug messages. Specify the path to a Python logging config in YAML format with the `--logging-config` argument. A default config to start with:
-
-```yml
-version: 1
-disable_existing_loggers: false
-formatters:
-  brief:
-    format: "%(levelname)-8s %(name)-35s %(message)s"
-handlers:
-  console:
-    level: INFO
-    formatter: brief
-    class: logging.StreamHandler
-    stream: ext://sys.stdout
-loggers:
-  SignalRCoreClient:
-    formatter: brief
-  dipdup.datasources.tzkt.datasource:
-    level: INFO
-  dipdup.datasources.tzkt.cache:
-    level: INFO
-root:
-  level: INFO
-  handlers:
-    - console
-```
+* Read the rest of the tutorial: [docs.dipdup.net](https://docs.dipdup.net/)
+* Check out [demo projects](https://github.com/dipdup-net/dipdup-py/tree/master/src)
 
 ## Contribution
 
@@ -299,7 +36,8 @@ $ make
 ## Contact
 * Telegram chat: [@baking_bad_chat](https://t.me/baking_bad_chat)
 * Slack channel: [#baking-bad](https://tezos-dev.slack.com/archives/CV5NX7F2L)
+* Discord group: [Baking Bad](https://discord.gg/JZKhv7uW)
 
 ## About
-This project is maintained by [Baking Bad](https://baking-bad.org/) team.
+This project is maintained by the [Baking Bad](https://baking-bad.org/) team.
 Development is supported by [Tezos Foundation](https://tezos.foundation/).
