
Commit 5a0af94

Generate Hasura metadata from Tortoise models (#2)

* Generate Hasura metadata from Tortoise models
* CLI command configure-graphql
* Generate select permissions in Hasura metadata
* Readme improvements
* README improvements
* Fix __version__
* Catch ModuleNotFoundError

1 parent 138b3c3 commit 5a0af94

File tree

11 files changed, +260 -23 lines changed


.gitignore

Lines changed: 3 additions & 1 deletion

@@ -143,4 +143,6 @@ cython_debug/
 
 *sqlite3*
 
-.noseids
+.noseids
+
+secrets.env

README.md

Lines changed: 29 additions & 6 deletions

@@ -78,7 +78,9 @@ This command will create a new package with the following structure (some lines
 dipdup_hic_et_nunc/
 ├── handlers
 │   ├── on_mint.py
+│   ├── on_rollback.py
 │   └── on_transfer.py
+├── hasura_metadata.json
 ├── models.py
 ├── schemas
 │   ├── KT1Hkg5qeNhfwpKW4fXvq7HGZB9z2EnmCCA9

@@ -98,6 +100,8 @@ dipdup_hic_et_nunc/
 
 `schemas` directory is JSON schemas describing parameters of corresponding contract entrypoints. `types` are Pydantic dataclasses of these schemas. These two directories are autogenerated, you don't need to modify them. `models` and `handlers` modules will be discussed later.
 
+You can invoke the `init` command on an existing project (it must be in your `PYTHONPATH`). Do it each time you update contract addresses or models. Code you've written won't be overwritten.
+
 ### Define models
 
 Dipdup uses [Tortoise](https://tortoise-orm.readthedocs.io/en/latest/) under the hood, a fast asynchronous ORM supporting all major database engines. Check out [examples](https://tortoise-orm.readthedocs.io/en/latest/examples.html) to learn how to use it.

@@ -145,9 +149,21 @@ async def on_mint(
     await token.save()
 ```
 
+The handler name `on_rollback` is reserved by dipdup; this special handler will be discussed later.
+
+### Atomicity and persistency
+
+Here are a few important things to know before running your indexer:
+
+* __WARNING!__ Make sure the database you're connecting to is used by dipdup exclusively. When the index configuration or models change, the whole database will be dropped and indexing will start from scratch.
+* Do not rename existing indexes in the config file without cleaning up the database first; dipdup won't handle this renaming automatically and will treat the renamed index as a new one.
+* Multiple indexes pointing to different contracts must not reuse the same models, because synchronization is performed by index first and then by block.
+* Reorg messages signal chain reorganizations: some blocks, including all their operations, are rolled back in favor of blocks with higher weight. Chain reorgs happen quite often, so it's not something you can ignore. You have to handle such messages correctly, otherwise you will likely accumulate duplicate or, worse, invalid data. By default, dipdup will start indexing from scratch on such messages. To implement your own rollback logic, edit the generated `on_rollback` handler.
+
 ### Run your dapp
 
 Now everything is ready to run your indexer:
+
 ```shell
 $ dipdup -c config.yml run
 ```

@@ -165,14 +181,21 @@ $ docker-compose up dipdup
 
 Note that you can use the `DIPDUP_DATABASE_PASSWORD` environment variable to avoid storing the database password in `dipdup.yml`.
 
-### Atomicity and persistency
+### Optional: configure Hasura GraphQL Engine
 
-Here's a few important things to keep in mind while running your indexer:
+The `init` command generates a Hasura metadata JSON file in the package root. You can use the `configure-graphql` command to apply it to a running GraphQL Engine instance:
+
+```shell
+$ dipdup -c config.yml configure-graphql --url http://127.0.0.1:8080 --admin-secret changeme
+```
+
+Or, if using the included `docker-compose.yml` example:
+
+```shell
+$ docker-compose up -d graphql-engine
+$ docker-compose up configure-graphql
+```
 
-* Do not rename existing indexes in config file without cleaning up database first, didpup won't handle this renaming automatically and will consider renamed index as a new one.
-* Make sure that database you're connecting to is used by dipdup exclusively. When index configuration or models will change the whole database will be dropped and indexing will start from scratch.
-* Multiple indexes pointing to different contracts must not reuse the same models because synchronization is performed by index first and then by block.
-* Reorg messages signal about chain reorganizations, when some blocks, including all operations, are rolled back in favor of blocks with higher weight. Chain reorgs happen quite often, so it's not something you can ignore. You have to handle such messages correctly, otherwise you will likely accumulate duplicate data or, worse, invalid data. By default Dipdup will start indexing from scratch on such messages. To implement your own rollback logic edit generated `on_rollback` handler.
 
 ### Optional: configure logging
 
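The README's new "Define models" and Hasura sections lean on a `models.py` that this diff never shows. For orientation, here is a minimal, hypothetical sketch of what the hic et nunc models could look like: table and column names are taken from the generated Hasura metadata at the bottom of this commit, while the field types and lengths are assumptions.

```python
# dipdup_hic_et_nunc/models.py -- illustrative sketch only, not part of this commit.
# Table/column names mirror the generated Hasura metadata; field types are guesses.
from tortoise import Model, fields


class Address(Model):
    # Becomes the "address" table with a single "address" column.
    address = fields.CharField(max_length=36, pk=True)


class Token(Model):
    # Becomes the "token" table with "id", "token_id" and "token_info" columns.
    id = fields.IntField(pk=True)
    token_id = fields.IntField()
    token_info = fields.TextField()
    # `related_name` is mandatory: generate_hasura_metadata() raises if it is missing.
    # The FK column is exposed as "holder_id" in the generated relationships.
    holder = fields.ForeignKeyField('models.Address', related_name='tokens')
```

With models shaped like this, the first pass of `generate_hasura_metadata` registers the `address` and `token` tables, and the second pass adds the `address` object relationship and the `tokens` array relationship through the `holder_id` column.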

docker-compose.yml

Lines changed: 6 additions & 0 deletions

@@ -30,5 +30,11 @@ services:
     restart: always
     env_file: secrets.env
 
+  configure-graphql:
+    <<: *x-dipdup
+    command: ["-c", "dipdup.yml", "configure-graphql", "--url", "http://graphql-engine:8080", "--admin-secret", "changeme"]
+    depends_on:
+      - graphql-engine
+
 volumes:
   db:

src/dipdup/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-__version__ = '3.0.4'
+__version__ = '0.0.0'

src/dipdup/cli.py

Lines changed: 35 additions & 1 deletion

@@ -1,13 +1,15 @@
 import asyncio
 import hashlib
+import json
 import logging
 import os
 import sys
 from dataclasses import dataclass
 from functools import wraps
 from os.path import dirname, join
-from typing import Dict, List
+from typing import Dict, List, Optional
 
+import aiohttp
 import click
 from tortoise import Tortoise
 from tortoise.exceptions import OperationalError

@@ -147,3 +149,35 @@ async def init(ctx):
     await codegen.fetch_schemas(config)
     await codegen.generate_types(config)
     await codegen.generate_handlers(config)
+    await codegen.generate_hasura_metadata(config)
+
+
+@cli.command(help='Configure Hasura GraphQL Engine')
+@click.option('--url', type=str, help='Hasura GraphQL Engine URL', default='http://127.0.0.1:8080')
+@click.option('--admin-secret', type=str, help='Hasura GraphQL Engine admin secret', default=None)
+@click.pass_context
+@click_async
+async def configure_graphql(ctx, url: str, admin_secret: Optional[str]):
+    config: DipDupConfig = ctx.obj.config
+
+    url = url.rstrip("/")
+    hasura_metadata_path = join(config.package_path, 'hasura_metadata.json')
+    with open(hasura_metadata_path) as file:
+        hasura_metadata = json.load(file)
+    headers = {}
+    if admin_secret:
+        headers['X-Hasura-Admin-Secret'] = admin_secret
+    async with aiohttp.ClientSession() as session:
+        async with session.post(
+            url=f'{url}/v1/query',
+            data=json.dumps(
+                {
+                    "type": "replace_metadata",
+                    "args": hasura_metadata,
+                },
+            ),
+            headers=headers,
+        ) as resp:
+            result = await resp.json()
+            if not result.get('message') == 'success':
+                raise Exception(result)
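For readers who want to apply the generated metadata without going through the CLI, here is a minimal standalone sketch that mirrors the request `configure_graphql` makes above. The `apply_metadata` name is made up for this example; it assumes `hasura_metadata.json` sits in the current directory and that Hasura is reachable at the given URL.

```python
# Standalone sketch mirroring `dipdup configure-graphql`; helper name is illustrative.
import asyncio
import json
from typing import Optional

import aiohttp


async def apply_metadata(url: str, admin_secret: Optional[str], path: str) -> None:
    with open(path) as file:
        metadata = json.load(file)
    headers = {'X-Hasura-Admin-Secret': admin_secret} if admin_secret else {}
    async with aiohttp.ClientSession() as session:
        # Hasura's /v1/query endpoint accepts a replace_metadata query that swaps
        # the whole exported metadata in a single call.
        async with session.post(
            url=f'{url.rstrip("/")}/v1/query',
            json={'type': 'replace_metadata', 'args': metadata},
            headers=headers,
        ) as resp:
            result = await resp.json()
            if result.get('message') != 'success':
                raise Exception(result)


if __name__ == '__main__':
    asyncio.run(apply_metadata('http://127.0.0.1:8080', 'changeme', 'hasura_metadata.json'))
```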

src/dipdup/codegen.py

Lines changed: 109 additions & 2 deletions

@@ -1,13 +1,15 @@
+import importlib
 import json
 import logging
 import os
 import subprocess
 from contextlib import suppress
 from os import mkdir
 from os.path import dirname, exists, join
-from typing import Any, Dict
+from typing import List
 
 from jinja2 import Template
+from tortoise import Model, fields
 
 from dipdup.config import ROLLBACK_HANDLER, DipDupConfig
 from dipdup.datasources.tzkt.datasource import TzktDatasource

@@ -18,7 +20,7 @@
 async def create_package(config: DipDupConfig):
     try:
         package_path = config.package_path
-    except ImportError:
+    except (ImportError, ModuleNotFoundError):
         package_path = join(os.getcwd(), config.package)
         mkdir(package_path)
         with open(join(package_path, '__init__.py'), 'w'):

@@ -147,3 +149,108 @@ async def generate_handlers(config: DipDupConfig):
         if not exists(handler_path):
             with open(handler_path, 'w') as file:
                 file.write(handler_code)
+
+
+def _format_array_relationship(related_name: str, table: str, column: str):
+    return {
+        "name": related_name,
+        "using": {
+            "foreign_key_constraint_on": {
+                "column": column,
+                "table": {
+                    "schema": "public",
+                    "name": table,
+                },
+            },
+        },
+    }
+
+
+def _format_object_relationship(table: str, column: str):
+    return {
+        "name": table,
+        "using": {
+            "foreign_key_constraint_on": column,
+        },
+    }
+
+
+def _format_select_permissions(columns: List[str]):
+    return {
+        "role": "user",
+        "permission": {
+            "columns": columns,
+            "filter": {},
+            "allow_aggregations": True,
+        },
+    }
+
+
+def _format_table(name: str):
+    return {
+        "table": {
+            "schema": "public",
+            "name": name,
+        },
+        "object_relationships": [],
+        "array_relationships": [],
+        "select_permissions": [],
+    }
+
+
+def _format_metadata(tables):
+    return {
+        "version": 2,
+        "tables": tables,
+    }
+
+
+async def generate_hasura_metadata(config: DipDupConfig):
+    _logger.info('Generating Hasura metadata')
+    metadata_tables = {}
+    model_tables = {}
+    models = importlib.import_module(f'{config.package}.models')
+
+    for attr in dir(models):
+        model = getattr(models, attr)
+        if isinstance(model, type) and issubclass(model, Model) and model != Model:
+
+            table_name = model._meta.db_table or model.__name__.lower()
+            model_tables[f'models.{model.__name__}'] = table_name
+
+            table = _format_table(table_name)
+            metadata_tables[table_name] = table
+
+    for attr in dir(models):
+        model = getattr(models, attr)
+        if isinstance(model, type) and issubclass(model, Model) and model != Model:
+            table_name = model_tables[f'models.{model.__name__}']
+
+            metadata_tables[table_name]['select_permissions'].append(
+                _format_select_permissions(list(model._meta.db_fields)),
+            )
+
+            for field in model._meta.fields_map.values():
+                if isinstance(field, fields.relational.ForeignKeyFieldInstance):
+                    if not isinstance(field.related_name, str):
+                        raise Exception(f'`related_name` of `{field}` must be set')
+                    related_table_name = model_tables[field.model_name]
+                    metadata_tables[table_name]['object_relationships'].append(
+                        _format_object_relationship(
+                            table=model_tables[field.model_name],
+                            column=field.model_field_name + '_id',
+                        )
+                    )
+                    metadata_tables[related_table_name]['array_relationships'].append(
+                        _format_array_relationship(
+                            related_name=field.related_name,
+                            table=table_name,
+                            column=field.model_field_name + '_id',
+                        )
+                    )
+
+    metadata = _format_metadata(tables=list(metadata_tables.values()))
+
+    metadata_path = join(config.package_path, 'hasura_metadata.json')
+    with open(metadata_path, 'w') as file:
+        json.dump(metadata, file, indent=4)
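To make the new helpers concrete, here is what `_format_object_relationship` and `_format_array_relationship` return for a hypothetical `Token.holder` foreign key with `related_name='tokens'`; the values are derived directly from the function bodies above and match the generated metadata file at the end of this commit.

```python
# Derived from the helper definitions above; the address/token names are just an example.
from dipdup.codegen import _format_array_relationship, _format_object_relationship

# Attached to the "token" table: points at its holder via the FK column.
print(_format_object_relationship(table='address', column='holder_id'))
# {'name': 'address', 'using': {'foreign_key_constraint_on': 'holder_id'}}

# Attached to the "address" table: exposes the reverse side of the same FK.
print(_format_array_relationship(related_name='tokens', table='token', column='holder_id'))
# {'name': 'tokens', 'using': {'foreign_key_constraint_on': {
#     'column': 'holder_id', 'table': {'schema': 'public', 'name': 'token'}}}}
```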

src/dipdup/config.py

Lines changed: 3 additions & 3 deletions

@@ -4,14 +4,15 @@
 import logging.config
 import os
 import sys
+from os import environ as env
 from os.path import dirname
 from typing import Any, Callable, Dict, List, Optional, Type, Union
 
 from attr import dataclass
 from cattrs_extras.converter import Converter
 from ruamel.yaml import YAML
-from tortoise import Model, Tortoise
-from os import environ as env
+from tortoise import Tortoise
+
 from dipdup.models import IndexType, State
 
 ROLLBACK_HANDLER = 'on_rollback'

@@ -62,7 +63,6 @@ def connection_string(self):
         return f'{self.driver}://{self.user}:{self.password}@{self.host}:{self.port}/{self.database}'
 
 
-
 @dataclass(kw_only=True)
 class TzktDatasourceConfig:
     """TzKT datasource config

src/dipdup/datasources/tzkt/datasource.py

Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ def __init__(
         operation_index_configs: List[OperationIndexConfig],
     ):
         super().__init__()
-        self._url = url
+        self._url = url.rstrip('/')
         self._operation_index_configs = {config.contract: config for config in operation_index_configs}
         self._synchronized = asyncio.Event()
         self._callback_lock = asyncio.Lock()

src/dipdup_hic_et_nunc/handlers/on_mint.py

Lines changed: 1 addition & 1 deletion

@@ -11,4 +11,4 @@ async def on_mint(
     mint: HandlerContext[Mint],
     operations: List[OperationData],
 ) -> None:
-    ...
+    await Address.get_or_create(address=mint.parameter.address)
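One note on the handler change: Tortoise's `get_or_create` returns an `(object, created)` tuple, which the generated handler simply discards. A hedged variant that keeps the result, for cases where the address object is needed later in the handler:

```python
# Variant of the on_mint body above; keeps the fetched-or-created Address around.
address, created = await Address.get_or_create(address=mint.parameter.address)
if created:
    # First time this address shows up in the indexed operations.
    ...
```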
Lines changed: 67 additions & 0 deletions

@@ -0,0 +1,67 @@
+{
+    "version": 2,
+    "tables": [
+        {
+            "table": {
+                "schema": "public",
+                "name": "address"
+            },
+            "object_relationships": [],
+            "array_relationships": [
+                {
+                    "name": "tokens",
+                    "using": {
+                        "foreign_key_constraint_on": {
+                            "column": "holder_id",
+                            "table": {
+                                "schema": "public",
+                                "name": "token"
+                            }
+                        }
+                    }
+                }
+            ],
+            "select_permissions": [
+                {
+                    "role": "user",
+                    "permission": {
+                        "columns": [
+                            "address"
+                        ],
+                        "filter": {},
+                        "allow_aggregations": true
+                    }
+                }
+            ]
+        },
+        {
+            "table": {
+                "schema": "public",
+                "name": "token"
+            },
+            "object_relationships": [
+                {
+                    "name": "address",
+                    "using": {
+                        "foreign_key_constraint_on": "holder_id"
+                    }
+                }
+            ],
+            "array_relationships": [],
+            "select_permissions": [
+                {
+                    "role": "user",
+                    "permission": {
+                        "columns": [
+                            "token_id",
+                            "token_info",
+                            "id"
+                        ],
+                        "filter": {},
+                        "allow_aggregations": true
+                    }
+                }
+            ]
+        }
+    ]
+}
