
Commit 9f7d21f

Document internal env variables; update ctx reference and feature flags page (#597)
1 parent 5c2a7d3

File tree

7 files changed: +110 -74 lines

docs/Makefile

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ build:
 lint: markdownlint orphans

 serve:
+	make clean build
 	mdbook serve -o

 wc:

docs/advanced/feature-flags.md

Lines changed: 39 additions & 20 deletions
@@ -1,22 +1,27 @@
 # Feature flags

-Feature flags allow users to modify some system-wide tunables that affect the behavior of the whole framework. These options are either experimental or unsuitable for generic configurations.
-
-A good practice is to use set feature flags in environment-specific config files.
-
-```yaml
-advanced:
-  early_realtime: False
-  merge_subscriptions: False
-  postpone_jobs: False
-  metadata_interface: False
-  skip_version_check: False
-  crash_reporting: False
+Feature flags set in the `advanced` config section allow users to modify parameters that affect the behavior of the whole framework. Choosing the right combination of flags for an indexer project can improve performance, reduce RAM consumption, or enable useful features.
+
+| flag                  | description                                                          |
+| --------------------- | -------------------------------------------------------------------- |
+| `crash_reporting`     | Enable sending crash reports to the Baking Bad team                  |
+| `early_realtime`      | Start collecting realtime messages while sync is in progress         |
+| `merge_subscriptions` | Subscribe to all operations/big map diffs during realtime indexing   |
+| `metadata_interface`  | Enable contract and token metadata interfaces                        |
+| `postpone_jobs`       | Do not start the job scheduler until all indexes are synchronized    |
+| `skip_version_check`  | Disable warning about running unstable or out-of-date DipDup version |
+
+## Crash reporting
+
+Enables sending crash reports to the Baking Bad team. This is **disabled by default**. You can inspect crash dumps saved as `/tmp/dipdup/crashdumps/XXXXXXX.json` before enabling this option.
+
+```admonish info title="See Also"
+* {{ #summary troubleshooting.md }}
 ```

 ## Early realtime

-By default, DipDup enters a sync state twice: before and after establishing a realtime connection. This flag allows starting collecting realtime messages while sync is in progress, right after indexes load.
+By default, DipDup enters a sync state twice: before and after establishing a realtime connection. This flag allows collecting realtime messages while the sync is in progress, right after indexes load.

 Let's consider two scenarios:

@@ -28,20 +33,34 @@ If you do not have strict RAM constraints, it's recommended to enable this flag.

 ## Merge subscriptions

-Subscribe to all operations/big map diffs during realtime indexing instead of separate channels. This flag helps to avoid the 10.000 subscription limit of TzKT and speed up processing. The downside is an increased RAM consumption during sync, especially if `early_realtimm` flag is enabled too.
+Subscribe to all operations/big map diffs during realtime indexing instead of separate channels. This flag helps to avoid the 10,000 subscription limit of TzKT and speed up processing. The downside is increased RAM consumption during sync, especially if the `early_realtime` flag is enabled too.

-## Postpone jobs
+## Metadata interface

-Do not start the job scheduler until all indexes are synchronized. If your jobs perform some calculations that make sense only after indexing is fully finished, this toggle can save you some IOPS.
+Without this flag, calling the `ctx.update_contract_metadata` and `ctx.update_token_metadata` methods has no effect. Corresponding internal tables are created on reindexing either way.

-## Metadata interface
+## Postpone jobs

-Without this flag calling `ctx.update_contract_metadata` and `ctx.update_token_metadata` will make no effect. Corresponding internal tables are created on reindexing in any way.
+Do not start the job scheduler until all indexes are synchronized. If your jobs perform some calculations that make sense only after the indexer has reached realtime, this toggle can save you some IOPS.

 ## Skip version check

 Disables warning about running unstable or out-of-date DipDup version.

-## Crash reporting
+## Internal environment variables

-Enables sending crash reports to the Baking Bad team. This is **disabled by default**. You can inspect crash dumps saved as `/tmp/dipdup/crashdumps/XXXXXXX.json` before enabling this option.
+DipDup uses multiple environment variables internally. They are read once on process start and usually do not change during runtime. Some variables modify the framework's behavior, while others are informational.
+
+Please note that they are not currently a part of the public API and can be changed without notice.
+
+| env variable          | module path               | description                                                           |
+| --------------------- | ------------------------- | --------------------------------------------------------------------- |
+| `DIPDUP_CI`           | `dipdup.env.CI`           | Running in GitHub Actions                                             |
+| `DIPDUP_DOCKER`       | `dipdup.env.DOCKER`       | Running in Docker                                                     |
+| `DIPDUP_DOCKER_IMAGE` | `dipdup.env.DOCKER_IMAGE` | Base image used when building Docker image (default, slim or pytezos) |
+| `DIPDUP_NEXT`         | `dipdup.env.NEXT`         | Enable features that require schema changes                           |
+| `DIPDUP_PACKAGE_PATH` | `dipdup.env.PACKAGE_PATH` | Path to the currently used package                                    |
+| `DIPDUP_REPLAY_PATH`  | `dipdup.env.REPLAY_PATH`  | Path to datasource replay files; used in tests                        |
+| `DIPDUP_TEST`         | `dipdup.env.TEST`         | Running in pytest                                                     |
+
+The `DIPDUP_NEXT` flag gives you a picture of what's coming in the next major release, but enabling it on an existing schema will trigger reindexing.

docs/context-reference.rst

Lines changed: 19 additions & 2 deletions
@@ -1,2 +1,19 @@
-.. automodule:: dipdup.context
-   :members: DipDupContext, HandlerContext, HookContext
+.. autoclass:: dipdup.context.DipDupContext
+.. autoclass:: dipdup.context.HandlerContext
+.. autoclass:: dipdup.context.HookContext
+
+.. automethod:: dipdup.context.DipDupContext.add_contract
+.. automethod:: dipdup.context.DipDupContext.add_index
+.. automethod:: dipdup.context.DipDupContext.execute_sql
+.. automethod:: dipdup.context.DipDupContext.execute_sql_query
+.. automethod:: dipdup.context.DipDupContext.fire_hook
+.. automethod:: dipdup.context.DipDupContext.get_coinbase_datasource
+.. automethod:: dipdup.context.DipDupContext.get_http_datasource
+.. automethod:: dipdup.context.DipDupContext.get_ipfs_datasource
+.. automethod:: dipdup.context.DipDupContext.get_metadata_datasource
+.. automethod:: dipdup.context.DipDupContext.get_tzkt_datasource
+.. automethod:: dipdup.context.DipDupContext.reindex
+.. automethod:: dipdup.context.DipDupContext.restart
+.. automethod:: dipdup.context.DipDupContext.update_contract_metadata
+.. automethod:: dipdup.context.DipDupContext.update_token_metadata
+.. automethod:: dipdup.context.HookContext.rollback

docs/getting-started/project-structure.md

Lines changed: 1 addition & 0 deletions
@@ -96,3 +96,4 @@ The same rules apply to handler callbacks. Note that the `callback` field must b
 * {{ #summary advanced/hooks.md }}
 * {{ #summary advanced/sql.md }}
 * {{ #summary graphql/hasura.md }}
+```

docs/graphql/README.md

Lines changed: 5 additions & 7 deletions
@@ -29,13 +29,11 @@ Hasura GraphQL engine subscriptions are **live queries**, i.e., a subscription w

 This feature is essential to avoid complex state management (merging query results and subscription feed). In most scenarios, live queries are what you need to sync the latest changes from the backend.

-> **WARNING**
->
-> If the live query has a significant response size that does not fit into the limits, you need one of the following:
->
-> 1. Paginate with offset (which is not convenient)
-> 2. Use cursor-based pagination (e.g., by an increasing unique id).
-> 3. Narrow down request scope with filtering (e.g., by timestamp or level).
+If the live query has a significant response size that does not fit into the limits, you need one of the following:
+
+1. Paginate with offset (which is not convenient).
+2. Use cursor-based pagination (e.g., by an increasing unique id).
+3. Narrow down the request scope with filtering (e.g., by timestamp or level).

 Ultimately, you can get "subscriptions" on top of live queries by requesting all the items having an ID greater than the maximum existing one, or all the items with a timestamp greater than now.
0 commit comments