@@ -62,25 +62,9 @@ make them catch up with others even if they are HOT, see [chain](chain/README.md
❌ Root key must be newtype struct with numeric inner type (that's part of the design decision to achieve fast indexing of even the whole Bitcoin chain)

-### Why and when redb?
+### Blockchains

-Redb is copy-on-write (COW) B+Tree based, so in comparison to an LSM tree with a WAL or a log-structured heap, in order
-to avoid hammering our SSD with random-access writes, i.e. to drastically reduce write amplification, we need to:
-
-- systematically combine durable and non-durable writes to leverage the Linux VM (page cache) and reduce the number of fsync calls
-- sort all data in batches before writing it to reduce tree-building overhead
-- both solved by parallelizing writes to all columns into long-running batching threads
-
-### Why Macros?
-
-1. Rust's type system is not as expressive as e.g. Haskell's or Scala's, for performance reasons
-2. Rust's macro system is powerful and straightforward to use
-
-So I find model-driven development with code generation a great fit for Rust. It performs very well unless we generate ~50k lines of code,
-which would be the case for deeply nested entities with many indexes and dictionaries.
-
-The core idea is deriving R/W entity methods and the nested entity definition (`println!("{:#?}", Block::definition()?);`) from struct annotations.
-The definition holds all the entity meta information and is used to create the rich R/W transaction contexts that the derived entity R/W methods use.
+See [chain](./chain) and [chains](./chains).

### Development

@@ -338,174 +322,30 @@ Deleting blocks:");

The same API is accessible through HTTP endpoints at http://127.0.0.1:3033/swagger-ui/.

-### ⏱ Redbit benchmarks (results from GitHub servers)
-
-The demo example persists data into 24 tables to allow for rich querying. Each `index` is backed by 2 tables and each `dictionary` by 4 tables.
-Each PK, FK, simple column, index or dictionary is backed by its own redb DB and a long-running indexing thread. If you have 20 of these, you are still
-fine on a Raspberry Pi; consider a stronger machine for deeply nested entities with many indexes and dictionaries.
+### FAQ

-The indexing process is always only as fast as its slowest column, i.e. the one that, compared to the others, has bigger values, more values, or a combination of both.
+**Why and when redb?**

-See [chain](./chain) for more details on performance and data size.
+Redb is copy-on-write (COW) B+Tree based, so in comparison to an LSM tree with a WAL or a log-structured heap, in order
+to avoid hammering our SSD with random-access writes, i.e. to drastically reduce write amplification, we need to:

-The `persist/remove` methods are slower because each bench iteration opens ~ 34 new databases for the whole block.
-The throughput is ~ **10 000 blocks/s** in batch mode, which is ~ **300 000 db rows/s**, until the B+Tree grows significantly
-=> write amplification increases and the kernel page cache is fully utilized => the kernel throttles writes.
-
-The `block::_store_many` operation in this context writes and commits 3 blocks of 3 transactions, each transaction with 1 input and 3 utxos, each utxo with 3 assets, i.e.
-the operation writes:
-- 3 blocks
-- 3 * 3 = 9 transactions
-- 3 * 3 = 9 inputs
-- 3 * 3 * 3 = 27 utxos
-- 3 * 3 * 3 * 3 = 81 assets
-
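Those bullets add up to 129 rows per committed batch; a quick worked check of the totals:

```rust
fn main() {
    // Rows written per `block::_store_many` commit in the shape above:
    // 3 blocks, each with 3 transactions; each transaction has 1 input
    // and 3 utxos; each utxo has 3 assets.
    let blocks = 3;
    let transactions = blocks * 3; // 9
    let inputs = transactions * 1; // 9 (1 input per transaction)
    let utxos = transactions * 3; // 27
    let assets = utxos * 3; // 81
    assert_eq!(blocks + transactions + inputs + utxos + assets, 129); // rows per commit
}
```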
-The `block::_first` operation reads a whole block with all its transactions, inputs, utxos and assets.
+- systematically combine durable and non-durable writes to leverage the Linux VM (page cache) and reduce the number of fsync calls
+- sort all data in batches before writing it to reduce tree-building overhead
+- both solved by parallelizing writes to all columns into long-running batching threads (sketched below)
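A minimal sketch of that batching pattern (not redbit's actual code; it assumes redb 2.x, a single `u64 -> bytes` column, and a hypothetical batch size of 10 000):

```rust
use redb::{Database, Durability, TableDefinition};

const COL: TableDefinition<u64, &[u8]> = TableDefinition::new("col");

/// Sort a batch and write it in one transaction; only some commits are durable.
fn flush_batch(db: &Database, batch: &mut Vec<(u64, Vec<u8>)>, durable: bool) -> Result<(), redb::Error> {
    batch.sort_by_key(|(k, _)| *k); // sorted inserts reduce B+Tree rebuilding overhead
    let mut txn = db.begin_write()?;
    // Non-durable commits skip fsync and let the Linux page cache absorb the writes;
    // an occasional durable commit bounds how much work a crash could lose.
    txn.set_durability(if durable { Durability::Immediate } else { Durability::None });
    {
        let mut table = txn.open_table(COL)?;
        for (k, v) in batch.drain(..) {
            table.insert(k, v.as_slice())?;
        }
    }
    txn.commit()?;
    Ok(())
}

/// One long-running batching thread per column database.
fn spawn_column_writer(db: Database) -> std::sync::mpsc::Sender<(u64, Vec<u8>)> {
    let (tx, rx) = std::sync::mpsc::channel();
    std::thread::spawn(move || {
        let mut batch = Vec::with_capacity(10_000);
        for kv in rx {
            batch.push(kv);
            if batch.len() == 10_000 {
                flush_batch(&db, &mut batch, false).expect("flush failed");
            }
        }
        flush_batch(&db, &mut batch, true).expect("final durable flush"); // durable on shutdown
    });
    tx
}
```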

+**Why Macros?**

-## Chain
+1. Rust's type system is not as expressive as e.g. Haskell's or Scala's, for performance reasons
+2. Rust's macros are powerful, easy to use and maintain, and insanely fast to compile; there is very little compile-time overhead
+3. Code generation speeds up runtime at hot spots; think of it as inlining
+   - for instance, an SQL layer has the huge overhead of parsing, planning, optimizing and executing a query for each inserted row
+   - the redbit macros derive the exact R/W code for the user's commands, so the only overhead is either:
+     - serialization of the value to bytes, or
+     - handing a reference to redb when we use `Vec<u8>` or `&[u8]` directly (see the sketch after this list)
+   - we write from many threads (the CPU can get fully utilized) and a single dispatching thread would otherwise become a bottleneck
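A sketch of that zero-copy case, assuming redb 2.x (the table name and function are illustrative, not redbit's API): when the column already holds raw bytes, the generated code can hand the borrowed slice straight to redb with no serialization step.

```rust
use redb::{Database, TableDefinition};

// Illustrative raw-bytes column: u64 key -> &[u8] value.
const RAW: TableDefinition<u64, &[u8]> = TableDefinition::new("raw_col");

fn put_raw(db: &Database, key: u64, bytes: &[u8]) -> Result<(), redb::Error> {
    let txn = db.begin_write()?;
    {
        let mut table = txn.open_table(RAW)?;
        table.insert(key, bytes)?; // the reference goes straight to redb; no encoding pass
    }
    txn.commit()?;
    Ok(())
}
```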
+
+So I find model-driven development with code generation a great fit for Rust and blockchains. It performs very well unless we generate ~50k lines of code,
+which would be the case for deeply nested entities with many indexes and dictionaries.

-See [chain](./chain)
+The core idea is deriving R/W entity methods and the nested entity definition (`println!("{:#?}", Block::definition()?);`) from struct annotations.
+The definition holds all the entity meta information and is used to create the rich R/W transaction contexts that the derived entity R/W methods use.
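For illustration, a sketch of what such an annotated entity could look like (the attribute names here are hypothetical, not redbit's exact syntax; see the repository examples for the real annotations):

```rust
// Hypothetical attribute names -- illustrative only, not redbit's exact syntax.
pub struct Height(pub u32); // root key must be a newtype over a numeric inner type

#[entity]
pub struct Block {
    #[pk]
    pub height: Height,
    #[index]                  // an index is backed by 2 extra tables
    pub hash: [u8; 32],
    #[dictionary]             // a dictionary is backed by 4 tables
    pub miner_address: String,
}

// The macro then derives R/W methods plus a self-describing definition:
// println!("{:#?}", Block::definition()?);
```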