We want to expose some event metadata to users of Aztec.js (see:
#16245).
An important metadata attribute is the block hash. Block number alone is not
enough due to the possibility of reorgs.
Currently, in archiver nodes, logs are stored by block number, and the
corresponding block hash is lost. This is fine for the archiver itself, because
it is designed to prune its stores and start over each time a reorg is
detected.
Users who download data from the node, however, have no reasonably efficient
way of detecting whether the log data they downloaded is still valid, unless
they also receive a block hash to which they can pin those logs.
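For illustration, here's a minimal sketch of what that client-side check could look like; `NodeLike`, `getBlockHash`, and `LogWithMetadata` are hypothetical shapes made up for this example, not the actual Aztec.js API:

```ts
// Hypothetical shapes, just to illustrate the check; not the real Aztec.js types.
interface LogWithMetadata {
  blockNumber: number;
  blockHash: string; // the metadata attribute this PR exposes
  log: unknown;
}

interface NodeLike {
  getLogs(fromBlock: number, toBlock: number): Promise<LogWithMetadata[]>;
  getBlockHash(blockNumber: number): Promise<string | undefined>;
}

// Keeps only the logs whose originating block is still on the canonical chain.
// Without a block hash attached to each log, this check cannot be done reliably
// after a reorg.
async function filterStillValidLogs(
  node: NodeLike,
  logs: LogWithMetadata[],
): Promise<LogWithMetadata[]> {
  const valid: LogWithMetadata[] = [];
  for (const entry of logs) {
    const currentHash = await node.getBlockHash(entry.blockNumber);
    if (currentHash === entry.blockHash) {
      valid.push(entry); // same hash => the block (and its logs) survived any reorg
    }
  }
  return valid;
}
```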
So we can bridge this gap in one of two ways:
1. Whenever the node gets a request for logs, it queries the block store to
resolve block hashes from block numbers. This works because the archiver
provides strong guarantees that the stored data corresponds to only one chain.
However, it introduces more store reads.
2. At the time of storing logs, we also store the block hash corresponding to
the block number at that moment (see the sketch after this list). This
increases the size needed to store the logs of each indexed block by 1 Fr, but
in exchange we can simply unpack the block hash from the logs we're processing,
without extra reads from the block store. It also makes block addition slightly
more expensive, since we'll potentially be computing one hash per block
(although since `addBlocks` receives the blocks and `hash()` is cached, maybe
that's not even the case?)
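To make the trade-off concrete, here's a rough sketch of option 2's storage layout under assumed names (`FR_SIZE`, `BlockLogs`, and the (de)serialization helpers are made up for illustration, not the archiver's real code):

```ts
// Minimal sketch of option 2's layout: the block hash is prepended to the
// serialized logs of each block, costing one extra Fr per indexed block.
const FR_SIZE = 32; // one field element, the extra per-block storage cost

interface BlockLogs {
  blockNumber: number;
  blockHash: Uint8Array; // 32 bytes, computed once when the block is added
  logsData: Uint8Array;  // already-serialized logs for this block
}

// Store: prepend the block hash so reads never need to touch the block store.
function serializeBlockLogs(entry: BlockLogs): Uint8Array {
  const out = new Uint8Array(FR_SIZE + entry.logsData.length);
  out.set(entry.blockHash, 0);
  out.set(entry.logsData, FR_SIZE);
  return out;
}

// Read: unpack the block hash straight from the stored payload (option 2),
// instead of resolving it via an extra block-store lookup (option 1).
function deserializeBlockLogs(blockNumber: number, data: Uint8Array): BlockLogs {
  return {
    blockNumber,
    blockHash: data.slice(0, FR_SIZE),
    logsData: data.slice(FR_SIZE),
  };
}
```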
This PR implements option 2, but if I'm hitting any no-nos, please let me know.
Note that this PR bumps `ARCHIVER_DB_VERSION`, since it modifies how we index
public and contract class logs.
Closes F-220