Conversation
```rust
/// Returns the highest block number that has been proven, or `None` if no blocks have been
/// proven yet.
#[instrument(level = "debug", target = COMPONENT, skip_all, ret(level = "debug"), err)]
pub async fn select_latest_proven_block_num(&self) -> Result<Option<BlockNumber>> {
```
How do we treat the genesis block? Should it not always be considered proven?
Updated the comments to reflect this: we treat the genesis block as proven.
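A tiny sketch of that convention, under the assumption that genesis (block 0) always counts as proven so the query has a floor of `Some(0)` on a live chain. `proven` here is a hypothetical in-memory stand-in for the proven `block_headers` rows, not the node's actual query:

```rust
/// Hypothetical stand-in for the DB query: highest proven block number.
/// Genesis (block 0) is considered proven by definition.
fn select_latest_proven_block_num(proven: &[u32]) -> Option<u32> {
    proven.iter().copied().max().or(Some(0))
}

fn main() {
    // No explicitly proven blocks yet: genesis still counts.
    assert_eq!(select_latest_proven_block_num(&[]), Some(0));
    // Otherwise, the highest proven block wins.
    assert_eq!(select_latest_proven_block_num(&[1, 2, 5]), Some(5));
}
```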
```rust
// Mark all sequentially proven blocks as completed.
while latest_complete.child().as_u32() < lowest_in_flight.as_u32() {
    latest_complete = latest_complete.child();
    db.mark_block_proven(latest_complete)
```
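The loop above can be modeled with plain `u32` block numbers (an assumption standing in for `BlockNumber`, whose `child()` becomes `n + 1` here): every block strictly between the latest fully-proven block and the lowest still-in-flight one must have completed, so it can be marked proven in order.

```rust
/// Returns the blocks that would be marked proven, given the latest
/// fully-proven block and the lowest block still being proven.
fn blocks_to_mark(mut latest_complete: u32, lowest_in_flight: u32) -> Vec<u32> {
    let mut marked = Vec::new();
    // `child()` of block N is modeled as N + 1.
    while latest_complete + 1 < lowest_in_flight {
        latest_complete += 1;
        // In the real code this is where db.mark_block_proven(...) runs.
        marked.push(latest_complete);
    }
    marked
}

fn main() {
    // Blocks 4 and 5 finished proving while block 6 is still in flight.
    assert_eq!(blocks_to_mark(3, 6), vec![4, 5]);
    // Nothing to mark while the very next block is still in flight.
    assert_eq!(blocks_to_mark(3, 4), Vec::<u32>::new());
}
```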
Note: this now breaks the concept of having a single write-only connection, since we might end up here after proving all instances while apply_block is still running. Am I missing something?
apply_block will only ever affect a single row once, right? And this query (the column delete) will only ever happen after that.
This is fine as far as I can tell; we just need to find a better model than the single-writer-connection approach. CC @Mirko-von-Leipzig
I have an alternative which we can discuss and pursue in a follow-up PR.
We can add another Watch channel which is the inverse of the apply_block::chain_tip --> proof_scheduler one. The proof_scheduler never updates the DB itself, it just sets the latest proven block in the watch channel. On every apply_block, we update the proven block as well.
This adds a bit of latency to the marking, but considering the latency itself is 30s+, adding another 3s (worst case, avg 1.5s) doesn't seem too bad.
Whether this is worth it 🤷 I do like isolating database writes so we know for sure there are no problems.
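The alternative described above can be sketched std-only: the proof scheduler never writes to the DB, it only publishes the latest proven block number, and apply_block (the single writer) flushes that value on its next run. A real implementation would presumably use a `tokio::sync::watch` channel; here an `Arc<AtomicU32>` stands in for the channel, and `Db` is a hypothetical stand-in for the store:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;

/// Hypothetical stand-in for the store's single-writer connection.
struct Db {
    proven_up_to: u32,
}

/// apply_block side: alongside committing the block, also persist whatever
/// the scheduler has published since the last run.
fn flush_latest_proven(db: &mut Db, latest_proven: &AtomicU32) {
    db.proven_up_to = latest_proven.load(Ordering::Acquire);
}

fn main() {
    let latest_proven = Arc::new(AtomicU32::new(0));

    // Proof scheduler side: publish only, never touch the DB.
    let publisher = Arc::clone(&latest_proven);
    publisher.store(7, Ordering::Release);

    // Next apply_block picks it up and does the single DB write.
    let mut db = Db { proven_up_to: 0 };
    flush_latest_proven(&mut db, &latest_proven);
    assert_eq!(db.proven_up_to, 7);
}
```

This keeps all writes on one connection at the cost of the marking lagging by at most one block interval, which is the latency trade-off discussed above.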
```sql
block_header BLOB NOT NULL,
signature BLOB NOT NULL,
commitment BLOB NOT NULL,
proving_inputs BLOB, -- Serialized BlockProofRequest needed for deferred proving. NULL if it has been proven or never proven (genesis block).
```
nit: leave a TODO that the size might become a problem in the future
Context

We are adding deferred (asynchronous) block proving for the node, as described in #1592. Currently, block proving happens synchronously during `apply_block`, which means block commitment is blocked until the proof is generated. Blocks will now exhibit committed (not yet proven) and proven states. A committed block is already part of the canonical chain and fully usable. Clients that require proof-level finality can opt into it via the new `finality` parameter on `SyncChainMmr`.

Changes

- Added `proving_inputs BLOB` to the `block_headers` table, with a partial index for querying proven (`proving_inputs IS NULL`) blocks.
- Proofs are stored as files in the `BlockStore` (following the existing block file pattern) rather than as BLOBs in SQLite.
- New DB queries: `mark_block_proven`, `select_block_proving_inputs` (returns a deserialized `BlockProofRequest`), and `select_latest_proven_block_num`.
- `apply_block`: the `BlockProofRequest` is now serialized and persisted alongside the block during `apply_block`.
- Added a proof scheduler (`proof_scheduler.rs`) that drives deferred proving. It queries unproven blocks on startup (restart recovery), listens for new block commits via `Notify`, and proves blocks concurrently using `FuturesOrdered` for FIFO completion ordering. Proofs are saved to files, then the block is marked proven in the DB.
- `SyncChainMmr`: added a `Finality` enum (`COMMITTED`, `PROVEN`) to the protobuf and a `finality` field on `SyncChainMmrRequest`.
- `apply_block` query: introduced an `ApplyBlockData` struct to replace the 7-parameter function signature.
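The FIFO completion ordering the scheduler gets from `FuturesOrdered` can be illustrated std-only: proofs may finish out of order, but results are only released in submission order, so blocks are always marked proven sequentially. The `FifoReleaser` below is a hypothetical reorder buffer playing the role `FuturesOrdered` plays in the real scheduler:

```rust
use std::collections::BTreeMap;

/// Accepts (block_num, proof) completions in any order and returns the
/// blocks that can now be released in strict submission order.
struct FifoReleaser {
    next: u32,
    reorder: BTreeMap<u32, &'static str>,
}

impl FifoReleaser {
    fn complete(&mut self, block: u32, proof: &'static str) -> Vec<(u32, &'static str)> {
        self.reorder.insert(block, proof);
        let mut released = Vec::new();
        // Drain the buffer only while the next expected block is present.
        while let Some(proof) = self.reorder.remove(&self.next) {
            released.push((self.next, proof));
            self.next += 1;
        }
        released
    }
}

fn main() {
    let mut r = FifoReleaser { next: 1, reorder: BTreeMap::new() };
    // Block 2's proof finishes first, but block 1 is still pending.
    assert!(r.complete(2, "proof-2").is_empty());
    // Block 1 finishing releases both, in order.
    assert_eq!(r.complete(1, "proof-1"), vec![(1, "proof-1"), (2, "proof-2")]);
}
```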