Cross-checking data in state fraud proof with published rollup data #775
-
We could go ahead and implement Solution 1 first if we can put some reasonable limit on rollup block size, but are there any drawbacks that I missed between these solutions?
-
Why not implement the solution in section 4.3 of https://arxiv.org/pdf/1809.09044.pdf ("solution 3")? Solution 1 defeats the point of compact fraud proofs (you might as well not even use ISRs at that point), so it's a different discussion altogether (whether we want compact fraud proofs or not). Solution 2 is suboptimal because it requires light nodes to download all ISRs and O(n) transaction metadata. The solution in the paper (as well as the microblocks solution below) avoids those pitfalls. However, there is an acceptable version of Solution 1 based on "microblocks" ("solution 1.2"). As far as I know, the original plan was to not use ISRs but "microblocks" of one or a few transactions each, with multiple microblocks per DA block (in the same PFB batch), because ISRs were deemed too difficult to implement (similar to Optimism v1). Then we went back to ISRs, if I recall correctly, because an easy way to implement them was found. Does someone have a record of that discussion? This would be an acceptable solution. cc @jbowen93 @adlerjohn
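To make the "solution 1.2" idea concrete, here is a minimal, hypothetical Go sketch of what a microblock could look like; the `Microblock` type and its fields are illustrative assumptions for this thread, not an existing Rollkit structure.

```go
package microblocks

// Microblock is a hypothetical sketch of the "solution 1.2" idea above:
// instead of an ISR after every transaction, the rollup block is split into
// microblocks of one (or a few) transactions, each committing to its own
// post-state root. Several microblocks are posted per DA block in one PFB batch.
type Microblock struct {
	Txs       [][]byte // one or a few rollup transactions
	StateRoot [32]byte // state root after executing Txs
}

// A fraud proof would then target a single microblock, so a verifier only
// needs that microblock plus the previous microblock's StateRoot (the
// pre-state) to replay the disputed state transition.
```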
-
Solution 4

The serialization scheme uses two separate namespaces for transactions and intermediate state roots (ISRs). Transactions are divided into fixed-size shares, with their length (TxL) prepended to each transaction. Transactions may span multiple shares, allowing for efficient storage of larger transactions. The ISRs are stored in a separate namespace with a fixed length, and a byte indicates the starting position of the first ISR in each share. To match transactions and ISRs, IDs are used to mark the beginning of each share in both namespaces. Only the first transaction and ISR in a share include an ID, as the rest can be derived from their preceding components.

Applying the example: we have two shares for transactions and one for ISRs. In the transaction namespace, Tx3 spans both shares, and TxL is used to indicate the length of each transaction. In the ISR namespace, the single share contains all 6 ISRs, and a byte denotes the starting position of the first ISR. The IDs are used to match the transactions with their corresponding ISRs.

Transaction Namespace (Share 1):
Transaction Namespace (Share 2):
ISR Namespace (Share 1):
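A rough Go sketch of this two-namespace layout, under assumed parameters (a made-up `shareSize`, uvarint `TxL` prefixes, a single offset byte for the ISR share, and the per-share IDs omitted for brevity); none of this reflects the actual Celestia/Rollkit share encoding.

```go
package solution4

import "encoding/binary"

// shareSize is an assumed fixed share size used only for illustration.
const shareSize = 256

// buildTxShares packs transactions into fixed-size shares in the transaction
// namespace. Each transaction is prefixed with its length (TxL) as a uvarint,
// and a transaction may span multiple shares.
func buildTxShares(txs [][]byte) [][]byte {
	var stream []byte
	for _, tx := range txs {
		var txl [binary.MaxVarintLen64]byte
		n := binary.PutUvarint(txl[:], uint64(len(tx)))
		stream = append(stream, txl[:n]...)
		stream = append(stream, tx...)
	}
	// Split the continuous stream into fixed-size shares, zero-padding the last one.
	var shares [][]byte
	for len(stream) > 0 {
		n := shareSize
		if len(stream) < n {
			n = len(stream)
		}
		share := make([]byte, shareSize)
		copy(share, stream[:n])
		shares = append(shares, share)
		stream = stream[n:]
	}
	return shares
}

// buildISRShare packs fixed-length ISRs into a single share in the ISR
// namespace. The first byte records the offset of the first ISR in the share,
// so readers of an out-of-context share can realign to ISR boundaries.
func buildISRShare(isrs [][32]byte) []byte {
	share := make([]byte, 1, shareSize)
	share[0] = 1 // the first ISR starts right after this offset byte
	for _, isr := range isrs {
		share = append(share, isr[:]...)
	}
	return share
}
```

With the six-ISR example above and the assumed 256-byte `shareSize`, `buildISRShare` fits all ISRs into a single share, matching the described layout.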
-
Solution 3

We are going with Solution 3. The flow will be this:

This will result in a sparse share looking like a compact share for Rollkit. By using compact shares, we can read out-of-context shares because we have the location of the first unit per share. An optimization could be to include only the pre- or the post-ISR, not both, in the wrapped transaction, to reduce duplicated data.
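As a rough illustration of the wrapped-transaction idea (and of the pre/post-ISR optimization just mentioned), here is a hedged Go sketch; the type and field names are assumptions for this discussion, not Rollkit's actual types.

```go
package solution3

// ISR is an intermediate state root; 32 bytes is assumed here for illustration.
type ISR [32]byte

// WrappedTx sketches a rollup transaction wrapped together with the ISRs
// surrounding its execution, so a fraud-proof verifier can check a single
// state transition from an out-of-context share.
type WrappedTx struct {
	PreISR  ISR    // state root before executing Tx
	Tx      []byte // the raw rollup transaction
	PostISR ISR    // state root after executing Tx; keeping only one of the
	// two ISRs (pre or post) would avoid duplicating data across
	// consecutive wrapped transactions, as suggested above
}
```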
-
Problem
When a verifier/light client receives a State Fraud Proof, it must verify that the transaction data and intermediate state roots (ISRs) inside the proof were actually part of the data commitment of a rollup block published to the rollup's DA layer. This verification step should happen before the verifier proceeds to the state transition execution step.
The three main components in the State Fraud Proof that need cross-checking with the DA layer are:
Note: ISRs need not be present after every transaction and can span a period, `p`. So ISRs get calculated every `p` transactions. We assume `p = 1` for simplification in this post.

Proposed Solutions
Solution 1
The naive solution can be to include a reference to the relevant rollup block in the State Fraud Proof, let the verifier download the whole rollup block, and cross-check the above components against the contents of the rollup block. This approach is fine if we can put a low limit on the rollup block size. One could argue that since any Full Node can gossip State Fraud Proofs to the network, they can force every verifier to download all the blocks and thus force it to become a Full Node. However, a valid response is that this is a well-known griefing vector for fraud-proof-based systems, and issues such as #386 tackle it.
Note: A reference to a rollup block is dependent on the DA layer. For example, it can look something like (DA block height, share index, number of shares the block spans), as described in this DA Interface ADR.
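A minimal sketch of the Solution 1 cross-check, assuming `p = 1` and a hypothetical `DAClient` that returns the full rollup block for a reference shaped like the ADR's (DA block height, share index, share count); the types and method names here are illustrative, not an existing API.

```go
package solution1

import (
	"bytes"
	"errors"
)

// BlockRef points to a rollup block inside the DA layer, shaped like the
// reference described in the note above (DA block height, first share index,
// number of shares the block spans).
type BlockRef struct {
	DAHeight   uint64
	ShareStart uint64
	ShareLen   uint64
}

// FraudProof is a minimal, hypothetical subset of a State Fraud Proof.
type FraudProof struct {
	BlockRef BlockRef
	Tx       []byte   // the allegedly fraudulent transaction
	PreISR   [32]byte // claimed ISR before executing Tx
	PostISR  [32]byte // claimed ISR after executing Tx
}

// DAClient abstracts "download the whole rollup block from the DA layer".
type DAClient interface {
	GetBlock(ref BlockRef) (txs [][]byte, isrs [][32]byte, err error)
}

// CrossCheck downloads the full rollup block referenced by the proof and
// verifies that the proof's transaction and ISRs appear in it at consistent
// positions (p = 1, so isrs[k] and isrs[k+1] bracket txs[k]).
func CrossCheck(da DAClient, fp FraudProof) error {
	txs, isrs, err := da.GetBlock(fp.BlockRef)
	if err != nil {
		return err
	}
	for k, tx := range txs {
		if !bytes.Equal(tx, fp.Tx) || k+1 >= len(isrs) {
			continue
		}
		if isrs[k] == fp.PreISR && isrs[k+1] == fp.PostISR {
			return nil // proof data is part of the published rollup block
		}
	}
	return errors.New("fraud proof data not found in the referenced rollup block")
}
```

Only after this check passes would the verifier move on to re-executing the disputed state transition.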
Solution 2
In the case of large rollup blocks, where it might not be practical for verifiers to download the whole rollup block due to lower bandwidth/hardware requirements, we can define a custom rollup block serialization scheme so that only the parts of the block that are actually needed have to be downloaded.
Assume that a rollup block spans a number of shares with a fixed size, `shareSize`. We can reserve the first share(s) to hold block metadata consisting of:

- `N`: Number of transactions, first 8 bytes
- `S`: List of transaction sizes where `Si = length(Txi)`, next `8 * N` bytes
- `ISRs`: List of intermediate state roots between transactions, next `8 * (N / p)`

For `k` = the fraudulent transaction index, when a verifier receives a State Fraud Proof, it can start by downloading the block metadata, cross-checking the ISRs, and then calculating the share index (or indices) at which `Txk` lives using only the sizes `S1..Sk` and `shareSize`. Then, it can go on to download those share(s) by constructing a reference just like in Solution 1 and cross-check the transaction data against them as well.
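A small Go sketch of the Solution 2 lookup, under the same assumptions as above (`p = 1`, 8-byte sizes, and a made-up `Metadata` struct mirroring the reserved metadata share(s)); it only shows how a verifier turns the size list `S` and `shareSize` into a share range for `Txk`.

```go
package solution2

// Metadata mirrors the reserved metadata share(s) described above: the
// transaction count N, the size list S, and the ISR list (p = 1 assumed).
type Metadata struct {
	N    uint64
	S    []uint64   // S[i] = length(Tx_i) in bytes
	ISRs [][32]byte // one ISR per transaction boundary
}

// TxShareRange returns the first and last share index (0-based, counted from
// the start of the transaction data) occupied by Tx_k, given the fixed
// shareSize. The byte offset of Tx_k is the sum of S[0..k-1]; assumes S[k] > 0.
func TxShareRange(md Metadata, k, shareSize uint64) (first, last uint64) {
	var offset uint64
	for i := uint64(0); i < k; i++ {
		offset += md.S[i]
	}
	first = offset / shareSize
	last = (offset + md.S[k] - 1) / shareSize
	return first, last
}
```

The verifier can then build a reference just like in Solution 1 (offset by the reserved metadata share(s)) and fetch only those shares to cross-check the transaction data.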