Support execution proofs with inputs bigger than 1MiB#2196
tomaka merged 5 commits into paritytech:main
Conversation
Right now, execution proofs with an input bigger than 1 MiB fail to execute. The problem is that the req-resp configuration only allows up to 1 MiB for the initial request. Such a big input may happen for `validate_transaction` when, for example, doing a runtime upgrade. This pull request fixes this by checking whether the input is too big and, if so, starting an on-demand execution: smoldot starts executing the function and requests storage entries on demand. This increases the latency when executing functions that require a big input, but these should not be that common anyway. On the positive side, this makes smoldot support these big inputs without requiring any protocol changes. Closes: paritytech#1526 Closes: paritytech#2089
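The dispatch described above can be sketched roughly as follows. The constant name matches the one introduced by this PR; the function and the strategy labels are illustrative placeholders, not smoldot's actual API:

```rust
// Hypothetical sketch of the size-based dispatch described in the PR.
// The constant name comes from the diff below; everything else is illustrative.
const CALL_PROOF_REQUEST_PARAMETERS_SIZE_THRESHOLD: usize = 900 * 1024;

fn choose_strategy(parameters: &[u8]) -> &'static str {
    if parameters.len() <= CALL_PROOF_REQUEST_PARAMETERS_SIZE_THRESHOLD {
        // Small input: a single call proof request fits within the 1 MiB limit.
        "single-call-proof-request"
    } else {
        // Input too big for the request limit: execute the call locally and
        // fetch storage entries on demand instead.
        "on-demand-execution"
    }
}

fn main() {
    assert_eq!(choose_strategy(&[0u8; 1024]), "single-call-proof-request");
    assert_eq!(choose_strategy(&vec![0u8; 2 * 1024 * 1024]), "on-demand-execution");
    println!("ok");
}
```

The trade-off is exactly the one stated above: the on-demand path avoids any protocol change at the cost of extra round-trips for storage entries.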
tomaka
left a comment
Thanks! The solution actually seems so obvious that I wonder why we didn't do this before.
I think it would have been easier to support child tries by just adding a `child_trie: Option<Vec<u8>>` field to the existing code for storage requests, but since you did the work already, it's fine as it is.
I couldn't really spot any mistake except for a few nits. I'm not a huge fan of the logic in runtime_service.rs (I feel like we could de-duplicate a lot of code), but it's hard to tell you how it should be done without trying myself to implement it differently, so I think it's fine as it is as well.
Did you try whether it works? The logic in runtime_service is a bit hard to read, but if you tried it and it worked, that's good enough for me.
Could you also add an entry at the top of CHANGELOG.md?
light-base/src/runtime_service.rs
/// call proof requests. This avoids exceeding protocol limits for large transactions.
/// The light protocol has a 1 MiB request size limit, so we use 900 KiB to leave
/// room for the protobuf encoding overhead (block hash, method name, etc.).
const CALL_PROOF_REQUEST_PARAMETERS_SIZE_THRESHOLD: usize = 900 * 1024;
I think it would be better to add a constant with the actual limit (1MiB) to src/libp2p/network/codec/storage_call_proof.rs, and then do something like LIMIT.saturating_sub(1024) here.
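A minimal sketch of that suggestion, assuming the hard limit lives in the codec module; the constant names (other than the threshold) are illustrative, not smoldot's actual identifiers:

```rust
// Sketch of the reviewer's suggestion: define the protocol's hard limit once
// (in storage_call_proof.rs) and derive the threshold from it, instead of
// hard-coding 900 KiB. The name MAX_CALL_PROOF_REQUEST_SIZE is an assumption.
const MAX_CALL_PROOF_REQUEST_SIZE: usize = 1024 * 1024; // 1 MiB protocol limit

// In runtime_service.rs: leave ~1 KiB of headroom for protobuf overhead
// (block hash, method name, etc.). saturating_sub avoids underflow if the
// limit is ever configured below 1 KiB.
const CALL_PROOF_REQUEST_PARAMETERS_SIZE_THRESHOLD: usize =
    MAX_CALL_PROOF_REQUEST_SIZE.saturating_sub(1024);

fn main() {
    assert_eq!(CALL_PROOF_REQUEST_PARAMETERS_SIZE_THRESHOLD, 1_047_552);
    println!("{}", CALL_PROOF_REQUEST_PARAMETERS_SIZE_THRESHOLD);
}
```

This way the threshold cannot silently drift if the protocol limit ever changes.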
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
Yes, I have run it against a local node.
@tomaka applied your comments.