
Conversation


@LeilaWang LeilaWang commented Oct 15, 2025

Instead of setting and inserting the outHash of a checkpoint when proposing it, the circuits now aggregate all outHashes in an epoch to build an unbalanced tree. The root is inserted into the Outbox when the epoch proof is verified on L1.

The outHash in the checkpoint header is removed in this PR. It doesn't need to be in the header because it isn't checked or used anywhere, and since the blob data already contains all tx out hashes, the out hash at the checkpoint level is committed to by the blobsHash, which is in the checkpoint header.

It's added back in this PR, where the out hash of an epoch is checked on L1 to match the "aggregated" out hash of the last proposed checkpoint.
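For intuition, the aggregation described above can be sketched roughly as follows. This is a hedged illustration only: the real circuits define their own hash function and tree-building rules, and `hashPair`, the sha256 choice, and the odd-node carry rule here are all assumptions, not the actual wonky-tree definition.

```typescript
import { createHash } from 'node:crypto';

// Illustrative pairing hash; the circuits use their own hash, not sha256.
const hashPair = (l: Buffer, r: Buffer): Buffer =>
  createHash('sha256').update(Buffer.concat([l, r])).digest();

// Reduce an arbitrary-length list of per-checkpoint outHashes to a single
// root by repeatedly pairing adjacent nodes; an odd node is carried up
// unhashed, which is one way an "unbalanced" tree shape arises.
function aggregateOutHashes(outHashes: Buffer[]): Buffer {
  if (outHashes.length === 0) throw new Error('empty epoch');
  let level = outHashes;
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i + 1 < level.length; i += 2) {
      next.push(hashPair(level[i], level[i + 1]));
    }
    if (level.length % 2 === 1) next.push(level[level.length - 1]);
    level = next;
  }
  return level[0];
}
```

In this sketch, only the final root would be inserted into the Outbox once the epoch proof is verified, rather than one outHash per proposal.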

@LeilaWang LeilaWang force-pushed the lw/out_hash_per_epoch branch 4 times, most recently from d24b4f5 to 31e4aa2 on October 16, 2025 13:14
-const aztecNodeConfig: AztecNodeConfig = { ...getConfigEnvVars(), ...config };
+const aztecNodeConfig: AztecNodeConfig = {
+  ...getConfigEnvVars(),
+  aztecEpochDuration: 4,
Contributor Author

Changing the epoch duration to 4 by default for sandbox. This allows the epoch to be proven* faster, so users can consume out hashes without having to wait for a long time. This doesn't affect anything else.

  • We are not proving anything in sandbox. But below in this file, I created a new helper to set the out hash in the Outbox when an epoch is complete.

@LeilaWang LeilaWang force-pushed the lw/out_hash_per_epoch branch from 31e4aa2 to fac9b9e on October 16, 2025 14:11
@LeilaWang LeilaWang linked an issue Oct 16, 2025 that may be closed by this pull request
@LeilaWang LeilaWang changed the title from "feat!: aggregate out hashes and set on L1 per epoch (WIP)" to "feat!: aggregate out hashes and set on L1 per epoch" on Oct 16, 2025
Contributor

@sirasistant sirasistant left a comment

Circuit changes LGTM! 🚀

Contributor

@LHerskind LHerskind left a comment

Added a few notes. I haven't looked too deep yet, but the removal of the outHash from blocks is something I think we will have to discuss a bit, as it bypasses the committee and practically means the committee only helps for things that cannot be bridged.

// Handle L2->L1 message processing
if (_args.args.outHash != bytes32(0)) {
// Insert L2->L1 messages into outbox for consumption.
rollupStore.config.outbox.insert(endEpoch, _args.args.outHash);
Contributor

I think we have an issue here because of partial epoch proofs.

Say you will first prove the full epoch, and then a partial proof is posted. It would overwrite and delete certain messages from the outbox.

Seems like something that should only be done when the chain is extended, e.g., like the tips above it.

if (_args.end > rollupStore.tips.getProvenBlockNumber()) {
  rollupStore.tips = rollupStore.tips.updateProvenBlockNumber(_args.end);

  if (_args.args.outHash != bytes32(0)) {
    rollupStore.config.outbox.insert(endEpoch, _args.args.outHash);
  }
}

Contributor Author

Ah I wasn't aware of the possibility of partial epoch proofs!
This is going to be a problem. With the fix to only set proven block number and out hash when the chain is extended, it’s still possible to overwrite the out hash of the same epoch with a longer proof. So as you suspected, it’s possible to consume the same message again when the out hash has changed, because the wonky tree structure has changed :(
I will change the tree to be a standard balanced tree at the epoch level, so the leaf id won't change even when more blocks are proven. It will make consuming the message more expensive, but I guess it's still better than a storage write for each proposal?
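The balanced-tree fix described in this comment could look roughly like the sketch below: a fixed-shape tree over a maximum number of per-block outHashes, padded with zero leaves. Because the shape is fixed, a block's leaf index never moves when a longer proof covers more blocks. The constant, the zero-padding value, and the hash here are illustrative assumptions, not the real epoch-tree definition.

```typescript
import { createHash } from 'node:crypto';

const MAX_BLOCKS_PER_EPOCH = 4; // illustrative only, not the real constant
const ZERO = Buffer.alloc(32, 0); // assumed empty-leaf value

const hashPair = (l: Buffer, r: Buffer): Buffer =>
  createHash('sha256').update(Buffer.concat([l, r])).digest();

// Root over exactly MAX_BLOCKS_PER_EPOCH slots, zero-padded, so block i's
// outHash always sits at leaf index i regardless of how many blocks have
// been proven so far.
function balancedEpochRoot(blockOutHashes: Buffer[]): Buffer {
  let level = Array.from({ length: MAX_BLOCKS_PER_EPOCH }, (_, i) =>
    i < blockOutHashes.length ? blockOutHashes[i] : ZERO,
  );
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      next.push(hashPair(level[i], level[i + 1]));
    }
    level = next;
  }
  return level[0];
}
```

The root still changes as more blocks are proven, but a message's leaf id stays stable, which is what keeps consumed-message bookkeeping valid across longer proofs.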

Contributor

Better from the perspective of the sequencers with respect to cost at least (since the users bear the consumption costs). So it is likely the way to go, yes.

struct ContentCommitment {
bytes32 blobsHash;
bytes32 inHash;
bytes32 outHash;
Contributor

One thing we got somewhat nicely from having them as part of proposals was that the committees signed over them, which meant that even if the proof was broken, the committee would also need to be corrupted for it to fail. Without it here, we effectively no longer have a training wheel related to the execution: if the proof was broken, something decided fully separately could be included as the outHash 😬.

I think we likely need to chat about this before we proceed, as it is a big difference in the safety assumptions.

Contributor Author

The committees still sign over the blobsHash, which commits to all the tx effects of all the txs, including the L2-to-L1 messages. And the outHash is deterministically computed from the L2-to-L1 messages in the blob fields that produce that same blobsHash.

Contributor

Sure. But the act of going from blobsHash => outHash is done by the proof, yes? So proof issues don't help you here? If you have a circuit bug, you can abuse that even if the committee signed over something else?

Contributor Author

Hmmm, I get what you mean now. I was only thinking about having enough information to verify the proposed data and get block building going. But yeah, if there's a bug in the circuit, then the out hash may be tweaked without involving the committees. But won't it still be a problem if the committees sign over a block out hash they verified, but the prover uses the circuit bug at the epoch level and changes the final out hash, even though the correct out hashes at the block level are signed over?

* @param _path - The sibling path used to prove inclusion of the message, the _path length depends
* on the location of the L2 to L1 message in the wonky tree.
*/
function consume(
Contributor

We should probably note that there is a potential race condition due to the potential of proofs altering currently in use data.

I think we also need to make clear somewhere (docs or something) how exactly the structure of the tree looks, to ensure that we are not running into a case where a partial proof could have pushed us into a case where there are now different leafIds for the same message over time because the path length changed or something like that. I haven't looked into the circuits yet so I kinda don't have a good understanding about if that could happen or not.
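For intuition on why stable leaf ids and path lengths matter for consume, here is a generic Merkle sibling-path membership check. This is a sketch only, not the actual Outbox implementation: the hash is an assumed placeholder, and the bounds checks merely mirror the kind of checks a consume function performs. If a partial proof changed the path length or the index of the same message, the same (leaf, path) pair would no longer reproduce the root.

```typescript
import { createHash } from 'node:crypto';

const hashPair = (l: Buffer, r: Buffer): Buffer =>
  createHash('sha256').update(Buffer.concat([l, r])).digest();

// Recompute the root from a leaf and its sibling path. Bit i of leafIndex
// says whether the running node is a right child (1) or left child (0)
// at level i of the tree.
function rootFromSiblingPath(leaf: Buffer, leafIndex: number, path: Buffer[]): Buffer {
  if (path.length >= 256) throw new Error('path too long');
  if (leafIndex >= 2 ** path.length) throw new Error('leaf index out of bounds');
  let node = leaf;
  for (let i = 0; i < path.length; i++) {
    node = (leafIndex >> i) & 1 ? hashPair(path[i], node) : hashPair(node, path[i]);
  }
  return node;
}
```

A consumer would then check that the recomputed value equals the root stored on L1 for the relevant epoch.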


BlockLog memory parentBlockLog = rollup.getBlock(startBlockNumber - 1);

// What are these even?
Contributor

😆

@AztecBot AztecBot requested a review from a team October 27, 2025 11:26
Collaborator

AztecBot commented Oct 27, 2025

Flakey Tests

🤖 says: This CI run detected 1 test that failed, but was tolerated due to a .test_patterns.yml entry.

FLAKED (http://ci.aztec-labs.com/b198e08793259e19): yarn-project/end-to-end/scripts/run_test.sh simple src/e2e_p2p/add_rollup.test.ts (601s) (code: 124) group:e2e-p2p-epoch-flakes (Leila Wang: feat!: aggregate out hashes and set on L1 per epoch (#17712))

@LeilaWang LeilaWang force-pushed the lw/out_hash_per_epoch branch 2 times, most recently from 343bc07 to c9e9b3c on November 5, 2025 10:22
Contributor

@LHerskind LHerskind left a comment

I think the original issue still exists. As I look at this, it seems like the outHash of the proof is unrelated to the outHash of the individual proposals, so there is no gain from the proposal outHash being signed. I might be missing something.

Contributor

Thanks for updating these 🫡

* on the total amount of L2 to L1 messages in the block. i.e. the length of _path is equal to the depth of the
* L1 to L2 message tree.
* @param _epoch - The epoch that contains the message we want to consume
* @param _leafIndex - The index at the level in the wonky tree where the message is located
Contributor

Have we explained somewhere, what a wonky tree is?

Contributor Author

I will add the same reference I added in Outbox.sol to point people to see the comment in ts!

For detailed information about the tree structure and leaf ID computation, see:
yarn-project/stdlib/src/messaging/l2_to_l1_membership.ts

rollupStore.tips.updateProvenBlockNumber(Math.max(rollupStore.tips.getProvenBlockNumber(), _args.end));

// Handle L2->L1 message processing:
if (_args.args.outHash != bytes32(0)) {
Contributor

As long as the circuits are sound, there should be no case where it is possible for us to end up needing to overwrite an outhash with 0 if it was previously non-zero.

This is kinda more a comment for myself.

Contributor

I think it would be good to also make a note that if the tree is empty it will return 0 here, as that is not always the case. But having that makes it possible for us to skip the "isIgnition" check.

Contributor

⚠️

I think you still have the issue last discussed. There are outHashes in the proposalHeader again, but I'm not sure that they are actually linked to the eventual outHash that seems to be provided by the prover? So if the circuits are broken, what is stopping someone from proposing A_1, A_2, ..., A_n, and then the prover posting B_1 as the outHash? I'm probably just missing something here.

Contributor Author

Yeah you are right: if the circuits are broken, there's nothing stopping someone from posting a completely different out hash via the root rollup proof. To actually use the out hashes in the proposed headers to validate the root out hash, I think we have to store the out hash in the block log. And when proving an epoch, we could either:

  • compute the tree root with all out hashes and compare it with the value in public inputs.
  • or change the out hash in the header to be the "accumulated" out hash. And compare the root out hash with the out hash in the last proposed header.

I think the second one is better because it's cheaper. Although it means the checkpoint header now depends on the state of the previous checkpoints in the same epoch, making it slightly more complicated to compute and verify. But when we move to multiple blocks per checkpoint, a similar situation will apply to the block headers anyway.

Are we good with adding the out hash to the block log?

Or any other ideas?

On a similar note, if the circuits are broken, it would allow a prover to set a high fee in the root rollup circuit's public inputs. Do we want to do something about this?

Contributor

I think storing it in the block log is acceptable. Since we are doing circular storage there, it would add ~3K on a propose, but it is one of the things that can be optimized by storing just the hash of the block log.

I think as you say, that solution 2 is probably the way to go if we anyway would need to do that for multiple blocks per checkpoint.

If circuits are broken, it is acceptable that the fee asset controlled by them runs into issues. The damage is much less troublesome than the outbox, generally.

}

-struct ContentCommitment {
+struct ProposedHeader {
Contributor

So the new update just flattened the structure?

Contributor Author

Yeah, Mike wanted to get rid of this struct for a while 🤣

*
* @dev Messages are tracked using unique leaf IDs computed from their position in the epoch's tree structure.
* This design ensures that when longer epoch proofs are submitted (proving more blocks), messages from
* earlier blocks retain their consumed status because their leaf IDs remain stable.
Contributor

party cat GIF

}

struct AlphabeticalHeader {
bytes32 blobsHash;
Contributor

Hope that this did not cause too much pain. Always a pain in the ass if forgotten.

Contributor Author

Got hurt a couple of times; now I immediately know where to look when it bites 😄

-PublicInputArgs memory args =
-  PublicInputArgs({previousArchive: parentBlockLog.archive, endArchive: endFull.block.archive, proverId: _prover});
+PublicInputArgs memory args = PublicInputArgs({
+  previousArchive: parentBlockLog.archive, endArchive: endFull.block.archive, outHash: bytes32(0), proverId: _prover
Contributor

Should the base be using the endFull values instead of just a 0?

) external override(IOutbox) {
require(_path.length < 256, Errors.Outbox__PathTooLong());
require(_leafIndex < (1 << _path.length), Errors.Outbox__LeafIndexOutOfBounds(_leafIndex, _path.length));
require(_l2BlockNumber <= ROLLUP.getProvenBlockNumber(), Errors.Outbox__BlockNotProven(_l2BlockNumber));
Contributor

Pretty neat that we can avoid the call entirely here as it is just 0 before now 👍

// end_archive.root: the new archive tree root
publicInputs[1] = _args.endArchive;

publicInputs[2] = _args.outHash;
Contributor

As mentioned elsewhere, this one seems to be essentially unconstrained (on L1) with respect to the outHashes that were published during the proposals.

@LeilaWang LeilaWang force-pushed the lw/out_hash_per_epoch branch from 0fb4ba1 to 66eb709 on November 18, 2025 13:28
@LeilaWang LeilaWang force-pushed the lw/out_hash_per_epoch branch from 174b309 to 4650224 on November 26, 2025 13:16
Contributor

@iAmMichaelConnor iAmMichaelConnor left a comment

Circuit changes look fine. I've read the Solidity, but I'm still unfamiliar, so would need to rely on Lasse's review of it.

Comment on lines 27 to 32
pub fn compute_epoch_out_hash(out_hashes: [Field; AZTEC_MAX_EPOCH_DURATION]) -> Field {
let left_subtree_out_hashes: [Field; 32] = subarray(out_hashes, 0);
Contributor

Suggested change
-pub fn compute_epoch_out_hash(out_hashes: [Field; AZTEC_MAX_EPOCH_DURATION]) -> Field {
-    let left_subtree_out_hashes: [Field; 32] = subarray(out_hashes, 0);
+pub fn compute_epoch_out_hash(out_hashes: [Field; AZTEC_MAX_EPOCH_DURATION]) -> Field {
+    std::static_assert(AZTEC_MAX_EPOCH_DURATION == 48, "Remember to update these constants of 32 and 16");
+    let left_subtree_out_hashes: [Field; 32] = subarray(out_hashes, 0);

(I'm worrying about someone changing AZTEC_MAX_EPOCH_DURATION in future, and this code not breaking when it should break, and therefore causing some scary silent bugs).

Contributor Author

A test below checks that AZTEC_MAX_EPOCH_DURATION == 48. But I can move it into the code as you suggested!

@AztecBot AztecBot force-pushed the lw/out_hash_per_epoch branch from a6bbc3e to 014b2fd on January 5, 2026 18:10
@AztecBot AztecBot enabled auto-merge January 5, 2026 18:10
@AztecBot AztecBot added this pull request to the merge queue Jan 5, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 5, 2026
@LeilaWang LeilaWang added the ci-full Run all master checks. label Jan 6, 2026
@LeilaWang LeilaWang force-pushed the lw/out_hash_per_epoch branch 5 times, most recently from 3a99b19 to d7610c4 on January 6, 2026 17:18
@LeilaWang LeilaWang force-pushed the lw/out_hash_per_epoch branch from 3703e4d to 36c5519 on January 8, 2026 09:18
@LeilaWang LeilaWang added this pull request to the merge queue Jan 8, 2026
Merged via the queue into next with commit c3afdef Jan 8, 2026
16 checks passed
@LeilaWang LeilaWang deleted the lw/out_hash_per_epoch branch January 8, 2026 14:36

Labels

ci-full Run all master checks.


Development

Successfully merging this pull request may close these issues.

refactor(gas): Aggregate outHashes

6 participants