Hello! We would like to start a discussion on supporting the batching of multiple AltDA commitments into a single L1 transaction. The following proposal tries to minimize the changes required to the codebase, and it achieves the goal in a fully backwards-compatible way.
Any feedback on this is appreciated! And please let us know if it would be more appropriate to have this discussion somewhere else.
# AltDA Batched Commitments Spec Proposal
## 1. Motivation
AltDA-based OP Stack chains periodically submit DA commitments to L1 to ensure data availability is challengeable on-chain.
Currently, each commitment is posted in an individual L1 transaction, incurring a base transaction cost (e.g. 21k gas on Ethereum) per commitment.
**Goal:** Reduce L1 costs by batching multiple commitments into a single L1 transaction, so that the 21k base cost is paid only once for the combined transaction. For example, batching ten commitments saves 9 × 21,000 = 189,000 gas of intrinsic cost. Each sub-commitment within the batch remains individually challengeable, and the batch must not exceed the L1 transaction size limit.
## 2. Proposal Overview
Introduce a new commitment type: `BatchedCommitmentType = 2` (sketched after this overview). A batch can contain multiple sub-commitments, each either:

- `Keccak256CommitmentType` (`0`): fixed size, 32 bytes
- `GenericCommitmentType` (`1`): arbitrarily sized

A single L1 transaction posts this `BatchedCommitment`, which:

- Minimizes the fees paid due to intrinsic transaction costs
- Preserves the existing challenge model for `Keccak256CommitmentType` (each sub-commitment is challenged independently)

The derivation pipeline remains mostly unchanged, because the AltDA data source decodes the batch into multiple sub-commitments behind the scenes.
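As a rough sketch, the new type could sit alongside the existing commitment types. The values `0` and `1` match the current AltDA spec; the `BatchedCommitmentType` constant is this proposal's addition, and the type name is illustrative:

```go
// CommitmentType distinguishes AltDA commitment encodings.
type CommitmentType byte

const (
	Keccak256CommitmentType CommitmentType = 0 // 32-byte keccak256 hash
	GenericCommitmentType   CommitmentType = 1 // arbitrarily sized, DA-layer specific
	// Proposed: a batch of sub-commitments posted in a single L1 transaction.
	BatchedCommitmentType CommitmentType = 2
)
```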
## 3. Batched Commitment Encoding
We define a top-level L1 transaction encoding (the “outer” commitment), plus a sub-commitment format for each item inside the batch, where:

- `derivation_version` remains `params.DerivationVersion1` (`0x01`).
- `commitment_type = 0x02` indicates this is a `BatchedCommitment`.
- `subcommitment_type` indicates the `CommitmentType` of the sub-commitments.
- The data of each sub-commitment in the batch is prefixed by a 2-byte big-endian length (`comm_len`) indicating its total length.
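Hence, the batch as a whole and each sub-commitment would be laid out as follows (a sketch reconstructed from the field descriptions above; `++` denotes byte concatenation):

```text
tx_data       = derivation_version ++ commitment_type ++ subcommitment_type
                ++ subcommitment_1 ++ ... ++ subcommitment_n

subcommitment = comm_len (2 bytes, big-endian) ++ comm_data (comm_len bytes)
```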
**Example:** Suppose we have two Keccak256 sub-commitments (each 32 bytes) in a single batch. The L1 tx data might look like:
| Field | Example (Hex) | Explanation |
| --- | --- | --- |
| `derivation_version` (1 byte) | `0x01` | OP Stack derivation version |
| `batched_commitment_type` (1 byte) | `0x02` | Indicates a batched commitment |
| `subcommitment_type` (1 byte) | `0x00` | `Keccak256CommitmentType` |
| `comm1_len` (2 bytes) | `0x0020` | length = 32 |
| `comm1_data` (32 bytes) | … | Commitment hash |
| `comm2_len` (2 bytes) | `0x0020` | length = 32 |
| `comm2_data` (32 bytes) | … | Commitment hash |
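For illustration, a minimal Go sketch of this encoding follows. The function name, package name, and error handling are ours, not an existing op-alt-da API:

```go
package altda // illustrative package name

import (
	"encoding/binary"
	"fmt"
	"math"
)

// EncodeBatchedCommitment builds the proposed calldata payload: the
// derivation version byte, the batched commitment type byte, the
// sub-commitment type byte, and then each sub-commitment prefixed
// with its 2-byte big-endian length.
func EncodeBatchedCommitment(subType CommitmentType, comms [][]byte) ([]byte, error) {
	out := []byte{0x01, byte(BatchedCommitmentType), byte(subType)}
	for _, c := range comms {
		if len(c) > math.MaxUint16 {
			return nil, fmt.Errorf("sub-commitment too large: %d bytes", len(c))
		}
		out = binary.BigEndian.AppendUint16(out, uint16(len(c)))
		out = append(out, c...)
	}
	return out, nil
}
```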
### 3.1 Alternative Encoding: Special Case for Keccak256
A more efficient encoding for batches of Keccak256 sub-commitments would omit the length prefix from each sub-commitment, since it is always 32 bytes (saving 2 bytes of calldata per sub-commitment). However, we didn't include this in the main proposal because Keccak256 commitments are expected to be deprecated in the future.
## 4. Data Source & Derivation Logic
1. The derivation pipeline retrieves the L1 transaction calldata that contains the batched commitment.
2. Seeing that `commitment_type = 2`, it knows it must decode a batched commitment.
3. The AltDA data source extracts the sub-commitments and then enqueues them individually for standard processing (see the sketch below).
4. Downstream logic remains unchanged: each sub-commitment is treated as if it were a normal single commitment from an L1 transaction.
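As a sketch, the decoding step could look like the following in Go (illustrative, not existing derivation-pipeline code; it assumes the type definitions from the sketches above):

```go
import (
	"encoding/binary"
	"fmt"
)

// DecodeBatchedCommitment splits batched-commitment calldata back into
// its sub-commitments so each can be enqueued for standard processing.
func DecodeBatchedCommitment(data []byte) (CommitmentType, [][]byte, error) {
	if len(data) < 3 || data[0] != 0x01 || data[1] != byte(BatchedCommitmentType) {
		return 0, nil, fmt.Errorf("not a batched commitment")
	}
	subType := CommitmentType(data[2])
	var comms [][]byte
	rest := data[3:]
	for len(rest) > 0 {
		if len(rest) < 2 {
			return 0, nil, fmt.Errorf("truncated length prefix")
		}
		n := int(binary.BigEndian.Uint16(rest))
		rest = rest[2:]
		if len(rest) < n {
			return 0, nil, fmt.Errorf("truncated sub-commitment data")
		}
		comms = append(comms, rest[:n])
		rest = rest[n:]
	}
	return subType, comms, nil
}
```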
## 5. Submitting Batches
Consuming multiple frames per channel is enabled for the batcher through a new Channel configuration option, similar to the one used for blobs. If multiple frames are consumed, the batcher submits each frame to the DA Server independently, and uses the returned commitments to construct a Batched Commitment.
As each frame is submitted independently, the DA Server Spec does not require any changes related to encoding or storing commitments.
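In sketch form, the batcher-side flow could look like this. The `DAClient` interface below is a hypothetical, minimal stand-in for the real DA server client (it returns raw commitment bytes without a type prefix), and `EncodeBatchedCommitment` is the sketch from section 3:

```go
import "context"

// DAClient is a hypothetical, minimal view of the DA server client:
// it stores one frame and returns the commitment for it.
type DAClient interface {
	SetInput(ctx context.Context, frame []byte) ([]byte, error)
}

// submitBatch posts each frame to the DA server independently, then
// packs the returned commitments into a single batched commitment
// that is posted to L1 in one transaction.
func submitBatch(ctx context.Context, da DAClient, frames [][]byte) ([]byte, error) {
	comms := make([][]byte, 0, len(frames))
	for _, frame := range frames {
		// Each frame is stored under its own commitment, so the DA
		// server and per-commitment challenge logic need no changes.
		comm, err := da.SetInput(ctx, frame)
		if err != nil {
			return nil, err
		}
		comms = append(comms, comm)
	}
	// Assumes all sub-commitments from one DA server share a type.
	return EncodeBatchedCommitment(GenericCommitmentType, comms)
}
```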
### 5.1 Alternative Implementation: DA Server Aggregation
An alternative approach we explored involves sending all the frames together to the DA Server, so that the encoding into a batched commitment is abstracted away inside the server. This means the batcher logic would require no changes and wouldn't need to be aware of batched commitments.
The problem with this approach is that the semantics of the DA Server API would need to be modified: each frame input would still need to be stored independently under its own sub-commitment, as otherwise the DA challenge logic would become more complex (sub-commitments must be challengeable independently).
One option to keep the challenge logic unmodified would be to add new endpoints to the DA Server that accept multiple frames and perform the encoding. However, in this case the batcher has to be aware of batched commitments anyway, so it would still require changes to the batcher's logic, which makes the benefits of this approach less clear.