# Draft specification of new mini protocols for Linear Leios (#484)

# Introduction

This document proposes new mini protocols necessary for Linear Leios.
It takes an understanding of Linear Leios for granted; it does not define the structure and semantics of RB, EB, vote, etc.

It begins with two mini protocol schemas; every new mini protocol instantiates one of them.
It includes a list of the conditions that should be considered a violation of the mini protocol, with the ultimate goal of limiting how many of a node's resources these mini protocols might consume within a short duration.
The most complicated violation involves a proposed change to how parties are able to increment their operational certificate issue number.
It concludes with a discussion of what to consider when specifying new mini protocols, to emphasize some flexibility in how this specification is implemented.

This specification does not consider syncing nodes, only caught-up nodes.

# Light schema

The Light mini protocol pulls whichever payload the upstream peer wants to send next.
These payloads should be small and/or rare, so that having every upstream peer send all of them is tolerable.

> **Review discussion.** This reminds me of gossiping, and I wonder what the pull-based nature actually brings here if we fetch it from all peers anyway. As the client would identify protocol violations if too much / the wrong data is sent anyway, this very much sounds like a push-based pub/sub protocol.
>
> Reply: Yeah, Light is essentially a way for peers to push identifiers of big things they're offering (which could then be pulled via Heavy). It's still technically pull, in that the client is able to stop requesting new identifiers. But, in general, the expectation is that a Light client would send tens of …

```mermaid
graph TD
D(StDone)
I(StIdle); style I color:cyan
X(StRequested); style X color:gold

I -->|MsgDone| D --- I
linkStyle 1 stroke-width:0

I -->|MsgRequestNextLight| X -->|"MsgReplyLight<br>payload"| I
```
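
To make the schema concrete, the following is a minimal Haskell sketch of the Light state machine as a state-indexed message type. It deliberately does not use the real ouroboros-network typed-protocols API; the module and all names are illustrative placeholders.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Illustrative sketch only; not the ouroboros-network typed-protocols API.
module LightSchema where

import Data.Kind (Type)

-- The three protocol states from the diagram above.
data LightState = StIdle | StRequested | StDone

-- Messages indexed by the state they are sent from and the state they move to.
-- 'payload' is abstract: each Light instantiation chooses its own payload type.
data LightMessage (payload :: Type) (from :: LightState) (to :: LightState) where
  MsgDone             :: LightMessage payload 'StIdle      'StDone
  MsgRequestNextLight :: LightMessage payload 'StIdle      'StRequested
  MsgReplyLight       :: payload -> LightMessage payload 'StRequested 'StIdle
```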

# Heavy schema

The Heavy mini protocol pulls from the upstream peer whichever payload was identified by the local node's request.
These payloads can be large and/or numerous, since the local decision logic---which is assumed to be predominantly inherited from TxSubmission---will try to avoid fetching a specific payload from an excessive number of upstream peers but without excessively increasing latency.

```mermaid
graph TD
D(StDone)
I(StIdle); style I color:cyan
X(StRequested); style X color:gold

I -->|MsgDone| D --- I
linkStyle 1 stroke-width:0

I -->|"MsgRequestHeavy<br>identifier"| X -->|"MsgReplyHeavy<br>payload"| I
```
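
A matching Haskell sketch of the Heavy schema, again purely illustrative and not the real typed-protocols API, differs from Light only in that the request carries an identifier:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Illustrative sketch only, analogous to the Light sketch above.
module HeavySchema where

import Data.Kind (Type)

data HeavyState = StIdle | StRequested | StDone

-- The request names exactly which payload the local node wants; the reply must
-- carry the payload that the identifier names, and nothing else.
data HeavyMessage (ident :: Type) (payload :: Type) (from :: HeavyState) (to :: HeavyState) where
  MsgDone         :: HeavyMessage ident payload 'StIdle      'StDone
  MsgRequestHeavy :: ident   -> HeavyMessage ident payload 'StIdle      'StRequested
  MsgReplyHeavy   :: payload -> HeavyMessage ident payload 'StRequested 'StIdle
```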

# Linear Leios mini protocols

This proposal introduces the following mini protocols for the Linear Leios node, assuming that an EB merely contains the identifiers of txs rather than the txs themselves.
In all of the new mini protocols, the payload travels the same direction as in BlockFetch---TxSubmission is still the only mini protocol that sends payloads in the opposite direction of BlockFetch.

| Unchanged mini protocol | Comparable schema | Payload | Payload semantics |
| - | - | - | - |
| ChainSync | Light | an RB header | the valid RB body is available for download from this upstream peer (under limited circumstances, Block Diffusion Pipelining via Deferred Validation permits the block to instead be invalid) |
| BlockFetch | Heavy | an RB | one of the RBs that ChainSync indicated |
| TxSubmission | Light and Heavy | set of tx identifiers OR set of txs | some identified txs that are available from this upstream peer OR one of those txs |

| New mini protocol | Schema | Identifier | Payload | Payload semantics |
| - | - | - | - | - |
| EbPublicize | Light | n/a | an RB header | the first or second valid RB header this upstream peer has seen with some pair of slot and RB issuer is now available from this upstream peer |
| EbRelayHeader | Light | n/a | an RB header | the EB body and the txs it identifies are now available from this upstream peer |
| EbRelayBody | Heavy | pair of EB-slot and EB-hash | an EB | one of the EBs that EbRelayHeader indicated |
| EbRelayTx | Heavy | triplet of EB-slot, EB-hash, and sequence of indices | set of txs | some of the txs that EbRelayBody indicated |
| VoteRelayId | Light | n/a | triplet of EB-slot, EB-issuer, and vote-issuer, along with an optional RB header | the identified valid vote is now available from this upstream peer |
| VoteRelayBody | Heavy | triplet of EB-slot, EB-issuer, and vote-issuer | a vote | one of the votes that VoteRelayId indicated |
| EbFetchBody | Heavy | pair of EB-slot and EB-hash | an EB | one of the EBs that ChainSync indicated |
| EbFetchTx | Heavy | triplet of EB-slot, EB-hash, and sequence of indices | set of txs | some of the txs that EbFetchBody indicated |

> **Review discussion (on EbRelayHeader).** My plan here is: the node fetches only the first EB whose header arrives via EbRelayHeader, even if a different header for the same EB opportunity already arrived via EbPublicize. Is that still compliant with your specification? I think it is, because the node is fetching the first EB it could fetch per EB opportunity, but never more than one per EB opportunity.
>
> Reply: The goal is to fetch the EB for the first header you heard. If the two headers you mention above arrive in less than 3Δ_hdr time, then it does not matter, as no certificate will be created.
>
> Reply: We chatted more on Slack. My takeaways: … My intuition is stubbornly unresponsive about that assumption 🤷 :)

> **Review discussion (on the number of protocols).** I'm worried by the number of mini protocols. I see your point of finding repeating patterns of this Light/Heavy schema, but there seems to be a lot of duplication still? At the same time, this radical separation requires logic to interact across servers/clients which could be kept more local otherwise (see my comments below for example).
>
> Reply: Could you elaborate? The only duplication from my perspective is the intentional "redundancy" I called out in one of the remarks.
>
> Reply: Ah, here's an example you already gave. …
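
For illustration only, the table rows could be rendered as instantiations of the Light and Heavy sketches above. Every concrete type below is a placeholder; none of these names come from the Linear Leios specification.

```haskell
-- Hypothetical instantiations of the Light/Heavy sketches for the protocols in
-- the table above; every concrete type here is a placeholder.
module LinearLeiosProtocols where

import Data.ByteString (ByteString)
import Data.Word (Word64)

import HeavySchema (HeavyMessage)
import LightSchema (LightMessage)

-- Placeholder object and identifier types.
data RBHeader
data EB
data Vote
data Tx
type EbSlot   = Word64
type EbHash   = ByteString
type EbIssuer = ByteString  -- hash of the issuing pool's cold key
type Voter    = ByteString  -- hash of the voting pool's cold key
type TxIndex  = Word64

-- Light instantiations: no identifier, the server picks the next payload.
-- EbPublicize and EbRelayHeader share a payload type but differ in semantics.
type EbPublicizeMsg   = LightMessage RBHeader
type EbRelayHeaderMsg = LightMessage RBHeader
type VoteRelayIdMsg   = LightMessage (EbSlot, EbIssuer, Voter, Maybe RBHeader)

-- Heavy instantiations: the identifier names exactly which payload is wanted.
type EbRelayBodyMsg   = HeavyMessage (EbSlot, EbHash) EB
type EbRelayTxMsg     = HeavyMessage (EbSlot, EbHash, [TxIndex]) [Tx]
type VoteRelayBodyMsg = HeavyMessage (EbSlot, EbIssuer, Voter) Vote
type EbFetchBodyMsg   = HeavyMessage (EbSlot, EbHash) EB
type EbFetchTxMsg     = HeavyMessage (EbSlot, EbHash, [TxIndex]) [Tx]
```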

Remarks.

- EbRelayBody and EbRelayTx are separate mini protocols because an upstream peer should be able to serve new EBs even while it's already serving some txs.

  > **Review discussion.** Why would we want to serve transactions of one EB and the body of a different EB at the same time to the same downstream peer? In fact, this would make even more sense if combined, IMO. For example, a client may only ever request transactions for an EB it has already been relayed, or request a new EB body.
  >
  > Reply: I see the temptation to enforce that, but I don't see any benefit to doing so. An honest server is going to offer its EBs to all of its downstream peers as soon as possible anyway, so why bother constraining their behavior based on what we've actually offered to them?
  >
  > Reply: Suppose two EBs arise within a couple of slots. If I have a downstream peer who needs to get the first EB's txs from me (no one else has offered them) but could get the second EB's txs from other peers (eg they're already in mempools), then it seems plausible that they need both some txs from me as well as a second EB from me. And both should arrive as soon as possible for the sake of …
  >
  > Reply: I think you misunderstood me. I would not want to limit what we offer, but push the burden of what to request from us at a given time (either txs or bodies) to the downstream client within a protocol instance. This would allow us to specify per-peer limits more specifically for the responsibility of "EB diffusion", and we'd need to coordinate and balance things less across multiple servers/clients.
  >
  > Reply: Right, so the downstream peer can soon request everything we have. So why punish them for requesting something slightly earlier than we told them they could?

- EbRelayHeader is separate from EbRelayBody and EbRelayTx because of the _freshest first delivery_ rule (FFD).
  An upstream peer must be able to offer newer EBs even while sending some older EBs/txs.

  > **Review discussion.** I like the name …
  >
  > Reply: Does it help if you instead think of it as the "super mini protocol" …? Edit: if that's compelling, then there's only four new protocols. Edit 2: And moreover, all four are self-contained in the way you said feels more natural.
  >
  > Reply: My understanding is merely that FreshestFirstDelivery means that, whenever a node is aware of more than one EB it could allocate resources to fetch, it should allocate those resources to the ones with the greater slots. I don't necessarily know how to consider FFD across multiple objects---but maybe we don't need to, since all other objects are small and diffused by other proposed mini protocols and so can operate concurrently with the EB-diffusing mini protocols. (Edit: In particular, if I know some fresher EBs exist but none of my peers have offered those EBs to me, I think I should still start fetching the freshest of the EBs that I can already start fetching---just idling until the fresher EBs are actually available seems unwise.)
  >
  > Reply: One of the reasons …

- EbRelay* and EbFetch* are separate mini protocols because EbFetch* is usually dormant but has utmost urgency when it's active.
  As soon as a node realizes it does not already have the EB certified by some RB that it needs to validate, it should be able to instantly request that EB from upstream peers who have already offered that RB via ChainSync.
  If EbRelay* and EbFetch* were not separate protocols, then the urgent EbFetch* requests might be blocked behind the coincident EbRelay* requests.
  There might be better ways to avoid this risk, but the current method seems suitable for a specification: simple and explicit.

  > **Review discussion.** @will-break-it wrote (in a top-level comment): …
  >
  > Reply: FYI, as I'm working through the details (we chatted a bit during the Consensus Office Hours), I'm now doubting whether separating … Imagine that EBs just contain txs (or equivalently that we're fetching EBs for which we don't have any of the txs and their sets don't overlap). … Assuming that Y >= X, regardless of whether the mux is able to interleave the X bytes and the Y bytes, the … However, if the mux were to do biased interleaving, then timeouts for the lower-priority mini protocol would be naive unless they depend on whether a high-priority mini protocol were simultaneously active. This is getting more complicated than I had been envisioning. So at this point, maybe consolidating some of the complexity within a combined …

- TODO a client might wisely decompose and rate-limit its EbRelayTx requests so that it can pause them if EbFetchTx suddenly has a significant number of txs to request from the same peer.

  > **Review discussion.** I agree that we want to be able to fetch an EB as quickly as possible. But wouldn't this be even more the case if we had one protocol for this purpose? For example: … In this situation, if … If we use the same protocol for 1. and 2., network clients that get work from that queue would stop downloading of …
  >
  > Reply: Yes, them competing with each other is exactly what we want! The alternative is that fetching EB2 might not be able to start until (some aspect of) fetching EB1 has finished---ie EB2 would be delayed. We can still truncate/pause the EB1 work as soon as possible. But the parts we cannot cancel should not prevent the EB2 part from starting. They should be able to happen in parallel. With today's …
  >
  > Reply: With "competing" I meant: if we don't stop 1., then 2. would be further delayed than it could be. Maybe I'm just worried about interactions across multiple client components? You mentioned pipelining within a protocol above. This would be possible here too, right? Ie the client that fetches EBs as fast as possible would be able to send the request for … I think it's appealing to me if we could pack the whole EB fetching and scheduling complexity into a single protocol client (instead of spreading that responsibility across more "components"). One situation when this would be a really bad idea is when we would have a lot of big requests in 1. that are already pending and we couldn't act on doing 2. because we operate at our limits. However, I don't think this is the case here.
  >
  > Reply: (Bookkeeping: Will raised a similar concern in a different thread: #484 (comment))

- EbRelayBody and EbFetchBody instantiate Heavy because EBs are large enough that the node should not download every EB from every upstream peer.
- VoteRelayBody instantiates Heavy even though votes are so small because there are so many votes per EB that a node should not redundantly download every vote from every upstream peer.
- EbRelayTx and EbFetchTx instantiate Heavy because, under normal circumstances, the node will already have received most of the txs in an EB via TxSubmission.
  It shouldn't request all of the txs in some EB and certainly not from all peers.
- Under normal circumstances, each upstream peer will send a single RB header four times: once via EbPublicize, once via ChainSync, once via EbRelayHeader, and once via VoteRelayId.
  This redundancy is intended, for the reasons below.
  Future optimizations could avoid some of the redundancy, but this specification avoids the complexity of synchronizing the mini protocols and the redundancy seems tolerable.
- EbRelayHeader offers EBs by sending the corresponding RB header so that it's impossible for an upstream peer to offer an EB for which the local node has never seen a matching RB header.
  This scenario might be comparatively easy to handle within a node, but this specification is able to entirely avoid this awkward case.
- The same scenario might arise for votes in VoteRelayId.
  It's much less probable because of the 3Δ_hdr moratorium on voting, but allowing VoteRelayId to send a corresponding RB header along with the first corresponding vote again prevents the corner case.
- During an equivocation attack, EbPublicize will crucially deliver RB headers that wouldn't necessarily have arrived via the other mini protocols, because ChainSync only offers the best chain the upstream peer has selected and EbRelayHeader only offers the RB header for the first EB that upstream peer received.
  A proof of equivocation must arrive at all nodes if it exists.
- During an equivocation attack, EbPublicize will crucially deliver RB headers much sooner than the other mini protocols would, since RB headers can propagate via EbPublicize more than one hop ahead of the relevant EB bodies/RB bodies/votes.
  A proof of equivocation must arrive as soon as possible.

# Bounding resource usage, Step 1 of 2

The key strategy for bounding resource usage is that the protocol's various leader schedules and the nodes' equivocation detection together bound how much work might arise within a short duration.
If the victim can therefore detect whenever the adversarial peer sends an object that is either not justified by an election or equivocates an election, then they can protect themselves by disconnecting.
The following describes which behaviors justify such a disconnection.

Every payload of a Light mini protocol and every identifier in a Heavy mini protocol includes enough information to determine whether the corresponding RB header/EB/vote is too old or equivocates a leadership proof.
This provides a very straightforward means of limiting how many RB headers/EBs/votes/txs a node might need to send to its downstream peers or receive from its upstream peers.
For this reason, the _window_ managed by the TxSubmission mini protocol would be an unnecessary complication in Light and Heavy.

- Let ClockSkew be some duration that is a conservative upper bound on the difference between two honest nodes' clocks.
  (A plausible value could be double the upper bound on the difference between a single node's clock and a theoretical perfect clock.)
- Let GracePeriod be some duration that is a conservative upper bound on how long after an honest immediate peer sends some message that message might be processed locally, eg 15 seconds to accommodate an expensive garbage collection on either side.
  It should also account for ClockSkew.
- Let _EB opportunity identifier_ be the pair of the slot and the hash of the issuing stake pool's public cold key.
  Note that this excludes the operational certificate issue number, which is discussed in the next section.
- Let _vote opportunity identifier_ be the pair of an EB opportunity identifier and the hash of the voting stake pool's public cold key.
  (These definitions are also rendered as a Haskell sketch immediately after this list.)
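
As a minimal sketch, the parameters and identifiers above might be rendered as follows; the representations (seconds, cold-key hashes as byte strings) are assumptions, not definitions from the Linear Leios specification.

```haskell
-- Placeholder renderings of the parameters and identifiers defined above.
module LeiosIdentifiers where

import Data.ByteString (ByteString)
import Data.Word (Word64)

-- Conservative duration bounds; seconds are assumed here for simplicity.
newtype ClockSkew   = ClockSkew   Word64
newtype GracePeriod = GracePeriod Word64

type Slot        = Word64
type ColdKeyHash = ByteString

-- Pair of the slot and the hash of the issuing pool's public cold key.
-- Deliberately excludes the operational certificate issue number.
data EbOpportunityId = EbOpportunityId
  { ebSlot   :: Slot
  , ebIssuer :: ColdKeyHash
  }
  deriving (Eq, Ord, Show)

-- Pair of an EB opportunity identifier and the hash of the voting pool's cold key.
data VoteOpportunityId = VoteOpportunityId
  { voteEbOpportunity :: EbOpportunityId
  , voteIssuer        :: ColdKeyHash
  }
  deriving (Eq, Ord, Show)
```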

The following table lists exceptional scenarios that would justify disconnecting from the upstream peer---ie checks that a client would do.

| Mini protocol client | Observation that triggers disconnect |
| - | - |
| every Light mini protocol | the payload fails any of the validation that can be checked without a recent ledger state |
| every Heavy mini protocol | the payload is not exactly what was requested |
| EbPublicize | the RB header is the third from this peer with the same EB opportunity identifier (refined in the next section) |
| EbRelayHeader | the RB header is the second from this peer with the same EB opportunity identifier |
| VoteRelayId | the RB header is the second from this peer with the same EB opportunity identifier |
| VoteRelayId | the vote is the second from this peer with the same vote opportunity identifier |
| EbPublicize | the RB header is older than L_vote + GracePeriod seconds |
| EbRelayHeader | the RB header is older than L_vote + L_diff + GracePeriod seconds (TODO increase this limit?) |
| VoteRelayId | the RB header/vote is older than L_vote + GracePeriod seconds |
| EbPublicize | the RB header is more than ClockSkew seconds early |
| EbRelayHeader | the RB header is more than ClockSkew seconds early |
| VoteRelayId | the RB header/vote is more than ClockSkew seconds early |
| VoteRelayId | the offered vote opportunity identifier includes an EB opportunity identifier that this node has never seen an RB header with |
| VoteRelayId | the RB header has a different EB opportunity identifier than the vote opportunity identifier it accompanies |
| VoteRelayBody | the vote is invalid |
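
As an illustration of the EbPublicize equivocation row in the table (the unrefined rule; the next section refines it by operational certificate issue number), the client-side bookkeeping might look like the sketch below. The names are placeholders reusing the identifier sketch above.

```haskell
-- Sketch of the EbPublicize client check: a first and second header per EB
-- opportunity are accepted (the second may prove equivocation); a third is a
-- protocol violation and the peer is disconnected.
module EbPublicizeCheck where

import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

import LeiosIdentifiers (EbOpportunityId)

-- Per-peer count of EbPublicize headers seen for each EB opportunity.
type SeenHeaders = Map EbOpportunityId Int

data Verdict = Accept SeenHeaders | Disconnect

-- Called once per header received from a given peer via EbPublicize.
onEbPublicizeHeader :: EbOpportunityId -> SeenHeaders -> Verdict
onEbPublicizeHeader opp seen =
  case Map.findWithDefault 0 opp seen of
    n | n >= 2    -> Disconnect
      | otherwise -> Accept (Map.insertWith (+) opp 1 seen)
```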

TODO an adversarial peer can _offer_ every potential vote (regardless of whether it was actually elected or even exists) for every RB, once they have reason to believe the victim won't request the bogus votes (eg that EB opportunity has already reached a quorum).
This could be prevented by including signatures in VoteRelayId, but then we might as well be relaying every vote to every peer.
Hmm.

The following table is for checks against a downstream peer---ie checks a server would do.

| Mini protocol server | Observation that triggers disconnect |
| - | - |
| EbRelayBody | requested object is unknown, including eg not immutable and more than 45 + GracePeriod seconds older than the immutable tip |
| EbRelayTx | requested object is unknown, including eg not immutable and more than 45 + GracePeriod seconds older than the immutable tip |
| VoteRelayBody | requested object is unknown, including eg older than L_vote + 15 + GracePeriod seconds |
| EbFetchBody | requested object is unknown, including eg not immutable and more than 45 + GracePeriod seconds older than the immutable tip |
| EbFetchTx | requested object is unknown, including eg not immutable and more than 45 + GracePeriod seconds older than the immutable tip |
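
One plausible, purely illustrative reading of the age cutoff shared by the EbRelay*/EbFetch* server rows above is sketched below; it assumes one-second slots and reuses the placeholder types from the identifier sketch.

```haskell
-- Sketch of when an EbRelay*/EbFetch* server may answer that a requested EB is
-- unknown: the EB is not immutable and its slot is more than 45 + GracePeriod
-- seconds older than the immutable tip (one-second slots assumed).
module EbServerAgeCheck where

import LeiosIdentifiers (GracePeriod (..), Slot)

-- True when the server is entitled to treat the requested EB as unknown.
mayForgetEb
  :: GracePeriod
  -> Bool  -- ^ is the requested EB immutable?
  -> Slot  -- ^ slot of the requested EB
  -> Slot  -- ^ slot of the immutable tip
  -> Bool
mayForgetEb (GracePeriod grace) isImmutable requestedSlot immutableTipSlot =
  not isImmutable && requestedSlot + 45 + grace < immutableTipSlot
```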

TODO replace 15, 45, etc with named parameters?

TODO add L_recovery?

# Bounding resource usage, Step 2 of 2

The above rules would suffice if the set of stake pools and their hot keys did not vary over time, but they do in Cardano.

- Pools retire and new pools register, which determines their cold key.
  This does not pose a challenge, because the set of pools that could possibly be elected in any specific slot is determined so much earlier than the slot itself that the Praos security argument already requires all honest nodes to agree on that set before it's relevant.
- If an adversary acquires some victim's hot keys---the keys that sign their blocks and their votes---then the adversary will always be able to issue blocks that equivocate the victim's recent and upcoming elections, for RBs, EBs, and votes.
  This does pose a challenge.

In the Praos system, the _operational certificate issue number_ mechanism allows the victim to leverage their unique access to their cold private key in order to issue a higher-precedence hot key for their next RB header.
Even if the victim for whatever reason doesn't issue a higher-precedence hot key, the finite KES period at least passively disables the leaked key when it expires, which might be up to ~90 days later.
See <https://github.com/IntersectMBO/ouroboros-consensus/pull/1610> for more details (TODO update once merged).

In Linear Leios, EbPublicize must be able to relay RB headers even if it didn't relay their specific recent ancestors; otherwise, the bound on how many headers are relayed would be significantly greater than 2x.
It therefore cannot simply include the operational certificate issue number in the EB opportunity identifier, because without the rules that rely on a header's recent ancestors, there'd be no limit to how often an adversary could increment their operational certificate issue number in an attack against EbPublicize.
Unlike ChainSync, EbPublicize could not necessarily notice that the headers in such an attack are invalid.

If EbPublicize merely approximated the existing Praos rate limiting logic, then other attack vectors might arise.
For example, if EbPublicize assumed that the operational certificate issue number could not increment more than once per ten slots, then the adversary could diffuse higher-precedence hot keys via EbPublicize but never via ChainSync.
EbPublicize would then ignore equivocation that happens on the actual chain, since the equivocating headers would use hot keys with lower precedence.

Therefore, Praos's operational certificate issue number mechanism should be further constrained, in order to enable a reasonable bound on the EbPublicize traffic.
The proposed new constraint is that the operational certificate issue number cannot be incremented more than once per stability window (ie 36 hr on Cardano today), and EbPublicize would be relaxed as follows.

> **Review discussion.** This is an interesting proposal. It sounds very much related to the key registration / protocol setup of Leios, though, and only tangentially relevant to the network mini protocol(s). It would be great to define what makes a valid EB opportunity and state there why it's nice that validating it needs only an immutable ledger state (and what consequence on op cert increments this has). Then, in the network protocol definition, it would suffice to say that at most three EB opportunities may be publicized and that to validate the response one needs an immutable ledger state. (This only applies once we want to integrate this work into the CIP.)
>
> Reply: Yes, that's true. It arose here because I'm arguing that the mini protocols' resource utilization is bounded, and I have to choose a concrete mechanism to achieve that, and there isn't a suitable one without this proposal. But if we all agree on it, then it does seem better to (eventually) "upstream" it to the "Linear Leios spec" side of the documentation.
>
> Reply: @pagio should we chat about this?

| Mini protocol client | Observation that triggers disconnect |
| - | - |
| EbPublicize | the RB header is the third from this peer with the same EB opportunity identifier and operational certificate issue number |
| other rows | as in the previous section |

There are two advantages to this new constraint.

- EbPublicize would need to relay at most four headers per EB opportunity, by considering RB headers with an operational certificate issue number that is <Z(Sl) or >Z(Sl)+1 to be invalid, where Sl is the slot of the header and the mapping Z is objectively determined by the node's immutable ledger state (see the sketch after this list).
- ChainSync would consider a header invalid if EbPublicize does.
  Note that the implication in the other direction is not guaranteed: ChainSync has more information to use, and so can be more strict.
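
A minimal sketch of that validity window follows, with Z passed in abstractly; it is not a real ledger API, just an illustration of the check the first advantage describes.

```haskell
-- Sketch of the opCert issue number window from the first advantage above:
-- for a header in slot sl, only issue numbers Z(sl) and Z(sl)+1 are acceptable.
module OpCertWindow where

import Data.Word (Word64)

import LeiosIdentifiers (Slot)

type OpCertIssueNo = Word64

-- True iff the header's issue number falls inside the permitted window.
opCertNoAcceptable
  :: (Slot -> OpCertIssueNo)  -- ^ Z, determined by the immutable ledger state
  -> Slot                     -- ^ slot of the RB header
  -> OpCertIssueNo            -- ^ issue number claimed by the RB header
  -> Bool
opCertNoAcceptable z sl n = n >= z sl && n <= z sl + 1
```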

There are two disadvantages to this new constraint.

- Most importantly, the operational certificate issue number mechanism would not promptly mitigate the Praos attacks available to an adversary that manages to acquire a victim's hot keys twice within some 36 hr duration.
  Instead of mitigating the attack as soon as the victim issues their second header, the victim would have to wait until 36 hours after their first increment to issue a header.
- Though ChainSync could still react immediately to an incremented operational certificate issue number, EbPublicize would unfortunately only start ignoring the adversary's equivocations 36 hr after the victim increments their operational certificate issue number.
  Thus, the adversary could prevent the victim's EBs from being certified until 36 hr after the victim increments their operational certificate issue number.

# Architecting Mini Protocols

An implementation could plausibly combine some of these mini protocols, eg VoteRelayId and VoteRelayBody.
This specification maximally separates the new mini protocols for two reasons.

- The separation emphasizes how loosely coupled each mini protocol's implementation can be---only their parameter values, such as maximum age, should influence each other.
- In the context of the <https://github.com/IntersectMBO/ouroboros-network> framework used in today's Cardano node, messages within a single protocol have to be processed in order.
  So if the same mini protocol were responsible for delivering an EB as well as offering more EBs, it's possible that a node might need to wait for the EB to finish arriving before it could learn about newly offered EBs.
  By separating those exchanges into different mini protocols, the existing framework already allows them to proceed independently---the mux lets them share bandwidth to avoid head-of-line blocking.
  Other hypothetical implementations might be able to achieve comparable guarantees with a combined mini protocol, but due to the current dominance of a single mature Cardano implementation, it is reasonable to take that implementation's current behavior into account when specifying mini protocols.

Due to their simplicity, it seems more plausible than ever that some existing pub/sub protocol might be suitable for implementing these.

On the other hand, any framework that can already express the more sophisticated ChainSync, BlockFetch, and TxSubmission ought to easily accommodate Light and Heavy.

Just as for the initial Peras developments, it's plausible that pairs of protocols such as VoteRelayId and VoteRelayBody could be initially implemented as a copy of TxSubmission with additional hooks to enforce the age limits.
That'd be a plausible first iteration, though it does inherit some complexity and coupling that is unnecessary in the particular case of Linear Leios objects.

> **Review discussion.** I would love to see this contributed to the CIP draft.