Conversation
```nim
const finalSidecars = noSidecars
elif someSidecarsOpt is NoSidecarsAtFork and
    typeof(blck).kind in ConsensusFork.Deneb..ConsensusFork.Electra:
  let finalSidecars = Opt.none(BlobSidecars)
```
why is there no routing here?
the routing is called in the else block; apart from that, NoSidecarsAtFork is trying to capture all cases where we are in a sidecar fork but there's no sidecar proposed for that block.
so .. why would we call this function with NoSidecarsAtFork and a block from a fork that has sidecars?
basically this should be `when someSidecarsOpt is NoSidecarsAtFork: noSidecars else: await publishSidecars(router, blck, someSidecarsOpt)`
or add a no-op overload for `publishSidecars(..., NoSidecars)`
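For illustration, a minimal sketch of what such a no-op overload could look like, using stand-in types rather than the PR's actual `Router` and sidecar definitions (and leaving out the async plumbing the real `publishSidecars` presumably has):

```nim
# Stand-in types; the real ones live in nimbus-eth2 and the proc is async there.
type
  NoSidecars = object        # marker: this fork has nothing to publish
  BlobSidecar = object       # placeholder for the real sidecar type
  Router = object            # placeholder for the message router

proc publishSidecars(router: Router, sidecars: seq[BlobSidecar]) =
  echo "broadcasting ", sidecars.len, " blob sidecars"

proc publishSidecars(router: Router, sidecars: NoSidecars) =
  discard                    # no-op overload: nothing to publish pre-Deneb

# Callers no longer need a `when` branch on the sidecar type:
let router = Router()
publishSidecars(router, @[BlobSidecar()])
publishSidecars(router, NoSidecars())
```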
```nim
forkyBlck, Opt.some(
  forkyBlck.create_blob_sidecars(kzg_proofs, blobs)),
Opt.none(seq[fulu.DataColumnSidecar]),
checkValidator = true)
```
why not overload routeSignedBeaconBlock too?
we would have to re-write the validate-block and publish-block parts in each of the overloads, unless we add another function adapter of sorts; apart from that, there's no reason not to have it.
this looks ok to me too, i'm trying to assess what the benefit of the overload would be.
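If one did go the overload route, the shape being discussed could look roughly like the sketch below: thin `routeSignedBeaconBlock` overloads per sidecar kind, delegating the shared validate/publish part to one helper. All names and signatures here are stand-ins, not the actual nimbus-eth2 API.

```nim
# Stand-in types and names, only to show the shared-helper shape.
type
  Router = object
  SignedBlock = object
  BlobSidecar = object
  DataColumnSidecar = object

proc validateAndPublishBlock(router: Router, blck: SignedBlock): bool =
  # common part shared by every overload: gossip-validate, then broadcast
  # the block itself
  result = true

proc routeSignedBeaconBlock(router: Router, blck: SignedBlock,
                            sidecars: seq[BlobSidecar]): bool =
  if not validateAndPublishBlock(router, blck):
    return false
  # blob-sidecar-specific routing goes here
  result = true

proc routeSignedBeaconBlock(router: Router, blck: SignedBlock,
                            sidecars: seq[DataColumnSidecar]): bool =
  if not validateAndPublishBlock(router, blck):
    return false
  # data-column-specific routing goes here
  result = true

discard routeSignedBeaconBlock(Router(), SignedBlock(), @[BlobSidecar()])
discard routeSignedBeaconBlock(Router(), SignedBlock(), @[DataColumnSidecar()])
```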
```nim
NoSidecarsAtFork |
Opt[seq[BlobSidecar]] |
Opt[seq[fulu.DataColumnSidecar]] |
Opt[seq[gloas.DataColumnSidecar]]
```
This PR doesn't seem to treat gloas.DataColumnSidecar at all?
I'm trying to manage this in a follow-up PR, to keep gloas changes atomic
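For readers following along, a rough, self-contained sketch of how a union like the one in the diff above can gate a single generic proc that then branches at compile time. It uses `std/options` and placeholder types instead of the real `Opt` and sidecar definitions, so treat it as an illustration of the pattern, not the PR's code.

```nim
import std/options

type
  NoSidecarsAtFork = object       # marker: pre-Deneb, no sidecars exist
  BlobSidecar = object            # placeholder types
  DataColumnSidecar = object
  SomeSidecars = NoSidecarsAtFork |
    Option[seq[BlobSidecar]] |
    Option[seq[DataColumnSidecar]]

proc describe(sidecars: SomeSidecars): string =
  # the union makes this an implicit generic; `when` picks the branch per
  # instantiation, so only the matching code is compiled for each call site
  when sidecars is NoSidecarsAtFork:
    "fork without sidecars"
  elif sidecars is Option[seq[BlobSidecar]]:
    "blob sidecars (Deneb..Electra)"
  else:
    "data column sidecars (Fulu and later)"

echo describe(NoSidecarsAtFork())
echo describe(some(@[BlobSidecar()]))
```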
Force-pushed from 064b158 to 86aef48
```nim
signature = shortLog(signedBlock.signature),
msg = r.error()
return err(VerifierError.Invalid)
elif consensusFork in ConsensusFork.Phase0 .. ConsensusFork.Capella:
```
Removing this and replacing it with

```nim
when sidecarsOpt is NoSidecars:
  static: doAssert consensusFork in ConsensusFork.Phase0 .. ConsensusFork.Capella
```

means that if one tries to call `verifySidecars` with `ConsensusFork.Phase0` through `Capella`, but not `NoSidecars`, it will now fall into the `Unknown consensus fork` error message case, a bit misleadingly.
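To make the concern concrete, here is a small, self-contained sketch (stand-in enum and types, not the real `verifySidecars` signature) of a dispatch where the pre-Deneb range keeps its own branch, so a bad call doesn't land in a generic unknown-fork message:

```nim
type
  ConsensusFork = enum
    Phase0, Altair, Bellatrix, Capella, Deneb, Electra, Fulu
  NoSidecars = object

proc verifySidecars(fork: static ConsensusFork, sidecars: auto): string =
  when sidecars is NoSidecars:
    # pre-Deneb callers must pass the marker type
    static: doAssert fork in ConsensusFork.Phase0 .. ConsensusFork.Capella
    result = "nothing to verify"
  elif fork in ConsensusFork.Deneb .. ConsensusFork.Electra:
    result = "verify blob sidecars"
  elif fork >= ConsensusFork.Fulu:
    result = "verify data column sidecars"
  else:
    # explicit branch: a pre-Deneb fork combined with a sidecar argument gets
    # an honest error instead of "Unknown consensus fork"
    result = "pre-Deneb fork called with a sidecar argument"

echo verifySidecars(ConsensusFork.Phase0, NoSidecars())
echo verifySidecars(ConsensusFork.Deneb, @[1, 2, 3])
```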
```diff
 proc broadcastDataColumnSidecar*(
-    node: Eth2Node, subnet_id: uint64, data_column: fulu.DataColumnSidecar):
+    node: Eth2Node, subnet_id: uint64,
```
Why split this line? it's 76 characters total
```nim
proc broadcastDataColumnSidecar*(
    node: Eth2Node, subnet_id: uint64,
    data_column: gloas.DataColumnSidecar):
```
Same as the fulu one, sort of, regarding splitting across two lines -- it'd be 77 characters as a single line, which is fine.
```nim
return RestApiResponse.jsonError(Http400, InvalidBlockObjectError)
```

```diff
-  when consensusFork in [ConsensusFork.Deneb, ConsensusFork.Electra]:
+  when consensusFork == ConsensusFork.Gloas:
```
In H fork, which we'll start development on sometime in the next few months, this will silently resolve to the pre-Deneb case, which is incorrect. Hopefully other compile-time checks pick it up, but regardless this isn't correct. We have at least two basic approaches to this:

- Use `when consensusFork >= ConsensusFork.Gloas:` when we're pretty sure that H fork has a good chance of still using Gloas data columns. It's slightly a bet, so it does depend; or
- If we want to express no view on the likely future of Gloas data columns, directly before `when consensusFork == ConsensusFork.Gloas:` insert a `static: doAssert high(ConsensusFork) == ConsensusFork.Gloas` to signal to whoever does this that they have to specifically verify that the code is updated for H fork correctly.

But this has the failure mode of potentially silently breaking in H fork.
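A self-contained sketch of the second option above (the enum here is a stand-in for the real `ConsensusFork`, and the proc is only illustrative): the assert turns the silent fall-through into a compile error the moment a fork after Gloas is added.

```nim
type
  ConsensusFork = enum
    Phase0, Altair, Bellatrix, Capella, Deneb, Electra, Fulu, Gloas

proc sidecarKind(consensusFork: static ConsensusFork): string =
  # adding an H fork to the enum breaks the build here, forcing whoever does
  # it to decide explicitly how the new fork is routed below
  static: doAssert high(ConsensusFork) == ConsensusFork.Gloas
  when consensusFork == ConsensusFork.Gloas:
    result = "gloas data column sidecars"
  elif consensusFork >= ConsensusFork.Fulu:
    result = "fulu data column sidecars"
  elif consensusFork >= ConsensusFork.Deneb:
    result = "blob sidecars"
  else:
    result = "no sidecars (pre-Deneb)"

echo sidecarKind(ConsensusFork.Gloas)    # gloas data column sidecars
echo sidecarKind(ConsensusFork.Capella)  # no sidecars (pre-Deneb)
```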
```nim
await node.router.routeSignedBeaconBlock(
  forkyBlck, Opt.none(seq[BlobSidecar]),
  Opt.none(seq[fulu.DataColumnSidecar]), checkValidator = true)
when consensusFork >= ConsensusFork.Bellatrix:
```
How can `consensusFork >= ConsensusFork.Bellatrix` possibly be true here? It's the `else` clause directly after

```nim
elif consensusFork >= ConsensusFork.Bellatrix:
  return RestApiResponse.jsonError(
    Http400, $consensusFork & " builder API unsupported")
else:
  # ... this code is here ...
```
They're on separate objects (`currentFork` vs `withBlck`) but in that case the question is why they'd be inconsistent.
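A tiny illustrative sketch (placeholder names, not the actual REST handler code) of the point being made: the fork derived from the request and the fork of the decoded block are two separate values, so it's simpler to reject a mismatch explicitly than to reason about which nested branches become reachable when they disagree.

```nim
type
  ConsensusFork = enum
    Phase0, Altair, Bellatrix, Capella, Deneb
  ForkedSignedBlock = object
    kind: ConsensusFork          # fork the block was decoded as (withBlck side)

proc forksConsistent(currentFork: ConsensusFork,
                     blck: ForkedSignedBlock): bool =
  # if the request-derived fork and the decoded block's fork disagree, the
  # request is malformed; rejecting here keeps the later branches simple
  currentFork == blck.kind

echo forksConsistent(ConsensusFork.Deneb,
                     ForkedSignedBlock(kind: ConsensusFork.Deneb))    # true
echo forksConsistent(ConsensusFork.Deneb,
                     ForkedSignedBlock(kind: ConsensusFork.Capella))  # false
```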
beacon_chain/rpc/rest_beacon_api.nim (Outdated)
```nim
forkyBlck, Opt.none(seq[gloas.DataColumnSidecar]),
checkValidator = true)
```

```diff
-  elif consensusFork >= ConsensusFork.Fulu:
+  elif consensusFork >= ConsensusFork.Fulu and
```
This is exactly one hardfork, Fulu
* modularise message routing for sidecars
* remove unused proc
* some reviews applied
* address some of the reviews
* some more changes, caller functions
* reviews
No description provided.