
Conversation

@martyall (Member) commented Dec 2, 2025

fixes #18123

@martyall martyall force-pushed the martyall/issue-18123 branch from 583f19a to f58eb9a Compare December 2, 2025 05:32
@martyall martyall force-pushed the martyall/issue-18123 branch from 6aa03a9 to e73c974 Compare December 2, 2025 22:05
@martyall martyall requested review from georgeee and glyh December 2, 2025 22:15
@martyall martyall marked this pull request as ready for review December 2, 2025 22:38
@martyall martyall requested a review from a team as a code owner December 2, 2025 22:38
```ocaml
    ( message
    , read_all_proofs_for_work_single_spec single_spec ) )
with
let module T = Transaction_snark.Make (struct
```
@martyall (Member Author) Dec 2, 2025

This is unfortunate because the uptime snark worker state has its own transaction snark module derived from the same global constants. Ideally they would share one, but the setup makes it harder than it's worth.
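A minimal sketch of the duplication being described, using hypothetical stand-in names (the real Mina functor and constants are more involved): two workers each apply the same functor to the same constants, producing two independent module instances where one shared instance would do.

```ocaml
(* Illustrative only: stand-ins for the real constants/functor setup. *)
module type CONSTANTS = sig
  val proof_level : string
end

module Make_transaction_snark (C : CONSTANTS) = struct
  let describe () = Printf.sprintf "snark module (proof_level=%s)" C.proof_level
end

module Constants = struct
  let proof_level = "full"
end

(* The situation being lamented: two separate instantiations derived
   from the same global constants, one per worker. *)
module Worker_snark = Make_transaction_snark (Constants)
module Uptime_snark = Make_transaction_snark (Constants)

let () =
  print_endline (Worker_snark.describe ());
  print_endline (Uptime_snark.describe ())
```

Threading a single instance through both call sites would avoid the duplicate functor application, at the cost of restructuring the setup code.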

Member

I mean, just moving this code into the snark worker and exposing an interface specific to this case would solve the issue. Something like a perform_uptime that just takes the last segment.
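A rough interface sketch of this suggestion, with hypothetical names and dummy types (the real `perform` signature in the snark worker differs): the worker exposes one general entry point plus an uptime-specific one that proves only the last segment.

```ocaml
(* Hypothetical interface for the suggestion above; names are illustrative. *)
module type Snark_worker_intf = sig
  type segment
  type proof

  (* General entry point: prove every segment of a spec. *)
  val perform : segment list -> proof list

  (* Proposed uptime entry point: prove just the last segment, if any. *)
  val perform_uptime : segment list -> proof option
end

(* A toy instantiation so the interface can be exercised. *)
module Dummy_worker :
  Snark_worker_intf with type segment = int and type proof = int = struct
  type segment = int
  type proof = int

  let prove s = s * 2

  let perform segs = List.map prove segs

  let perform_uptime segs =
    match List.rev segs with [] -> None | last :: _ -> Some (prove last)
end
```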

Member

So there's still going to be a semantic difference, in that it'll no longer be interruptible. But the fact that we're using smaller data to construct this proof should make it less relevant.

```ocaml
        , Sok_message.t * Snark_work_lib.Spec.Single.Stable.Latest.t
        , (Ledger_proof.t * Time.Span.t) Or_error.t )
        F.t
  ; perform_partitioned :
```
Member

Note for later: it probably makes sense to move the code here back into the snark worker.

It seems that an uptime snark worker is now just a regular snark worker, nothing fancy.

@glyh (Member) left a comment

I think it makes sense to just eliminate this uptime_snark_worker module and move all the proving-related work to the snark worker's end, if that's doable.

@glyh (Member) commented Dec 4, 2025

The story I heard is that we're aggregating data across the whole block of txns. It doesn't seem like that's the case here?

Or is it just part of the bigger plan to use the last txn segment in a block? Because right now it seems we're using the last txn segment in a txn.

@glyh (Member) commented Dec 4, 2025

For compatibility, I think it might make sense to do one of the following:

  • retarget develop
  • target compatible, but have the delegation backend accept both versions of the delegation payload until after mesa

Which route did we pick?

@martyall (Member Author) commented Dec 4, 2025

> The story I heard is we're aggregating data across the whole block of txns. It doesn't seems like so here?
> Or is it just part of the bigger plan to use last txn segment in a block? Because now it seems we're using last txn segment in a txn.

I was told to use the last transaction segment for zkApp txs -- due to protocol upgrades, zkApp txs will be too large to snark entirely within the timeout window for reporting to the uptime service. This is not a problem for e.g. standard payments.
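The policy being described can be sketched roughly as follows. This is a toy model with hypothetical type and function names, not the actual Mina representation: a payment is proven whole, while for a zkApp transaction only its last segment is selected for the uptime proof.

```ocaml
(* Toy model of the selection policy; names and types are illustrative. *)
type transaction =
  | Payment of string (* proven whole: fits in the timeout window *)
  | Zkapp of string list (* ordered segments; only the last is proven *)

let segment_to_prove = function
  | Payment p -> Some p
  | Zkapp [] -> None
  | Zkapp segs -> Some (List.nth segs (List.length segs - 1))
```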

@martyall (Member Author) commented Dec 4, 2025

> target compatible, but have delegation backend accept both version of delegation payload until after mesa

As far as I can tell, the delegation program can't tell the difference in the snark statement -- notice that the delegation verifier code is unchanged. It doesn't do anything to e.g. check that the statement has a certain format.

@glyh (Member) commented Dec 4, 2025

Okay, after the discussion here & on Slack:

  • It seems the logic here is correct
  • We need to fix the bug mentioned in that thread, but potentially in follow-up PRs
  • This PR could use some additional love on refactoring to make the code more organized

@martyall (Member Author) commented Dec 4, 2025

!ci-build-me

@martyall (Member Author) commented Dec 4, 2025

> Okay, after the discussion here & on slack:
>
>   • It seems the logic here is correct
>   • We need to fix the bug mentioned on that thread, but in follow up PRs potentially
>   • This PR could have some additional love on refactoring making the code more organized

So I looked into it and IMO you're right that it would be better to just remove this uptime snark worker module and request snarks via the normal snark worker -- I can see no good reason why they are run as separate processes with separate setup boilerplate.

However, that is a bigger task than what was required here, and IMO it would be cleaner to do as a follow-up with sign-off from the team. So for now I have just shoved all the logic into the worker module, which at least avoids recreating the tx snark module.

@martyall martyall requested a review from glyh December 4, 2025 23:03

Labels: none yet
Projects: Status: In Progress
Development: Successfully merging this pull request may close these issues: "Use tx segments in delegation program"

3 participants