
Conversation

ffranr
Contributor

@ffranr ffranr commented Jul 23, 2025

Towards closing: #1463


This PR introduces the initial push sync flow for supply commitments:

  1. Syncer (issuer side):

    • After broadcasting the anchoring transaction (but before finalization), the issuer can push a supply commitment to a universe server.
  2. Universe server:

    • Verifies the received supply commitment.
    • Stores it in its local database upon successful verification.
    • Serves the commitment through its RPC endpoints.
  3. Integration test:

    • Extended to fetch the committed supply snapshot from the universe server, ensuring verification and persistence are working correctly.

This PR establishes the push route for supply commitment sync. A follow-up PR will add the pull route to allow clients to fetch and verify supply commitments from a universe server.
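A minimal sketch of the issuer-side push flow described above, using illustrative names throughout (Syncer, PushSupplyCommitment, and the placeholder types are not the PR's actual API):

```go
package supplysync

import "context"

// RootCommitment and SupplyLeaf are placeholder types standing in for the
// real supply commitment and supply leaf structures.
type RootCommitment struct{}

type SupplyLeaf struct{}

// Syncer captures the push route: after the anchoring transaction is
// broadcast (but before finalization), the issuer pushes the commitment and
// the leaves that are new to it to a universe server, which verifies them
// before storing and serving them.
type Syncer interface {
	PushSupplyCommitment(ctx context.Context, commit RootCommitment,
		leaves []SupplyLeaf) error
}
```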


Notes

  • There are likely opportunities to simplify and reduce code duplication. In this PR the focus is on delivering the required functionality. Once the architecture has stabilized, we can revisit the code to simplify and consolidate any duplicated logic.


@ffranr ffranr self-assigned this Jul 23, 2025
@ffranr ffranr added the supply commit label Jul 23, 2025
@coveralls

coveralls commented Jul 23, 2025

Pull Request Test Coverage Report for Build 17445452389

Details

  • 1725 of 2927 (58.93%) changed or added relevant lines in 32 files are covered.
  • 88 unchanged lines in 18 files lost coverage.
  • Overall coverage increased (+0.04%) to 56.911%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| universe/supplyverifier/states.go | 0 | 1 | 0.0% |
| universe/supplyverifier/env.go | 0 | 3 | 0.0% |
| universe/supplyverifier/log.go | 3 | 6 | 50.0% |
| fn/func.go | 8 | 13 | 61.54% |
| server.go | 2 | 8 | 25.0% |
| tapdb/sqlc/supply_syncer.sql.go | 33 | 41 | 80.49% |
| taprpc/universerpc/universe_grpc.pb.go | 17 | 26 | 65.38% |
| tapdb/supply_syncer.go | 54 | 65 | 83.08% |
| universe/supplycommit/mock.go | 20 | 31 | 64.52% |
| mssmt/compacted_tree.go | 27 | 39 | 69.23% |

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| taprpc/universerpc/universe_grpc.pb.go | 1 | 65.13% |
| taprpc/universerpc/universe.pb.gw.go | 1 | 4.67% |
| address/mock.go | 2 | 88.59% |
| asset/asset.go | 2 | 80.13% |
| tapdb/addrs.go | 2 | 78.23% |
| tapdb/sqlc/transfers.sql.go | 2 | 83.33% |
| asset/mock.go | 3 | 72.77% |
| tapchannel/aux_leaf_signer.go | 3 | 43.53% |
| tapgarden/caretaker.go | 4 | 76.63% |
| universe/supplycommit/transitions.go | 4 | 84.31% |

| Totals | Coverage Status |
| --- | --- |
| Change from base Build 17435593956 | 0.04% |
| Covered Lines | 62890 |
| Relevant Lines | 110505 |

💛 - Coveralls

@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch from 56b7c94 to efeb38a on July 23, 2025 21:17
@levmi levmi moved this to 🏗 In progress in Taproot-Assets Project Board Jul 24, 2025
Contributor

@guggero guggero left a comment

Took a quick first look. Makes sense to me this far, nothing jumps out as being on the wrong track.

}

// IdleState is the state we reach when a valid unspent commitment output is
// observed. We wait for a spend to re-enter the sync state.
Contributor

I don't have much experience with the protofsm package and the state machine. But what I found helpful in Laolu's PRs was the Godoc description for each state that described what events a state can process and what other state that leads to. From that a nice diagram can then easily be created using AI tools.
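For instance, such a Godoc header might look like this (illustrative only; the exact event set for this package is still in flux):

```go
// IdleState is the state we reach when a valid unspent commitment output is
// observed. We wait for a spend to re-enter the sync state.
//
// Events handled:
//   - *SpendEvent: a watched pre-commitment or commitment output was spent;
//     transitions back into the sync state to fetch and verify the new
//     supply commitment.
//
// All other events are no-ops, and the machine remains in IdleState.
type IdleState struct{}
```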

Member

Yeah, after we get a bit further along w/ the final version, I can generate another set of similar docs/diagrams, and also share the prompts I used to generate them.

// to the idle state.
if len(events) == 0 {
return &StateTransition{
NextState: &IdleState{},
Contributor

Ah, just for me to check my understanding of the state machine and the code so far: If there are no outputs to watch, we go into the idle state, at which the machine does nothing. Once a spend event comes in (presumably from watching the chain), we then transition back into the sync state.

What I don't yet see in the code is the actual "watch on-chain" part. Shouldn't there be a call to RegisterConfNtfn or RegisterSpendNtfn and a wait somewhere?

Member

So the register spend ntfn is what's sent as ExternalEvents below. If a spend occurs, then a new event will be sent in that contains the spending transaction, etc.

}

// String returns the name of the state.
func (i *IdleState) String() string {
Contributor

Just took another look at how things are done in the supply commit state machine. And it looks like the idle state there is just the DefaultState. I think we should do the same here. Or if you want things to be more explicit in the name, then do we even need the DefaultState in this package? Where will it be used?

Member

Mmhm, np for it to just loop back to the default state. While it loops back, it can also send an "internal event" to stage another transition. The internal event might just be telling it about the new unspent pre-commitment output.


// OnChainLookup is an interface that is used to look up on-chain information
// about supply commitments.
type OnChainLookup interface {
Member

Doesn't it also need a way to fetch the asset commitment leaves/tree that existed at a given height?

The loop I have in my mind is (see the sketch after this list):

  1. Verify a new asset, realize that it has a universe commitment field set, and a pre-commitment.
  2. Make one of these state machines for it.
  3. It watches for the pre-commitment outpoint spend.
    3a. In this state, if the normal syncing detects a new issuance event (as rn we don't force input linkage between minting events), then we send an event to also watch that outpoint.
  4. After it's spent, fetch from the universe RPC server, using the new height based index we added recently.
  5. After fetching that, verify everything matches up (you get the delta, the prior tree, then the new root).
  6. If it all matches up, add the pre-commitment, go back to the watch for spend state.
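A hedged sketch of that loop as an interface extension; UnspentPrecommits matches the method referenced later in this review, while FetchCommitmentAtHeight is a hypothetical name for the height-indexed universe fetch:

```go
package supplyverifier

import (
	"context"

	"github.com/btcsuite/btcd/wire"
)

// RootCommitment is a placeholder for the fetched commitment data (delta
// leaves, prior tree, new root).
type RootCommitment struct{}

// OnChainLookup looks up on-chain information about supply commitments.
type OnChainLookup interface {
	// UnspentPrecommits returns the pre-commitment outpoints of an asset
	// group that haven't yet been swept into a supply commitment (steps
	// 1 and 3 above).
	UnspentPrecommits(ctx context.Context) ([]wire.OutPoint, error)

	// FetchCommitmentAtHeight fetches the supply commitment anchored at
	// the given block height via the universe server's height-based
	// index (steps 4 and 5 above).
	FetchCommitmentAtHeight(ctx context.Context,
		height uint32) (*RootCommitment, error)
}
```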

// If we don't have a last verified commitment, then this is the
// first time we're verifying this asset group. We'll kick
// things off by watching for the latest on-chain state.
if lastCommit == nil {
Member

Shouldn't we also check for a pre-commitment here? As when we first come up, we may not know of the verified commitment at all, but if it's a grouped asset w/ universe commitments, then it should have a pre-commitment output on chain.

// things off by watching for the latest on-chain state.
if lastCommit == nil {
return &StateTransition{
NextState: &WatchOutputsSpendState{},
Member

Should this just go back to default with a self-transition?


// We'll gather all the UTXOs we need to watch. We start with
// the unspent pre-commitments.
preCommits, err := env.OnChainLookup.UnspentPrecommits(
Member

Need to get through a bit more, but perhaps we can collapse this state into the above?

If we don't have any unspent pre-commitments, then we do nothing.

From the default state, we can then also accept a new event that's sent every block (or w/e) to check for new unspent pre-commitments. If we have some, then we go to the spend phase, etc. The callers can also send these events anytime they detect a new spend.

env *Environment) (*StateTransition, error) {

switch e := event.(type) {
case *SpendEvent:
Member

One thing that we'll want to make sure we can handle is that: if there's 5 unspent pre-commitments (we're just catching up, and they recently did a batch mint or something), then we'll get this event 5 times. So any state after this, needs to be able to handle the spend event as a noop, if it doesn't contribute any new information.

Contributor Author

Yes, I see it that way also.

I think we should also ensure that the spend consumes all outstanding pre-commitment outputs. The SpendEvent includes the fields SpentPreCommitment and PreCommitments to track this.
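A minimal sketch of that de-duplication, assuming the SpendEvent fields above and a caller-maintained set of already-processed outpoints:

```go
package supplyverifier

import "github.com/btcsuite/btcd/wire"

// SpendEvent is simplified here; the real event also carries the spending
// transaction, etc.
type SpendEvent struct {
	// SpentPreCommitment is the pre-commitment outpoint consumed by
	// this spend.
	SpentPreCommitment wire.OutPoint

	// PreCommitments is the set of all outstanding pre-commitment
	// outpoints, used to check that the spend consumes all of them.
	PreCommitments []wire.OutPoint
}

// isNoopSpend reports whether a spend event carries no new information,
// i.e. we already processed a spend of the same outpoint. Repeat events can
// then resolve as a no-op self-transition.
func isNoopSpend(seen map[wire.OutPoint]struct{}, e *SpendEvent) bool {
	if _, ok := seen[e.SpentPreCommitment]; ok {
		return true
	}
	seen[e.SpentPreCommitment] = struct{}{}

	return false
}
```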

"commitment: %w", err)
}

// TODO(ffranr): proof syncing logic here.
Member

So this is the queries to go height-by-height right?

Contributor Author

Yes. At this point, we know that either a pre-commitment output or a supply commitment output has been spent, so we need to look up any associated supply-related leaves.

// TODO(ffranr): verification logic here. This should include
// checking that there are no more unspent pre-commitments. All
// the pre-commitments should have been spent in the same supply
// commitment transaction.
Member

I'd imagine that we may go back and forth between the fetch, then verify states many times until we reach the very last unspent pre-commitment. So we go height by height to fetch the latest commitment state, verify it matches up, set up the spend for the next one, repeat.

One option would be to split things into like an "initial sync" vs "sync from tip" transition. On the other hand, I imagine that we can just unify it, as the spends will be insta if we're still catching up.

Contributor Author

> as the spends will be insta if we're still catching up.

That's what I'm hoping for. I think the spend notification should work like that.
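One way to picture the unified catch-up, assuming commitments chain by spending the prior outpoint (fetchSpending and verify are hypothetical helpers, and the real flow is driven by state transitions rather than a plain loop):

```go
package supplyverifier

import (
	"context"

	"github.com/btcsuite/btcd/wire"
)

// commitInfo is an illustrative stand-in for a fetched commitment; the real
// type also carries the delta leaves and chain proofs.
type commitInfo struct {
	// OutPoint is the new supply commitment outpoint.
	OutPoint wire.OutPoint
}

// catchUp walks the commitment chain outpoint by outpoint: fetch the
// commitment spending the current outpoint, verify it, then advance. A nil
// result means we're at tip and can fall back to spend notifications.
func catchUp(ctx context.Context, start wire.OutPoint,
	fetchSpending func(context.Context, wire.OutPoint) (*commitInfo,
		error),
	verify func(*commitInfo) error) error {

	op := start
	for {
		commit, err := fetchSpending(ctx, op)
		if err != nil {
			return err
		}
		if commit == nil {
			// Caught up to tip: wait for the next spend event.
			return nil
		}

		if err := verify(commit); err != nil {
			return err
		}

		op = commit.OutPoint
	}
}
```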

@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch 5 times, most recently from e91bd93 to 1171731 on August 6, 2025 23:26

-- The latest block height that has been successfully synced for this asset
-- group.
latest_sync_block_height INTEGER NOT NULL
Member

👍

// Reuse the internal supply update logic which handles all
// the complex sub-tree and root tree updates within the
// transaction.
_, err := applySupplyUpdatesInternal(ctx, dbTx, spec, updates)
Member

So we don't want to insert these items yet, as they haven't been validated. Instead, we want to insert them on disk as supply update logs. They can be inserted as dangling, and may need some other attributes, such as that these are staged, or their height, etc.

Then the verifier (which should now persist which height it has verified up to) can do a query for the next set of updates to verify. Once it verifies those, it can call a method like this.

Member

Comment still stands. IIUC we have a new top level InsertSupplyCommit that can be used to insert the pre-validated commitment. Right now it accepts all the leaves, but could in theory just do a query for dangling supply updates, and then insert those.

@github-project-automation github-project-automation bot moved this from 🏗 In progress to 👀 In review in Taproot-Assets Project Board Aug 7, 2025
@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch 2 times, most recently from 727429d to 4abae0a on August 7, 2025 14:04
@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch from 4abae0a to ca19ee8 on August 7, 2025 16:52
Contributor

@guggero guggero left a comment

Did another pass to load current context. Looks good!


// stateMachineCache is a thread-safe cache mapping an asset group's public key
// to its supply verifier state machine.
type stateMachineCache struct {
Contributor

Can we extract this into a separate package so we don't have to re-define it here? Looks to be identical to the one in supplycommit (but probably needs to be parametrized since it's a different state machine struct that's cached).
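A sketch of what the shared, parameterized cache could look like (the package name, method set, and serialized-key choice are all assumptions):

```go
package smcache

import "sync"

// SerializedKey is a compressed public key in serialized form, used as a
// comparable map key for an asset group.
type SerializedKey = [33]byte

// MachineCache is a thread-safe cache mapping an asset group's public key
// to a state machine of type M, reusable by both supplycommit and
// supplyverifier.
type MachineCache[M any] struct {
	mtx      sync.RWMutex
	machines map[SerializedKey]M
}

// NewMachineCache returns an empty cache.
func NewMachineCache[M any]() *MachineCache[M] {
	return &MachineCache[M]{
		machines: make(map[SerializedKey]M),
	}
}

// Get returns the state machine for the given group key, if one exists.
func (c *MachineCache[M]) Get(key SerializedKey) (M, bool) {
	c.mtx.RLock()
	defer c.mtx.RUnlock()

	m, ok := c.machines[key]
	return m, ok
}

// Put stores the state machine for the given group key.
func (c *MachineCache[M]) Put(key SerializedKey, m M) {
	c.mtx.Lock()
	defer c.mtx.Unlock()

	c.machines[key] = m
}
```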

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yes, I need to reconcile that eventually. I'm focusing on getting the end-to-end itest working first, then my plan is to revisit and clean up any remaining duplication.

@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch from ca19ee8 to f4d0771 on August 8, 2025 10:36
tapcfg/server.go Outdated
return db.WithTx(tx)
},
)
supplySyncerStore := tapdb.NewSupplySyncerStore(supplySyncerDb)
Contributor

This can just be: tapdb.NewSupplySyncerStore(uniDB) as that has the same executor signature.

@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch 4 times, most recently from 5726969 to 3f5f816 on August 12, 2025 23:20
@ffranr
Contributor Author

ffranr commented Aug 12, 2025

Current state:

  • The fetch/pull side of the syncer and the FetchSupplyCommit RPC endpoint are incomplete. They are missing fields required for straightforward verification, such as those already present in the InsertSupplyCommit RPC endpoint. For example, the request message should likely include a block height range start.

  • The FetchSupplyLeaves RPC endpoint has been reframed and is no longer served publicly by the universe server. It is now primarily for itest support and can likely be dropped later.

  • The starting point for universe verification is now in the InsertSupplyCommit RPC endpoint, via r.cfg.SupplyVerifyManager.InsertSupplyCommit.

  • The syncer API uses push/pull terminology, while the RPC level uses insert/fetch. This matches perspectives: the syncer “pushes” to the universe server, which “inserts” on its side.

  • The supply commit creator state machine now attempts to push commitments to remote canonical universe servers via the syncer. Since we currently cannot set canonical universe addresses in asset metadata during minting (issue: [feature]: allow setting canonical universe in asset metadata during minting #1728), we need a fallback universe address. This address should be added to the supply commit state machine Environment and passed into the syncer.

@guggero guggero force-pushed the wip/supplycommit/add-verifier branch from 8ebe23f to e8527e9 on August 19, 2025 07:28
ffranr and others added 28 commits September 3, 2025 15:22
Renames applyTreeUpdates to make it suitable for reuse in
supplyverifier during the verification process.
Refactor existing logic into a standalone exported function,
UpdateRootSupplyTree, to enable reuse in the `supplyverifier`
package for verification purposes.
This refactor enables reuse of the function in a new method to be
introduced in a subsequent commit. The upcoming method will support
reading both the subtrees and the (upper) root tree from the database
atomically.
Introduces a new method to retrieve all supply trees for a given
asset group in a single atomic database read.
Introduces a manager with an InsertSupplyCommit method to handle supply
commit verification and local DB insertion for universe server tapd
nodes. Subsequent commits will further develop this component.

The manager follows the same pattern as the supplycommit package's state
machine manager. Each state machine is tied to an asset group, and the
manager oversees all machines across groups.

We avoid specifying the full state machine now, as it won't be used for
universe server supply commit verification, which will be implemented
first.

This placeholder enables progress on the RPC implementation in upcoming
commits.
This endpoint allows the supply commit syncer to push new supply commits
into a canonical universe.

In this commit we also modify the REST path for UpdateSupplyCommit to
`/v1/taproot-assets/universe/supply/update/{group_key_str}` from
`/v1/taproot-assets/universe/supply/{group_key_str}`.
Remove the FetchSupplyLeaves RPC endpoint from the public universe
whitelist, as it will primarily be used by integration tests from now
on.
The FetchSupplyCommit RPC previously accepted leaf keys and returned
inclusion proofs for those leaves. This feature is now moved to the
FetchSupplyLeaves RPC. The FetchSupplyLeaves endpoint is not served
publicly by the universe server (mostly used in itests), whereas
FetchSupplyCommit is public. Limiting FetchSupplyCommit avoids
on-the-fly inclusion proof generation. FetchSupplyCommit is geared
towards supply commit syncing.
Update supply commit construction state machine to push the generated
supply commitment and diff leaves to the remote universe server.

We will pass the supply syncer into the supply commit manager in a
separate future commit once RpcSupplySync is in place.
We clean up and consolidate a bunch of code:
- Parsing the supply commitment and data was duplicated, is now a single
  function.
- The commitmentChainInfo can be merged directly into the
  commitment.CommitmentBlock (add fields BlockHeader and MerkleProof),
  gets rid of a parameter.

We also add a new spent_commitment field to the supply_commitments table
that tracks the previous commitment transaction that was spent, to
create the full chain.
To be able to traverse that chain, we also need to be able to query the
commitments either by a commitment's outpoint or by the outpoint a
commitment is spending, which allows backward lookup in a loop.
These fields in CommitmentBlock were introduced in an earlier commit.
Populating them here allows simplifying the Manager.InsertSupplyCommit
method interface in the supplyverifier package.
To find out what commitment is being spent by a next one, we add the
optional SpentCommitment field to the RootCommitment struct.
The field should only be None for the very first commitment of an asset
group.
This field will allow us to directly find the previous supply commitment
which will be useful during verification. We will make use of this data
in a subsequent commit.
Introduce RpcSupplySync as an intermediary between the supply commit
leaf fetch RPC endpoints and the supply verifier syncer.
Enhances ApplyTreeUpdates by ensuring that an empty subtree is included
for each relevant subtree type when appropriate. This makes the
function's return value more consistent and reliable.
Adds a verifier component to encapsulate all supply commit verification
logic. The verifier is used in the Manager.InsertSupplyCommit method,
which allows a universe server tapd node to verify supply commitments
before inserting them into the local universe server DB.

The actual DB insertion logic will be added in a subsequent commit.
Adds the InsertSupplyCommit method to tapdb for inserting supply
commitments into the local DB. The supply verifier now uses this method
to complete the supply commit insertion flow.

This change resolves the final TODO, enabling a universe server tapd
node to verify and store a given supply commitment.
Introduce a CopyFilter method to the Tree interface, along with
implementations for both Compact and Full trees. This method enables
selective copying of tree nodes using a caller-provided predicate
function, similar to the existing Copy method (see the usage sketch after this commit list).

Include unit tests to verify the new functionality.

This will be used to filter leaves from the supply subtree based on a
maximum block height, supporting reconstruction of subtrees anchored to
the chain via a supply commitment transaction.
Add a block height range end parameter to FetchSubTrees, enabling
filtering of supply subtree leaves by block height. This allows
reconstruction of supply subtrees as they existed at a specific supply
commitment block height.

This functionality is useful for reproducing a supply commitment at a
given block height for syncing purposes.
Add a method to retrieve supply commitments from the verifier using a
supply commitment locator. Locator supported options are: the first
supply commit, the supply commit committed to at a given outpoint, and
the supply commit that spends a given outpoint.

A follow-up commit will remove FetchCommitment from the supply commit
manager in favor of this new method.
Add `locator` field to the FetchSupplyCommit RPC request to specify
which supply commit to retrieve. Supported options include: the first
supply commit, the supply commit committed to at a given outpoint, and
the supply commit that spends a given outpoint.

Also update the response message for this endpoint. The response now
includes all leaves new to the specified supply commit, as well as the
outpoint that the fetched supply commit spends.
Favor use of supplyverifier.Manager.FetchCommitment instead.
Extend itest testSupplyCommitIgnoreAsset to fetch the supply commitment
from the universe server. The issuer's supply commit state machine
should push the commitment to the universe server, which must validate
it before serving it. The universe server should only serve the snapshot
after successful validation.

This change verifies that the commitment can be fetched from the
universe server as expected.
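Tying the CopyFilter and FetchSubTrees commits above together, a hedged usage sketch: the CopyFilter signature is assumed to mirror the existing Tree.Copy, and leafHeight is a hypothetical decoder for a leaf's block height.

```go
package supplyverifier

import (
	"context"

	"github.com/lightninglabs/taproot-assets/mssmt"
)

// copyUpToHeight copies into dst only the supply leaves anchored at or
// below maxHeight, reconstructing a subtree as it existed at a given supply
// commitment block height.
func copyUpToHeight(ctx context.Context, src, dst mssmt.Tree,
	maxHeight uint32,
	leafHeight func(mssmt.LeafNode) (uint32, error)) error {

	return src.CopyFilter(ctx, dst,
		func(key [32]byte, leaf mssmt.LeafNode) (bool, error) {
			height, err := leafHeight(leaf)
			if err != nil {
				return false, err
			}

			// Keep only leaves at or below the target height.
			return height <= maxHeight, nil
		},
	)
}
```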
@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch from 26cf469 to ded200f on September 3, 2025 20:35
Member

@Roasbeef Roasbeef left a comment

@Roasbeef reviewed 2 of 17 files at r1, 42 of 42 files at r2, 45 of 45 files at r3, all commit messages.
Reviewable status: all files reviewed, 44 unresolved discussions (waiting on @ffranr, @GeorgeTsagk, and @guggero)


universe/supplyverifier/verifier.go line 142 at r3 (raw file):

// verifyInitialCommit verifies the first (starting) supply commitment for a
// given asset group.
func (v *Verifier) verifyInitialCommit(ctx context.Context,

Additional things we should verify:

  • the signatures for the ignore updates
  • for burn+ignore, verify the proofs
  • verify the anchor output information:
    • the delegation key was used
    • the root committed to matches up

universe/supplyverifier/verifier.go line 166 at r3 (raw file):

		}

		return fmt.Errorf("found initial commitment for asset group; "+

I'm not sure I understand this check. Wouldn't it always fail after we've inserted the initial commitment?


universe/supplyverifier/verifier.go line 285 at r3 (raw file):

	// that they correspond to the spent supply commitment outpoint.
	spentRootTree, spentSubtrees, err :=
		v.cfg.SupplyTreeView.FetchSupplyTrees(

So we're expected to already have this on disk before we attempt to verify? Meaning we write to disk without verifying first?


universe/supplyverifier/verifier.go line 391 at r3 (raw file):

		return v.verifyInitialCommit(ctx, assetSpec, commitment, leaves)
	}

See my comment about validation that's missing.


mssmt/interface.go line 9 at r3 (raw file):

// whether to include the leaf in the copy operation. A true value means the
// leaf should be included, while false means it should be excluded.
type CopyFilterPredicate = func([hashSize]byte, LeafNode) (bool, error)

Any reason to reach for an alias vs an actual type here?
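For context, the trade-off in a generic illustration (not the PR's code): an alias is fully interchangeable with the underlying function type and can't carry methods, while a defined type is distinct, shows up by name in godoc, and can have methods declared on it.

```go
// Alias: identical to the underlying type; any matching func satisfies it.
type FilterAlias = func(k [32]byte, v string) (bool, error)

// Defined type: a distinct named type that can carry its own methods.
type FilterFunc func(k [32]byte, v string) (bool, error)
```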


itest/assertions.go line 2805 at r3 (raw file):

		GroupKey: groupKeyReq,
		Locator: &unirpc.FetchSupplyCommitRequest_VeryFirst{
			VeryFirst: true,

Perhaps an index to fetch instead would be more generic?


tapdb/sqlc/migrations/000045_supply_syncer_push_log.up.sql line 41 at r3 (raw file):

-- commitment of an asset group, each subsequent commitment needs to spend a
-- prior commitment to ensure continuity in the supply chain.
ALTER TABLE supply_commitments

👍


tapdb/supply_tree.go line 324 at r3 (raw file):

			}

			filteredSubTree, err := filterSubTree(

Very cool pattern + solution here w/ the tree filter!


itest/supply_commit_test.go line 508 at r3 (raw file):

	t.Log("Fetch first supply commitment from universe server")
	// Ensure that the supply commitment was pushed to the universe server

Style nit: missing a space above.

Can also be a sub test potentially.


tapdb/supply_commit_test.go line 2115 at r3 (raw file):

	require.True(t, rows.Next(), "Expected supply commitment to exist")

	var rootHashBytes []byte

Style nit: can collapse into a single var statement.


rpcserver.go line 4352 at r3 (raw file):

// mapSupplyLeaves is a generic helper that converts a slice of supply update
// events into a slice of RPC SupplyLeafEntry objects.
func mapSupplyLeaves[E any](entries []E) ([]*unirpc.SupplyLeafEntry, error) {

Couldn't this just take the supply leaves interface from the supplycommit package?

You're already doing the type assert below.

Labels
supply commit: Work on the supply commitment feature, enabling issuers to attest to total asset supply on-chain.
Projects
Status: 👀 In review
7 participants