
Conversation

@ffranr ffranr commented Sep 1, 2025

Extend the supplyverifier syncer to support pulling supply commitments from universe servers.

Add a supplyverifier state machine so that non-universe tapd nodes (regular nodes) can verify supply commitments by:

  • Watching for on-chain spends
  • Pull-syncing from universe servers

Fixes #1452
Fixes #1463

@ffranr ffranr self-assigned this Sep 1, 2025
@ffranr ffranr added the supply commit Work on the supply commitment feature, enabling issuers to attest to total asset supply on-chain. label Sep 1, 2025
@ffranr ffranr marked this pull request as draft September 1, 2025 15:24
@ffranr ffranr moved this from 🆕 New to 🏗 In progress in Taproot-Assets Project Board Sep 1, 2025
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch 2 times, most recently from 64ffd37 to cb2486f Compare September 1, 2025 16:48
@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch from 26cf469 to ded200f Compare September 3, 2025 20:35
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from cb2486f to 87ca58f Compare September 3, 2025 21:42
@ffranr ffranr force-pushed the wip/supplycommit/add-verifier branch 6 times, most recently from f3eed03 to 21c412d Compare September 8, 2025 13:20

Roasbeef commented Sep 8, 2025

Can be rebased now.


// SupplyCommitPullResult represents the result of a supply commitment pull
// operation across multiple universe servers.
type SupplyCommitPullResult struct {

Member

Seems like this could be just fn.Result?

Member

Style nit: can move this one declaration below, to sit right above the PullSupplyCommitment method.

Contributor Author

Seems like this could be just fn.Result?

My intent is to keep both the fetch results (FetchResult) and a map of errors (ErrorMap) in the same structure, since there can be one error per universe server sync target. In that sense, it’s a custom alternative to fn.Result. I considered using an Either, but here both FetchResult and ErrorMap may be populated at the same time. But maybe the syncer doesn't need to return so much?
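
For reference, a rough sketch of the shape being described here; the concrete FetchResult type is elided and the field comments are illustrative, not the PR's exact definition:

// SupplyCommitPullResult keeps the fetched data and the per-server errors
// side by side: a pull fans out to several universe servers, each of which
// can fail independently even when one of them returns a usable commitment.
type SupplyCommitPullResult struct {
	// FetchResult is the supply commitment data pulled from the first
	// server that answered successfully (nil if none did).
	FetchResult any

	// ErrorMap maps a universe server host string to the error that
	// server returned, if any.
	ErrorMap map[string]error
}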

err)
}

results[serverAddr.HostStr()] = result

Member

This will likely trigger the "concurrent write to map" race condition error.

An alternative here would be to use a buffered results channel.

Contributor Author

Well spotted, thanks!

For now I'll add a mutex (because it seems like the smallest change). But I'll look into expanding ParSliceErrCollect with the buffered results channel like you suggest. Sounds like a nice addition.
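
For illustration, the buffered-channel variant could look roughly like this inside the fan-out function (pullFromServer, keyedResult, and serverAddrs are placeholder names, not code from the PR):

// keyedResult pairs a server host with whatever per-server pull result the
// syncer produces (type elided here).
type keyedResult struct {
	host   string
	result any
}

// Buffered to len(serverAddrs) so every goroutine can send without blocking,
// even before the collector below starts draining.
resultChan := make(chan keyedResult, len(serverAddrs))

var wg sync.WaitGroup
for _, serverAddr := range serverAddrs {
	wg.Add(1)
	go func(addr universe.ServerAddr) {
		defer wg.Done()

		result, err := pullFromServer(ctx, addr)
		if err != nil {
			return
		}

		resultChan <- keyedResult{
			host:   addr.HostStr(),
			result: result,
		}
	}(serverAddr)
}

wg.Wait()
close(resultChan)

// Single-threaded map construction, so no mutex and no concurrent map write.
results := make(map[string]any, len(serverAddrs))
for kr := range resultChan {
	results[kr.host] = kr.result
}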

// Fetch the delegation key locator. We need to ensure that the
// delegation key is owned by this node, so that we can generate
// supply commitments (ignore tuples) for this asset group.
_, err = assetLookup.FetchInternalKeyLocator(

Member

If I understand your future plans correctly, then another way we'd be able to determine whether this is ours or not is via the presence of an entry in the new pre-commitment table.

Contributor Author

Yes, or through mint batch or seedling lookup. But with a potential distant recovery mode in mind, it might be better to rely solely on key access as a design principle. I haven’t thought that through in detail yet.


	case *SpendEvent:
		// TODO(ffranr): This is basically the same as SyncVerifyEvent
		// but we add a delay before syncing because the issuer may not

Member

So this is just a temp state? Or a way to move the state machine forward if we detect a spend right as it's being created?

Contributor Author

The high-level architecture I have in mind is:

  • The state machine watches on-chain outpoints for spends.
  • A spend triggers SyncVerifyState.SpendEvent.
  • SpendEvent simply applies a delay before transitioning to SyncVerifyState.SyncVerifyEvent. The delay is needed because after confirmation, the issuer peer must publish the new supply commitment to the universe server, and the server needs time to verify and store it.
  • After the delay, SyncVerifyState.SyncVerifyEvent can sync and verify immediately, with no further waiting.
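
Roughly, the delayed transition could be sketched like this (syncAndVerify and assetSpec are illustrative names; env.SpendSyncDelay is the delay shown in the diff further down):

// Handling a SpendEvent: wait out the sync delay so the issuer can publish
// and the universe server can verify/store the new commitment, then run the
// sync+verify pass immediately. Shutdown cancels the wait.
select {
case <-time.After(env.SpendSyncDelay):
	if err := syncAndVerify(ctx, assetSpec); err != nil {
		return nil, err
	}

case <-ctx.Done():
	return nil, ctx.Err()
}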

@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from 87ca58f to 950a41d Compare September 9, 2025 15:50
@ffranr ffranr changed the base branch from wip/supplycommit/add-verifier to wip/supplyverify/store-nonissuer-supply-pre-commits September 9, 2025 18:01
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from 950a41d to e5147b4 Compare September 9, 2025 18:11
@ffranr ffranr force-pushed the wip/supplyverify/store-nonissuer-supply-pre-commits branch from 8ffa1eb to 306bf35 Compare September 10, 2025 00:30
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from e5147b4 to 28ef0de Compare September 10, 2025 00:32
@ffranr ffranr force-pushed the wip/supplyverify/store-nonissuer-supply-pre-commits branch from 306bf35 to 48336f5 Compare September 10, 2025 10:11
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from 28ef0de to 867fbf9 Compare September 10, 2025 10:12
@ffranr ffranr force-pushed the wip/supplyverify/store-nonissuer-supply-pre-commits branch from 48336f5 to c9081c0 Compare September 10, 2025 17:30
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from 867fbf9 to fe562a1 Compare September 10, 2025 17:40
@ffranr ffranr force-pushed the wip/supplyverify/store-nonissuer-supply-pre-commits branch from c9081c0 to e8ea035 Compare September 10, 2025 21:02
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from f7f4c61 to 333be92 Compare September 11, 2025 14:53

coveralls commented Sep 11, 2025

Pull Request Test Coverage Report for Build 17652734676

Details

  • 710 of 1110 (63.96%) changed or added relevant lines in 22 files are covered.
  • 71 unchanged lines in 18 files lost coverage.
  • Overall coverage increased (+0.2%) to 57.142%

Changes Missing Coverage:
File | Covered Lines | Changed/Added Lines | %
universe/supplyverifier/verifier.go | 9 | 10 | 90.0%
mssmt/node.go | 8 | 10 | 80.0%
mssmt/proof.go | 21 | 23 | 91.3%
tapcfg/server.go | 3 | 6 | 50.0%
universe/archive.go | 8 | 11 | 72.73%
universe/supplyverifier/events.go | 0 | 4 | 0.0%
universe/supplycommit/env.go | 16 | 23 | 69.57%
universe/supplycommit/mock.go | 0 | 7 | 0.0%
universe/supplycommit/manager.go | 23 | 33 | 69.7%
tapdb/supply_commit.go | 37 | 51 | 72.55%

Files with Coverage Reduction:
File | New Missed Lines | %
universe/supplyverifier/manager.go | 1 | 63.28%
address/mock.go | 2 | 96.2%
asset/group_key.go | 2 | 72.15%
fn/iter.go | 2 | 62.07%
tapdb/mssmt.go | 2 | 90.45%
universe/supplycommit/env.go | 2 | 74.3%
universe/supplycommit/transitions.go | 3 | 84.64%
authmailbox/receive_subscription.go | 4 | 77.89%
rpcserver.go | 4 | 61.29%
tapdb/multiverse.go | 4 | 79.52%
Totals:
Change from base Build 17628726079: 0.2%
Covered Lines: 64058
Relevant Lines: 112103

💛 - Coveralls

@jtobin jtobin left a comment

LGTM -- I think subsequent refinement can be made in other PRs. 👍

Updates the supply commit tables to store group keys in canonical
32-byte x-only form as defined in BIP340 (schnorr.SerializePubKey)
instead of the previous 33-byte tweaked format.

Earlier commits changed group_key columns across several SQL tables
to store 32-byte BIP340 public keys instead of 33-byte compressed
(tweaked) keys. This commit replaces btcec.ParsePubKey with
schnorr.ParsePubKey and pubKey.SerializeCompressed() with
schnorr.SerializePubKey when reading from or writing to those columns.
Aligns DB encoding with BIP340 and avoids size/format mismatches.

Use the new IsEquivalentPubKeys helper to compare group pub keys
during validation. This ensures we treat keys as equivalent if their
BIP340 Schnorr serialization matches, avoiding issues with multiple
encodings of the same point.

This function is helpful when unmarshalling gRPC compressed proof bytes.
This enables extending the supplyverifier syncer to support pulling
supply commitments in a follow-up commit.

Introduce the PullSupplyCommitment method, which calls the
FetchSupplyCommit RPC endpoint. Support for calling this endpoint from
the syncer was added in the previous commit.

Consolidate all state machine startup logic into a new method
called startAssetSM to improve code clarity. Added IsRunning
checks to ensure the state machine is active before returning
it from the cache.
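
As a rough illustration of that cache pattern (the names getOrStartSM, smCache, and the StateMachine shape are assumptions, not the PR's code):

// getOrStartSM returns a running state machine for the given asset group,
// starting (or restarting) one if the cached entry is missing or no longer
// running.
func getOrStartSM(ctx context.Context, key string,
	smCache map[string]*StateMachine,
	startAssetSM func(context.Context, string) (*StateMachine, error),
) (*StateMachine, error) {

	// Only hand back a cached machine if it's still active; a stopped
	// machine is replaced rather than returned.
	if sm, ok := smCache[key]; ok && sm.IsRunning() {
		return sm, nil
	}

	sm, err := startAssetSM(ctx, key)
	if err != nil {
		return nil, err
	}
	smCache[key] = sm

	return sm, nil
}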

Add a method that returns all asset groups with supply commitment
enabled.

This will be used to set up supply verifier state machines for each
eligible asset group at `tapd` startup. This enables on-chain spend
watching from the start and ensures proper setup after a reboot.

Add a new SQL query to fetch the latest stored supply commitment
based on the highest block height.

Query the local database for asset groups with supply commitment enabled
where the node cannot publish commitments (no delegation key). These are
commitments published by external issuers. Commitments from the current
node are excluded, as they do not need to be tracked by the verifier.

This step is skipped when running in universe server mode.

Extract the CheckSupplyCommitSupport function for use in the
supplyverifier package. Rename fetchLatestAssetMetadata to
FetchLatestAssetMetadata to allow external usage.

Verify that the asset group is supported before starting a
supplyverifier state machine for it.

Remove the database read/write for state machine state, as recovery on
reboot doesn't require stored state. The state can be fully regenerated
on startup, eliminating the need for persistence and simplifying
recovery/restart logic.

Begin monitoring universe syncer issuance events and start
supplyverifier state machines for asset groups that support supply
commitments and do not already have a running state machine. This allows
a peer node to initiate supply verification and on-chain UTXO watching
after syncing a new asset group issuance.

If an asset ID is supplied, use it when fetching asset metadata in
FetchLatestAssetMetadata.

Lower the log level of transaction and commit-related output from info
to trace/debug to reduce noise and improve log file readability. This
level of detail is no longer needed at the info level.

Significant rewrite of state machine events, states, and transitions.
Replaces the placeholder state machine.

Extend the testSupplyCommitIgnoreAsset integration test to verify that
the verifying peer node correctly processes the supply commitment and
that the commitment can be retrieved from this peer.
@ffranr ffranr force-pushed the wip/supplyverify/pull-sync-verify branch from 333be92 to aa7dff3 Compare September 11, 2025 17:43
-- The asset group key for this pre-commitment.
-- Stored in canonical 32-byte x-only form as defined in BIP340
-- (schnorr.SerializePubKey).
group_key BLOB CHECK(length(group_key) = 32),

Member

Love how we can view the final generated schema for diffs (in addition to the migration ofc), makes larger migrations like this easier to review.

Alternatively, we could just change the existing versions in place... as this was just in master.

// BIP340 serialization provides a unique, canonical byte representation.
//
// TODO(ffranr): This should be a method on btcec.PublicKey.
func IsEquivalentPubKeys(a, b *btcec.PublicKey) bool {

Member

👍
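
For context, a minimal sketch of what such a helper can boil down to (the actual body in the PR may differ slightly):

// IsEquivalentPubKeys reports whether two public keys refer to the same
// BIP340 x-only key, by comparing their canonical 32-byte Schnorr
// serializations rather than any particular in-memory encoding.
func IsEquivalentPubKeys(a, b *btcec.PublicKey) bool {
	if a == nil || b == nil {
		return a == b
	}

	return bytes.Equal(
		schnorr.SerializePubKey(a), schnorr.SerializePubKey(b),
	)
}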

@@ -41,6 +42,38 @@ func NewProof(nodes []Node) *Proof {
}
}

// NewProofFromCompressedBytes initializes a new merkle proof from its
// compressed byte representation.
func NewProofFromCompressedBytes(compressedProofBytes []byte) (Proof, error) {

Member

Are you aware that we already have this?

// Decode decodes the compressed proof encoded within Reader.
func (p *CompressedProof) Decode(r io.Reader) error {
	var numNodes uint16
	if err := binary.Read(r, byteOrder, &numNodes); err != nil {
		return err
	}
	nodes := make([]Node, 0, numNodes)
	for i := uint16(0); i < numNodes; i++ {
		var keyBytes [sha256.Size]byte
		if _, err := r.Read(keyBytes[:]); err != nil {
			return err
		}
		var sum uint64
		if err := binary.Read(r, byteOrder, &sum); err != nil {
			return err
		}
		nodes = append(nodes, NewComputedNode(NodeHash(keyBytes), sum))
	}

	var bitsBytes [MaxTreeLevels / 8]byte
	if _, err := r.Read(bitsBytes[:]); err != nil {
		return err
	}
	bits := UnpackBits(bitsBytes[:])

	*p = CompressedProof{
		Bits:  bits,
		Nodes: nodes,
	}

	return nil
}
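
Presumably the new constructor can stay a thin wrapper over this Decode plus the existing decompression step; a rough sketch, assuming CompressedProof.Decompress has this shape (the PR's actual code may differ):

func NewProofFromCompressedBytes(compressedProofBytes []byte) (Proof, error) {
	var compressedProof CompressedProof
	err := compressedProof.Decode(bytes.NewReader(compressedProofBytes))
	if err != nil {
		return Proof{}, fmt.Errorf("unable to decode compressed "+
			"proof: %w", err)
	}

	proof, err := compressedProof.Decompress()
	if err != nil {
		return Proof{}, fmt.Errorf("unable to decompress proof: %w",
			err)
	}

	return *proof, nil
}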


// Build a map from server addresses to their corresponding errors.
errorMap := make(map[string]error)
for idx, pullErr := range pullErrs {

Member

Ah ok, yeah it's just an integer index, it's a map key. I think it's fine as is.

}

// Store the verified commitment.
err = env.SupplyCommitView.InsertSupplyCommit(

Member

A ref to come back to: this needs to spend the spent pre-commits in the DB.

	switch {
	case e.WatchStartTimestamp.IsZero():
		// No watch start timestamp: wait the full sync delay.
		time.Sleep(env.SpendSyncDelay)

Member

Why do we need this sleep at all? Is it just an itest artefact?

Member

Alternatively, we can implement a delay/retry loop when we go back to the sync verify state if/when we fail to fetch the supply commit. We also have this handy function:

// RetryFuncN executes the provided function with exponential backoff retry
// logic. This is particularly useful for RPC calls in load-balanced
// environments where nodes may temporarily return inconsistent results. The
// function respects context cancellation and returns immediately if the
// context is cancelled.
func RetryFuncN[T any](ctx context.Context,
	config RetryConfig, fn func() (T, error)) (T, error) {

	var (
		result T
		err    error
	)

	backoff := config.InitialBackoff

	// We'll retry the function up to MaxRetries times, backing off each
	// time until it succeeds.
	for attempt := 0; attempt <= config.MaxRetries; attempt++ {
		result, err = fn()
		if err == nil {
			return result, nil
		}

		if attempt == config.MaxRetries {
			return result, err
		}

		// Cap the backoff at the configured maximum to prevent
		// excessive delays.
		if backoff > config.MaxBackoff {
			backoff = config.MaxBackoff
		}

		// Wait for the backoff duration or until the context is
		// cancelled, whichever comes first.
		select {
		case <-ctx.Done():
			return result, ctx.Err()

		case <-time.After(backoff):
			// Apply the multiplier to implement exponential
			// backoff.
			backoff = time.Duration(
				float64(backoff) * config.BackoffMultiplier,
			)
		}
	}

	return result, err
}
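
Hypothetical usage in the sync-verify path (the RetryConfig values, fetchSupplyCommit, assetSpec, and the result type are illustrative, not the PR's code):

// Instead of a fixed sleep, keep re-asking the universe server with
// exponential backoff until the new commitment shows up or we give up.
commit, err := RetryFuncN(
	ctx, RetryConfig{
		MaxRetries:        5,
		InitialBackoff:    time.Second,
		MaxBackoff:        30 * time.Second,
		BackoffMultiplier: 2,
	},
	func() (*SupplyCommitment, error) {
		return fetchSupplyCommit(ctx, assetSpec)
	},
)
if err != nil {
	return nil, fmt.Errorf("supply commitment never appeared: %w", err)
}

// commit can now be verified and stored as usual.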

	// nolint: lll
	return &SpendEvent{
		SpendDetail:        spend,
		SpentPreCommitment: &preCommit,

Member

Looks like we don't ever read or act on this field.

}
}

events = append(events, &protofsm.RegisterSpend[Event]{

Member

It doesn't look like we're handling the case of many of the pre commitments being spent. Or rather it can cause us to run the verification loop many times, potentially inserting the same commitment several times. Doesn't look dire, but I think we can tighten up the logic here.
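
One loose way to tighten that up (names and the keying choice are illustrative, not the PR's fields): dedupe on the spending transaction before kicking off another verification pass. Making InsertSupplyCommit idempotent on the commitment outpoint would achieve the same end from the DB side.

// spendDeduper collapses multiple pre-commitment spend notifications that
// were all produced by the same spending transaction (i.e. the same new
// supply commitment anchor) into a single verification pass.
type spendDeduper struct {
	mu      sync.Mutex
	handled map[chainhash.Hash]struct{}
}

// shouldHandle reports whether this spending txid still needs a verification
// pass, marking it as handled if so.
func (d *spendDeduper) shouldHandle(spendTxid chainhash.Hash) bool {
	d.mu.Lock()
	defer d.mu.Unlock()

	if d.handled == nil {
		d.handled = make(map[chainhash.Hash]struct{})
	}
	if _, ok := d.handled[spendTxid]; ok {
		return false
	}
	d.handled[spendTxid] = struct{}{}

	return true
}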

t.Log("Attempting to fetch supply commit from secondary node")

var peerFetchResp *unirpc.FetchSupplyCommitResponse
require.Eventually(t.t, func() bool {

Member

👍

Member

We should wrap this in a helper assertion (optional VerifyFirst) and apply it to the other test as well, since that creates several supply commitments.
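
A rough shape for that helper (the VerifyFirst knob, fetchFn, and the timeout constant are placeholders; the real itest would thread the harness types through instead):

// assertFetchSupplyCommit polls a node until it serves the expected supply
// commitment and returns the response for further checks.
func assertFetchSupplyCommit(t *testing.T,
	fetchFn func() (*unirpc.FetchSupplyCommitResponse, error),
	verifyFirst bool) *unirpc.FetchSupplyCommitResponse {

	t.Helper()

	var resp *unirpc.FetchSupplyCommitResponse
	require.Eventually(t, func() bool {
		r, err := fetchFn()
		if err != nil {
			return false
		}
		resp = r

		return true
	}, defaultWaitTimeout, time.Second)

	if verifyFirst {
		// Structural checks on the fetched commitment (chain proof,
		// supply leaves, etc.) would go here before returning.
		require.NotNil(t, resp)
	}

	return resp
}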

@Roasbeef Roasbeef merged commit 35fa961 into main Sep 12, 2025
19 checks passed
@github-project-automation github-project-automation bot moved this from 👀 In review to ✅ Done in Taproot-Assets Project Board Sep 12, 2025
Successfully merging this pull request may close these issues.

  • [feature]: implement on-chain universe asset commitment verifier
  • [feature]: serve+sync canonical Universe commitments over RPC