Just amazing.

It also speeds up recovery, because you send only O(log N) requests.
XX.md (Outdated)
| The recovery procedure defined in [NUT-13][13] reveals the wallet's entire transaction history to the mint. During recovery, the wallet sends every `BlindedMessage` it has ever generated — including nonces for tokens that were never issued — to the mint in sequential batches. This has two consequences: |
|
| 1. **Privacy**: The mint can correlate all of the user's past ecash activity retroactively, defeating the unlinkability guarantees of ecash. |
| 2. **Efficiency**: Recovery requires O(T/b) network requests, where T is the total number of issued notes and b is the batch size. |
I would cut this and instead put it in the PR description. This is more of a motivation.
Good point, trimmed it down to a two-line summary. The detailed motivation is already in the PR description.
Not specific to this PR, but we should also make sure we include context like this not just in the PR description but in the git log, so we don't lose it the day GitHub inevitably doesn't work.
| 2. **On send (swap-to-send)**: A send operation involves a [NUT-03][03] swap that creates new outputs (send proofs + change proofs), which increases T. If this increase causes any remaining unspent proofs (those not involved in the swap) to have index `i ≤ T - d`, the wallet **MUST** consolidate them. Note: a pure melt (paying a Lightning invoice without swap) does not increase T and therefore cannot violate the invariant. |
|
| 3. **Coin selection**: Wallets **MAY** use arbitrary coin selection. After any operation that increases T, the wallet **MUST** check whether any unspent proof now has index `i ≤ T - d` and consolidate if so. This preserves free coin selection while enforcing the invariant. |
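The invariant check in rule 3 can be sketched as a small helper. This is an illustrative sketch only, not spec text: the representation of proofs as bare nonce indices and the function name are assumptions for the example.

```python
def proofs_to_consolidate(unspent_indices, total_issued, depth):
    """Return the indices of unspent proofs that violate the depth
    invariant (i <= T - d) and must be swapped into fresh outputs.

    unspent_indices: nonce indices of the wallet's unspent proofs
    total_issued:    T, the total number of nonces issued so far
    depth:           d, the size of the recovery window
    """
    threshold = total_issued - depth
    return [i for i in unspent_indices if i <= threshold]

# Example: after an operation raises T to 1000 with d = 150, any unspent
# proof at index <= 850 falls outside the window and must be consolidated.
stale = proofs_to_consolidate([10, 900, 845, 999], 1000, 150)
# stale -> [10, 845]
```

The wallet would run this check after every T-increasing operation (mint or swap) and, if the list is non-empty, swap those proofs so their replacements land at fresh, high indices.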
There is also the case where a wallet already has d unspent proofs, and adding any more would violate the constraint no matter whether you reissue the old ones. In that case, the operation should fail, and compaction should be done to compress all of the existing unspent proofs into fewer ones.
| ### Breakeven points |
|
| This NUT reveals **more** nonces than [NUT-13][13] for small wallets: |
|
| - **Privacy breakeven**: T ≈ 132. For wallets with T < 132, [NUT-13][13] reveals fewer nonces. |
| - **Efficiency breakeven**: T ≈ 825 (with batched gap scan). For wallets with T < 825, [NUT-13][13] makes fewer requests. |
|
| For wallets below these thresholds, the overhead of binary search (32 sequential round-trips) and the fixed recovery window (`d` nonces) exceeds the cost of a simple linear scan. Wallets **MAY** use a heuristic: if a previous recovery or local state suggests T < 200, fall back to the [NUT-13][13] linear scan. |
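The heuristic above might look like the following sketch. As the quoted text itself notes, T is only known from local state or a prior recovery, so the function takes an estimate that may be absent; the function name and the `None` handling are illustrative assumptions, and the 200 threshold is the value suggested in the text.

```python
def choose_recovery_strategy(estimated_total, threshold=200):
    """Pick a recovery strategy from a locally stored estimate of T.

    Returns 'linear' (NUT-13 sequential scan) when the wallet is
    believed to be small, otherwise 'binary' (this NUT). With no
    estimate at all, default to binary search, whose cost is bounded.
    """
    if estimated_total is not None and estimated_total < threshold:
        return "linear"
    return "binary"
```

For example, a wallet restoring from a seed with no local state would call `choose_recovery_strategy(None)` and get `"binary"`, while one whose last known T was 120 would fall back to the linear scan.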
Really good, although one doesn't know T until it is found.
| ### Privacy properties |
|
| The privacy gain for wallets above the breakeven comes from two properties: |
|
| 1. **Bounded leakage**: The number of revealed nonces is constant (182) regardless of wallet history size. A wallet with T = 1,000 reveals the same number of nonces as one with T = 100,000. |
| 2. **Non-sequential probes**: The 32 binary search probes are spread across the entire nonce space and are non-contiguous, making transaction history correlation significantly harder for the mint compared to the sequential batches of [NUT-13][13]. |
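As a back-of-envelope check on the constant figure, the 182 revealed nonces appear to decompose as the 32 binary-search probes plus a recovery window of d = 150 nonces. Note that d = 150 is an inference from the numbers quoted here, not something the text states, so treat this sketch as an assumption.

```python
def revealed_nonces(probes=32, window=150):
    """Total BlindedMessages revealed during one recovery:
    one per binary-search probe plus one per nonce in the
    fixed recovery window. Assumes d = 150 (inferred, not
    stated in the spec text above)."""
    return probes + window

# Independent of T: the count is the same for T = 1,000 and T = 100,000.
```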
I think this information is not necessary to implementors and can be documented elsewhere.
Co-authored-by: a1denvalu3 <43107113+a1denvalu3@users.noreply.github.com>
Motivated by #301. Replaces the NUT-13 linear scan with binary search plus a depth invariant that bounds unspent proofs to the last `d` nonce indices. Reduces the `BlindedMessage`s revealed to the mint from O(T) to ~182 (constant), and requests from O(T/b) to ~35. No mint changes required.
Builds on the binary search and `d`-invariant ideas from #301. Credit to @a1denvalu3 for the original proposal and algorithm design, @robwoodgate for the halfway-house approach and the concerns around coin-selection that shaped the final invariant, @gandlafbtc for the local-search alternative that framed the trade-offs, and @davidcaseria for raising the practicality constraints that led to the relaxed invariant.
Open for discussion: this is a draft NUT, not final. Feedback welcome, especially on: the Depth Invariant definition, the gap-tolerance approach, and the default value of `d`.