feat: user batch support #1846

Open
Mirko-von-Leipzig wants to merge 5 commits into next from mirko/mempool-user-batches

Conversation

Collaborator

@Mirko-von-Leipzig commented Mar 26, 2026

This PR is the third and final part of the mempool refactoring PR stack. Part 1 (#1820) performs the broad mempool refactoring to simplify this PR. Builds on part 2 (#1832).

Batch submissions must currently include their transaction inputs, since we require them for the validator to verify the batch before inclusion in a block. This PR exploits that by treating the batch as a set of normal transactions at the mempool level. This simplifies the mempool implementation, which is built around a DAG of transactions, so inserting a batch directly would be more complex. This will need to change once we stop requiring transaction inputs in the validator, but that change won't be too bad.

The way this is implemented here is that the transaction DAG tracks user batches and ensures that, when a batch is selected, transactions from user batches are not mixed with conventional transactions. That is, select_batch outputs either a user batch or a conventional batch.

Effectively, the transaction DAG internally ensures that the user batch's transactions remain coherent even though the batch has been deconstructed into individual transactions. The benefit is that this doesn't require any major structural changes to the mempool. The rest of the mempool then treats the user batch as normal.
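The selection behaviour described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the names (`TxDag`, `SelectedBatch`, the `user_batches` map) are assumptions, and the real DAG tracks dependencies between transactions rather than a flat list.

```rust
use std::collections::HashMap;

type TxId = u64;
type BatchId = u64;

// The two kinds of batch that selection can produce; they are never mixed.
enum SelectedBatch {
    /// All transactions of a user-submitted batch, kept together.
    User(Vec<TxId>),
    /// Conventional transactions picked up to some budget.
    Conventional(Vec<TxId>),
}

struct TxDag {
    /// Transactions that arrived bundled in a user batch.
    user_batches: HashMap<BatchId, Vec<TxId>>,
    /// Individually submitted transactions, in arrival order.
    conventional: Vec<TxId>,
}

impl TxDag {
    fn select_batch(&mut self, budget: usize) -> Option<SelectedBatch> {
        // User batches take priority and are selected wholesale, so their
        // transactions stay coherent; the budget does not apply to them.
        if let Some(&id) = self.user_batches.keys().next() {
            let txs = self.user_batches.remove(&id).unwrap();
            return Some(SelectedBatch::User(txs));
        }
        // Otherwise fill a conventional batch up to the budget.
        if self.conventional.is_empty() {
            return None;
        }
        let take = budget.min(self.conventional.len());
        Some(SelectedBatch::Conventional(self.conventional.drain(..take).collect()))
    }
}
```

The point of the sketch is the invariant: each call yields either a whole user batch or a budget-limited conventional batch, never a mixture.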

Closes #1112 and closes #1859

@Mirko-von-Leipzig Mirko-von-Leipzig force-pushed the mirko/mempool-user-batches branch 2 times, most recently from 51e74e4 to 6dd5f53 Compare March 26, 2026 16:31
// Encoded using [winter_utils::Serializable] implementation for
// [miden_protocol::transaction::proven_tx::ProvenTransaction].
bytes encoded = 1;
message TransactionBatch {
Collaborator Author

I think the suggestion here was to re-add the proven_batch property and have the others be optional, so we can drop them at some point.

Contributor

This is probably fine for now, but in the next release, we'll need to change this to something like ProvenTransactionBatch which would contain the ProvenBatch struct + optional data that would allow us to re-execute all transactions in the batch and validate that the proven batch is correct (similar to how we do it for proven transactions).

Collaborator Author

I've added it in now. Naming might still be off, e.g. TransactionBatch, but I think maybe it's more useful to end users than Batch. Maybe not, though.

@Mirko-von-Leipzig Mirko-von-Leipzig marked this pull request as ready for review March 26, 2026 16:34
Collaborator Author

@PhilippGackstatter could you take a look at the process here to ensure I'm checking the correct things?

The state itself is checked in the mempool, so here we really just want to ensure that the batch and its transactions are valid and the reference block is correct iiuc.

Contributor

Looks good to me. I left another comment re batch expiration, but your call where/if to do that.

&mut self,
txs: &[Arc<AuthenticatedTransaction>],
) -> Result<BlockNumber, MempoolSubmissionError> {
assert!(!txs.is_empty(), "Cannot have a batch with no transactions");
Collaborator

Just checking: do we want to crash here instead of returning an error?

Collaborator Author

I assumed that one cannot build a ProvenBatch without any transactions, so this would indicate an internal bug somewhere. But maybe that's a poor assumption.
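If the assumption turns out to be wrong, the alternative being discussed would look roughly like this. `MempoolSubmissionError` exists in the PR, but the `EmptyBatch` variant and this helper are illustrative only, not the actual code:

```rust
// Hedged sketch: reject an empty batch with an error instead of panicking.
#[derive(Debug, PartialEq)]
enum MempoolSubmissionError {
    EmptyBatch,
}

fn validate_batch_not_empty(txs: &[u64]) -> Result<(), MempoolSubmissionError> {
    // Surface the empty case to the caller rather than asserting, in case a
    // batch with no transactions can ever reach this point.
    if txs.is_empty() {
        return Err(MempoolSubmissionError::EmptyBatch);
    }
    Ok(())
}
```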

}

pub fn select_batch(&mut self, budget: BatchBudget) -> Option<SelectedBatch> {
self.select_user_batch().or_else(|| self.select_conventional_batch(budget))
Collaborator

Might want some doc comments to make it clear that budget is intended to only be relevant for conventional batches.
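A doc comment along the suggested lines might read something like this (the wording is illustrative, not part of the PR):

```rust
/// Selects the next batch of transactions.
///
/// A pending user batch, if any, is returned whole and takes priority.
/// `budget` is only consulted when building a conventional batch; it is
/// ignored for user batches.
pub fn select_batch(&mut self, budget: BatchBudget) -> Option<SelectedBatch> {
    self.select_user_batch().or_else(|| self.select_conventional_batch(budget))
}
```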

Collaborator

Also are we OK with user batches always taking priority over conventional here?

Collaborator

Also wondering if we need to prevent user batches of size 1 (or some other limit). Unsure if that's relevant to this PR; just a general thought.

Collaborator Author

Also are we OK with user batches always taking priority over conventional here?

I'm unsure, but at the moment it doesn't matter much. If it's a concern we can make it random -- I was thinking maybe that's best.

Also wondering if we need to prevent user batches of size 1 (or some other limit). Unsure if that is relevant to this PR just a general thought.

Good question. I'm unsure 😬 I wonder if that makes some user loop more difficult, i.e. they always submit user batches, but sometimes they don't have many transactions to bundle.

Probably we would want some limit even in the future? cc @bobbinth

Collaborator

Also are we OK with user batches always taking priority over conventional here?

Would having fees solve this?

Collaborator Author

It would give us a strategy to use, so yes, fees solve this imo. Though it may be complex to implement.

Collaborator

@igamigo left a comment

LGTM! I left some mostly minor, non-blocking comments. Not sure if you tested already, but just in case, I'm going to run the client integration tests and report back.

Base automatically changed from mirko/mempool-tx-reverting to next March 29, 2026 09:02
Comment on lines +474 to +486
// Verify batch transaction proofs.
//
// Need to do this because ProvenBatch has no real kernel yet, so we can only
// really check that the calculated proof matches the one given in the request.
let expected_proof = LocalBatchProver::new(MIN_PROOF_SECURITY_LEVEL)
.prove(proposed_batch.clone())
.map_err(|err| {
Status::invalid_argument(err.as_report_context("proposed block proof failed"))
})?;

if expected_proof != proven_batch {
return Err(Status::invalid_argument("batch proof did not match proposed batch"));
}
Collaborator Author

Is this the idea? I'm unsure how else to align the proof with the batch, unless I also compare headers and then just assume?

Can a proof differ based on some other variables, e.g. time or rng? Or is this safe to do?

cc @PhilippGackstatter

Contributor

That seems fine and sufficient. The "proof" (which is not a cryptographic one now anyway) should be deterministic based on the input (the proposed batch), since it's really just destructuring the proposed batch, verifying transaction proofs and constructing the proven batch.

@Mirko-von-Leipzig Mirko-von-Leipzig force-pushed the mirko/mempool-user-batches branch from 1e09f16 to e528296 Compare March 30, 2026 13:21
Comment on lines +507 to +510
"batch reference commitment {} at block {} does not match canonical chain's commitment of {}",
expected_proof.reference_block_num(),
expected_proof.reference_block_commitment(),
reference_header.commitment()
Contributor

Suggested change
     "batch reference commitment {} at block {} does not match canonical chain's commitment of {}",
-    expected_proof.reference_block_num(),
-    expected_proof.reference_block_commitment(),
+    expected_proof.reference_block_commitment(),
+    expected_proof.reference_block_num(),
     reference_header.commitment()

Nit: I think these need to be swapped

Comment on lines +512 to +513
}

Contributor

Is the batch expiration deliberately not checked? This would also be a pretty cheap check, assuming the latest block is easily retrievable here. E.g. if the batch expiration is already older than the latest block, we should be able to discard it directly.
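The check being suggested is cheap, assuming the chain tip is at hand. A minimal sketch, where the function name and the plain block-number parameters are assumptions rather than the node's actual API:

```rust
// Discard a batch whose expiration is already behind the chain tip.
fn reject_if_expired(expiration_block: u32, chain_tip: u32) -> Result<(), String> {
    // If the batch's expiration is older than the latest block, it can never
    // be included in a future block, so it can be dropped immediately.
    if expiration_block < chain_tip {
        return Err(format!(
            "batch expired at block {expiration_block}, chain tip is {chain_tip}"
        ));
    }
    Ok(())
}
```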




Development

Successfully merging this pull request may close these issues.

Update mempool doc comments
Support submitting ProvenBatch

5 participants