Open
Conversation
…nt session access and simplify synchronization. Switch from manual locking to DashMap's atomic operations in `RBC` implementations and restructure session management for efficiency. Optimize `lagrange_interpolate` with parallel computation for large inputs using crossbeam. Simplify and streamline `NetworkErrorCode` handling. Enhance triple generation with parallel processing for batch execution. Include batched `RanSha` functionality support. Integrate `DashMap` and `crossbeam` as new dependencies.
…tputs to prevent mismatched results across parties due to network delays. Transition to using `HashMap` for collected outputs and ensure sorted processing. Add diagnostic logging for interpolation steps and optimize robust interpolation error handling.
…ds for testing. Update threshold for sequential computation and integrate Criterion for performance analysis.
…eration, including support for batched message processing, deterministic session ID generation, and enhanced diagnostic logging. Refactor existing RanDouSha logic to accommodate both regular and batched modes, ensuring efficient share collection and output handling.
…otocols - Introduced extensive benchmarking for core operations, including share generation, recovery, and share multiplication. - Added tests for Lagrange interpolation, robust vs. non-robust recovery, and FFT vs. Lagrange for optimization analysis. - Developed stress tests for RanDouSha, RanSha, and batched RanSha protocols to evaluate scalability and throughput under heavy workloads. - Enhanced diagnostics with detailed performance summaries, memory usage estimation, and bottleneck analysis.
- Introduced `batch_ops` module with efficient implementations of common field operations (batched multiplication, addition, subtraction, scalar multiplication), as well as polynomial operations such as coefficient-wise addition, scalar multiplication, and summation. - Optimized Vandermonde matrix computation and evaluation functions (single-point and batched) with parallel processing and Montgomery's trick for batch inversion to reduce field inversion overhead. - Updated Lagrange interpolation and robust interpolation to leverage batched operations, improving performance by parallelizing computation and minimizing redundant field operations. - Refactored double-share generation logic in RanDouSha and Batched RanDouSha to detect readiness for output phase earlier, avoid redundant locking, and better handle partially processed states. - Enhanced logging granularity by integrating `trace` logs for finer debugging.
… batched share generation and RanDouSha; adjust test parameters for large-scale workloads
Limits on the number of times a subprotocol can be executed as well as on other protocol-specific resources such as storage are set. Related to that, session IDs are checked to minimize the target surface for malicious messages. This means - protocol-specific checks such as sub ID == 0 - check that instance ID matches the one of the current instance To test session IDs, a preprocessing test with a lot of shares and triples has been added.
The network test only worked in release mode, probably because functions were blocking in threads (such as accept/recv) and these threads were cancelled, leading to cancelling while locks were still held. This is fixed by simply accepting/receiving once instead of infinitely. Additionally, some memory leaks are removed and appropriate freeing functions are added.
- implemented Pedersen commitments with basic tests. - Avss and Random share generation with avss - Rand() function added
Add automated CI pipeline with: - Build job (release mode) - Test job (all integration tests) - Lint job (rustfmt + clippy) Triggers on push to main/dev/feature branches and PRs. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace incorrect action name 'dtolnay/rust-action@stable' with the correct 'dtolnay/rust-toolchain@stable' at lines 21, 45, and 68. This fixes the CI failure on PR #69. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Applied cargo fmt and cargo clippy fixes across the codebase to ensure clean CI builds. Updated CI workflow to use targeted test command that runs only the stoffelmpc-mpc library tests. Changes: - CI: Updated test command from `cargo test` to `cargo test -p stoffelmpc-mpc --lib` - Formatting: Applied cargo fmt to 33 files across mpc/ and network/ crates - Linting: Fixed clippy warnings including unused imports, style issues - Added crate-level lint allows in mpc/src/lib.rs for acceptable style patterns Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Ensures consistent Rust version between local development and CI to prevent environment-specific lint failures. Fixes unnecessary_unwrap clippy warning in bad_fake_network.rs by using if-let pattern. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use iterator pattern instead of index-based loop for cleaner code. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Implement missing `rand` trait method in HoneyBadgerMPCNode - Update SubProtocolCounters to use Mutex<Option<u8>> type - Add async .await to get_next() counter calls - Fix unused imports in pedersen.rs test module with #[cfg(test)] - Add LimitError variant to RandBitError enum - Update get_or_create_storage to return Result for session limits - Fix Box::from_raw to use drop() for proper memory freeing - Prefix unused sessionid variable with underscore - Fix test to properly check session storage limit Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace fixed 500ms sleep with a polling loop that waits for specific conditions to be met. This fixes intermittent CI failures caused by the sleep not being long enough on slower machines. The polling loop checks every 50ms with a 10 second timeout, ensuring the test is both fast on fast machines and reliable on slow ones. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…s-ci ci: Add GitHub Actions CI workflow (STO-386)
…e batch processing, add handling for out-of-order messages, and improve resource cleanup and diagnostics.
…otocols Merge the optimization branch which includes: - DashMap for concurrent session access - Crossbeam parallel Lagrange interpolation - Batched RanSha, RanDouSha, and Triple Generation protocols - Batch field/polynomial operations - Benchmarking infrastructure - Async yielding for executor efficiency - Large-scale performance tests Also preserves dev-branch additions (Avss, CI, session checks). Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Convert all try_join_all patterns for network sends and protocol initialization to sequential for loops. This removes the parallel network concurrency from the gabe-optimizations branch while keeping all CPU-side optimizations (DashMap, crossbeam, batched protocols). Changes: - Replace try_join_all send patterns with sequential loops in share_gen, batched_share_gen, batched_ran_dou_sha, batch_recon, and double_share_generation - Replace parallel orchestration (try_join_all + HashMap collection + deterministic ordering) with sequential init-recv-add loops in mod.rs - Remove derive_task_seed function and unused imports (sha2, HashMap, try_join_all, ShamirBeaverTriple, Polynomial) - Add missing error variants (TripleGenError::SessionIdError/LimitError, HoneyBadgerError::LimitError/InstanceIdError) - Fix test files to match get_or_create_store return type (Arc<Mutex<..>> not Result/Option) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
GarryFCR
reviewed
Feb 5, 2026
| // Build a new Merkle tree from the reconstructed shards and verify | ||
| // that its root matches the original. This is the correct way to verify | ||
| // reconstruction integrity - comparing roots directly rather than | ||
| // verifying new proofs against the old root (which was the bug). |
Collaborator
There was a problem hiding this comment.
It is not a bug, the logic is sound and follows the algorithm, however your code might be faster.
GarryFCR
reviewed
Feb 5, 2026
| // For small batches, sequential is faster (avoids allocation overhead) | ||
| if a.len() <= 8 { | ||
| a.iter().zip(b.iter()).map(|(x, y)| *x * *y).collect() | ||
| } else { |
Collaborator
There was a problem hiding this comment.
Is there supposed to be a todo here?
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
gabe optimizations minus the concurrent network messaging