-
Notifications
You must be signed in to change notification settings - Fork 45
Performance Optimization: dot_product_scalar iterator handling #363
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Performance Optimization: dot_product_scalar iterator handling #363
Conversation
The `dot_product_scalar` function previously cloned and traversed the input iterators multiple times for validation and counting, which could be inefficient for complex iterators. This change collects the iterators into `Vec`s of references upfront, ensuring only a single pass over the input iterators and allowing efficient repeated access via slice iteration. Benchmarks show up to 10% performance improvement for large degree parameters (N=16384). While there is minor overhead for simple iterators due to allocation, this approach guarantees performance stability for arbitrary input iterators. Co-authored-by: tlepoint <1345502+tlepoint@users.noreply.github.com>
|
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a671bbc681
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
| let ct_vec: Vec<&'a Ciphertext> = ct.collect(); | ||
| let pt_vec: Vec<&'a Plaintext> = pt.collect(); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Avoid materializing full iterators for dot product
Collecting both iterators into Vecs forces full materialization of all ciphertexts/plaintexts even though the function only uses the first min(len(ct), len(pt)) items. If callers pass a very large iterator (or a lazy iterator backed by storage), this changes memory behavior from O(1) to O(n) and can trigger OOMs or severe slowdowns. The previous version only traversed for counts and then streamed values, so this is a new regression for large inputs where one side is much longer.
Useful? React with 👍 / 👎.
Previously, `dot_product_scalar` cloned and traversed iterators multiple times to determine counts and validate parameters. This caused O(N) traversal overhead even for elements that were not used (if one iterator was significantly longer than the other) and forced the iterators to support cheap cloning. This change: 1. Uses `izip!(ct, pt).collect::<Vec<_>>()` to iterate both inputs simultaneously and stop at the length of the shorter one. This avoids traversing the tail of the longer iterator and prevents infinite loops if one iterator is infinite. 2. Collects only the paired references into a `Vec`, ensuring O(min(N, M)) memory usage (storing pointers) rather than O(max(N, M)) or O(N) element materialization. 3. Performs validation and computation on the collected pairs, ensuring single-pass behavior for the underlying iterators. This significantly improves robustness for mismatched iterator lengths and avoids redundant computation while maintaining correctness. Benchmarks show mixed results with some improvements in large-scale scenarios and minor regressions in small-scale scenarios due to allocation overhead, which is an acceptable trade-off for the algorithmic fix. Co-authored-by: tlepoint <1345502+tlepoint@users.noreply.github.com>
Optimized
dot_product_scalarincrates/fhe/src/bfv/ops/dot_product.rsto avoid redundant iterator clones and multiple traversals.Key changes:
ctandptintoVec<&Ciphertext>andVec<&Plaintext>..iter().cloned()on the vector of references to ensure cleaner type handling (&Ciphertextinstead of&&Ciphertext).Verified with
cargo test -p fheandcargo bench -p fhe --bench bfv_optimized_ops. Benchmarks indicate performance improvements in scenarios with heavy computation, validating the reduced overhead of iterator management.PR created automatically by Jules for task 9384040259958191629 started by @tlepoint