Replace binary token-overlap scoring with BM25 (Best Matching 25) probabilistic scoring for match/term/phrase queries.

- Add `bm25_idf()` for inverse document frequency computation
- Add `compute_avg_field_length()` for field length normalization
- Add `extract_query_terms()` to collect query terms for IDF precomputation
- `score_match_query` now uses the full BM25 formula, with TF from postings and document frequency from `PositionsReader` across all segments
- `search()` precomputes the IDF map and average field length before scoring
- Update the `match_queries_find_tokens_in_text_fields` test to accept BM25 scores (non-exact, since scoring now considers term frequency and document frequency, not just token overlap)
- Fix clippy `collapsible_if` in `compute_avg_field_length`
- Add `#[allow(clippy::too_many_arguments)]` to the scoring functions
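A minimal sketch of what the `bm25_idf()` helper described above might look like, using the IDF formula the PR summary quotes; the signature and names are assumptions, not the crate's actual code:

```rust
/// Lucene-style BM25 IDF: ln((N - df + 0.5) / (df + 0.5)).
/// `n_docs` is the total document count across all segments,
/// `df` the number of documents containing the term.
fn bm25_idf(n_docs: usize, df: usize) -> f64 {
    let n = n_docs as f64;
    let d = df as f64;
    ((n - d + 0.5) / (d + 0.5)).ln()
}

fn main() {
    // Rare terms get a higher weight than common ones.
    let rare = bm25_idf(1000, 5);
    let common = bm25_idf(1000, 400);
    assert!(rare > common);
}
```

Note that this classic form (without Lucene's internal `ln(1 + …)` smoothing) goes negative once a term appears in more than half the documents, which callers may want to clamp.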
📝 Walkthrough

This change enhances search ranking with BM25-style term weighting. The search entry point now precomputes BM25 parameters (IDF per term, average field length) and supplies them to the scoring functions, which apply normalized term-frequency calculations for improved relevance ranking over the simpler fixed-score approach.

Changes: BM25 search ranking enhancement.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@rust/crates/cloudsearch-index/src/lib.rs`:
- Around lines 805-806: The BM25 normalization computes `avg_field_len` from a hardcoded `"content"` field, which misnormalizes scores for queries against other fields. In `score_match_query`, replace the literal `"content"` with the field the query actually targets (`query.field`) when calling `compute_avg_field_length`, keeping the existing `.max(1.0)` fallback, so `avg_field_len` reflects the field being scored. Add a safe fallback for the case where `query.field` can be empty.
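A sketch of the shape this fix could take. The per-field length table and the helper signature here are assumptions for illustration; the crate's real `compute_avg_field_length` may differ:

```rust
use std::collections::HashMap;

// Hypothetical per-field length table: field name -> token count per document.
fn compute_avg_field_length(field: &str, lengths: &HashMap<String, Vec<usize>>) -> f64 {
    match lengths.get(field) {
        // Average the lengths, keeping the .max(1.0) fallback the review mentions.
        Some(lens) if !lens.is_empty() => {
            let total: usize = lens.iter().sum();
            (total as f64 / lens.len() as f64).max(1.0)
        }
        // Unknown or empty field: fall back safely instead of dividing by zero.
        _ => 1.0,
    }
}

fn main() {
    let mut lengths = HashMap::new();
    lengths.insert("title".to_string(), vec![3, 5, 4]);
    lengths.insert("content".to_string(), vec![120, 80, 100]);
    // Normalize by the field the query targets, not a hardcoded "content":
    let query_field = "title";
    assert_eq!(compute_avg_field_length(query_field, &lengths), 4.0);
    assert_eq!(compute_avg_field_length("", &lengths), 1.0);
}
```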
- Around lines 794-804: The IDF precompute loop calls `extract_query_terms(query, "")`, which strips field names, so it extracts no terms for fielded Match/Phrase queries and leaves `idf_map` empty. Change the extraction to preserve the query's field context so term lookup against `self.positions_readers` works: call `extract_query_terms` with the actual field (or a variant that returns fielded terms) for the incoming query, then compute `total_df` by iterating the readers and summing `pl.docs.len()`, compute the IDF via `bm25_idf`, and insert it into `idf_map` as before. Ensure `extract_query_terms`, `idf_map`, `bm25_idf`, and `positions_readers` are used consistently so Match/Phrase terms are included.
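One way to preserve the field context is to return `(field, term)` pairs from the extraction step. The `Query` enum and tokenization below are hypothetical stand-ins for the crate's real types, sketched only to show the shape of the suggestion:

```rust
// Hypothetical query shape; the crate's real enum will differ.
enum Query {
    Match { field: String, text: String },
    Phrase { field: String, text: String },
}

/// Return (field, term) pairs so the IDF precompute can look up postings
/// in the right field, instead of bare terms with the field stripped.
fn extract_query_terms(query: &Query) -> Vec<(String, String)> {
    match query {
        Query::Match { field, text } | Query::Phrase { field, text } => text
            .split_whitespace()
            .map(|t| (field.clone(), t.to_lowercase()))
            .collect(),
    }
}

fn main() {
    let q = Query::Match { field: "title".into(), text: "BM25 Ranking".into() };
    let terms = extract_query_terms(&q);
    assert_eq!(terms, vec![
        ("title".to_string(), "bm25".to_string()),
        ("title".to_string(), "ranking".to_string()),
    ]);
}
```

The IDF loop can then dedup these pairs (e.g. via a `BTreeSet`) before counting document frequencies per field.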
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5171ac10-b9ef-484c-9ce3-0775dcfc7673
📒 Files selected for processing (1)
rust/crates/cloudsearch-index/src/lib.rs
- Add `extract_target_field()` helper to get the field name from the query
- Use the actual query field (not hardcoded `"content"`) for `avg_field_len`
- Deduplicate `query_terms` via `BTreeSet` before the IDF loop
Summary

- `bm25_idf()` for inverse document frequency, using the standard Lucene formula: `ln((N - df + 0.5) / (df + 0.5))`
- `compute_avg_field_length()` to compute the average field length for field-length normalization (`b=0.75`)
- `score_match_query` now uses the full BM25 formula, with term frequency from postings and document frequency from `PositionsReader` across all segments
- `search()` precomputes the IDF map and average field length before scoring, then threads them through the scoring chain
- `k1=1.2` (TF saturation parameter) and `b=0.75` (field-length normalization) are the standard Lucene defaults

Test plan

- `cargo test --workspace` — 333 tests pass
- `cargo clippy --workspace --all-targets -- -D warnings` — clean
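Putting the pieces together, the per-term BM25 contribution with the Lucene-default `k1` and `b` mentioned in the summary can be sketched as follows; the function name and signature are assumptions, not the crate's actual `score_match_query` internals:

```rust
const K1: f64 = 1.2; // TF saturation parameter (Lucene default)
const B: f64 = 0.75; // field-length normalization (Lucene default)

/// BM25 contribution of one term in one document field:
/// idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len / avg_len))
fn bm25_term_score(tf: f64, idf: f64, field_len: f64, avg_field_len: f64) -> f64 {
    let norm = 1.0 - B + B * (field_len / avg_field_len);
    idf * (tf * (K1 + 1.0)) / (tf + K1 * norm)
}

fn main() {
    let idf = 2.0;
    // More occurrences raise the score, with diminishing returns (saturation)...
    assert!(bm25_term_score(2.0, idf, 100.0, 100.0) > bm25_term_score(1.0, idf, 100.0, 100.0));
    // ...and longer-than-average fields are penalized.
    assert!(bm25_term_score(1.0, idf, 200.0, 100.0) < bm25_term_score(1.0, idf, 100.0, 100.0));
}
```

This is why the precomputed IDF map and average field length must be threaded into the scoring chain: both are global inputs to every per-term score.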