perf: Batch embeddings in RAGIndexer instead of per-file encode #156
shrutu0929 wants to merge 1 commit into Refactron-ai:main from
Conversation
Resolves #150
This update to the RAGIndexer removes a significant bottleneck in index building by eliminating per-file embedding encoding. Previously, the indexer invoked the SentenceTransformer model immediately after processing each file, which prevented the model from encoding text in bulk. The fix decouples chunk extraction from encoding: the indexer now sweeps through the repository's Python files and accumulates all generated chunks in a shared buffer instead of encoding them file by file. A new configurable parameter, batch_size (default 100), sets the size of this buffer. The expensive encoding step runs only once the buffer reaches the batch threshold, and any residual chunks are carried over across file boundaries rather than encoded early. Feeding the model large, unified batches instead of many small per-file ones yields much better GPU throughput and noticeably lower wall-clock indexing time.
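The batching strategy described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the function name `index_chunks`, the per-file chunk lists, and the injected `encode` callable are all hypothetical stand-ins (in the real indexer, `encode` would be something like `SentenceTransformer.encode` on a list of chunk texts).

```python
from typing import Callable, List

def index_chunks(
    files: List[List[str]],
    encode: Callable[[List[str]], List[List[float]]],
    batch_size: int = 100,
) -> int:
    """Accumulate chunks across files and encode them in batches.

    Returns the number of encode() calls made, to illustrate how
    batching reduces model invocations versus per-file encoding.
    """
    pending: List[str] = []  # chunks carried over across file boundaries
    calls = 0
    for chunks in files:
        pending.extend(chunks)
        # Flush only full batches; any remainder stays buffered
        # until later files (or the final flush) fill it out.
        while len(pending) >= batch_size:
            encode(pending[:batch_size])
            pending = pending[batch_size:]
            calls += 1
    if pending:  # flush the final partial batch
        encode(pending)
        calls += 1
    return calls
```

With `batch_size=3`, three files yielding chunks `["a","b"]`, `["c","d"]`, `["e"]` trigger only two `encode` calls instead of three, and each call carries a fuller batch to the model.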