
fix: Unbounded batch processing in embedding#49

Open
mrwind-up-bird wants to merge 1 commit into main from
autofix/b8403982/unbounded-batch-processing-in-

Conversation

@mrwind-up-bird
Collaborator

AutoFix: Unbounded batch processing in embedding

Category: performance
Severity: medium

Issue

The embedding service processes texts in batches of 128, but there is no upper limit on the total number of texts in a request. For very large documents, this can lead to memory exhaustion or extremely long processing times with no progress feedback.

Fix

Added a maximum limit (MAX_TOTAL_TEXTS = 10000) on the total number of texts that can be processed in a single embedding request. This prevents memory exhaustion and extremely long processing times by failing fast with a clear error message when the limit is exceeded. The limit is set high enough to handle legitimate large documents while preventing abuse or accidental resource exhaustion.
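A minimal sketch of the guard described above, assuming Python. The names `embed_texts` and `_embed_batch` are illustrative, not the repository's actual API; `MAX_TOTAL_TEXTS = 10000` and the batch size of 128 come from the description:

```python
from typing import List

MAX_TOTAL_TEXTS = 10_000  # hard cap on texts per embedding request
BATCH_SIZE = 128          # existing per-batch size

def embed_texts(texts: List[str]) -> List[List[float]]:
    # Fail fast with a clear error before any batching work begins.
    if len(texts) > MAX_TOTAL_TEXTS:
        raise ValueError(
            f"Too many texts to embed: {len(texts)} exceeds "
            f"MAX_TOTAL_TEXTS={MAX_TOTAL_TEXTS}"
        )
    embeddings: List[List[float]] = []
    for start in range(0, len(texts), BATCH_SIZE):
        batch = texts[start:start + BATCH_SIZE]
        embeddings.extend(_embed_batch(batch))
    return embeddings

def _embed_batch(batch: List[str]) -> List[List[float]]:
    # Placeholder for the real model call; returns dummy vectors here.
    return [[float(len(t))] for t in batch]
```

Checking the total before any batch is dispatched means the caller gets an immediate, actionable error instead of a partially completed (and possibly very expensive) request.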


Generated by nyxCore AutoFix
