pawelangelow commented on Nov 27, 2025

What

The current bulk delete implementation keeps all deleted keys in memory. When the number grows large (e.g., around 1M keys), the backend eventually crashes.
This PR introduces batch processing to avoid that issue.
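
The code itself isn't shown here, but the batching idea is roughly the following: scan the keyspace in chunks and delete each chunk before fetching the next, so only one batch of key names lives in memory at a time. This is only a sketch assuming an ioredis-style client; the function name, batch size, and use of SCAN/UNLINK are illustrative, not the exact implementation in this PR.

```ts
import Redis from 'ioredis';

// Hypothetical batch size; COUNT is only a hint to SCAN, so batches are
// roughly this size rather than exactly.
const BATCH_SIZE = 1000;

// Delete keys matching a pattern in batches, so only one batch of key
// names is held in memory at a time instead of the full set.
async function bulkDeleteInBatches(redis: Redis, pattern: string): Promise<number> {
  let cursor = '0';
  let deleted = 0;

  do {
    // SCAN returns the next cursor plus a chunk of matching keys
    const [nextCursor, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', BATCH_SIZE);
    cursor = nextCursor;

    if (keys.length > 0) {
      // UNLINK removes the keys and frees their memory asynchronously on the server
      deleted += await redis.unlink(...keys);
    }
  } while (cursor !== '0');

  return deleted;
}
```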

However, with batching, the report only contains the keys from the last processed batch, so the user can no longer see the complete list of deleted keys. That is why I implemented #5257

Testing

Before

Fill a database with a large number of keys (how many depends on your RAM), then bulk delete them all.
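
To seed the database quickly, something along these lines works (a hypothetical ioredis script, not part of this PR; the key pattern, chunk size, and total count are arbitrary):

```ts
import Redis from 'ioredis';

// Seed the database with many keys in pipelined chunks so the "Before"
// scenario is quick to reproduce.
async function seedKeys(redis: Redis, total: number): Promise<void> {
  const CHUNK = 10000;

  for (let start = 0; start < total; start += CHUNK) {
    const pipeline = redis.pipeline();
    for (let i = start; i < Math.min(start + CHUNK, total); i++) {
      pipeline.set(`bulk:key:${i}`, 'x');
    }
    await pipeline.exec();
  }
}

async function main(): Promise<void> {
  const redis = new Redis(); // assumes Redis listening on localhost:6379
  await seedKeys(redis, 1_000_000); // ~1M keys, adjust to your RAM
  redis.disconnect();
}

main().catch(console.error);
```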

Screen.Recording.2025-11-27.at.15.33.02.mov

After

Screen.Recording.2025-11-27.at.15.28.09.mov

pawelangelow self-assigned this on Nov 27, 2025