[stonecrop] optimize field trigger rollback memory usage #309

@Alchez

Description

The current snapshot-based rollback implementation creates a deep copy of the entire record before executing field trigger actions, which may introduce significant memory overhead for:

  • Large documents (100KB+ records)
  • High-frequency field changes
  • Rapid sequential edits

Tasks

  • Measure baseline memory usage with realistic record sizes
  • Profile memory allocation patterns during field trigger execution
  • Benchmark performance with high-frequency changes (e.g., 100 changes/sec)
  • Evaluate alternative snapshot strategies:
    • Structural sharing
    • Diff-based snapshots
    • Copy-on-write
  • Implement optimizations if measurements show significant overhead
  • Add performance tests to regression suite

Success Criteria

  • Memory usage profiled and documented
  • Performance benchmarks established
  • Optimization implemented if >10% overhead detected
  • No breaking changes to public API
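A rough harness for the profiling criteria might look like the following, assuming a Node environment; the field counts and sizes are placeholders, not measured stonecrop data.

```typescript
// Approximate the retained heap cost per deep-copy snapshot by taking
// many snapshots of a synthetic record and measuring heap growth.
function approxSnapshotBytes(fieldCount: number, iterations: number): number {
  const record: { [key: string]: string } = {}
  for (let i = 0; i < fieldCount; i++) record[`field_${i}`] = "x".repeat(100)

  const before = process.memoryUsage().heapUsed
  const retained: object[] = []
  for (let i = 0; i < iterations; i++) {
    // The same deep copy the rollback path performs on each field change
    retained.push(structuredClone(record))
  }
  const after = process.memoryUsage().heapUsed
  return (after - before) / iterations // approximate bytes per snapshot
}
```

Numbers from a harness like this (per-snapshot bytes at realistic record sizes, plus throughput at ~100 changes/sec) would establish the baseline and decide whether the >10% overhead threshold is met.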

Metadata

Assignees

No one assigned

Labels

No labels

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
