hotfix/code_update_after_fix: The code is properly updated after fixing the code#31
Conversation
Pull request overview
Adds client-side normalization and a shared helper to consistently apply AI-generated/edited code updates to notebook cells in the Plainbook web UI.
Changes:
- Introduces `normalizeSource()` to coerce cell source content into a consistent string format.
- Adds `applyGeneratedCode()` to centralize updating a cell's source, clearing outputs, and clearing validation state.
- Uses these helpers across code generation, test-code generation, and code-save flows to keep client state consistent.
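The helper itself is not shown in this diff, but given that notebook cell sources (as in the Jupyter format) can be either a single string or an array of line strings, a minimal sketch of `normalizeSource()` might look like this (the implementation details here are an assumption, not the PR's actual code):

```javascript
// Hypothetical sketch of normalizeSource(): the real helper is not visible
// in this diff. Assumes a cell source may be a string, an array of line
// strings (Jupyter-style), or null/undefined.
function normalizeSource(source) {
  if (Array.isArray(source)) {
    // Join line fragments into one string; fragments already carry newlines.
    return source.join('');
  }
  if (typeof source === 'string') {
    return source;
  }
  // Fall back to an empty string for null/undefined sources.
  return '';
}
```

Centralizing this coercion means downstream code can always treat `cell.source` as a plain string.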
```diff
  } else {
-   test.cells[role].source = r.code;
+   test.cells[role].source = normalizeSource(r.code);
    test.cells[role].outputs = [];
```
When regenerating unit-test sub-cell code for roles other than `target`, the code and outputs are replaced, but any existing `metadata.validation` for that sub-cell is left intact. This can show stale validation results for code that no longer matches the cell content. Consider clearing `test.cells[role].metadata.validation` (and any related timestamp field, if present) at the same time you reset `source`/`outputs`, mirroring `applyGeneratedCode`.
```suggestion
test.cells[role].outputs = [];
if (test.cells[role].metadata) {
  delete test.cells[role].metadata.validation;
  delete test.cells[role].metadata.validation_timestamp;
  delete test.cells[role].metadata.validationTimestamp;
}
```
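Taken together, the suggestion amounts to doing for these sub-cells what the shared helper already does elsewhere. As a rough sketch, assuming `applyGeneratedCode()` takes a cell object and the new code (its actual signature is not shown in this diff, so treat the shape below as an assumption):

```javascript
// Hypothetical sketch of applyGeneratedCode(), mirroring the behavior the
// review asks for: normalize the new source, clear outputs, and drop any
// stale validation metadata so the UI cannot show results for old code.
function applyGeneratedCode(cell, code) {
  // Coerce string-or-array sources into a single string (assumed helper).
  cell.source = Array.isArray(code) ? code.join('') : (code ?? '');
  // Old outputs no longer correspond to the new source.
  cell.outputs = [];
  // Remove validation results and any timestamp variants, if present.
  if (cell.metadata) {
    delete cell.metadata.validation;
    delete cell.metadata.validation_timestamp;
    delete cell.metadata.validationTimestamp;
  }
}
```

Routing every code-update path through one helper like this is what keeps the source, outputs, and validation state from drifting apart.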