Improve assumption handling: replace yield confirmations with comprehension iteration and judgement augmentation #50
Description
Problem Statement
Multiple stages of the work-package workflow collect assumptions and then present a yield checkpoint asking the user whether the assumptions are correct. This interaction pattern is not effective because users are asked to validate assumptions without sufficient context to make informed decisions — they must triage which assumptions are code-verifiable versus which genuinely require human judgement, a distinction the agent is better positioned to make automatically.
The reconcile-assumptions skill already exists and can autonomously resolve code-analyzable assumptions through iterative codebase analysis. However, not all assumption-generating activities use it, and the activities that do still present the remaining open assumptions as a flat confirmation prompt rather than structuring the interaction to assist the user in reasoning through the genuinely open questions.
Current state:
- Activities 02 (design-philosophy), 03 (requirements-elicitation), 04 (research), 05 (implementation-analysis), 06 (plan-prepare), and 08 (implement) all collect assumptions and present `assumptions-review` yield checkpoints
- Several activities list `reconcile-assumptions` as a supporting skill, but not all do; activities 03, 04, and 05 lack it
- The `review-assumptions` skill (skill 13) presents assumptions one at a time for binary confirm/correct responses, without trade-off context or structured guidance
- Users are asked to confirm assumptions they may not have enough context to evaluate, leading to uninformed rubber-stamp confirmations
Desired state:
- Every activity that generates assumptions uses the `reconcile-assumptions` comprehension iteration loop to autonomously resolve all code-analyzable assumptions before presenting anything to the user
- Remaining non-code-resolvable assumptions are presented through a structured "judgement augmentation" process that highlights technical trade-offs and provides supporting context
- The user interaction takes the form of an interview-style list of open questions, each with relevant context, alternatives, and trade-off analysis, enabling informed decisions rather than uninformed confirmations
Goal
Enable users to make well-informed decisions on genuinely open assumptions by automating resolution of code-verifiable assumptions and providing structured trade-off context for the remainder.
Scope
In Scope
- Ensuring all assumption-generating activities (02, 03, 04, 05, 06, 08) include `reconcile-assumptions` as a supporting skill
- Designing and implementing a "judgement augmentation" interaction pattern for the `review-assumptions` skill
- Updating the `assumptions-review` checkpoints across affected activities to use the new pattern
- Updating resource 13 (assumptions review guide) and resource 26 (assumption reconciliation) to document the new approach
Out of Scope
- Changes to the `reconcile-assumptions` skill's convergence loop itself (it already works correctly)
- Changes to activity 07 (assumptions-review), which posts to the issue tracker; this is a separate stakeholder-facing interaction
- Changes to the workflow's transition logic or activity sequencing
User Stories
US-1: Automated assumption triage
As a workflow user, I want code-verifiable assumptions to be resolved automatically so that I am not asked to confirm things the agent can verify through codebase analysis.
Acceptance Criteria:
- All assumption-generating activities include `reconcile-assumptions` as a supporting skill
- The reconciliation loop runs before any user-facing assumption checkpoint
- Only non-code-resolvable assumptions are presented to the user
US-2: Structured judgement support
As a workflow user, I want remaining open assumptions presented with technical trade-offs and supporting context so that I can make informed decisions rather than uninformed confirmations.
Acceptance Criteria:
- Each open assumption is presented with: the question being asked, relevant technical context, identified alternatives, and trade-off analysis
- The presentation follows an interview-style format (structured list of open questions, not one-at-a-time binary prompts)
- The user can see why each assumption could not be resolved through code analysis
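The acceptance criteria above effectively specify a record per open assumption plus an interview-style rendering. A minimal Python sketch of that structure follows; the `OpenAssumption` fields and the plain-text rendering are an illustrative assumption about the pattern, not the skill's actual output format.

```python
from dataclasses import dataclass

@dataclass
class OpenAssumption:
    question: str            # the question being asked
    context: str             # relevant technical background
    alternatives: list[str]  # identified options
    trade_offs: str          # trade-off analysis across the options
    why_unresolved: str      # why code analysis could not settle it

def render_interview(assumptions: list[OpenAssumption]) -> str:
    """Render all open assumptions as one numbered interview-style list,
    rather than one-at-a-time binary prompts."""
    sections = []
    for i, a in enumerate(assumptions, start=1):
        alts = "\n".join(f"  - {alt}" for alt in a.alternatives)
        sections.append(
            f"{i}. {a.question}\n"
            f"   Context: {a.context}\n"
            f"   Alternatives:\n{alts}\n"
            f"   Trade-offs: {a.trade_offs}\n"
            f"   Why open: {a.why_unresolved}"
        )
    return "\n\n".join(sections)
```

Presenting the full list at once lets the user see the whole decision space before answering any single question, which is the core difference from the current binary confirm/correct flow.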
Success Metrics
| Metric | Target |
|---|---|
| Assumption-generating activities with `reconcile-assumptions` | 100% (currently ~50%) |
| User-facing assumptions per activity | Only non-code-resolvable (currently all) |
| Context provided per open assumption | Trade-offs, alternatives, and rationale for each |
References
- `review-assumptions` skill: `work-package/skills/13-review-assumptions.toon`
- `reconcile-assumptions` skill: `work-package/skills/23-reconcile-assumptions.toon`
- Assumptions reconciliation resource: `work-package/resources/26-assumption-reconciliation.md`