Problem
Multiple runs of the technical-reviewer agent have diminishing returns. Once the first pass reaches a high enough confidence level, second and third iterations add little value while consuming significant time and tokens.
Current behavior
The docs-orchestrator already has iteration logic (`SKILL.md` lines 397-415) that exits early on `HIGH` confidence and treats `MEDIUM` with zero critical/significant issues as acceptable. However:
- `docs-review-technical` (standalone skill) — The multi-agent pipeline (Agent 1 + Agent 2 + validation subagents) does not have any iteration/re-run logic itself, but callers may invoke it multiple times manually. There is no guidance on when a single pass is sufficient.
- Orchestrator iteration — The 3-iteration loop always re-runs the full review after fixes, even when the delta between iterations is marginal (e.g., MEDIUM→MEDIUM with only minor issues remaining). There is no check for whether the fix cycle actually improved the score.
Proposed changes
- Add a "no improvement" early exit to the orchestrator iteration loop: if iteration N produces the same confidence level and severity counts as iteration N-1, stop iterating (the fix cycle did not improve anything).
- Document single-pass sufficiency in `docs-review-technical`: if the standalone review returns zero errors and zero warnings, a second run is unnecessary.
- Consider a `--max-iterations` flag on the orchestrator to let users cap iterations (default: 3, but settable to 1 for quick reviews).
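Taken together, the existing exits plus the proposed no-improvement exit could be sketched as below. This is a minimal illustration, not the orchestrator's actual interface: `run_review`, `apply_fixes`, and the shape of the result dict are hypothetical stand-ins.

```python
def run_review_loop(run_review, apply_fixes, max_iterations=3):
    """Sketch of the iteration loop: the existing HIGH and
    acceptable-MEDIUM exits, plus the proposed no-improvement exit."""
    prev = None
    for _ in range(max_iterations):
        result = run_review()  # assumed: {"confidence": ..., "severity_counts": {...}}
        sev = result["severity_counts"]
        if result["confidence"] == "HIGH":
            return result  # existing early exit
        if (result["confidence"] == "MEDIUM"
                and sev.get("critical", 0) == 0
                and sev.get("significant", 0) == 0):
            return result  # existing "acceptable MEDIUM" exit
        if prev is not None and (result["confidence"] == prev["confidence"]
                                 and sev == prev["severity_counts"]):
            return result  # proposed exit: the fix cycle changed nothing
        apply_fixes(result)
        prev = result
    return prev
```

With `max_iterations=1` this degenerates to a single review pass, which is what the proposed flag would enable for quick reviews.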
Context
The orchestrator's `step-result.json` sidecar already captures `confidence` and `severity_counts` per iteration, so detecting "no improvement" is straightforward — compare the current sidecar with the previous iteration's values.
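A minimal sketch of that comparison, assuming the sidecar is a JSON file with top-level `confidence` and `severity_counts` keys (the exact schema may differ):

```python
import json

def no_improvement(prev_path, curr_path):
    """Return True if two step-result.json sidecars report the same
    confidence level and the same severity counts, i.e. the fix cycle
    between the two iterations changed nothing measurable."""
    with open(prev_path) as f:
        prev = json.load(f)
    with open(curr_path) as f:
        curr = json.load(f)
    return (prev.get("confidence") == curr.get("confidence")
            and prev.get("severity_counts") == curr.get("severity_counts"))
```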
Files involved
- `plugins/docs-tools/skills/docs-orchestrator/SKILL.md` (iteration logic, lines 397-415)
- `plugins/docs-tools/skills/docs-workflow-tech-review/SKILL.md` (step skill, writes `step-result.json`)
- `plugins/docs-tools/skills/docs-review-technical/SKILL.md` (standalone review skill)