Blog: Keeping V-model iterative — latency and practice, not just discipline #26

@avrabe

Description

Follow-up to "Spec-driven development is half the loop"

A reviewer raised this question:

Several critiques point out that over-formalizing specs and models can recreate waterfall: slow change cycles, "shadow architecture," and a lot of bureaucratic drag. What concrete metrics or practices do you use to ensure your rivet/spar-driven process stays iterative and doesn't grind teams down?

The draft gives a short answer ("AI collapses the authoring cost that made V-model waterfall-y") but no data. This topic deserves the data.

Scope

  • Latency budgets: rivet validate in seconds, not minutes; per-commit invocation rather than per-PR
  • Small-changeset discipline: don't revise requirements + design + code + tests in one PR; let the V descend and ascend incrementally
  • Shadow-architecture antipattern: when the spar model drifts from reality because nobody updates it
  • AI-specific: agent round-trip time on rivet gap closures (we have numbers from sigil PR #90: 9 of 12 errors closed in one cycle; measure the cycle)
  • Real metrics to track: rivet-validate latency, PR throughput, time-to-close vs. gap count
  • When it grinds down: what gap-closure velocity looks like when the floor is failing
  • How to detect "shadow architecture": model-to-code divergence metrics
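One of these metrics can be made concrete up front. As a minimal sketch (the `Cycle` record shape and `closure_velocity` helper are hypothetical illustrations, not the project's actual tooling), gap-closure velocity is just gaps closed per unit of wall-clock time across validate-fix cycles:

```python
from dataclasses import dataclass

@dataclass
class Cycle:
    """One validate-fix round trip (hypothetical record shape)."""
    gaps_before: int   # open gaps when the cycle started
    gaps_after: int    # open gaps when the cycle ended
    minutes: float     # wall-clock length of the cycle

def closure_velocity(cycles: list[Cycle]) -> float:
    """Gaps closed per hour across a run of cycles."""
    closed = sum(c.gaps_before - c.gaps_after for c in cycles)
    hours = sum(c.minutes for c in cycles) / 60
    return closed / hours if hours else 0.0

# Shape mirroring the sigil PR #90 data point: 9 of 12 errors
# closed in one cycle; the 30-minute duration is invented here.
run = [Cycle(gaps_before=12, gaps_after=3, minutes=30)]
print(closure_velocity(run))  # 9 gaps in 0.5 h -> 18.0
```

A falling velocity across consecutive runs is one signal for the "when it grinds down" bullet above.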

Target

1500–2000 words with concrete numbers from sigil PRs #86, #87, #90 and the vmodel pilot cycle.

Source

Part of the review on content/blog/2026-04-23-spec-driven-development-is-half-the-loop.md (currently draft). Deferred from that post as "big enough topic for a post on AI-velocity MBSE specifically."
