
docs(0.2): product vision + README/quickstart/feature-status/CHANGELOG anchored to pitch (Track 1)#137

Open
pmclSF wants to merge 2 commits into main from feat/0.2-product-vision

Conversation

@pmclSF (Owner)

@pmclSF pmclSF commented May 2, 2026

Summary

Bundles the doc-only Track 1 work from the 0.2.0 parity-gated release plan into one PR: every user-visible doc now leads with the unified pitch verbatim, and capabilities are tiered by pillar.

Bigger PR per request — Tracks 1.1, 1.2, 1.3, 1.4, 1.5, 1.8 in one merge. Tracks 1.6 (per-pillar reproducible examples) and 1.7 (per-detector "known false positives" sections) are larger and follow separately.

What's in the PR

Track 1.1 — `docs/product/vision.md` (NEW, ~270 lines)
The durable north-star doc. Headline pitch verbatim, three pillars with internal job + external framing + anchored capabilities, capability → tier map, anti-goals (no safe-skip; doesn't run tests; doesn't judge model truthfulness), trajectory through 0.4.

Track 1.2 + 1.3 — README rewrite

  • Headline replaced with the pitch verbatim
  • Primary workflow (`terrain analyze && terrain report pr`) shown above the install snippet
  • "What 0.2 Is and Isn't" rewritten as Pillar × Tier (Tier 1/2/3) instead of flat stable/experimental
  • Anti-goals section called out
  • "Map your test terrain" demoted to secondary tagline
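
The primary workflow placed above the install snippet reads as a two-command session. A minimal sketch (only the two commands come from this PR; the note on chaining is standard shell semantics):

```shell
# Primary workflow: map the repo, then gate the PR.
# `&&` runs the report only if analysis exits zero, so a failed
# analysis never produces a stale PR report.
terrain analyze && terrain report pr
```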

Track 1.4 — `docs/quickstart.md` anchored to first-user gate
The walkthrough is now organized as the three insight types from the first-user success gate: coverage gap → PR risk → test-selection. Each step ~90 seconds; total ≤ 5 min. Safe-skip caveat called out explicitly.

Track 1.5 — `docs/release/feature-status.md` Pillar + Tier columns
Every shipping capability tagged with Pillar (Understand / Align / Gate) and Tier (1 / 2 / 3). `terrain explain finding <id>` and `terrain suppress <id>` added (these ship in Tracks 4.6 / 4.7).

Track 1.8 — `CHANGELOG.md` 0.2.0 entry restructure
Pitch as the headline opener. New parity-gate framing paragraph explains Gate ≥ 4, Understand ≥ 3, Align ≥ 3 soft. Release deliverables grouped by pillar.

Test plan

  • `make docs-verify` — passes (manifest/severity/rules in sync; rule docs unchanged this PR)
  • `go test ./... ./internal/testdata/` — green
  • Manual read-through: README + quickstart + vision doc all lead with the same pitch and tell the same story
  • Cross-references between docs (vision ↔ feature-status ↔ README ↔ audit) point at the right files

Plan link

`/Users/pzachary/.claude/plans/kind-mapping-turing.md` (Track 1, Tracks 1.1 / 1.2 / 1.3 / 1.4 / 1.5 / 1.8).

Follow-ups

  • Track 1.6 — per-pillar reproducible examples in `docs/examples/{understand,align,gate}/` (separate PR)
  • Track 1.7 — per-detector "known false positives" sections in `docs/rules/*` (bulk regen via `cmd/terrain-docs-gen`)

🤖 Generated with Claude Code

pmclSF and others added 2 commits May 2, 2026 05:21
Durable north-star doc capturing the unified Terrain pitch we
converged on through the launch-readiness review threads. The
headline verbatim:

  Terrain is the control plane for your test system.
  It maps how your unit, integration, e2e, and AI tests actually
  relate to your code — and lets you gate changes based on that
  system as a whole.

  See what's covered, what's missing, and what's overlapping.
  See which tests matter for a PR — and why.
  Bring AI evals into the same review pipeline as the rest of
  your tests.

What's in the doc:

  * The user's actual job (six questions; today no single tool
    answers more than two)
  * "What Terrain is" — the control-plane framing in two phrases
  * The three pillars (Understand / Align / Gate) with internal
    job + external framing + anchored capabilities
  * The unifying thread (CI gate primitives shared across pillars)
  * What's distinctive vs. Jest / pytest / SonarQube / Promptfoo /
    Bazel / GitHub code scanning / AI safety tools
  * What Terrain explicitly isn't
  * Anti-goals for 0.2.x (no safe-skip guarantee; we don't run
    tests; we don't judge model truthfulness; no public precision
    floor yet)
  * Trajectory: 0.2.0 = "see clearly + gate progressively"; 0.3 =
    "take control"; 0.4 = "test the universe" (AI-aware
    integration/e2e under the control plane)
  * Capability → pillar → tier map covering every shipping command
  * Primary workflow (terrain analyze && terrain report pr) as the
    canonical entry point
  * Doc-evolution rules (what stays stable, what updates per release)

This is the source of truth that the 0.2.0 README rewrite (Track 1.2),
quickstart anchor (Track 1.4), and feature-status pillar columns
(Track 1.5) point at. When the README and this doc disagree, this
doc wins until the README is updated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…/1.3/1.4/1.5/1.8)

Bundles the doc-only Track 1 work into one cohesive PR following the
"control plane for your test system" pitch as the verbatim anchor.
Track 1.6 (per-pillar reproducible examples) and Track 1.7 (per-
detector "known false positives" sections) are larger and follow as
separate PRs.

Track 1.1 — `docs/product/vision.md` (committed earlier in this branch)
  Durable north-star doc with the verbatim pitch, three pillars,
  capability map, anti-goals, and 0.2.0 → 0.3 → 0.4 trajectory.

Track 1.2 + 1.3 — `README.md` headline rewrite + capability list
  Lead with the pitch verbatim. "Map your test terrain" demoted to
  the secondary tagline. Primary workflow (terrain analyze && terrain
  report pr) shown above the install snippet so the entry point is
  unmistakable. "What 0.2 Is and Isn't" rewritten around pillars +
  tiers (Understand / Align / Gate × Tier 1 / 2 / 3) instead of a
  flat stable/experimental list.

Track 1.4 — `docs/quickstart.md` anchored to first-user gate insights
  Walkthrough restructured around the three insight types from the
  first-user success gate:
    1. Coverage gap with explanation (`terrain analyze`)
    2. PR risk explanation (`terrain report pr --base main`)
    3. Test-selection explanation (`terrain report impact
       --explain-selection`)
  Each step is ~90 seconds; total walkthrough ≤ 5 min. The safe-skip
  caveat is called out explicitly per the 0.2.x anti-goal.
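
The three quickstart steps can be sketched as one shell session (the commands and timing estimates are from the list above; the repository path is a placeholder):

```shell
# Quickstart sketch: the three first-user insights, in order.
# Commands come from this PR; the path and base branch are illustrative.
cd your-repo                                # placeholder path
terrain analyze                             # 1. coverage gap with explanation (~90 s)
terrain report pr --base main               # 2. PR risk explanation (~90 s)
terrain report impact --explain-selection   # 3. test-selection explanation (~90 s)
```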

Track 1.5 — `docs/release/feature-status.md` Pillar + Tier columns
  Every shipping capability now has Pillar (Understand / Align /
  Gate / cross-cutting) and Tier (1 / 2 / 3) tags. Tier-1 means
  publicly claimable in 0.2.0 marketing; Tier-2 ships but stays
  experimental; Tier-3 is in development with no public claim.
  Workflows table extended to include `terrain explain finding <id>`
  and `terrain suppress <id>` (Tracks 4.6 / 4.7).
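
The new columns might look like the rows below. This is illustrative only: the pillar names, tier semantics, and the two new workflow commands come from this PR, but the specific pillar/tier assignments per row are assumptions:

```markdown
| Capability                   | Pillar     | Tier | Notes                           |
| ---------------------------- | ---------- | ---- | ------------------------------- |
| terrain analyze              | Understand | 1    | publicly claimable in 0.2.0     |
| terrain report pr            | Gate       | 1    | publicly claimable in 0.2.0     |
| terrain explain finding <id> | Understand | 2    | ships experimental (Track 4.6)  |
| terrain suppress <id>        | Gate       | 2    | ships experimental (Track 4.7)  |
```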

Track 1.8 — `CHANGELOG.md` 0.2.0 entry restructure
  Headline replaced with the pitch verbatim as the section opener.
  New paragraph explains parity-gate framing (Gate ≥ 4, Understand
  ≥ 3, Align ≥ 3 soft) and points at the audit doc as source of
  truth. The release groups deliverables by pillar instead of by
  feature category.

Plan link: `/Users/pzachary/.claude/plans/kind-mapping-turing.md`
(Tracks 1.1, 1.2, 1.3, 1.4, 1.5, 1.8).

Verification: `make docs-verify` green; `go test ./...` green; manual
read of README + quickstart confirms continuity with the pitch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

github-actions Bot commented May 2, 2026

[PASS] Terrain — Safe to merge

All changed code is well protected by existing tests.

| Metric | Value |
| --- | --- |
| Changed files | 5 (0 source · 0 test) |
| Tests selected | 1 of 772 (0% of suite) |

Recommended tests

1 test(s) selected via structural heuristics. 0 unit(s) remain uncovered.

| Test | Confidence | Why |
| --- | --- | --- |
| docs/examples/e2e/login.spec.js | inferred | inferred structural relationship |

Limitations
  • No coverage artifacts provided; protection gaps reflect missing data, not measured absence. Provide --coverage to improve accuracy.
  • Mixed test cultures reduce cross-framework optimization confidence. Consider standardizing on fewer frameworks.

Generated by Terrain · `terrain pr --json` for machine-readable output

Targeted Test Results

Terrain selected 1 test(s) instead of the full suite.


github-actions Bot commented May 2, 2026

Terrain AI Risk Review

| Metric | Value |
| --- | --- |
| AI surfaces | 13 |
| Eval scenarios | 16 |
| Impacted scenarios | 0 |
| Uncovered surfaces | 13 |

Decision: PASS — AI surfaces are covered.
