
Calibration

Enreign edited this page Mar 8, 2026 · 3 revisions

Estimates improve with feedback. Calibration is the process of comparing estimates to actuals and adjusting your defaults.

Research shows calibration feedback improves estimation accuracy from 64% to 81% (Jorgensen & Grimstad, 2004).

The Feedback Loop

Estimate → Do the work → Log actual → Compare → Adjust → Repeat
  1. Estimate before you start (using this skill)
  2. Track how long it actually took
  3. Log estimated vs actual in a consistent format
  4. Compare patterns across 10+ data points
  5. Adjust the specific multiplier that's off
  6. Repeat — each cycle improves accuracy
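Step 4 of the loop above can be sketched as a small comparison helper. This is a minimal illustration, not the skill's actual tooling: the field names and the (estimated, actual) pair format are assumptions.

```python
# Sketch of step 4: compare patterns across 10+ (estimated, actual) pairs.
# Data format and thresholds are illustrative assumptions.

def compare(history):
    """Summarise estimation bias over a batch of completed tasks."""
    if len(history) < 10:
        return None  # not enough data points to adjust yet
    deltas = [(actual - est) / est for est, actual in history]
    n = len(deltas)
    return {
        "pred25": sum(abs(d) <= 0.25 for d in deltas) / n,  # within +/-25%
        "over":   sum(d > 0 for d in deltas) / n,           # ran long
        "under":  sum(d < 0 for d in deltas) / n,           # ran short
    }

# Hypothetical history: (estimated hours, actual hours) per task
history = [(4.0, 4.8), (2.0, 1.9), (8.0, 10.5), (3.0, 3.1), (5.0, 4.2),
           (6.0, 7.4), (1.5, 1.6), (4.0, 5.3), (2.5, 2.4), (7.0, 8.0)]
print(compare(history))
```

If `over` exceeds 0.6, the adjustment rules below point at the confidence multiplier or base rounds; if `pred25` is high, no change is needed.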

What to Track

After completing a task, log:

| Field | Value |
| --- | --- |
| Task | Auth service |
| Estimated (PERT Expected) | 4 hrs |
| Actual Total | 4.8 hrs |
| Delta | +20% |
| Within PRED(25)? | Yes (20% < 25%) |
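The worked example above can be captured as one log record. A minimal sketch, assuming the field names from the table; `log_entry` and the dict layout are illustrative, not part of the skill:

```python
def log_entry(task, estimated_hours, actual_hours, pred_threshold=0.25):
    """One calibration log record: estimate, actual, signed delta, PRED check."""
    delta = (actual_hours - estimated_hours) / estimated_hours
    return {
        "task": task,
        "estimated_hours": estimated_hours,
        "actual_hours": actual_hours,
        "delta_pct": round(delta * 100, 1),           # +20.0 means 20% over
        "within_pred": abs(delta) <= pred_threshold,  # counts toward PRED(25)
    }

entry = log_entry("Auth service", 4.0, 4.8)
print(entry)
```

Keeping every record in one consistent shape is what makes the 10+ data-point comparison in the feedback loop possible.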

Adjustment Rules

After 10+ data points:

| Pattern | Adjustment |
| --- | --- |
| >60% of tasks exceed committed | Increase confidence multiplier or base rounds |
| >60% of tasks below expected | Decrease base rounds or risk coefficient |
| Human review consistently over | Increase review_minutes values |
| One task type consistently off | Adjust that type's multiplier |
| One complexity band off | Adjust that band's rounds only |

Change incrementally: ±0.05 on ratios, ±2-3 on rounds, ±0.1 on multipliers. Never adjust more than 20% in one cycle.
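The incremental rule above can be sketched as a clamped step function. The step sizes mirror the guidance; the parameter kinds and function name are assumptions for illustration:

```python
# Sketch of incremental adjustment: one step per cycle,
# clamped so no parameter moves more than 20% at once.

STEPS = {"ratio": 0.05, "rounds": 2, "multiplier": 0.1}

def adjust(value, kind, direction):
    """Move `value` one step up (+1) or down (-1), capped at +/-20%."""
    step = STEPS[kind] * direction
    cap = 0.20 * value
    if abs(step) > cap:
        step = cap * direction  # clamp to the 20%-per-cycle limit
    return value + step

print(adjust(12, "rounds", +1))        # 12 -> 14 (step 2, within cap 2.4)
print(adjust(1.0, "rounds", +1))       # step 2 clamped by the 20% cap
print(adjust(1.2, "multiplier", +1))   # one 0.1 step up
```

Clamping keeps a single noisy cycle from swinging a small parameter wildly; several consistent cycles are needed to move it far.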

Calibration Cadence

| PRED(25) Score | Cadence |
| --- | --- |
| < 50% (poor) | Weekly |
| 50-65% (developing) | Bi-weekly |
| 65-75% (good) | Monthly |
| > 75% (excellent) | Quarterly |
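The cadence table above maps directly to a lookup. A minimal sketch, assuming PRED(25) is given as a fraction and treating the band boundaries as falling into the next band up:

```python
def cadence(pred25):
    """Return how often to recalibrate, given PRED(25) as a fraction (0-1)."""
    if pred25 < 0.50:
        return "weekly"      # poor: tight feedback loop
    if pred25 < 0.65:
        return "bi-weekly"   # developing
    if pred25 < 0.75:
        return "monthly"     # good
    return "quarterly"       # excellent: defaults are trustworthy

print(cadence(0.62))
```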

See PRED 25 for the full accuracy metric. See Reference Stories for calibration anchors.

Sprint Velocity

The calibration system includes mode-aware sprint velocity tracking. See Sprint Velocity for the full guide on:

  • Logging sprint data per cooperation mode
  • Capacity planning formulas
  • Sprint fit checks
  • Divergence detection for hybrid teams
  • Velocity trend analysis

Full details in calibration.md.
