How It Works
Enreign edited this page Mar 11, 2026 · 3 revisions
Every estimate follows this pipeline:

1. Base Rounds (from complexity lookup) × Risk Coefficient × Domain Familiarity = Adjusted Agent Rounds
2. Adjusted Rounds × Minutes per Round (from maturity) = Agent Time
3. Agent Time × Integration Overhead % = Integration Time
4. Agent Time × Adjusted Fix Ratio (agent-effectiveness-adjusted) = Human Fix Time
5. Review Minutes (from depth × complexity lookup) = Human Review Time
6. Planning Minutes (from complexity lookup) = Human Planning Time
7. Human times × Org Size Overhead = Adjusted Human Times
8. Sum of all times = Subtotal
9. Subtotal × Task Type Multiplier = Total
10. Apply Cone of Uncertainty spread (widens the min/max range based on definition phase)
11. PERT Expected = (min + 4 × midpoint + max) / 6; SD = (max - min) / 6
12. Total × Confidence Multiplier = Committed Estimate
13. Multi-agent/multi-human adjustments (if applicable)
14. Token estimation: Rounds × tokens-per-round × num_agents = total tokens, split into input/output by complexity ratio
15. Cost estimation (optional): total tokens priced at the chosen model tier
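The pipeline above can be sketched end to end in a few lines. Every coefficient value below (risk, familiarity, minutes per round, overheads, multipliers, and the cone spread factors) is an illustrative placeholder, not a value from the real lookup tables, and the `estimate` function is a hypothetical name:

```python
# Sketch of the estimation pipeline. All coefficient values are
# placeholders, not the project's actual lookup-table entries.

BASE_ROUNDS = {"S": (3, 8), "M": (8, 20), "L": (20, 50), "XL": (50, 120)}

def estimate(complexity="M", risk=1.2, familiarity=1.1, minutes_per_round=4,
             integration_pct=0.15, fix_ratio=0.25, review_min=30,
             planning_min=20, org_overhead=1.1, task_multiplier=1.0,
             confidence_multiplier=1.25):
    lo, hi = BASE_ROUNDS[complexity]
    adj_rounds = (lo + hi) / 2 * risk * familiarity        # step 1 (midpoint)
    agent_time = adj_rounds * minutes_per_round            # step 2
    integration = agent_time * integration_pct             # step 3
    fix_time = agent_time * fix_ratio                      # step 4
    human = (fix_time + review_min + planning_min) * org_overhead  # steps 5-7
    subtotal = agent_time + integration + human            # step 8
    total = subtotal * task_multiplier                     # step 9
    # Step 10: placeholder cone-of-uncertainty spread around the total.
    est_min, est_max = total * 0.8, total * 1.5
    expected = (est_min + 4 * total + est_max) / 6         # step 11: PERT
    sd = (est_max - est_min) / 6
    committed = total * confidence_multiplier              # step 12
    return {"expected": round(expected), "sd": round(sd, 1),
            "committed": round(committed)}

print(estimate())
```

Times are in minutes throughout; steps 13-15 (multi-agent adjustments and tokens/cost) are omitted here for brevity.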
A round is one agent invocation cycle:
Prompt → Reasoning → Code/Output → Validation
One round might produce a function, fix a bug, or write a test. Complex tasks require many rounds of iteration.
| Complexity | Rounds | Example |
|---|---|---|
| S | 3-8 | Add a 404 page, fix a typo in API response |
| M | 8-20 | Stripe integration, REST API endpoint |
| L | 20-50 | Database migration, test suite setup |
| XL | 50-120 | Architecture rewrite, multi-system refactor |
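The round ranges above also drive the token estimate in steps 14-15. A minimal sketch, where `tokens_per_round` and the input/output split ratio are placeholder assumptions:

```python
# Token estimate from the base-rounds lookup. tokens_per_round and
# input_ratio are illustrative placeholders.
BASE_ROUNDS = {"S": (3, 8), "M": (8, 20), "L": (20, 50), "XL": (50, 120)}

def token_estimate(complexity, num_agents=1, tokens_per_round=20_000,
                   input_ratio=0.6):
    lo, hi = BASE_ROUNDS[complexity]
    rounds = (lo + hi) / 2                      # midpoint of the range
    total = rounds * tokens_per_round * num_agents
    return {"total": int(total),
            "input": int(total * input_ratio),          # split by ratio
            "output": int(total * (1 - input_ratio))}

print(token_estimate("M"))
```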
Every estimate produces two numbers:
- Expected — The PERT-weighted most likely outcome (50% chance of finishing sooner, 50% later)
- Committed — The risk-adjusted estimate at your chosen confidence level (80% or 90%)
These serve different audiences:
- Expected is for internal planning ("how much work is this?")
- Committed is for external promises ("when will it be done?")
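The split between the two numbers follows directly from steps 11-12. A worked example (the confidence multiplier value is a placeholder; hours are arbitrary):

```python
# PERT expected/SD (step 11) and the committed figure (step 12).
def pert(min_h, mid_h, max_h, confidence_multiplier=1.25):
    expected = (min_h + 4 * mid_h + max_h) / 6   # PERT weighted mean
    sd = (max_h - min_h) / 6                     # PERT standard deviation
    committed = mid_h * confidence_multiplier    # risk-adjusted promise
    return expected, sd, committed

e, sd, c = pert(10, 16, 30)
print(round(e, 2), round(sd, 2), c)  # 17.33 3.33 20.0
```

Expected (17.33 h) is what you plan around internally; Committed (20 h) is what you promise externally, and the gap between them is exactly the confidence buffer.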
Every number in the output traces back to:
- A lookup table (base rounds, review minutes, planning minutes)
- A multiplier from a questionnaire answer (risk, domain, org size, task type)
- A formula defined in formulas.md
Nothing is a guess. If an estimate seems wrong, you can trace it back to a specific input and adjust.
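That traceability is just table indexing. A hypothetical shape for one such lookup (the depth labels and minute values here are invented for illustration; the real entries live in the project's tables):

```python
# Hypothetical review-minutes lookup, keyed by depth x complexity.
# Labels and values are invented; see the actual lookup tables.
REVIEW_MINUTES = {
    "spot-check": {"S": 5, "M": 10, "L": 20, "XL": 40},
    "thorough":   {"S": 10, "M": 25, "L": 50, "XL": 100},
}

print(REVIEW_MINUTES["thorough"]["M"])  # 25
```

If the review time in an estimate looks wrong, the fix is a single cell in a table like this, not a re-guess of the whole number.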