Make AI agents check what they actually verified before claiming something is done.
Updated May 2, 2026 - Shell
Per-repo memory, outcome telemetry, and a calibrated-confidence gate for Claude Code, with MCP and AGENTS.md projections so other AI coding tools can read its context. Notes survive across sessions; success claims require test evidence; your reverts are remembered. Local-only, stdlib-only runtime.
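The "success claims need test evidence" idea can be sketched as a tiny gate: a claim is only emitted as verified when a verification command actually exits 0. This is an illustrative sketch with assumed names (`gate` is hypothetical, not part of the tool), not the project's actual implementation.

```shell
#!/bin/sh
# Hypothetical confidence gate: emit "VERIFIED" only when the
# supplied verification command succeeds; otherwise flag the claim.
gate() {
  claim="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "VERIFIED: $claim"
  else
    echo "UNVERIFIED: $claim" >&2
    return 1
  fi
}

# Example usage: `true`/`false` stand in for a real test command.
gate "build works" true
gate "tests pass" false || :
```

In practice the verification command would be the repo's own test runner; the point is that the claim and the evidence check travel together.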