An anti-shortcut protocol for Claude. One file. Copy-paste it. Claude stops cutting corners.
Claude takes shortcuts. It skips error handling, gives surface-level analysis, drops steps from multi-part instructions, and delivers partial work as if it were complete. Teams end up treating every Claude output as a rough draft, spending significant time reviewing and fixing what Claude should have done right the first time.
Zero-Shortcuts is a set of 12 mandatory process rules that move the quality gate from after Claude's output (your review) to inside Claude's process (self-audit before delivery). It's not aspirational ("be thorough"). It's structural ("enumerate failure modes before delivering").
- Open claude.ai and create a new Project (or open an existing one)
- Click "Set custom instructions" (or "Edit project instructions")
- Copy the entire contents of `ZERO-SHORTCUTS.md` and paste it in
- Done. All conversations in this Project now use the protocol
Copy ZERO-SHORTCUTS.md into your project root as CLAUDE.md:
```shell
cp ZERO-SHORTCUTS.md /path/to/your/project/CLAUDE.md
```

Or append to an existing CLAUDE.md:

```shell
cat ZERO-SHORTCUTS.md >> /path/to/your/project/CLAUDE.md
```

Claude Code automatically reads CLAUDE.md at the root of your project.
Use the contents of ZERO-SHORTCUTS.md as your system prompt:
```python
import anthropic

client = anthropic.Anthropic()

# Read the protocol
with open("ZERO-SHORTCUTS.md") as f:
    protocol = f.read()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system=protocol,
    messages=[{"role": "user", "content": "Your prompt here"}],
)
```

Or prepend it to your existing system prompt.
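If you already have a system prompt, a minimal way to combine the two (the `existing_system` string below is a stand-in for your own prompt, and `protocol` is assumed to hold the file contents read as shown above):

```python
# Assume `protocol` holds the contents of ZERO-SHORTCUTS.md
protocol = "...contents of ZERO-SHORTCUTS.md..."

# Stand-in for whatever system prompt you already use
existing_system = "You are a senior Python reviewer."

# Protocol first, so its process rules frame everything that follows
system = protocol + "\n\n" + existing_system
```

Putting the protocol first keeps its process rules in front of your task-specific instructions rather than buried after them.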
Prompt: "Write a Python function that reads a JSON config file and returns the parsed data."
Without Zero-Shortcuts:
```python
def read_config(path):
    with open(path) as f:
        return json.load(f)
```

That's it. No error handling. No import. File not found? Malformed JSON? Permission denied? Unhandled crash.
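To see the problem concretely, here is the shortcut version failing in the simplest case (a sketch; `missing.json` is a hypothetical path that does not exist):

```python
import json


def read_config(path):
    with open(path) as f:
        return json.load(f)


# The caller gets a raw, undocumented exception instead of a clear contract
try:
    read_config("missing.json")
except FileNotFoundError as exc:
    print(type(exc).__name__)  # → FileNotFoundError
```

Nothing in the function's signature or body tells the caller which exceptions to expect; they find out in production.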
With Zero-Shortcuts:
```python
import json
from pathlib import Path


def read_config(path: str) -> dict:
    """Read and parse a JSON config file.

    Args:
        path: Path to the JSON config file.

    Returns:
        Parsed config as a dictionary.

    Raises:
        FileNotFoundError: If the config file does not exist.
        PermissionError: If the file cannot be read.
        json.JSONDecodeError: If the file contains invalid JSON.
        ValueError: If the file is empty.
    """
    config_path = Path(path)
    if not config_path.exists():
        raise FileNotFoundError(f"Config file not found: {path}")
    if not config_path.is_file():
        raise ValueError(f"Path is not a file: {path}")
    content = config_path.read_text(encoding="utf-8")
    if not content.strip():
        raise ValueError(f"Config file is empty: {path}")
    return json.loads(content)
```

Every failure mode enumerated and handled. Import included. Docstring documents the contract.
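A quick exercise of that contract (the function is reproduced from above, docstring trimmed, so this snippet runs standalone; the temp file is just for illustration):

```python
import json
import os
import tempfile
from pathlib import Path


def read_config(path: str) -> dict:
    """Read and parse a JSON config file (as defined above)."""
    config_path = Path(path)
    if not config_path.exists():
        raise FileNotFoundError(f"Config file not found: {path}")
    if not config_path.is_file():
        raise ValueError(f"Path is not a file: {path}")
    content = config_path.read_text(encoding="utf-8")
    if not content.strip():
        raise ValueError(f"Config file is empty: {path}")
    return json.loads(content)


# Happy path: a valid config round-trips
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"debug": True}, f)
print(read_config(f.name))  # → {'debug': True}
os.unlink(f.name)

# Failure path: a missing file raises the documented exception
try:
    read_config("no-such-config.json")
except FileNotFoundError as exc:
    print(exc)
```

Because every failure mode is named in the docstring, callers can decide per-exception whether to fall back to defaults, retry, or abort.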
Prompt: "Why is our API response time increasing?"
Without Zero-Shortcuts: "Your API response time is likely increasing due to database query performance. Consider adding indexes or caching frequently accessed data."
One guess. No investigation. No sources. No uncertainty flagged.
With Zero-Shortcuts:
"What I investigated: endpoint response time logs, database query patterns, recent deployments, connection pool metrics, and upstream service latency.
What I did NOT investigate: client-side rendering time, CDN cache hit rates, or DNS resolution (these require access I don't have).
Findings:
- The `GET /users` endpoint p95 latency increased 3x after the March 12 deploy
- The deploy added a new JOIN query that scans the full `activity_log` table (no index on `user_id`)
- Connection pool utilization is at 89%, up from 40% before the deploy
Uncertainty: I cannot confirm whether the connection pool pressure is caused by the new query or by increased traffic. Both happened in the same week. You would need to correlate request volume with pool usage to isolate the cause.
Recommendation: Add an index on activity_log.user_id. Monitor pool utilization after the index is live to determine if traffic volume is a separate issue."
Prompt: "Set up a new TypeScript project with ESLint, Prettier, and Jest."
Without Zero-Shortcuts:
```shell
npm init -y
npm install typescript
npx tsc --init
```

One of four requirements addressed. ESLint, Prettier, and Jest silently dropped.
With Zero-Shortcuts:
"You asked for 4 things: (1) TypeScript project, (2) ESLint, (3) Prettier, (4) Jest. Addressing each:
1. TypeScript project:

```shell
mkdir my-project && cd my-project
npm init -y
npm install -D typescript
npx tsc --init
```

2. ESLint with TypeScript support:

```shell
npm install -D eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin
```

Creates .eslintrc.json with TypeScript config...

3. Prettier (with ESLint integration):

```shell
npm install -D prettier eslint-config-prettier
```

Creates .prettierrc and updates .eslintrc.json...

4. Jest with TypeScript:

```shell
npm install -D jest ts-jest @types/jest
npx ts-jest config:init
```

Creates jest.config.js...

Verification: All 4 requirements addressed. Project compiles, lints, formats, and runs tests."
If Claude starts slipping back into shortcuts during a long conversation, paste this:
```
Remember: zero shortcuts. Run your full checklist.
```
By default, Claude runs the checklist internally. To make the audit visible (useful for debugging or building trust):
For all conversations: Add this line to the end of your pasted ZERO-SHORTCUTS.md:
```
Show your self-audit before every response.
```
For one conversation: Just type "show your self-audit" in your message.
Will this make Claude slower? Slightly. Claude will take a moment longer to think before responding. The tradeoff: you spend less time re-prompting and fixing.
Does this use a lot of tokens? The protocol is ~350 tokens. On a model with 200K context, that's 0.2% of your window.
Can I customize the rules? Yes. Edit ZERO-SHORTCUTS.md to fit your needs. The rules are plain text. Remove what doesn't apply, add what does.
Does this work with other AI models? The rules are model-agnostic. They work with any model that accepts a system prompt or custom instructions. Written for Claude, tested on Claude.
See CONTRIBUTING.md for how to propose rule changes.