Reusable engineering review skill for AI coding agents.
It gives agents a consistent review workflow with two clear modes:
- `review`: bug finding, regressions, correctness issues, and merge blockers
- `audit`: architecture, security, policy, trust boundaries, and system risk
The skill turns vague requests like "review this" into a structured engineering pass:
- detect the right review scope
- load the minimum useful context
- run multi-pass analysis across requirements, architecture, implementation, and tests
- output findings first, then open questions, then a short review report
It also supports Program-driven workspaces that organize work around PROGRAM.md, STATUS.yml, and SCOPE.yml.
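A Program-driven workspace can be sketched as follows. Only the three file names come from this skill's description; the directory name and the per-file role comments are illustrative assumptions:

```shell
# Illustrative workspace layout. The directory name "example-program" and
# the role comments below are assumptions; only the file names are given.
mkdir -p example-program
touch example-program/PROGRAM.md   # program definition (assumed role)
touch example-program/STATUS.yml   # progress and state (assumed role)
touch example-program/SCOPE.yml    # scope boundaries (assumed role)
```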
- `SKILL.md` - core skill definition
- `references/` - detailed review playbook and templates
- `agents/openai.yaml` - UI metadata for compatible environments
- `scripts/install.ps1` - install into Codex skills on Windows
- `scripts/install.sh` - install into Codex skills on macOS/Linux
On Windows:

./scripts/install.ps1

Optional custom target:

./scripts/install.ps1 -TargetDir C:\Users\<you>\.codex\skills

On macOS/Linux:

./scripts/install.sh

Optional custom target:

./scripts/install.sh /path/to/.codex/skills

Alternatively, copy this repository into your local skills directory as:
<skills-root>/engineering-review/
If the target environment does not support Codex skills directly, load SKILL.md and keep the references/ folder beside it.
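A minimal sketch of that manual copy for a POSIX shell. The helper name and the example skills path are illustrative assumptions, not part of the skill:

```shell
# Hypothetical helper: copies SKILL.md and references/ into
# <skills-root>/engineering-review/, as described above.
install_engineering_review() {
  src="$1"          # path to a checkout of this repository
  skills_root="$2"  # your skills directory, e.g. ~/.codex/skills
  dest="$skills_root/engineering-review"
  mkdir -p "$dest"
  cp "$src/SKILL.md" "$dest/"       # core skill definition
  cp -r "$src/references" "$dest/"  # playbook stays beside SKILL.md
}

# Example: install_engineering_review . "$HOME/.codex/skills"
```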
$engineering-review review current diff
$engineering-review review current Program
$engineering-review audit current architecture
Typical prompts:
- review this feature for bugs and regressions
- review the current Program and give me a report
- audit this runner for architecture and security risk
Fast code generation increases output, but it also increases the chance of shipping hidden regressions, weak assumptions, and incomplete implementations.
This skill exists to make review reusable, evidence-based, and consistent across repositories and agents.