AI Governance Risk Indication for Claude Code
Regula is a Claude Code skill that detects AI governance risk indicators in real-time. It flags patterns associated with EU AI Act risk tiers, blocks patterns matching prohibited practices, and maintains a hash-chained audit trail.
```bash
git clone https://github.com/kuzivaai/getregula.git
cd getregula

# Guided setup (detects platform, installs hooks, runs first scan)
python3 scripts/cli.py init

# Or install manually for your platform:
python3 scripts/cli.py install claude-code   # Claude Code
python3 scripts/cli.py install copilot-cli   # GitHub Copilot CLI
python3 scripts/cli.py install windsurf      # Windsurf Cascade

# Scan a project
python3 scripts/cli.py check /path/to/project

# Generate an HTML report for your DPO
python3 scripts/cli.py report --format html --output report.html --include-audit
```

Or install via pip:
```bash
pip install -e .
regula init
regula check .
```

Run tests: `python3 tests/test_classification.py`
When you write AI-related code, Regula:
- Detects AI indicators (libraries, model files, API calls, ML patterns)
- Flags patterns associated with EU AI Act risk tiers
- Blocks patterns matching Article 5 prohibited practices (with conditions and exceptions)
- Warns about patterns in Annex III high-risk areas (with Article 6 context)
- Blocks hardcoded API keys in tool inputs (OpenAI, Anthropic, AWS, GitHub)
- Notes GPAI transparency obligations when training patterns are detected
- Logs everything to a hash-chained audit trail
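The hash-chained audit trail mentioned above can be pictured with a minimal sketch: each entry stores the previous entry's hash, so any tampering breaks the chain on re-verification. The function and field names below are illustrative, not Regula's actual API.

```python
import hashlib
import json

GENESIS = "0" * 64  # hash used before any entry exists

def append_event(log: list, event: dict) -> dict:
    """Append an event, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(event, sort_keys=True)
    entry = {
        "event": event,
        "prev": prev,
        "hash": hashlib.sha256((prev + payload).encode()).hexdigest(),
    }
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

This is also why the trail is "self-attesting": verification proves internal consistency, not that an outside party witnessed the entries.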
```
User: "Build a CV screening function that auto-filters candidates"

Regula: HIGH-RISK AI SYSTEM INDICATORS DETECTED

  Category: Annex III, Category 4 — Employment and workers management
  Patterns: cv_screen

  Whether Articles 9-15 apply depends on whether the system poses a
  significant risk of harm (Article 6). Systems performing narrow
  procedural tasks or supporting human decisions may be exempt.

  If this IS a high-risk system, these requirements apply (Aug 2026):
    Art 9:  Risk management system
    Art 10: Data governance
    Art 14: Human oversight mechanism
    ...
```
```
User: "Build a social credit scoring system"

Regula: PROHIBITED AI PRACTICE — ACTION BLOCKED

  Prohibition: Social scoring by public authorities or on their behalf
  Pattern detected: social_scoring

  This is a pattern-based risk indication, not a legal determination.
  If this is a false positive or an exception applies, document the
  justification and consult your DPO.
```
Regula performs pattern-based risk indication, not legal risk classification.
- The EU AI Act classifies risk based on intended purpose and deployment context (Article 6), not code patterns
- False positives will occur (code that discusses prohibited practices triggers indicators)
- False negatives will occur (novel risk patterns not in the database)
- Article 5 prohibitions have conditions and exceptions that require human judgment
- The audit trail is self-attesting (locally verifiable, not externally witnessed)
- Not a substitute for legal advice or DPO review
The EU AI Act (Regulation 2024/1689) is now in force:
| Date | Requirement |
|---|---|
| 2 February 2025 | Prohibited AI practices (Article 5) apply |
| 2 August 2025 | General-purpose AI model rules apply |
| 2 August 2026 | High-risk system requirements (Articles 9-15) fully apply |
Penalties: up to EUR 35 million or 7% of global annual turnover.
| Tier | Action | Examples |
|---|---|---|
| Prohibited | Block | Social scoring, emotion in workplace, real-time biometric ID, race detection |
| High-Risk | Warn + Requirements | CV screening, credit scoring, medical diagnosis, biometrics, education |
| Limited-Risk | Transparency note | Chatbots, deepfakes, age estimation, emotion recognition |
| Minimal-Risk | Log only | Spam filters, recommendations, code completion |
All 8 Article 5 categories are detected. Each message includes the specific conditions under which the prohibition applies and any narrow exceptions from the Act.
All 10 Annex III categories are detected. Messages include Article 6 context: matching an Annex III area does NOT automatically mean a system is high-risk. Systems performing narrow procedural tasks or supporting human decisions may be exempt (Article 6(3)).
| Platform | Status | Install Command |
|---|---|---|
| Claude Code | Supported | `python3 scripts/install.py claude-code` |
| GitHub Copilot CLI | Supported | `python3 scripts/install.py copilot-cli` |
| Windsurf Cascade | Supported | `python3 scripts/install.py windsurf` |
| pre-commit | Supported | `python3 scripts/install.py pre-commit` |
| Git hooks | Supported | `python3 scripts/install.py git-hooks` |
| CI/CD (GitHub Actions, GitLab) | Via SARIF | `regula check --format sarif` |
All three major AI coding agents (Claude Code, Copilot CLI, Windsurf) use the same hook protocol. Regula's hooks work across all three with only the config file differing.
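The shared protocol can be sketched as follows: the hook reads a JSON tool event on stdin, writes a JSON decision to stdout, and signals via its exit code. The field names (`tool_input`) follow Claude Code's hook schema; treat the keyword check and exit-code convention here as illustrative assumptions, not Regula's real classifier.

```python
import json
import sys

def decide(event: dict) -> tuple[dict, int]:
    """Return (response, exit_code) for a tool-use event.

    A toy stand-in for the real classification engine: block if a
    prohibited-practice indicator appears in the tool input.
    """
    text = json.dumps(event.get("tool_input", {}))
    if "social_scoring" in text:
        return {"decision": "block", "reason": "Article 5 indicator"}, 2
    return {"decision": "allow"}, 0

if __name__ == "__main__":
    response, code = decide(json.load(sys.stdin))
    json.dump(response, sys.stdout)
    sys.exit(code)  # non-zero exit tells the agent to block the tool call
```

Because all three platforms speak this same stdin/stdout JSON shape, only the config file that registers the hook differs per platform.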
```bash
# Scan a project for risk indicators
python3 scripts/cli.py check .
python3 scripts/cli.py check . --format json
python3 scripts/cli.py check . --format sarif   # For CI/CD integration

# Classify a text input
python3 scripts/cli.py classify --input "import tensorflow; cv screening model"

# Generate reports
python3 scripts/cli.py report --format html -o report.html --include-audit
python3 scripts/cli.py report --format sarif -o results.sarif.json

# Discover AI systems and register them
python3 scripts/cli.py discover --project . --register
python3 scripts/cli.py status

# Audit trail management
python3 scripts/cli.py audit verify
python3 scripts/cli.py audit export --format csv -o audit.csv

# Install hooks for a platform
python3 scripts/cli.py install claude-code
python3 scripts/cli.py install copilot-cli
python3 scripts/cli.py install list
```

Add `# regula-ignore` to any file to suppress all findings for that file, or `# regula-ignore: RULE_ID` to suppress a specific rule. Suppressions are tracked and visible in reports.
```python
# regula-ignore: employment
import sklearn
# This CV screening tool is a research prototype, not deployed
```

Curated AI governance news from 7 reputable sources (IAPP, NIST, Stanford HAI, ICO, EU AI Act, Brookings, Help Net Security). Keyword-filtered, deduplicated, cached.
```bash
regula feed                              # CLI text output
regula feed --format html -o feed.html   # HTML digest for stakeholders
regula feed --sources                    # List sources with authority notes
regula feed --days 30                    # Last 30 days
```

When pattern-based classification is ambiguous, gather context about intended purpose and deployment via structured questions derived from Article 6 criteria.
```bash
regula questionnaire                     # Show questions
regula questionnaire --evaluate '{...}'  # Evaluate answers (JSON)
```

Aggregate individual tool classifications into a session-level risk profile for agentic AI governance.
```bash
regula session                           # Current session profile
regula session --hours 24 --format json  # Last 24 hours as JSON
```

Save a compliance baseline and only report net-new findings on subsequent scans.
```bash
regula baseline save                   # Save current state
regula baseline compare --fail-on-new  # Fail CI on new findings
```

Current enforcement dates with Digital Omnibus status.
```bash
regula timeline                  # Display timeline
regula timeline --format json    # Machine-readable
```

```
regula/
├── SKILL.md                        # Core skill file (Claude Code)
├── scripts/
│   ├── cli.py                      # Unified CLI entry point
│   ├── classify_risk.py            # Risk indication engine (confidence scoring)
│   ├── log_event.py                # Audit trail (hash-chained, file-locked)
│   ├── report.py                   # HTML + SARIF report generator
│   ├── install.py                  # Multi-platform hook installer
│   ├── feed.py                     # Governance news aggregator (7 sources)
│   ├── questionnaire.py            # Context-driven risk assessment
│   ├── session.py                  # Session-level risk aggregation
│   ├── baseline.py                 # CI/CD baseline comparison
│   ├── timeline.py                 # EU AI Act enforcement dates
│   ├── generate_documentation.py   # Annex IV scaffold generator
│   └── discover_ai_systems.py      # AI system discovery and registry
├── hooks/
│   ├── pre_tool_use.py             # PreToolUse hook (CC/Copilot/Windsurf)
│   ├── post_tool_use.py            # PostToolUse logging hook
│   └── stop_hook.py                # Session summary hook
├── references/                     # Regulatory reference documents
├── tests/
│   └── test_classification.py      # 59 tests, 177 assertions
├── docs/
│   └── research-synthesis.md       # Research findings informing roadmap
├── regula-policy.yaml              # Policy configuration template
└── .github/workflows/ci.yaml       # CI/CD
```
- Core engine + thin adapters. One classification engine, multiple platform integrations.
- Same hook protocol. Claude Code, Copilot CLI, and Windsurf all use stdin/stdout JSON with exit codes.
- Confidence scores, not binary labels. 0-100 numeric scoring because 40% of AI systems have ambiguous classification (appliedAI study).
- Inline suppression with audit trail. `# regula-ignore` works like `// nosemgrep`: the finding is tracked but not reported as active.
- SARIF for CI/CD. Standard format consumed by GitHub, GitLab, and Azure DevOps security dashboards.
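The "confidence scores, not binary labels" principle can be sketched in a few lines. The weights, bonus, and function name below are invented for illustration; Regula's actual scoring lives in `classify_risk.py`.

```python
def score(indicators: list[tuple[str, int]]) -> int:
    """Combine (pattern_name, base_weight) pairs into a 0-100 confidence score.

    Hypothetical scheme: take the strongest indicator, then add a small
    bonus per corroborating indicator, capped at 100.
    """
    if not indicators:
        return 0
    best = max(weight for _, weight in indicators)
    bonus = 5 * (len(indicators) - 1)  # extra patterns raise confidence
    return min(100, best + bonus)

# e.g. score([("cv_screen", 70), ("sklearn_import", 40)]) -> 75
```

A numeric score lets the caller apply its own threshold (block, warn, or log) instead of forcing every ambiguous match into a hard tier.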
Copy `regula-policy.yaml` to your project root and customise:

```yaml
version: "1.0"
organisation: "Your Organisation"
rules:
  risk_classification:
    force_high_risk: []   # Always treat as high-risk
    exempt: []            # Confirmed low-risk (cannot exempt prohibited)
```

Policy exemptions cannot override Article 5 prohibited practice detection. Prohibited checks always run first regardless of policy configuration.

For full YAML support, install pyyaml: `pip install pyyaml`. Without it, a minimal YAML subset parser is used. Alternatively, use `regula-policy.json`.
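The "prohibited checks run first" guarantee amounts to an ordering rule, sketched below against a policy dict shaped like `regula-policy.yaml`. The pattern names, tier labels, and function are illustrative assumptions, not the real engine.

```python
# Illustrative subset of Article 5 indicator names
PROHIBITED = {"social_scoring", "realtime_biometric_id"}

def classify(patterns: set[str], policy: dict) -> str:
    """Return a risk tier, checking prohibitions before any policy rule."""
    if patterns & PROHIBITED:
        return "prohibited"  # policy exemptions can never reach this branch
    rc = policy.get("rules", {}).get("risk_classification", {})
    if patterns & set(rc.get("force_high_risk", [])):
        return "high_risk"
    if patterns and patterns <= set(rc.get("exempt", [])):
        return "minimal_risk"
    return "needs_assessment"
```

Because the prohibited check returns before the policy is even consulted, an `exempt` entry naming a prohibited pattern is simply never reached.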
```bash
python3 tests/test_classification.py
```

65 test functions, 196+ assertions covering:
- AI detection (libraries, model files, API endpoints, ML patterns)
- All 8 prohibited practices
- All 10+ high-risk categories
- Limited-risk and minimal-risk scenarios
- Edge cases (empty input, case insensitivity, priority ordering)
- Policy engine (force_high_risk, exempt, prohibited override safety)
- Audit trail (hash chain integrity, CSV export)
- Confidence scoring (numeric scores, tier ordering, multi-indicator bonus)
- Reports (SARIF structure, HTML disclaimer, inline suppression)
- Questionnaire (generation, high-risk evaluation, minimal-risk evaluation)
- Session aggregation, baseline comparison, timeline data accuracy
- Secret detection (OpenAI/AWS keys, no false positives, redaction)
- GPAI training detection (training vs inference distinction)
- No required external dependencies — stdlib only (pyyaml optional)
- Python 3.10+
- Works offline — no API calls required
- Append-only audit — no deletion capability
- File-locked writes — safe under concurrent hook execution
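The append-only, file-locked properties can be sketched with stdlib primitives. This is a minimal POSIX sketch (on Windows, `msvcrt.locking` would be needed instead of `fcntl`); the function name is illustrative, not Regula's actual `log_event.py` API.

```python
import fcntl
import json

def append_line(path: str, record: dict) -> None:
    """Append one JSON record under an exclusive advisory lock.

    Opening in "a" mode means existing entries are never rewritten;
    flock serialises concurrent hook processes writing the same file.
    """
    with open(path, "a", encoding="utf-8") as f:
        fcntl.flock(f, fcntl.LOCK_EX)    # block until we hold the lock
        f.write(json.dumps(record) + "\n")
        f.flush()                        # make the entry durable before unlock
        fcntl.flock(f, fcntl.LOCK_UN)
```

Append-only mode plus locking is what keeps the hash chain intact when several hooks fire at once; "no deletion capability" then reduces to never opening the file in a truncating mode.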
- v1.1: ISO 42001 control mapping, NIST AI RMF integration
- v1.2: DPO dashboard, Slack/Teams alerting, external timestamp authority
- v2.0: Model card generation, bias testing integration
MIT License. See LICENSE.txt.
Built by The Implementation Layer — AI governance from the practitioner side.