Thank you for your interest in contributing. This document covers how to set up a dev environment, how to add new rule types and integrations, and what the PR process looks like.
```
git clone https://github.com/practicalmind-ai/gateframe.git
cd gateframe
pip install -e ".[dev]"
```

Run the test suite:

```
python -m pytest tests/ -v
```

Lint and format:

```
ruff check gateframe/ tests/ examples/
ruff format --check .
ruff format .
```

All tests must pass and both checks must be clean before a PR is mergeable.
```
gateframe/
├── core/          # ValidationContract, WorkflowContext, FailureMode, EscalationRouter
├── rules/         # StructuralRule, SemanticRule, BoundaryRule, ConfidenceRule
├── integrations/  # OpenAI, Anthropic, LiteLLM, LangChain
├── audit/         # AuditLog, exporters
└── cli/           # inspect, replay subcommands
tests/             # mirrors gateframe/ structure
examples/          # runnable end-to-end examples
```
- Create `gateframe/rules/<name>.py` with a single class inheriting from `Rule`.
- Implement `validate(self, output, **context) -> FailureResult | None`. Return `None` on pass.
- Catch all exceptions inside `validate`; never let them propagate. Return a `FailureResult` with the exception message instead.
- Choose a default `failure_mode` that fits the severity: `HARD_FAIL` for scope or authority violations, `SOFT_FAIL` for confidence or quality degradation.
- Write actionable error messages. `"Validation failed"` is not acceptable. `"Confidence 0.42 is below minimum threshold 0.7."` is.
- Add the class to `gateframe/__init__.py` and the `__all__` list so it is part of the public API.
- Create `tests/rules/test_<name>.py`. Cover: happy path, failure path, exception handling, custom failure mode, context forwarding, and the default name.
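The steps above can be sketched roughly as follows. This is a minimal illustration, not gateframe's actual API: `FailureMode`, `FailureResult`, and `Rule` are defined inline as stand-ins (the real types live in `gateframe/core/` and may have different fields), and `MinimumLengthRule` is a hypothetical example rule.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Optional


# Stand-ins for gateframe's core types; the real signatures may differ.
class FailureMode(Enum):
    HARD_FAIL = "hard_fail"
    SOFT_FAIL = "soft_fail"


@dataclass
class FailureResult:
    rule: str
    message: str
    failure_mode: FailureMode


class Rule:
    name: str = "rule"


class MinimumLengthRule(Rule):
    name = "minimum_length"

    def __init__(self, min_chars: int = 50,
                 failure_mode: FailureMode = FailureMode.SOFT_FAIL) -> None:
        if min_chars <= 0:
            # Construction-time error, distinct from a validation failure.
            raise ValueError(f"min_chars must be positive, got {min_chars}")
        self.min_chars = min_chars
        self.failure_mode = failure_mode

    def validate(self, output: Any, **context: Any) -> Optional[FailureResult]:
        try:
            length = len(str(output))
            if length < self.min_chars:
                # Actionable message: what failed, actual value, constraint.
                return FailureResult(
                    rule=self.name,
                    message=f"Output length {length} is below minimum {self.min_chars}.",
                    failure_mode=self.failure_mode,
                )
            return None  # pass
        except Exception as exc:
            # Never let exceptions escape validate().
            return FailureResult(
                rule=self.name,
                message=f"{type(exc).__name__}: {exc}",
                failure_mode=self.failure_mode,
            )
```

Note the shape: `validate` either returns `None` or a `FailureResult`, and every exception is converted into a failure rather than raised.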
- Create `gateframe/integrations/<provider>.py`.
- Add a `_require_<provider>()` guard that raises `ImportError` with install instructions if the SDK is not available.
- Implement `extract_text`, `extract_json`, and `extract_metadata` as standalone functions (not methods). Keeping them separate from the validator class makes them individually testable.
- Implement a `<Provider>Validator` class that accepts a `ValidationContract` and optional config, and calls the extract functions inside `validate()`.
- Add the optional dependency to `pyproject.toml` under `[project.optional-dependencies]`.
- Create `tests/integrations/test_<provider>.py`. Use dataclasses to mock provider responses; no real API calls. Cover all three extract functions and the validator class.
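A rough sketch of the guard and the standalone extract functions, using an imagined `acme` provider. The SDK name, the response shape (`MockChoice`/`MockResponse`), and the extract implementations are all placeholders; a real integration follows the provider's actual response schema.

```python
import json
from dataclasses import dataclass
from typing import Any


# Dataclass mocks of a provider response, per the testing convention
# (no unittest.mock, no real API calls). Hypothetical shape.
@dataclass
class MockChoice:
    text: str


@dataclass
class MockResponse:
    choices: list
    model: str = "mock-model"


def _require_acme() -> None:
    """Raise ImportError with install instructions if the SDK is missing."""
    try:
        import acme  # noqa: F401  # hypothetical SDK name
    except ImportError as exc:
        raise ImportError(
            "The acme integration requires the acme SDK: "
            "pip install 'gateframe[acme]'"
        ) from exc


# Standalone extract functions: testable without the validator class.
def extract_text(response: Any) -> str:
    return response.choices[0].text


def extract_json(response: Any) -> dict:
    return json.loads(extract_text(response))


def extract_metadata(response: Any) -> dict:
    return {"model": response.model}
```

A `<Provider>Validator` would then call these inside its `validate()` and run the result through the `ValidationContract` it was constructed with.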
- Every failure path must be tested, not just the happy path.
- Use dataclasses to mock provider responses, not `unittest.mock` or real API calls.
- Tests mirror the source tree: `gateframe/rules/boundary.py` → `tests/rules/test_boundary.py`.
- Construction-time errors (e.g. `ValueError` for bad rule configuration) are tested separately from validation failures.
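A sketch of that last distinction, as a hypothetical `tests/rules/test_threshold.py`. The inline `ThresholdRule` is a stand-in (it returns a plain message string for brevity rather than a real `FailureResult`); the point is the test shape, not the rule.

```python
class ThresholdRule:
    """Minimal inline stand-in so the test shape is runnable."""

    def __init__(self, minimum: float) -> None:
        if not 0.0 <= minimum <= 1.0:
            raise ValueError(f"minimum must be in [0, 1], got {minimum}")
        self.minimum = minimum

    def validate(self, output: float, **context):
        if output < self.minimum:
            return f"Confidence {output} is below minimum threshold {self.minimum}."
        return None


def test_construction_rejects_bad_config():
    # Construction-time error: tested on its own, never via validate().
    try:
        ThresholdRule(minimum=1.5)
        assert False, "expected ValueError"
    except ValueError:
        pass


def test_failure_path():
    result = ThresholdRule(minimum=0.7).validate(0.42)
    assert result is not None and "0.42" in result


def test_happy_path():
    assert ThresholdRule(minimum=0.7).validate(0.9) is None
```

The real suite runs these under pytest; they are written as plain functions here so the shape stands alone.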
- No `print` in library code. Use `structlog` for all internal logging.
- No unnecessary docstrings or comments. Code should be self-explanatory.
- Type hints are required everywhere. `Any` requires a comment explaining why.
- One class per file where possible.
- Error messages must be actionable: include what failed, the actual value, and the expected value or constraint.
- Python 3.10+. No dependencies in core beyond `pydantic>=2` and `structlog`.
- Fork the repository and create a branch from `main`.
- Make your changes. If you're adding a feature, add tests for it.
- Run `python -m pytest tests/ -v`; all tests must pass.
- Run `ruff check gateframe/ tests/ examples/` and `ruff format --check .`; both must be clean.
- Open a pull request with a description of what the change does and why.
If you're unsure whether a change fits the scope of the project, open an issue first.