Status: Research-stage primitive. Coherence verified. Functionality unverified against real-world deployment.
Version: 1.0.4-stable
Calculus hash (SHA256): 09CADC6708DBFF411C4F216739CDC35231F1C07F2C3388FCA04EE8E05BDB76D5
Frozen since: December 25, 2025. No modifications since original authoring window.
Origin commit: 9e04d4b — Initial Keystone app (source only), Jan 8 2026
The Aperture Calculus is a small, deterministic pre-execution gate for LLM pipelines.
It is a pure function:
`(phase, instruction, payload, pinnedModel, requiredModel) → Allow | Refuse(code)`
It makes no model calls. It holds no state. It does not learn. It does not adapt.
Given identical inputs, it always produces identical outputs.
The core artifact is src/v2/calculus.ts. Everything else in this repository is context, verification, or testing infrastructure.
Existing LLM guardrail tools (NeMo Guardrails, Lakera, etc.) classify inputs probabilistically — they use a model to judge whether an input is safe or appropriate. That classification is itself non-deterministic.
The Aperture Calculus takes a different posture: no model calls, no intent inference, no probabilistic classification. It evaluates only explicit structural properties:
- Is this instruction permitted in the current phase?
- Is the model pinned to the required version?
- Is the payload non-empty, within bounds, and free of truncation artifacts?
- Does the payload contain multi-intent or advice/judgment language that exceeds scope?
If any check fails, the calculus refuses. Refusal is terminal and produces a single reason code.
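A minimal TypeScript sketch of that check sequence. Every name, reason code, phase table entry, threshold, and marker list below is illustrative; the authoritative logic lives in src/v2/calculus.ts, not here.

```typescript
// Sketch only: assumed reason codes, bounds, and marker lists.
type Verdict = { kind: "Allow" } | { kind: "Refuse"; code: string };

const MAX_PAYLOAD_LENGTH = 4096;                       // assumed bound
const PHASE_TABLE: Record<string, string[]> = {        // assumed phase → instruction map
  draft: ["summarize", "extract"],
  commit: ["extract"],
};
const MULTI_INTENT_MARKERS = ["and also", "in addition", "furthermore"]; // assumed list

function evaluate(
  phase: string,
  instruction: string,
  payload: string,
  pinnedModel: string,
  requiredModel: string
): Verdict {
  // Each failure is terminal: the first refusing check ends evaluation.
  if (!(PHASE_TABLE[phase] ?? []).includes(instruction))
    return { kind: "Refuse", code: "INSTRUCTION_NOT_IN_PHASE" };
  if (pinnedModel !== requiredModel)
    return { kind: "Refuse", code: "MODEL_NOT_PINNED" };
  if (payload.length === 0)
    return { kind: "Refuse", code: "EMPTY_PAYLOAD" };
  if (payload.length > MAX_PAYLOAD_LENGTH)
    return { kind: "Refuse", code: "PAYLOAD_TOO_LONG" };
  if (payload.trimEnd().endsWith("..."))               // assumed truncation artifact
    return { kind: "Refuse", code: "TRUNCATED" };
  if (MULTI_INTENT_MARKERS.some((m) => payload.toLowerCase().includes(m)))
    return { kind: "Refuse", code: "MULTI_INTENT" };
  return { kind: "Allow" };
}
```

Note that the function touches no external state and calls no model: the verdict is a pure function of its five arguments.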
This calculus was not designed by an AI engineer.
It was abstracted from 20 years of production workflow experience in film, motion graphics, and large-scale media delivery — contexts where authority, phase-gating, and artifact immutability are not theoretical but operational necessities.
The core insight came from working on ComfyUI-based AI production pipelines, where a pattern kept recurring: failures were not caused by model quality or UI layout. They were caused by ambiguous context ownership — specifically, what was allowed to become real, and when.
That problem ("Reading Across Panels") led to a multi-month abstraction process, documented in the governance folder, that progressively separated:
- Blocking from controlling
- Authority from inference
- Phase from workflow step
- Refusal from failure
The calculus is the distilled output of that process.
The calculus has been formalized against a Lean 4 shadow specification (Lean v4.26.0, Lake build system).
The Lean layer proves:
| What is proven | Lean file |
|---|---|
| State space and primitive definitions | Core.lean |
| Governance invariants hold for all inputs | Invariants.lean |
| Only permitted transitions are reachable | Transitions.lean |
| No unauthorized execution paths exist | Closure.lean |
| System is non-empty and coherent | Witness.lean |
Build result: All 8 jobs completed successfully. No axioms introduced.
What Lean verifies: Structural coherence. The shape of the calculus is sound.
What Lean does not verify: Whether the calculus handles real-world inputs correctly. Whether the keyword lists are sufficient. Whether the phase model maps correctly onto any specific deployment domain.
Coherence verified. Functionality unverified. That distinction is intentional and honest.
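To make the "coherence, not functionality" boundary concrete, here is a toy Lean 4 sketch of the kind of statement such a shadow specification can prove. All names and types below are invented for illustration; the repository's actual definitions live in Core.lean, Invariants.lean, and the other files listed above.

```lean
-- Toy model: a verdict type and a gate over a simplified payload.
inductive Verdict where
  | allow
  | refuse (code : Nat)

def gate (payload : Nat) : Verdict :=
  if payload = 0 then .refuse 2 else .allow

-- An empty payload is always refused (shape of an invariant proof).
theorem empty_refused : gate 0 = .refuse 2 := rfl

-- Determinism is definitional: a pure function equals itself on equal inputs.
theorem gate_deterministic (p : Nat) : gate p = gate p := rfl
```

Proofs of this kind guarantee the structure behaves as specified; they say nothing about whether `payload = 0` is the right real-world notion of "empty," which is exactly the unverified-functionality gap described above.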
INV-01: Instruction must be allowed in the current phase
INV-02: Payload must be non-empty
INV-03: Payload length must not exceed MAX_PAYLOAD_LENGTH
INV-04: Truncated payloads are refused
INV-05: Multi-intent payloads are refused
INV-06: Refusal is terminal
INV-07: No authority is inferred
INV-08: Silence is a valid outcome
INV-09: Deterministic evaluation only
These invariants are complete and frozen. No new invariants are expected. Any change requiring a new invariant constitutes a new kernel.
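INV-09 is also cheap to spot-check from the outside: feed any candidate gate identical inputs twice and demand structurally identical verdicts. A hedged TypeScript sketch, using a toy stand-in gate rather than the real calculus:

```typescript
// Sketch: an external determinism check for any gate-shaped function.
type Verdict = { kind: "Allow" } | { kind: "Refuse"; code: string };
type Gate = (phase: string, instruction: string, payload: string) => Verdict;

function isDeterministicOn(gate: Gate, samples: [string, string, string][]): boolean {
  return samples.every(([ph, ins, pl]) => {
    const a = gate(ph, ins, pl);       // first evaluation
    const b = gate(ph, ins, pl);       // second evaluation, identical inputs
    return JSON.stringify(a) === JSON.stringify(b); // structural equality
  });
}

// Toy stand-in: refuses empty payloads, otherwise allows.
const toyGate: Gate = (_ph, _ins, pl) =>
  pl.length === 0 ? { kind: "Refuse", code: "EMPTY_PAYLOAD" } : { kind: "Allow" };

console.log(isDeterministicOn(toyGate, [["draft", "run", ""], ["draft", "run", "x"]])); // true
```

Any gate holding hidden state, or making a model call, fails this check immediately, which is why INV-09 rules both out.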
The Aperture Calculus is:
- Not a complete AI safety solution
- Not a content filter
- Not a replacement for model-level safety measures
- Not a deployed system
- Not a product
It is a primitive — a missing structural piece in the pre-execution layer of AI pipelines. Its value depends entirely on what wraps it.
| Property | NeMo / Lakera | Aperture Calculus |
|---|---|---|
| Classification method | Probabilistic (LLM-based) | Fully deterministic |
| Model calls | Yes | Zero |
| State | Stateful | Stateless |
| Auditability | Configurable | Hash-frozen, immutable |
| Evaluation basis | Semantic intent | Explicit structural properties |
| Refusal reason | Probabilistic score | Single typed reason code |
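The "hash-frozen" row is mechanically checkable. A minimal Node.js sketch that recomputes the artifact's SHA-256 and compares it to the pinned hash from the repository header (the file path is whatever your checkout uses):

```typescript
// Sketch: verify a local copy of the frozen artifact against the pinned hash.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Pinned SHA-256 from the repository header.
const PINNED =
  "09CADC6708DBFF411C4F216739CDC35231F1C07F2C3388FCA04EE8E05BDB76D5";

function verify(path: string): boolean {
  const digest = createHash("sha256")
    .update(readFileSync(path))     // hash the raw bytes of the file
    .digest("hex")
    .toUpperCase();
  return digest === PINNED;
}
```

Usage would be `verify("src/v2/calculus.ts")` from the repository root; any edit to the file, however small, flips the result to `false`.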
The calculus is industry-agnostic. Structural fit exists wherever:
- Audit trails require provably frozen, verifiable logic
- Multi-intent or out-of-scope instructions are a known risk
- A deterministic pre-execution gate is preferable to probabilistic classification
- Model pinning is a compliance or safety requirement
Specific domain fit: legal AI, clinical documentation, aviation decision support, financial services, regulated enterprise AI deployments, multi-agent pipeline handoffs.
This repository represents three months of research, abstraction, and verification work following the original authoring window.
The calculus is ready for:
- Review by AI safety researchers
- Real-world deployment testing against production inputs
It is not ready for:
- Production deployment without domain-specific validation
- Claims of completeness as a safety solution
Outreach is underway to Anthropic's safety team and potential deployment partners for functional validation.
This repository was originally committed as Keystone — a local-first UI testing harness used to author and harden the execution calculus during development. Keystone is the environment; the Aperture Calculus (src/v2/calculus.ts) is the artifact.
The commit history reflects that development sequence:
- 9e04d4b — Initial source commit (calculus frozen here)
- 62ba714 — Governance documentation added
- b04bee2 — Lean shadow specification and build evidence added
Keystone the UI is not the subject of this research. It is not intended for external deployment. The calculus is the sole artifact of interest.
Anthony Gutierrez
aperture-systems.com
anthony@aperture-systems.com
702.278.0873
Not an AI engineer. This work emerged from production workflow problems, not academic ML research.