Git‑first prompt hub for teams and individual developers.
- Prompts are governed artifacts: versioned in Git, validated in CI, and loaded locally at runtime.
- Teams ship prompt changes as safely as code without added latency or vendor lock‑in.
- A small, auditable core with deterministic evals, release tags, and optional policy hooks.
InstructVault is a Git‑first “prompt‑as‑code” system. Prompts live in your repo, are validated and evaluated in CI, released via tags/SHAs, and loaded locally at runtime directly from Git or via a bundle artifact.
- Prompts live in Git as YAML/JSON files
- CI validates + evaluates prompts on every change
- Releases are tags/SHAs, reproducible by design
- Runtime stays lightweight (local read or bundle artifact)
```mermaid
flowchart LR
  A[Prompt files<br/>YAML/JSON] --> B[PR Review]
  B --> C[CI: validate + eval]
  C --> D{Release?}
  D -- tag/SHA --> E[Bundle artifact]
  D -- tag/SHA --> F[Deploy app]
  E --> F
  F --> G[Runtime render<br/>local or bundle]
```
Enterprises already have Git + PR reviews + CI/CD. Prompts usually don’t. InstructVault brings prompt‑as‑code without requiring a server, database, or platform.
Short version: Git‑first prompts with CI governance and zero‑latency runtime.
Full vision: docs/vision.md
- ✅ Git‑native versioning (tags/SHAs = releases)
- ✅ CLI‑first (`init`, `validate`, `render`, `eval`, `diff`, `resolve`, `bundle`)
- ✅ LLM‑framework agnostic (returns standard `{role, content}` messages)
- ✅ CI‑friendly reports (JSON + optional JUnit XML)
- ✅ No runtime latency tax (local read or bundle)
- ✅ Optional playground (separate package)
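Because prompts render to plain `{role, content}` messages, the output plugs into any chat-style client. A minimal sketch of the shape (the message contents echo the examples below; the downstream client call is an assumption, shown only as a comment):

```python
# Rendered prompts are plain chat messages: a list of {"role", "content"} dicts.
messages = [
    {"role": "system", "content": "You are a support engineer."},
    {"role": "user", "content": "Customer: Sam\nTicket:\nMy app crashed."},
]

# Any framework that speaks the standard chat format can consume them directly,
# e.g. an OpenAI-style client (hypothetical usage, not executed here):
# response = client.chat.completions.create(model="...", messages=messages)

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user']
```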
```bash
pip install instructvault
```

For development:

```bash
git clone <your-repo>
cd instructvault
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pytest
```

- Install `instructvault` in your app repo (or a dedicated prompts repo)
- Run `ivault init` once to scaffold `prompts/`, `datasets/`, and CI
- Add or edit prompt files under `prompts/`
- Validate and eval locally (`ivault validate`, `ivault eval`)
- Commit prompt changes and create a tag (e.g., `prompts/v1.0.0`)
- In your app, render by git ref (tag/branch/SHA) or ship a bundle artifact
```bash
pip install instructvault
```

- Create a `prompts/` folder (or pick an existing one)
- Add prompt files under `prompts/`, with at least one inline test per prompt
- Add CI checks (copy from `docs/ci.md` or run `ivault init` to scaffold a workflow)
- Validate/eval locally: `ivault validate prompts`, `ivault eval prompts/<file>.prompt.yml --report out/report.json`
- Commit prompts and optionally tag: `git tag prompts/v1.0.0`
- At runtime, load by ref or bundle artifact
```mermaid
flowchart LR
  A[Install ivault] --> B[ivault init]
  B --> C["Add/edit prompts"]
  C --> D["ivault validate + eval"]
  D --> E["Commit + tag"]
  E --> F{Runtime path}
  F -->|Load by ref| G["InstructVault(repo_root)"]
  F -->|Bundle artifact| H[ivault bundle]
  H --> I["InstructVault(bundle_path)"]
```
```mermaid
flowchart LR
  A[Install instructvault] --> B["Create/choose prompts/ + datasets/"]
  B --> C["Add/edit prompt files"]
  C --> D[Add CI checks]
  D --> E["Local validate + eval"]
  E --> F["Commit + tag (optional)"]
  F --> G{Runtime path}
  G -->|Load by ref| H["InstructVault(repo_root)"]
  G -->|Bundle artifact| I[ivault bundle]
  I --> J["InstructVault(bundle_path)"]
```
```bash
ivault init
```

`prompts/support_reply.prompt.yml` (YAML or JSON):
```yaml
spec_version: "1.0"
name: support_reply
description: Respond to a support ticket with empathy and clear steps.
model_defaults:
  temperature: 0.2
variables:
  required: [ticket_text]
  optional: [customer_name]
messages:
  - role: system
    content: |
      You are a support engineer. Be concise, empathetic, and action-oriented.
  - role: user
    content: |
      Customer: {{ customer_name | default("there") }}
      Ticket:
      {{ ticket_text }}
tests:
  - name: must_contain_customer_and_ticket
    vars:
      ticket_text: "My order arrived damaged."
      customer_name: "Alex"
    assert:
      contains_all: ["Customer:", "Ticket:"]
```

```bash
ivault validate prompts
```
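Conceptually, validation enforces the spec's structural invariants. A minimal sketch in plain Python (the real rules live in `docs/spec.md`; the specific checks below are illustrative assumptions, and a dict stands in for the parsed YAML):

```python
# A parsed prompt spec, as a plain dict (stands in for the YAML file above).
spec = {
    "spec_version": "1.0",
    "name": "support_reply",
    "variables": {"required": ["ticket_text"], "optional": ["customer_name"]},
    "messages": [
        {"role": "system", "content": "You are a support engineer."},
        {"role": "user", "content": "Ticket:\n{{ ticket_text }}"},
    ],
    "tests": [
        {"name": "t", "vars": {"ticket_text": "x"}, "assert": {"contains_all": ["Ticket:"]}},
    ],
}

def validate(spec: dict) -> list:
    """Return a list of validation errors (illustrative checks only)."""
    errors = []
    for field in ("spec_version", "name", "messages"):
        if field not in spec:
            errors.append(f"missing required field: {field}")
    if not spec.get("tests"):
        errors.append("prompt must include at least one test")
    for msg in spec.get("messages", []):
        if msg.get("role") not in {"system", "user", "assistant"}:
            errors.append(f"unknown role: {msg.get('role')!r}")
    return errors

print(validate(spec))  # → []
```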
```bash
ivault render prompts/support_reply.prompt.yml --vars '{"ticket_text":"My app crashed.","customer_name":"Sam"}'
```

- Add `--safe` to scan rendered output for common secret patterns.
- Use `--strict-vars` to forbid unknown vars and `--redact` to mask detected secrets.
- Use `--policy /path/to/policy.py` to enforce custom compliance rules.
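Under the hood, rendering is template substitution over the message contents. A minimal sketch of the idea (the real engine and its filter set are not shown here; this assumes Jinja-style `{{ var }}` syntax with a `default` filter, implemented with stdlib regex purely for illustration):

```python
import re

def render_template(template: str, vars: dict) -> str:
    """Substitute {{ name }} and {{ name | default("fallback") }} placeholders."""
    pattern = re.compile(r'\{\{\s*(\w+)(?:\s*\|\s*default\("([^"]*)"\))?\s*\}\}')

    def replace(match):
        name, fallback = match.group(1), match.group(2)
        if name in vars:
            return str(vars[name])
        if fallback is not None:
            return fallback
        raise KeyError(f"missing required variable: {name}")

    return pattern.sub(replace, template)

template = 'Customer: {{ customer_name | default("there") }}\nTicket:\n{{ ticket_text }}'
print(render_template(template, {"ticket_text": "My order arrived damaged."}))
# Customer: there
# Ticket:
# My order arrived damaged.
```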
`datasets/support_cases.jsonl`:

```jsonl
{"vars":{"ticket_text":"Order arrived damaged","customer_name":"Alex"},"assert":{"contains_any":["Ticket:"]}}
{"vars":{"ticket_text":"Need refund"},"assert":{"contains_all":["Ticket:"]}}
```

```bash
ivault eval prompts/support_reply.prompt.yml --dataset datasets/support_cases.jsonl --report out/report.json --junit out/junit.xml
```

Migration tip: if you need to render a prompt that doesn’t yet include tests, use `ivault render --allow-no-tests` or add a minimal test first.
```bash
ivault migrate prompts
```

```bash
git add prompts datasets
git commit -m "Add support prompts + eval dataset"
git tag prompts/v1.0.0
```

```python
from instructvault import InstructVault

vault = InstructVault(repo_root=".")
msgs = vault.render(
    "prompts/support_reply.prompt.yml",
    vars={"ticket_text": "My order is delayed", "customer_name": "Ava"},
    ref="prompts/v1.0.0",
)
```

Troubleshooting: if you pass a ref and see `FileNotFoundError` from `store.read_text`,
the prompt file must exist at that ref and be committed in the same repo. Tags/branches
must point to commits that include the prompt file.
If your prompts live in a separate repo, point repo_root to that repo (not your app repo),
or bundle prompts at build time and ship the bundle with your app.
```python
from instructvault import InstructVault

vault = InstructVault(repo_root="/path/to/prompts-repo")
msgs = vault.render(
    "prompts/support_reply.prompt.yml",
    vars={"ticket_text": "My order is delayed"},
    ref="prompts/v1.0.0",
)
```

- `FileNotFoundError ... read_text` with `ref`: prompt not committed at that ref, or wrong `repo_root`
- `No prompt files found`: the path passed to `ivault validate` doesn’t contain `*.prompt.yml|json`
- `prompt must include at least one test`: add a minimal inline test, or use `--allow-no-tests` for render
```bash
ivault bundle --prompts prompts --out out/ivault.bundle.json --ref prompts/v1.0.0
```

```python
from instructvault import InstructVault

vault = InstructVault(bundle_path="out/ivault.bundle.json")
```

- `examples/notebooks/instructvault_colab.ipynb`
- `examples/notebooks/instructvault_rag_colab.ipynb`
- `examples/notebooks/instructvault_openai_colab.ipynb`
- `examples/ivault_demo_template/README.md`
- `examples/policies/policy_example.py`
- `examples/policies/policy_pack.py`
- Prompt changes go through PRs
- CI runs `validate` + `eval`
- Tags or bundles become the deployable artifact
- Apps load by tag or bundle (no runtime network calls)
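The CI leg of this flow works with any runner. A hedged sketch for GitHub Actions (the job name, trigger paths, and Python version are assumptions; the `ivault` commands are the ones shown above, and `docs/ci.md` has the canonical templates):

```yaml
name: prompts
on:
  pull_request:
    paths: ["prompts/**", "datasets/**"]
jobs:
  validate-and-eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install instructvault
      - run: ivault validate prompts
      - run: ivault eval prompts/support_reply.prompt.yml --dataset datasets/support_cases.jsonl --report out/report.json --junit out/junit.xml
```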
Datasets are deterministic eval inputs checked into Git. This makes CI reproducible and audit‑friendly.
For cloud datasets, use a CI pre‑step (e.g., download from S3) and then run `ivault eval` on the local file.
A minimal playground exists under playground/ for local or org‑hosted use.
It lists prompts, renders with variables, and runs evals — without touching production prompts directly.
For local dev, run from the repo root:
```bash
export IVAULT_REPO_ROOT=/path/to/your/repo
PYTHONPATH=. uvicorn ivault_playground.app:app --reload
```

Optional auth:

```bash
export IVAULT_PLAYGROUND_API_KEY=your-secret
```

Then send `x-ivault-api-key` in requests (or keep it behind your org gateway).
If you don’t set the env var, no auth is required.
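Attaching the header from a client is plain HTTP. A minimal stdlib sketch (the host and endpoint path are assumptions; only the `x-ivault-api-key` header name comes from the docs above, and no request is actually sent):

```python
import os
import urllib.request

# Hypothetical endpoint: adjust host/path to wherever the playground is running.
req = urllib.request.Request(
    "http://localhost:8000/prompts",
    headers={"x-ivault-api-key": os.environ.get("IVAULT_PLAYGROUND_API_KEY", "")},
)

# urllib stores header names capitalized; the key rides along on every request.
# (To actually send it: urllib.request.urlopen(req) -- omitted here.)
print(req.has_header("X-ivault-api-key"))  # True
```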
- `docs/dropin_guide.md` — minimal setup if you already have CI
- `docs/cookbooks.md` — workflows (tags, bundles, multi‑repo, RAG)
- `docs/spec.md` — prompt spec and validation rules
- `docs/ci.md` — CI setup and reports
- `docs/governance.md` — CODEOWNERS and release guardrails
- `docs/playground.md` — optional local/hosted playground
- `docs/audit_logging.md` — audit fields and patterns
- `docs/vision.md` — product vision and guiding principles
- `docs/release_checklist.md` — release checklist for maintainers
- `docs/ci_templates/gitlab-ci.yml` — GitLab CI example
- `docs/ci_templates/Jenkinsfile` — Jenkins example
- `CHANGELOG.md`
- `CODE_OF_CONDUCT.md`
Apache‑2.0
