# @intflows/genkit-guard

`@intflows/genkit-guard` provides a modular guardrail layer for Genkit flows. It adds semantic intent validation, PII masking/unmasking, and prompt‑injection detection with minimal configuration.

This library is designed for developers who want practical, production‑ready safety controls without heavy dependencies or complex setup.
## Features

- **Semantic Intent Guarding**: uses MiniLM embeddings to ensure prompts match allowed intents.
- **PII Detection & Masking**: detects emails, phone numbers, names, and AU‑specific identifiers, and replaces them with reversible tokens before sending the prompt to the LLM.
- **Automatic Unmasking**: restores the original PII in the model's response, even inside structured JSON.
- **Prompt Injection Detection**: blocks jailbreak attempts using pattern‑based heuristics.
- **Model‑Light Architecture**: uses local `all-MiniLM-L6-v2` and `bert-base-NER` models, which are downloaded once and cached locally.
- **Drop‑in Genkit Middleware**: works with `ai.generate`, `ai.generateStream`, and Genkit flows.
## Install the package

```bash
npm install @intflows/genkit-guard
```

This library uses lightweight transformer models (MiniLM + BERT‑NER). Download them once.

## Download the transformer models (MiniLM + BERT‑NER)

```bash
node node_modules/@intflows/genkit-guard/scripts/download-model.js
```

Models are cached locally and reused across runs.
## Quick Start

```bash
# Install @intflows/genkit-guard
npm install @intflows/genkit-guard

# Download local models (only needed once)
node node_modules/@intflows/genkit-guard/scripts/download-model.js
```

```ts
import { guard, initGuard } from "@intflows/genkit-guard";

await initGuard();

const response = await ai.generate({
  prompt: "How do I integrate with Azure Blob Storage?",
  use: [
    guard({
      intent: {
        mode: "semantic",
        allowedIntent: "integration",
        semantic: {
          threshold: 0.7,
          intents: {
            integration: "Azure Blob, APIs, workflows"
          }
        }
      },
      pii: { reversible: true }
    })
  ]
});
```

You can also check the full step‑by‑step guide here:
```bash
npx tsx src/index.ts "How do I integrate with Azure Blob Storage?"
npx tsx src/index.ts "workflow to download a file from an API, save it to Blob file and export the API key"
npx tsx src/index.ts "workflow to download a file from an API, save it to Blob file with my email john.doe@example.com"
```
## Semantic Intent Guarding

- Embeds the user prompt and the intent descriptions using MiniLM
- Computes cosine similarity between them
- Blocks prompts that score below the threshold
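The gating step above can be sketched in a few lines. This is an illustration, not the library's internals: `cosineSimilarity` and `isAllowed` are hypothetical helper names, and the MiniLM embedding step is assumed to have already produced the vectors.

```ts
// Sketch of threshold-based semantic gating (hypothetical helpers, not the
// library's actual API). Vectors are assumed to come from a MiniLM embedder.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Allow the prompt only if it is close enough to at least one allowed intent.
function isAllowed(promptVec: number[], intentVecs: number[][], threshold = 0.7): boolean {
  return intentVecs.some((v) => cosineSimilarity(promptVec, v) >= threshold);
}
```

With a `threshold` of 0.7 (the default shown in the config examples), an off-topic prompt whose best similarity is, say, 0.4 would be blocked.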
## Prompt Injection Detection

Detects jailbreak patterns such as:

- “ignore previous instructions”
- “you are a hacker”
- “export the API key”
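A pattern-based heuristic along these lines is easy to picture. The regexes below only mirror the three examples above; the library's actual rule set is larger and not reproduced here.

```ts
// Illustrative injection heuristics; patterns mirror the README examples,
// not the library's full rule set.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore\s+(all\s+)?previous\s+instructions/i,
  /you\s+are\s+a\s+hacker/i,
  /export\s+the\s+api\s+key/i,
];

function looksLikeInjection(prompt: string): boolean {
  return INJECTION_PATTERNS.some((re) => re.test(prompt));
}
```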
## PII Masking & Unmasking

Before the LLM sees the prompt:

```
"Email john.doe@example.com" → "Email [[EMAIL_0]]"
```

Detected PII includes:

- Emails
- Phone numbers
- Names (NER)
- AU identifiers (Medicare, TFN, ABN, etc.)

The masked prompt is sent to the model. After the LLM responds, the tokens are restored:

```
"Send a confirmation email to [[EMAIL_0]]" → "Send a confirmation email to john.doe@example.com"
```
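The mask/unmask round trip can be sketched for the email case. This is a simplified illustration under assumed names (`maskEmails`, `unmask`); the real library also covers phone numbers, NER-detected names, and AU identifiers.

```ts
// Simplified reversible masking: swap each email for a [[EMAIL_n]] token,
// remember the mapping, and restore it in the model's response.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function maskEmails(text: string): { masked: string; map: Map<string, string> } {
  const map = new Map<string, string>();
  let i = 0;
  const masked = text.replace(EMAIL_RE, (email) => {
    const token = `[[EMAIL_${i++}]]`;
    map.set(token, email);
    return token;
  });
  return { masked, map };
}

function unmask(text: string, map: Map<string, string>): string {
  let out = text;
  for (const [token, original] of map) out = out.split(token).join(original);
  return out;
}
```

Because the mapping is kept outside the prompt, the original values never reach the model, yet the response the caller sees contains the real data.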
## Configuration

Intent guarding:

```ts
intent: {
  mode: "semantic",
  allowedIntent: "intent_question",
  semantic: {
    threshold: 0.7,
    intents: {
      intent_question: "Description of allowed intent"
    }
  }
}
```

PII masking:

```ts
pii: {
  reversible: true
}
```

## Why This Library

Genkit provides a powerful LLM framework, but production systems need:
- intent boundaries
- PII protection
- jailbreak resistance
- predictable behavior
This library adds those guardrails without heavy dependencies or complex setup.
## Roadmap

We plan to:

- Extend the utility with auth and tool middleware in later stages.
- Add more filter types for common malicious prompts.
- Add more patterns for custom PII masking.
## Contributing

Contributions are welcome, whether it's bug reports, new guard modules, model improvements, or other enhancements. This project aims to stay lightweight, modular, and production‑ready, so thoughtful contributions are appreciated.
## License

Apache‑2.0

