Selvedge is a functional toolkit for TypeScript developers that makes working with AI language models simpler and more reliable. Instead of wrestling with unpredictable AI responses and complex API calls, Selvedge gives you a clean, consistent way to integrate AI into your applications.
Selvedge is named after the distinctive finished edge on premium denim jeans that prevents fraying. In the same spirit, it rethinks how programs are written with LLMs, replacing frayed, ad-hoc integrations with a consistent structure.
Selvedge creates a consistent interface for working with language models, allowing you to:
- Write specifications that LLMs translate into working code
- Define typed prompts that generate predictable data structures
- Compose both into robust processing pipelines
This structured approach eliminates the chaos of prompt engineering and the tedium of boilerplate code. You focus on what you want to accomplish, and Selvedge creates the bridge between your intentions and executable solutions.
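For a taste of the first point, here is a minimal, entirely hypothetical sketch of specification-to-code. The `selvedge.program` helper name and its call shape are assumptions based on the descriptions in this README, not confirmed API:

```ts
import selvedge from "selvedge";

// Hypothetical sketch: selvedge.program is assumed from the description
// above; the real helper name and signature may differ.
const slugify = selvedge.program<(title: string) => string>`
  Given a blog post title, return a URL-safe slug:
  lowercase, hyphen-separated, ASCII only.
`;

// The model translates the specification into working code, which is then
// executed like an ordinary function.
const slug = await slugify("Hello, Selvedge!"); // e.g. "hello-selvedge"
```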
See `examples/` for how to get started.
```bash
npm install selvedge
# or
yarn add selvedge
# or
bun add selvedge
```

Selvedge is a TypeScript-first DSL that wraps the moving parts of LLM applications (prompt templates, typed schemas, model routing, and orchestration) behind a single `selvedge` namespace. Core capabilities include:
- Typed prompts via helpers in `lib/prompts` and `lib/schema`, making model inputs and outputs explicit.
- Programs that generate and execute code from specifications in `lib/programs`, allowing intent-driven automation.
- Flows that compose prompts, transforms, validators, and filters from `lib/flow` into reusable pipelines.
- Model registry and providers in `lib/models` and `lib/providers` to register OpenAI, Anthropic, or mock backends with friendly aliases.
- Optimization helpers (e.g., few-shot tuning) in `lib/optimize` to iteratively improve prompts (a hypothetical sketch follows this list).
- Storage and management layers in `lib/storage` and `lib/manager` for sharing state and coordinating runs.
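As a flavor of the optimization helpers, the sketch below re-runs a prompt over scored examples. It is hypothetical throughout: `selvedge.optimize` and its option names are assumptions based on the bullet above (only `selvedge.prompt` appears in the quickstart later):

```ts
import selvedge from "selvedge";

// Hypothetical sketch: selvedge.optimize and its options are assumed from
// the lib/optimize description; the real entry point may differ.
const synonyms = selvedge.prompt<{ word: string }, { synonyms: string[] }>`
  List three synonyms for {{word}}.
  Return JSON with a "synonyms" array of strings.
`;

// Few-shot tuning: try prompt variants against scored examples and keep
// the best-performing one.
const tuned = await selvedge.optimize(synonyms, {
  examples: [
    { input: { word: "fast" }, expected: { synonyms: ["quick", "rapid", "swift"] } },
  ],
  score: (got: { synonyms: string[] }, want: { synonyms: string[] }) =>
    got.synonyms.filter((s) => want.synonyms.includes(s)).length,
  model: "fast",
});
// tuned would be a drop-in replacement for the original prompt.
```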
Under the hood, the pieces fit together like this:

- Typed artifacts everywhere: Prompts (`lib/prompts`) pair template strings with TypeScript input/output generics, and schemas (`lib/schema`) expose runtime validators that are reused by flows and programs. This keeps model I/O consistent from authoring time through runtime execution.
- Execution kernel: Flows (`lib/flow`) normalize every step (prompt, transform, filter, validator) into a uniform async function signature. The kernel sequences these steps, threads context between them, and short-circuits on failed guards or validations (sketched after the runtime diagram below).
- Program synthesis loop: Program builders (`lib/programs`) translate high-level specifications into generated code, then invoke providers via the same flow engine. Optimizers (`lib/optimize`) can re-run flows with modified prompts/models and compare scored outcomes.
- Model provider interface: Providers in `lib/providers` implement a small contract (`invoke(prompt, options) -> result`) that the model registry wraps with metadata (e.g., name, capabilities, cost). Swapping OpenAI vs. Anthropic vs. mock backends is a registry change, not a code change (a sketch of this contract follows the list).
- State and coordination: The manager (`lib/manager`) tracks run metadata and exposes hooks for logging/telemetry. Storage drivers (`lib/storage`) offer pluggable persistence for caching intermediate artifacts, allowing reproducible runs and offline fixtures.
- Isolation via namespaces: `src/index.ts` merges the subsystems behind a single namespace so consumers can opt into only the pieces they need while keeping internal modules loosely coupled.
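The provider contract is small enough to sketch directly. The `invoke(prompt, options) -> result` shape comes from the bullet above; everything else here (field names, option types) is an assumption:

```ts
// The invoke() shape is stated above; the option fields are assumptions.
interface Provider {
  invoke(prompt: string, options?: { temperature?: number }): Promise<string>;
}

// A mock backend satisfying the same contract, handy for offline tests.
const mockProvider: Provider = {
  invoke: async (prompt) => `mock completion for: ${prompt.slice(0, 24)}`,
};
```

Because callers only ever reference friendly aliases, pointing a pipeline at `mockProvider` instead of OpenAI is a registry change, not a code change.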
At runtime, a request moves through the layers like this:

```text
User input
    |
    v
[Flow / Program]
    |  (normalizes steps, applies validators)
    v
[Execution kernel]
    |  (resolves model alias -> provider)
    v
[Provider]
    |  (LLM call, streaming, tooling hooks)
    v
[Manager & Storage]
    |  (persist, log, reuse artifacts)
    v
Structured result back to caller
```
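To make the kernel's role concrete, here is a minimal sketch of the uniform step signature and sequencing loop described above. The names are assumptions; only the behavior (normalize, thread context, short-circuit) comes from this README:

```ts
// Sketch only: the real internals of lib/flow may differ.
type Step = (input: unknown, ctx: { model?: string }) => Promise<unknown>;

async function runSteps(steps: Step[], input: unknown, ctx: { model?: string }) {
  let value = input;
  for (const step of steps) {
    // A validator or filter step is expected to throw on failure, which
    // short-circuits the remaining steps.
    value = await step(value, ctx);
  }
  return value;
}
```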
The library layers are small and composable. At runtime you usually interact with the merged `selvedge` export from `src/index.ts`, while each subsystem stays focused on a single responsibility:
```text
                 +--------------------+
                 | selvedge namespace |
                 |   (src/index.ts)   |
                 +---------+----------+
                           |
     +---------------------+---------------------+
     |                     |                     |
+----+--------+   +--------+--------+   +--------+------+
| Prompts &   |   | Programs &      |   | Flow engine   |
| Schemas     |   | Optimizers      |   | (lib/flow)    |
| (lib/prompts|   | (lib/programs,  |   | compose steps |
|  lib/schema)|   |  lib/optimize)  |   | & run filters |
+----+--------+   +--------+--------+   +--------+------+
     |                     |                     |
     +---------------------+---------------------+
                           |
                   +-------+-------+
                   | Model registry|
                   | (lib/models & |
                   |  providers)   |
                   +-------+-------+
                           |
                   +-------+-------+
                   | Storage &     |
                   | Manager       |
                   | (lib/storage, |
                   |  lib/manager) |
                   +---------------+
```
Common use cases:

- Building structured generation features where LLM outputs must conform to typed schemas.
- Creating multi-step reasoning pipelines that validate, branch, or filter intermediate results.
- Prototyping agentic behaviors by turning high-level specifications into executable programs.
- Running A/B experiments or prompt optimizations with interchangeable model backends.
- Developing tests and mocks for LLM-dependent code paths via the mock provider (see the testing sketch after the quickstart below).
A minimal quickstart, end to end:

```ts
import selvedge from "selvedge";

// 1) Register models under friendly aliases
selvedge.models({
  fast: selvedge.openai("gpt-4o-mini"),
  smart: selvedge.anthropic("claude-3-opus"),
});

// 2) Define a typed prompt template
const summarize = selvedge.prompt<{ text: string }, { bullets: string[] }>`
  Summarize the following article into 3 bullet points.
  Text: {{text}}
  Return JSON with a "bullets" array of strings.
`;

// 3) Compose a flow with validation
const pipeline = selvedge.flow([
  summarize,
  selvedge.validate(({ bullets }) => Array.isArray(bullets) && bullets.length === 3),
]);

// 4) Run the flow against a registered model
const result = await pipeline({ text: "Long-form content" }, { model: "fast" });
console.log(result.bullets);
```
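For unit tests, the same `pipeline` can be pointed at a mock backend so no network calls are made. This sketch is hypothetical: `selvedge.mock` and its options are assumptions based on the mock-provider support mentioned above:

```ts
// Hypothetical: selvedge.mock is assumed; the real helper name may differ.
selvedge.models({
  test: selvedge.mock({
    reply: JSON.stringify({ bullets: ["one", "two", "three"] }),
  }),
});

// The validator in the pipeline still runs, so the canned reply must pass it.
const fixture = await pipeline({ text: "fixture text" }, { model: "test" });
console.assert(fixture.bullets.length === 3);
```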