Semantic search routing for SPAs
Find the best route by meaning, not keywords. Powered by HuggingFace embeddings and Orama hybrid search.
ai-router runs a lightweight AI embedding model inside a Web Worker so the main thread stays fast. It indexes your routes into a hybrid (text + vector) search engine and matches user queries by semantic similarity — typos, synonyms, and natural language all work out of the box.
```
User query: "how much does it cost?"
  --> SmartRouter --> { path: "/pricing", score: 0.87 }
```
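Conceptually, the matching step reduces to comparing embedding vectors. The sketch below scores routes by cosine similarity against a query vector; the hand-made vectors and the `bestRoute` helper are hypothetical stand-ins, since the real library embeds text with a HuggingFace model in a worker and combines vector scores with Orama's text search.

```ts
// Sketch: pick the route whose embedding is most similar to the query
// embedding, subject to a minimum-score threshold. Vectors here are
// hand-made stand-ins for real model embeddings.
type Embedded = { path: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function bestRoute(query: number[], routes: Embedded[], threshold = 0.5) {
  let best: { path: string; score: number } | null = null;
  for (const r of routes) {
    const score = cosine(query, r.vector);
    if (score >= threshold && (!best || score > best.score)) {
      best = { path: r.path, score };
    }
  }
  return best; // null when no route clears the threshold
}
```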
- Semantic matching — understands meaning, not just keywords
- Web Worker — model runs off the main thread, zero UI jank
- SSR-safe — no-op on the server, works with Next.js/Nuxt/etc.
- Tiny API — `new SmartRouter()`, `.search()`, `.destroy()`
- Progressive model loading — start with a fast model, upgrade to a better one in the background
- Instance caching — `SmartRouter.create()` and `.preload()` reuse the same worker across mounts
- Framework plugins — auto-scan routes for Next.js, Vite, and webpack
- Cached model — ~22 MB on first load, then instant from browser cache
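The SSR-safe bullet boils down to an environment guard: on the server there is no `Worker`, so setup is skipped and searches resolve to nothing. A sketch of that general pattern (illustrative only, not the library's actual internals):

```ts
// Sketch of an SSR guard. In a non-browser environment we skip worker
// setup entirely and hand back a no-op searcher, so server-side renders
// never touch the embedding model.
function isBrowser(): boolean {
  return typeof window !== 'undefined' && typeof Worker !== 'undefined';
}

function createSearcher() {
  if (!isBrowser()) {
    // Server: no worker is created; every search resolves to null.
    return { search: async (_query: string) => null, destroy() {} };
  }
  // Browser: placeholder — the real implementation would spin up the
  // embedding Web Worker here.
  return { search: async (_query: string) => null, destroy() {} };
}
```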
| Package | Description |
|---|---|
| `@van1s1mys/ai-router` | Core library — `SmartRouter` class |
| `@van1s1mys/ai-router-plugin-vite` | Vite plugin — auto-scan routes |
| `@van1s1mys/ai-router-plugin-next` | Next.js plugin — wraps webpack with sensible defaults |
| `@van1s1mys/ai-router-plugin-webpack` | webpack plugin — auto-scan routes |
```bash
npm install @van1s1mys/ai-router
```

```ts
import { SmartRouter } from '@van1s1mys/ai-router';

const router = new SmartRouter({
  routes: [
    { path: '/pricing', title: 'Pricing', description: 'cost, plans, subscription' },
    { path: '/contact', title: 'Contact', description: 'support, phone, address' },
    { path: '/docs', title: 'Docs', description: 'documentation, API, guides' },
  ],
  threshold: 0.5,
});

await router.ready; // model loads (~22 MB, cached after first run)

const result = await router.search('how to reach support?');
// { path: '/contact', score: 0.91 }

router.destroy(); // cleanup when done
```

Plugins auto-scan your pages directory and expose routes via a virtual module — no manual config needed.
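If your TypeScript setup complains about the `virtual:ai-router` import, an ambient module declaration along these lines keeps the compiler happy. This is an assumption for illustration: the plugins may already ship their own types, and the `routes` shape here is inferred from the quick-start example.

```ts
// types/virtual-ai-router.d.ts — hypothetical ambient declaration for the
// virtual module; field optionality is assumed from the README examples.
declare module 'virtual:ai-router' {
  export const routes: Array<{
    path: string;
    title?: string;
    description?: string;
  }>;
}
```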
```bash
npm install @van1s1mys/ai-router @van1s1mys/ai-router-plugin-vite
```

```ts
// vite.config.ts
import { defineConfig } from 'vite';
import { aiRouter } from '@van1s1mys/ai-router-plugin-vite';

export default defineConfig({
  plugins: [aiRouter()],
});
```

```ts
// app.ts
import { SmartRouter } from '@van1s1mys/ai-router';
import { routes } from 'virtual:ai-router';

const router = new SmartRouter({ routes });
```

```bash
npm install @van1s1mys/ai-router @van1s1mys/ai-router-plugin-next
```

```ts
// next.config.ts
import { withAiRouter } from '@van1s1mys/ai-router-plugin-next';

export default withAiRouter({
  /* your Next.js config */
});
```

```bash
npm install @van1s1mys/ai-router @van1s1mys/ai-router-plugin-webpack
```

```ts
// webpack.config.ts
import { AiRouterPlugin } from '@van1s1mys/ai-router-plugin-webpack';

export default {
  plugins: [new AiRouterPlugin()],
};
```

Start with a lightweight model for instant results, then seamlessly upgrade to a heavier one in the background:
```ts
const router = new SmartRouter({
  routes,
  model: ['Xenova/all-MiniLM-L6-v2', 'Xenova/multilingual-e5-small'],
  onModelUpgrade: (modelId) => console.log(`Upgraded to ${modelId}`),
});

await router.ready; // resolves when the first (fast) model is ready
```

The router re-indexes all routes automatically when each subsequent model finishes loading. Searches keep working during the upgrade — no downtime.
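The resolve-fast-then-upgrade flow described above can be sketched in a few lines. `loadModel` is a stand-in for the real (slow) model download, and `progressiveLoad` only illustrates the pattern: `ready` resolves with the first model while heavier ones load in the background.

```ts
// Sketch of progressive model loading: resolve as soon as the fast model
// is ready, then swap in heavier models in order, notifying on each swap.
type Model = { id: string };

async function loadModel(id: string): Promise<Model> {
  return { id }; // stand-in: the real loader downloads the model weights
}

async function progressiveLoad(
  ids: string[],
  onUpgrade: (m: Model) => void,
): Promise<Model> {
  const first = await loadModel(ids[0]); // fast model: callers unblock here
  // Remaining models load in the background; each finished one fires onUpgrade.
  (async () => {
    for (const id of ids.slice(1)) {
      onUpgrade(await loadModel(id));
    }
  })();
  return first;
}
```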
`SmartRouter.create()` returns a cached instance — safe to call on every React mount without re-downloading the model:

```ts
// Returns the same instance for the same model config
const router = SmartRouter.create({ routes, model: 'Xenova/all-MiniLM-L6-v2' });
```

`SmartRouter.preload()` starts downloading the model at page load so it's already warm when the user needs it:
```ts
// At page load — fire and forget
SmartRouter.preload({ routes, model: ['Xenova/all-MiniLM-L6-v2', 'Xenova/multilingual-e5-small'] });

// Later, when the user opens search — instant
const router = SmartRouter.create({ routes, model: ['Xenova/all-MiniLM-L6-v2', 'Xenova/multilingual-e5-small'] });
await router.ready; // resolves immediately if preload finished
```

The default model (`Xenova/all-MiniLM-L6-v2`) works best for English. For other languages, use the multilingual model:
```ts
const router = new SmartRouter({
  routes,
  model: 'Xenova/multilingual-e5-small',
});
```

Add `@ai-route` comments to your page files for richer metadata:

```ts
// @ai-route title="Pricing" description="plans, cost, billing, subscription"
export default function PricingPage() { ... }
```

| Option | Type | Default | Description |
|---|---|---|---|
| `routes` | `RouteConfig[]` | required | Routes to index |
| `model` | `string \| string[]` | `"Xenova/all-MiniLM-L6-v2"` | Model ID or ordered array for progressive loading |
| `threshold` | `number` | `0.5` | Minimum similarity score (0-1) |
| `onModelUpgrade` | `(modelId: string) => void` | — | Called when the router switches to the next model |
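For reference, here is a hedged reconstruction of the `RouteConfig` shape as it appears in the examples above; the published type may include more fields, and the optionality of `title` and `description` is an assumption.

```ts
// Reconstructed from the README examples — not the library's source.
interface RouteConfig {
  path: string;         // route to navigate to, e.g. '/pricing'
  title?: string;       // human-readable name, indexed for search
  description?: string; // keywords and synonyms, indexed for search
}

const exampleRoutes: RouteConfig[] = [
  { path: '/pricing', title: 'Pricing', description: 'cost, plans, subscription' },
];
```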
- `SmartRouter.create()` — returns a cached instance for the given model config. Safe to call on every component mount.
- `SmartRouter.preload()` — same as `create()`, but intended to be called at page load to pre-warm the model.
- `router.ready` — resolves when the first model is loaded and routes are indexed.
- `router.search(query)` — returns `{ path, score }` or `null` if no route meets the threshold.
- `router.destroy()` — terminates the worker and removes the instance from cache.
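The caching contract described for `create()` and `destroy()` can be sketched as a config-keyed map. The key derivation below (serializing the model option) is an assumption for illustration, not the library's actual scheme:

```ts
// Sketch: same config -> same cached instance; destroy() evicts it so the
// next create() builds a fresh one.
type Config = { model: string | string[] };

const cache = new Map<string, { config: Config; destroy: () => void }>();

function create(config: Config) {
  const key = JSON.stringify(config.model); // assumed key scheme
  let instance = cache.get(key);
  if (!instance) {
    instance = {
      config,
      destroy() {
        cache.delete(key); // destroy removes the instance from cache
      },
    };
    cache.set(key, instance);
  }
  return instance;
}
```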
```bash
pnpm install
pnpm build   # build all packages
pnpm dev     # watch mode for all packages
```