Blazing-fast, zero-cost local LLM router. Classify and route prompts to specialized AI models with <1ms latency using heuristic rules.
Updated Feb 25, 2026 - TypeScript
Routes prompts to local large language models to cut costs and latency, classifying each prompt with zero-overhead heuristics in under one millisecond.
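The heuristic classification described above can be sketched as an ordered list of rules checked against the prompt; a minimal TypeScript sketch follows, where all function names, rule patterns, and model identifiers are illustrative assumptions, not the project's actual API.

```typescript
// Hypothetical sketch of heuristic prompt routing. Rule patterns and
// model IDs are assumptions for illustration only.
type Route = { model: string; reason: string };

const RULES: Array<{ pattern: RegExp; model: string; reason: string }> = [
  // Code-like prompts go to a code-specialized local model.
  { pattern: /```|\bfunction\b|\bdef\b|\bclass\b/, model: "local-code-model", reason: "code keywords" },
  // Translation requests go to a translation-tuned model.
  { pattern: /\btranslate\b/i, model: "local-translate-model", reason: "translation request" },
  // Summarization requests go to a summarization-tuned model.
  { pattern: /\bsummari[sz]e\b|\btl;dr\b/i, model: "local-summarize-model", reason: "summarization request" },
];

const DEFAULT_ROUTE: Route = { model: "local-general-model", reason: "no rule matched" };

function classifyPrompt(prompt: string): Route {
  // First matching rule wins; a handful of regex scans over a short
  // prompt completes well under a millisecond, with no API calls.
  for (const rule of RULES) {
    if (rule.pattern.test(prompt)) {
      return { model: rule.model, reason: rule.reason };
    }
  }
  return DEFAULT_ROUTE;
}

console.log(classifyPrompt("def add(a, b): return a + b").model);
console.log(classifyPrompt("Summarize this article for me").model);
console.log(classifyPrompt("What's the capital of France?").model);
```

Because classification is pure string matching with no model inference or network round-trip, routing cost stays negligible next to the LLM call itself.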