Route LLM requests to the best model for the task at hand.
Updated Sep 22, 2025 - Jupyter Notebook
Lightweight and fast AI inference proxy for self-hosted LLM backends such as Ollama, LM Studio, and others. Designed for speed, simplicity, and local-first deployments.
Unified management and routing for llama.cpp, MLX, and vLLM models, with a web dashboard.
Intelligent routing automatically selects the optimal model (GPT-4, Claude, or Llama) for each prompt based on its complexity. Production-ready with streaming, caching, and A/B testing.
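The complexity-based routing idea above can be sketched in a few lines. This is a minimal, hypothetical illustration: the scoring heuristic, thresholds, and model names are assumptions for the example, not the actual router's logic.

```python
# Hypothetical sketch of complexity-based model routing.
# The heuristic, thresholds, and model names are illustrative only.
def route(prompt: str) -> str:
    # Crude complexity score: word count plus a bonus for reasoning cues.
    cues = ("prove", "derive", "step by step", "analyze")
    score = len(prompt.split()) + 20 * sum(cue in prompt.lower() for cue in cues)
    if score < 20:
        return "llama-3-8b"    # cheap local model for simple prompts
    if score < 60:
        return "claude-sonnet" # mid-tier model
    return "gpt-4"             # most capable model for complex prompts

print(route("What is 2+2?"))  # → llama-3-8b
```

A real router would typically replace the heuristic with a trained classifier or an embedding-distance check, but the routing shape stays the same: score the prompt, then pick a model tier.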
A DevOps-friendly local LLM proxy.
Securing decentralized LLM (dLLM) marketplaces using cryptographic techniques.
A fully agentic LLM orchestrator with autonomous decision-making: it discovers models on its own, learns from usage, and adapts its routing strategies, saving 67% on API costs. Production-ready with monitoring and self-healing.
An LLM application that delivers AI-powered, structured insights from user queries. It features a dynamic response generator with progress indicators, interactive upvote/downvote options, and a clean, engaging Streamlit interface. Ideal for personalized meal, fitness, and health advice.
A lightweight AI model router for seamlessly switching between multiple AI providers (OpenAI, Anthropic, Google AI) through a unified API interface.
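A unified interface over multiple providers usually comes down to a registry of clients sharing one call signature. The sketch below is a hypothetical illustration: the stub functions stand in for the real OpenAI/Anthropic SDK calls, which such a router would wrap.

```python
# Hypothetical sketch of a unified multi-provider interface.
# The stubs below are placeholders for real provider SDK calls.
from typing import Callable, Dict

def _openai_stub(prompt: str) -> str:
    return f"[openai] {prompt}"

def _anthropic_stub(prompt: str) -> str:
    return f"[anthropic] {prompt}"

# One registry, one signature: callers never touch provider-specific SDKs.
PROVIDERS: Dict[str, Callable[[str], str]] = {
    "openai": _openai_stub,
    "anthropic": _anthropic_stub,
}

def complete(provider: str, prompt: str) -> str:
    # Dispatch to the chosen backend through the shared signature.
    return PROVIDERS[provider](prompt)

print(complete("openai", "hello"))  # → [openai] hello
```

Swapping providers then becomes a one-string change at the call site, which is the main point of a unified router API.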
Hybrid LLM router: local and cloud models with meta-routing, memory, a FastAPI UI, and FAISS; integrates Ollama and OpenAI.
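The "memory" component in such a hybrid router can be pictured as recalling which model served the most similar past query. The sketch below is a hypothetical stand-in: a real implementation would use FAISS over embeddings, whereas here a bag-of-words cosine similarity is used purely so the example runs without external dependencies; the model names are also illustrative.

```python
# Hypothetical sketch of similarity-based routing memory.
# A real router would use FAISS over embedding vectors; bag-of-words
# cosine similarity stands in here for illustration only.
from collections import Counter
from math import sqrt
from typing import List, Tuple

memory: List[Tuple[str, str]] = []  # (past query, model that served it)

def _cosine(a: str, b: str) -> float:
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def recall_model(query: str, default: str = "ollama/llama3") -> str:
    # Route to the model that served the most similar remembered query,
    # falling back to a local default when nothing is similar enough.
    best = max(memory, key=lambda m: _cosine(query, m[0]), default=None)
    if best and _cosine(query, best[0]) > 0.5:
        return best[1]
    return default

memory.append(("summarize this article", "openai/gpt-4o"))
print(recall_model("summarize this report"))  # → openai/gpt-4o
```

The similarity threshold decides when to trust memory versus fall back to the default local model, which is the same local-versus-cloud trade-off the router's meta-routing makes.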
A dynamic, input-based LLM-routing AI agent built with n8n. It selects the most suitable language model for each user query and uses that model to answer, solve, or address the input.