A powerful collection of custom pipelines, filters, and actions for Open WebUI
| | Function | Type | Description |
|---|---|---|---|
| 🎨 | Auto Image | Filter | Auto-detects image requests in conversation and generates via RunPod ComfyUI |
| 🖼️ | Image Button | Action | Manual image generation button with the same robust pipeline |
| 🔍 | Auto Web Search | Filter | Detects factual queries and auto-searches the web with terminal-style logs |
| 👁️ | Vision Proxy | Filter | Adds vision to non-multimodal models via image-to-text conversion |
| 🧠 | Auto Memory | Filter | Automatically extracts and stores valuable info from conversations |
| 🤖 | NanoGPT Integration | Manifold | Connects to NanoGPT models with subscription-aware fetching |
| 🔌 | Antigravity Pipe | Manifold | Bridges to Anthropic-compatible API endpoints with image support |
| 📊 | Usage Dashboard | Filter | Visual API usage tracking with cyberpunk-styled graphs |
The system uses a hybrid agentic approach — lightweight models handle detection and routing before engaging heavier resources:
```mermaid
flowchart TD
    A["💬 User Message"] --> B["🤖 Auto-Detection Filter\n(lightweight LLM decides action)"]
    B --> C["🎨 Image Gen"]
    B --> D["🔍 Web Search"]
    B --> E["🧠 Memory"]
    C --> C1["NanoGPT Prompt\nEnhancement"]
    C1 --> C2["RunPod ComfyUI"]
    C2 --> C3["🖼️ Image"]
    D --> D1["SearXNG Query"]
    D1 --> D2["Formatted Results"]
    E --> E1["Extract & Store"]
```
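The detection step in the diagram can be sketched as a single classification call: a small model labels the message before any heavy pipeline runs. This is an illustrative sketch only; the `llm` callable and the label set are placeholders, not the actual filter code.

```python
# Hypothetical sketch of the hybrid agentic routing step. A lightweight
# model classifies the message; heavy resources run only if needed.
ROUTES = {"image", "search", "memory", "none"}

def route_message(user_message: str, llm) -> str:
    """Ask a small model which pipeline (if any) should handle the message."""
    prompt = (
        "Classify the user message into exactly one of: image, search, memory, none.\n"
        f"Message: {user_message!r}\n"
        "Answer with the single word only."
    )
    answer = llm(prompt).strip().lower()
    # Fail safe: anything unrecognized means "take no special action".
    return answer if answer in ROUTES else "none"
```

Falling back to `"none"` on an unexpected answer is the important design choice: a misbehaving classifier should degrade to a normal chat turn, never to an accidental image generation or search.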
- In Open WebUI, go to Workspace → Functions
- Click ➕ Create a Function
- Copy-paste the contents of any function file
- Configure the Valves (settings) for your setup
- Enable the function ✅
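Every function you paste in follows the same small shape: a class with a nested `Valves` holding the settings you configure in step 4, plus hook methods. A minimal sketch (the real platform defines `Valves` as a pydantic `BaseModel`; it is plain Python here to stay dependency-free, and the field names are illustrative):

```python
# Minimal shape of an Open WebUI Filter function. Field names are
# placeholders; real functions declare Valves as a pydantic BaseModel.
class Filter:
    class Valves:
        def __init__(self, api_key: str = "", enabled: bool = True):
            self.api_key = api_key    # filled in via the Valves UI after install
            self.enabled = enabled

    def __init__(self):
        self.valves = self.Valves()

    def inlet(self, body: dict) -> dict:
        """Runs before the request reaches the model; may rewrite it."""
        if not self.valves.enabled:
            return body
        body.setdefault("metadata", {})["filtered"] = True
        return body
```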
The included `docker-compose.yml` sets up a complete environment:
| Service | Purpose | Port |
|---|---|---|
| 🌐 Open WebUI | Main application | 3000 |
| 🔎 SearXNG | Private search engine | 8080 (local) |
| 📁 Nginx | Image server | internal |
| 🔒 Nginx Proxy Manager | SSL & reverse proxy | 80 443 81 |
```bash
# 1️⃣ Create required volume
docker volume create open-webui

# 2️⃣ Set your secret key
export WEBUI_SECRET_KEY="your-secret-key-here"

# 3️⃣ Start everything
docker compose up -d
```

Type: Filter — automatically intercepts messages and generates images when requested
Detects when a user is requesting an image and generates it via RunPod's ComfyUI serverless endpoint.
How it works:
💬 User asks for image
→ 🤖 Lightweight LLM confirms it's an image request
→ ✍️ NanoGPT generates optimized Stable Diffusion prompt
→ ⚡ RunPod ComfyUI generates the image
→ 🖼️ Image saved & served via your domain
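The RunPod step boils down to injecting the enhanced prompt into your ComfyUI workflow and wrapping it in the serverless `input` envelope. A sketch under stated assumptions: node ids `"6"` and `"7"` are placeholders typical of ComfyUI text-encoder nodes, and your actual `workflow_json` valve determines the real structure.

```python
import json

# Sketch of building the RunPod ComfyUI request payload. Node ids are
# placeholders; the workflow_json valve supplies the real workflow.
def build_runpod_payload(workflow_json: str, prompt: str,
                         positive_style: str, negative_style: str) -> dict:
    workflow = json.loads(workflow_json)
    # Inject the enhanced prompt plus global style tags into the
    # positive-prompt node, and the negative tags into the negative node.
    workflow["6"]["inputs"]["text"] = f"{prompt}, {positive_style}"
    workflow["7"]["inputs"]["text"] = negative_style
    return {"input": {"workflow": workflow}}
```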
⚙️ Key Valves
| Valve | Description |
|---|---|
| `nanogpt_api_key` | API key for the prompt enhancement LLM |
| `nanogpt_model` | Model for prompt generation (default: `Qwen/Qwen2.5-72B-Instruct`) |
| `runpod_api_key` | Your RunPod API key |
| `runpod_full_url` | Full RunPod serverless endpoint URL |
| `image_base_url` | Base URL where images are served (e.g. `https://your-domain.com`) |
| `workflow_json` | Your ComfyUI workflow in JSON format |
| `physical_appearance` | Character appearance tags always included in prompts |
| `positive_style` / `negative_style` | Global style tags for generation |
| `enable_spontaneous` | Enable contextually triggered spontaneous generation |
Type: Action — manual trigger button in the chat interface
Uses the same image generation pipeline as Auto Image, but triggered manually via a button. Same Valve configuration applies.
Type: Filter — automatically detects factual queries
Detects when a user asks a factual question and triggers a web search using Open WebUI's built-in SearXNG integration.
Highlights:
- 🧠 Smart detection via lightweight LLM to avoid unnecessary searches
- 💻 Syntax-highlighted "terminal-style" status logs
- 🧹 Scrubs internal metadata from responses
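The scrubbing step can be sketched as a simple pattern strip before the response reaches the user. The `<search:...>` marker format below is an invented placeholder, not the filter's actual internal syntax.

```python
import re

# Hypothetical sketch of metadata scrubbing: remove internal status
# markers from the model's reply. The marker format is a placeholder.
def scrub_metadata(text: str) -> str:
    return re.sub(r"<search:[^>]*>", "", text).strip()
```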
Type: Filter — enables image understanding for text-only models
When an image is sent, it's processed by a fast vision-capable LLM to generate a text description, which is then passed to the main model.
⚙️ Key Valves
| Valve | Description |
|---|---|
| `vision_model` | The vision-capable model for image descriptions |
| `nanogpt_api_key` | API key for the vision model |
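The conversion itself amounts to replacing image parts of a multimodal message with text descriptions, so a text-only model receives plain prose. A sketch assuming the common OpenAI-style content-part shape; `describe` stands in for the actual vision-model call.

```python
# Sketch of the vision-proxy idea: swap image parts for text descriptions
# from a vision model. The content-part shape is an assumption.
def proxy_vision(message: dict, describe) -> dict:
    parts = message.get("content", [])
    if isinstance(parts, str):
        return message  # already plain text, nothing to do
    texts = []
    for part in parts:
        if part.get("type") == "image_url":
            texts.append(f"[Image: {describe(part['image_url'])}]")
        else:
            texts.append(part.get("text", ""))
    return {**message, "content": " ".join(texts)}
```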
Type: Filter — by @nokodo
Automatically identifies valuable information from conversations and stores it as Open WebUI memories.
Highlights:
- 📝 Automatic extraction of long-term information
- 🔄 Smart consolidation to prevent duplicates
- 🔗 Cross-message context linking
- 🗑️ Handles memory updates and deletions
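The duplicate-prevention idea can be illustrated as a similarity gate before storing. The token-overlap measure below is a crude placeholder for whatever consolidation logic the filter actually uses.

```python
# Sketch of duplicate prevention: store a candidate memory only if no
# existing memory is near-identical. Jaccard overlap is a stand-in metric.
def should_store(candidate: str, memories: list[str],
                 threshold: float = 0.8) -> bool:
    cand = set(candidate.lower().split())
    for mem in memories:
        toks = set(mem.lower().split())
        overlap = len(cand & toks) / max(len(cand | toks), 1)
        if overlap >= threshold:
            return False  # near-duplicate already stored
    return True
```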
Type: Manifold — connects Open WebUI to NanoGPT
Access NanoGPT's full model catalog directly from Open WebUI. Supports filtering by subscription tier and handles timeouts gracefully.
⚙️ Key Valves
| Valve | Description |
|---|---|
| `nanogpt_api_key` | Your NanoGPT API key |
| `subscription_only` | Only show models included in your subscription |
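Tier filtering reduces to one predicate over the fetched catalog. A sketch that assumes the API returns model dicts carrying a boolean `subscription` flag (an assumption about the response shape, not confirmed NanoGPT schema):

```python
# Sketch of subscription-aware model filtering. The "subscription" field
# name is an assumed response shape, not confirmed NanoGPT schema.
def filter_models(models: list[dict], subscription_only: bool) -> list[dict]:
    if not subscription_only:
        return models
    return [m for m in models if m.get("subscription")]
```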
Type: Manifold — originally by justinh-rahb & christian-taillon (MIT)
Bridges Open WebUI to Anthropic-compatible API endpoints. Supports text and image inputs with automatic base64 encoding.
⚙️ Key Valves
| Valve | Description |
|---|---|
| `ANTHROPIC_API_KEY` | API key for the Anthropic-compatible endpoint |
| `PROXY_BASE_URL` | Base URL for the API endpoint |
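The "automatic base64 encoding" means raw image bytes are wrapped into the content-block shape Anthropic-style APIs expect. A minimal sketch (media-type detection is simplified to a parameter):

```python
import base64

# Build an Anthropic-style base64 image content block from raw bytes.
def to_image_block(image_bytes: bytes, media_type: str = "image/png") -> dict:
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(image_bytes).decode("ascii"),
        },
    }
```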
Type: Filter — visual API consumption tracking
Injects a sleek, cyberpunk-styled usage dashboard into chat responses showing API consumption for NanoGPT and RunPod.
| Requirement | Purpose |
|---|---|
| 🌐 Open WebUI v0.3.17+ | Base application |
| ⚡ RunPod account | Image generation (serverless ComfyUI) |
| 🤖 NanoGPT API key | Prompt enhancement & model access |
| 🐳 Docker & Docker Compose | Full stack deployment |
Individual function licenses are noted in their file headers. The Antigravity Pipe is MIT licensed. All other functions by J3tze are provided as-is for personal use.
Built with ❤️ for the Open WebUI community