
J3tze/Open-WebUI-Functions-Public


🧩 Open-WebUI Functions

A powerful collection of custom pipelines, filters, and actions for Open WebUI



✨ Features at a Glance

| Function | Type | Description |
|---|---|---|
| 🎨 Auto Image | Filter | Auto-detects image requests in conversation and generates via RunPod ComfyUI |
| 🖼️ Image Button | Action | Manual image generation button with the same robust pipeline |
| 🔍 Auto Web Search | Filter | Detects factual queries and auto-searches the web with terminal-style logs |
| 👁️ Vision Proxy | Filter | Adds vision to non-multimodal models via image-to-text conversion |
| 🧠 Auto Memory | Filter | Automatically extracts and stores valuable info from conversations |
| 🤖 NanoGPT Integration | Manifold | Connects to NanoGPT models with subscription-aware fetching |
| 🔌 Antigravity Pipe | Manifold | Bridges to Anthropic-compatible API endpoints with image support |
| 📊 Usage Dashboard | Filter | Visual API usage tracking with cyberpunk-styled graphs |

🏗️ Architecture

The system uses a hybrid agentic approach — lightweight models handle detection and routing before engaging heavier resources:

```mermaid
flowchart TD
    A["💬 User Message"] --> B["🤖 Auto-Detection Filter\n(lightweight LLM decides action)"]

    B --> C["🎨 Image Gen"]
    B --> D["🔍 Web Search"]
    B --> E["🧠 Memory"]

    C --> C1["NanoGPT Prompt\nEnhancement"]
    C1 --> C2["RunPod ComfyUI"]
    C2 --> C3["🖼️ Image"]

    D --> D1["SearXNG Query"]
    D1 --> D2["Formatted Results"]

    E --> E1["Extract & Store"]
```

🚀 Quick Start

📋 Individual Functions

  1. In Open WebUI, go to Workspace → Functions
  2. Click ➕ Create a Function
  3. Copy-paste the contents of any function file
  4. Configure the Valves (settings) for your setup
  5. Enable the function ✅
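Every function file you paste in step 3 follows the same shape Open WebUI expects: a class (`Filter`, `Action`, or `Pipe`) with a nested `Valves` holding its settings. A minimal, simplified sketch of that shape (real function files declare `Valves` as a pydantic `BaseModel`; plain attributes and the `api_key`/`enabled` valve names are used here purely for illustration):

```python
# Minimal shape of an Open WebUI filter function (simplified sketch).
# Real files declare Valves as a pydantic BaseModel; plain attributes
# keep this example dependency-free. Valve names are illustrative.

class Filter:
    class Valves:
        def __init__(self):
            self.api_key = ""    # illustrative valve
            self.enabled = True  # illustrative valve

    def __init__(self):
        self.valves = self.Valves()

    def inlet(self, body: dict) -> dict:
        """Called before the request reaches the model; may rewrite it."""
        return body

    def outlet(self, body: dict) -> dict:
        """Called after the model responds; may rewrite the reply."""
        return body


f = Filter()
result = f.inlet({"messages": [{"role": "user", "content": "hi"}]})
```

The Valves you configure in step 4 are exactly the fields of that nested class; Open WebUI renders them as a settings form in the admin UI.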

🐳 Full Stack (Docker)

The included docker-compose.yml sets up a complete environment:

| Service | Purpose | Port |
|---|---|---|
| 🌐 Open WebUI | Main application | 3000 |
| 🔎 SearXNG | Private search engine | 8080 (local) |
| 📁 Nginx | Image server | internal |
| 🔒 Nginx Proxy Manager | SSL & reverse proxy | 80 / 443 / 81 |

```shell
# 1️⃣ Create required volume
docker volume create open-webui

# 2️⃣ Set your secret key
export WEBUI_SECRET_KEY="your-secret-key-here"

# 3️⃣ Start everything
docker compose up -d
```

📖 Function Details

🎨 Auto Image Generator v5.7.0

Type: Filter — automatically intercepts messages and generates images when requested

Detects when a user is requesting an image and generates it via RunPod's ComfyUI serverless endpoint.

How it works:

```text
💬 User asks for image
    → 🤖 Lightweight LLM confirms it's an image request
    → ✍️ NanoGPT generates optimized Stable Diffusion prompt
    → ⚡ RunPod ComfyUI generates the image
    → 🖼️ Image saved & served via your domain
```
⚙️ Key Valves
| Valve | Description |
|---|---|
| `nanogpt_api_key` | API key for the prompt enhancement LLM |
| `nanogpt_model` | Model for prompt generation (default: `Qwen/Qwen2.5-72B-Instruct`) |
| `runpod_api_key` | Your RunPod API key |
| `runpod_full_url` | Full RunPod serverless endpoint URL |
| `image_base_url` | Base URL where images are served (e.g. `https://your-domain.com`) |
| `workflow_json` | Your ComfyUI workflow in JSON format |
| `physical_appearance` | Character appearance tags always included in prompts |
| `positive_style` / `negative_style` | Global style tags for generation |
| `enable_spontaneous` | Enable contextually triggered spontaneous generation |

🖼️ Image Button v5.4.0

Type: Action — manual trigger button in the chat interface

Uses the same image generation pipeline as Auto Image, but triggered manually via a button. The same Valve configuration applies.


🔍 Auto Web Search v5.6.0

Type: Filter — automatically detects factual queries

Detects when a user asks a factual question and triggers a web search using Open WebUI's built-in SearXNG integration.

Highlights:

  • 🧠 Smart detection via lightweight LLM to avoid unnecessary searches
  • 💻 Syntax-highlighted "terminal-style" status logs
  • 🧹 Scrubs internal metadata from responses

👁️ Vision Proxy v1.1.1

Type: Filter — enables image understanding for text-only models

When an image is sent, it's processed by a fast vision-capable LLM to generate a text description, which is then passed to the main model.
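The substitution amounts to walking the message list and replacing each image part with a text description before the text-only model sees it. A simplified sketch under assumed names (`proxy_vision` and the `describe` callable are hypothetical; `describe` stands in for the vision-capable LLM call):

```python
# Simplified sketch of the image→text substitution. Helper names are
# hypothetical; `describe` stands in for the vision LLM call.

def proxy_vision(messages: list[dict], describe) -> list[dict]:
    out = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):  # multimodal message parts
            parts = []
            for part in content:
                if part.get("type") == "image_url":
                    # Swap the image for a textual description.
                    parts.append("[Image: " + describe(part["image_url"]["url"]) + "]")
                else:
                    parts.append(part.get("text", ""))
            out.append({**msg, "content": " ".join(parts)})
        else:
            out.append(msg)  # plain text messages pass through
    return out

msgs = proxy_vision(
    [{"role": "user", "content": [
        {"type": "text", "text": "What is this?"},
        {"type": "image_url", "image_url": {"url": "https://x/cat.png"}},
    ]}],
    describe=lambda url: "a photo of a cat",
)
```

The main model then receives ordinary text, so any non-multimodal model gains approximate image understanding.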

⚙️ Key Valves
| Valve | Description |
|---|---|
| `vision_model` | The vision-capable model for image descriptions |
| `nanogpt_api_key` | API key for the vision model |

🧠 Auto Memory v1.1.0-alpha1

Type: Filter — by @nokodo

Automatically identifies valuable information from conversations and stores it as Open WebUI memories.

Highlights:

  • 📝 Automatic extraction of long-term information
  • 🔄 Smart consolidation to prevent duplicates
  • 🔗 Cross-message context linking
  • 🗑️ Handles memory updates and deletions

🤖 NanoGPT Integration v3.2.1

Type: Manifold — connects Open WebUI to NanoGPT

Access NanoGPT's full model catalog directly from Open WebUI. Supports filtering by subscription tier and handles timeouts gracefully.

⚙️ Key Valves
| Valve | Description |
|---|---|
| `nanogpt_api_key` | Your NanoGPT API key |
| `subscription_only` | Only show models included in your subscription |

🔌 Antigravity Pipe v0.3.5

Type: Manifold — originally by justinh-rahb & christian-taillon (MIT)

Bridges Open WebUI to Anthropic-compatible API endpoints. Supports text and image inputs with automatic base64 encoding.
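The automatic encoding follows the Anthropic Messages API convention for image inputs: raw bytes become a base64 `source` inside an `image` content block. A minimal sketch (the `to_image_block` helper name is hypothetical, but the block layout matches the Anthropic format):

```python
import base64

# Sketch of converting image bytes into an Anthropic-style content
# block. `to_image_block` is a hypothetical helper; the dict layout
# follows the Anthropic Messages API image format.

def to_image_block(data: bytes, media_type: str = "image/png") -> dict:
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(data).decode("ascii"),
        },
    }

block = to_image_block(b"\x89PNG...")
```

The pipe performs this conversion for any images attached in Open WebUI, so the upstream endpoint only ever sees base64 payloads.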

⚙️ Key Valves
| Valve | Description |
|---|---|
| `ANTHROPIC_API_KEY` | API key for the Anthropic-compatible endpoint |
| `PROXY_BASE_URL` | Base URL for the API endpoint |

📊 Usage Dashboard v2.0.0

Type: Filter — visual API consumption tracking

Injects a sleek, cyberpunk-styled usage dashboard into chat responses showing API consumption for NanoGPT and RunPod.


📋 Requirements

| Requirement | Purpose |
|---|---|
| 🌐 Open WebUI v0.3.17+ | Base application |
| RunPod account | Image generation (serverless ComfyUI) |
| 🤖 NanoGPT API key | Prompt enhancement & model access |
| 🐳 Docker & Docker Compose | Full stack deployment |

📄 License

Individual function licenses are noted in their file headers. The Antigravity Pipe is MIT licensed. All other functions by J3tze are provided as-is for personal use.


Built with ❤️ for the Open WebUI community
