Generate detailed Stable Diffusion prompts using Qwen3-8B via Ollama, directly within ComfyUI.
- 7 Style Presets: Cinematic, Anime, Photorealistic, Fantasy, Abstract, Cyberpunk, Sci-Fi
- Temperature & Top-P Controls: Fine-tune generation creativity
- Focus Area (Emphasis): Direct the prompt to emphasize specific aspects
- Mood/Atmosphere: Set the emotional tone of the generated prompt
- Reasoning Toggle: Show or hide the model's thinking process
- Ollama installed and running: https://ollama.ai
- Qwen3-8B model pulled: `ollama pull qwen3:8b`
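Before wiring up the node, it can help to verify the model was actually pulled by querying Ollama's REST API (`GET http://localhost:11434/api/tags` on the default port). A minimal sketch — `model_available` is a hypothetical helper for illustration, not part of this node:

```python
import json
from urllib.request import urlopen

def model_available(tags_response, wanted="qwen3:8b"):
    """Return True if `wanted` appears in an Ollama /api/tags response dict."""
    return any(m.get("name") == wanted for m in tags_response.get("models", []))

# Usage against a running Ollama server (assumes the default port 11434):
#   with urlopen("http://localhost:11434/api/tags") as resp:
#       tags = json.load(resp)
#   print(model_available(tags))
```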
cd /path/to/ComfyUI/custom_nodes
git clone https://github.com/Limbicnation/ComfyUI-PromptGenerator.git
cd ComfyUI-PromptGenerator
pip install -r requirements.txt

Alternatively, search for "Prompt Generator" in the ComfyUI Manager and install.
Note: If you see "With the current security level configuration, only custom nodes from the 'default channel' can be installed", temporarily set `security_level = weak` in your ComfyUI Manager's `config.ini` file, then restore it to `normal` after installation. See PUBLISHING.md for details.
- Restart ComfyUI after installation
- Right-click → Add Node → text/generation → 🎨 Prompt Generator (Qwen)
- Connect the output `prompt` to your text encoder or save node
| Input | Type | Description |
|---|---|---|
| `description` | STRING | Your image concept (required) |
| `style` | COMBO | Style preset (cinematic, anime, etc.) |
| `emphasis` | STRING | Focus area (optional) |
| `mood` | STRING | Atmosphere/mood (optional) |
| `temperature` | FLOAT | Creativity (0.1-1.0, default: 0.7) |
| `top_p` | FLOAT | Sampling threshold (0.1-1.0, default: 0.9) |
| `include_reasoning` | BOOLEAN | Show the model's thinking process |
| `model` | STRING | Ollama model (default: qwen3:8b) |
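The inputs above ultimately map onto a single Ollama chat call. The sketch below shows one plausible way the request could be assembled — `build_request` is a hypothetical illustration, not the node's actual code:

```python
def build_request(description, style="cinematic", temperature=0.7,
                  top_p=0.9, model="qwen3:8b"):
    """Assemble an Ollama chat payload from the node's inputs (illustrative)."""
    prompt = f"Write a detailed {style} Stable Diffusion prompt for: {description}"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {"temperature": temperature, "top_p": top_p},
    }

# The payload can then be sent with the official Ollama Python client:
#   import ollama
#   reply = ollama.chat(**build_request("a misty forest at dawn"))
#   print(reply["message"]["content"])
```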
Download the example workflow: workflow/workflow.png
Edit `config/templates.yaml` to add or modify style templates. Templates use Jinja2 syntax:
my_custom_style:
  name: "My Style"
  description: "Description for UI"
  template: |
    Write a prompt for: {{ description }}
    {% if emphasis %}Focus on: {{ emphasis }}{% endif %}
    {% if mood %}Mood: {{ mood }}{% endif %}

- Ensure Ollama is installed and the `ollama` command is in your PATH
- Start the Ollama server:
ollama serve
- Pull the model:
ollama pull qwen3:8b
- Install the Ollama Python package:
pip install ollama
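The Jinja2 templating shown earlier for `config/templates.yaml` can be previewed outside ComfyUI (assuming the `jinja2` package is installed; the template text here mirrors the example above, not the shipped presets):

```python
from jinja2 import Template

# Same shape as the `template:` block in the custom-style example.
TEMPLATE = Template(
    "Write a prompt for: {{ description }}\n"
    "{% if emphasis %}Focus on: {{ emphasis }}{% endif %}"
)

rendered = TEMPLATE.render(description="a neon city street", emphasis="lighting")
print(rendered)
# → Write a prompt for: a neon city street
#   Focus on: lighting
```

Omitting `emphasis` leaves that line empty, which is how the optional node inputs stay out of the final prompt.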
Apache 2.0 - See LICENSE for details.
- Prompt Generator (Gradio App) - Standalone Gradio web UI for prompt generation
This ComfyUI node is based on the prompt-gen project.

