This is a personal, Docker-based experimental environment for ComfyUI.
It utilizes a high-performance base image with PyTorch 2.9.1 and CUDA 13.0, pre-integrated with essential and powerful custom node dependencies.
This build is specifically optimized to resolve complex dependency conflicts (“dependency hell”), providing an out-of-the-box experience.
Before running this container, ensure your host machine meets the following requirements:
- NVIDIA GPU with up-to-date drivers installed
- Docker and Docker Compose installed
- NVIDIA Container Toolkit installed and configured
  - This is strictly required for the container to access your GPU.
  - See: NVIDIA Container Toolkit – Install Guide
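Before pulling the image, you can sanity-check the toolkit from the host. This one-liner (any base image works here since the toolkit injects the driver tools; plain `ubuntu` is shown) should print your GPU table if the runtime is wired up:

```shell
# If the NVIDIA Container Toolkit is configured correctly, the host driver's
# nvidia-smi is injected into the container and lists your GPU(s).
docker run --rm --gpus all ubuntu nvidia-smi
```

If this command errors out, fix the toolkit installation before continuing.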
- ComfyUI: the most powerful and modular Stable Diffusion GUI.
- ComfyUI-Manager: essential extension for managing custom nodes and models directly from the UI.
- Base image: PyTorch 2.9.1 with CUDA 13.0 runtime.
Important
This Docker image pre-installs the heavy Python dependencies (requirements) for the following nodes to ensure stability and avoid build errors (such as `rsa` or `google-cloud` conflicts). You still need to search for and install the nodes themselves via ComfyUI-Manager inside the UI to complete the setup.
Pre-resolved Python requirements are included for:
- `comfyui_controlnet_aux` – Comprehensive set of preprocessors for ControlNet.
- `ComfyUI-KJNodes` – A suite of useful utility nodes for various workflows.
- `comfyui-tensorops` – Tools for advanced tensor operations.
- `was-node-suite-comfyui` – Extensive node suite for image processing and auxiliary tools.
- `ComfyUI-Crystools` – Hardware monitoring (VRAM/RAM/GPU usage) and metadata readers.
- `DZ-FaceDetailer` – Essential tool for face restoration and detailing.
- `ComfyUI-Easy-Use` – Easy control for multiple nodes.
- `comfyui_LLM_party` – Integration for Large Language Models (OpenAI, Claude, etc.).
Note
The `aisuite` dependency has been modified to use the core package instead of the full `aisuite[all]` variant to avoid conflicts with Google Cloud SDKs.
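In `requirements.txt` terms, the change is just dropping the extras selector (a sketch; the exact pin used in the image may differ):

```text
# Full variant – pulls every provider SDK, including google-cloud-*:
#   aisuite[all]
# Core variant used in this image – no provider extras:
aisuite
```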
You can organize your host machine like this:
```
.
├── docker-compose.yml
├── comfyui/        # ComfyUI core from container
└── ai/
    ├── models/     # Models: checkpoints, LoRA, VAE, etc.
    ├── output/     # Generated images
    ├── input/      # Source images / assets
    └── user/       # Workflows, custom configs, user data
```
These paths are mapped into the container via Docker volumes (see below).
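A quick way to create this layout on the host (the directory names match the volume mappings in the compose file):

```shell
# Create the host-side folders that the Docker volumes map into the container.
mkdir -p comfyui ai/models ai/output ai/input ai/user
```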
Use Docker Compose to spin up the service.

```yaml
services:
  comfyui:
    # Always use the latest stable build
    image: wclu6/comfyui_docker:latest
    container_name: comfyui
    ports:
      - "8188:8188"
    volumes:
      # Map the ComfyUI core directory
      - ./comfyui:/comfyui
      # Models storage (Checkpoints, LoRAs, VAEs, etc.)
      - ./ai/models:/comfyui/models
      # Output directory for generated images
      - ./ai/output:/comfyui/output
      # Input directory for source images
      - ./ai/input:/comfyui/input
      # User directory for custom settings and workflows
      - ./ai/user:/comfyui/user
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      # Set to 'true' for multi-user mode, or 'false' for single user. Configurable.
      - MULTI_USER=true
    restart: unless-stopped
```

Run the following command in your terminal:

```shell
docker compose up -d
```

Once started, access the interface at:

http://localhost:8188
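If the page does not load, two quick first checks are the container log and a plain HTTP probe (using the port published in the compose file):

```shell
# Show recent container output to spot startup errors
docker compose logs comfyui
# Probe the web UI; prints a confirmation only on a successful HTTP response
curl -sf http://localhost:8188 >/dev/null && echo "ComfyUI is up"
```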
This image has been customized to address specific dependency issues found in comfyui_LLM_party and several heavy third-party libraries.
- Switched `aisuite[all]` → `aisuite` (core package).
- Avoids version conflicts involving `rsa`, `google-auth`, and `google-cloud-*`.
- Ensures a successful build and a more stable runtime.
- Required: NVIDIA drivers and `nvidia-container-toolkit` must be correctly installed on the host, otherwise GPU access will not work.
- Required: the `./comfyui:/comfyui` bind mount must be present for this setup, as the container expects ComfyUI to live at `/comfyui` inside the container.
- You can still install additional custom nodes via ComfyUI-Manager inside the UI.
- Ensure you have enough VRAM for your chosen models and workflows.
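To see how much headroom you have before loading a large checkpoint, `nvidia-smi` can report per-GPU memory directly:

```shell
# Lists each GPU with its total and currently used VRAM
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```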