[BUG] !!! Exception during processing !!! #18

Myka88 opened this issue Sep 19, 2024 · 3 comments
Myka88 commented Sep 19, 2024

Despite following all the steps given in the repository and updating the dependencies, I still get errors.

Actual Behavior:
AssertionError Prompt Generator
C:\Users\K\AppData\Local\Programs\Python\Python310\lib\distutils\core.py

Steps to Reproduce:
Run the workflow

Debug Logs

2024-09-20 00:35:41,025 - root - INFO - got prompt
2024-09-20 00:35:41,058 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:41,059 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:41,354 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-09-20 00:35:41,355 - root - INFO - model_type EPS
2024-09-20 00:35:42,169 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:42,171 - root - INFO - Using xformers attention in VAE
2024-09-20 00:35:42,992 - root - INFO - Requested to load SDXLClipModel
2024-09-20 00:35:42,992 - root - INFO - Loading 1 new model
2024-09-20 00:35:43,286 - root - INFO - loaded completely 0.0 1560.802734375 True
2024-09-20 00:35:43,553 - root - ERROR - !!! Exception during processing !!! C:\Users\K\AppData\Local\Programs\Python\Python310\lib\distutils\core.py
2024-09-20 00:35:43,558 - root - ERROR - Traceback (most recent call last):
  File "E:\ComfyUI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "E:\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 279, in generate
    generator = Generator(model_path, is_accelerate, quantize)
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 47, in __init__
    self.model, self.tokenizer = get_model_tokenizer(
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 149, in get_model_tokenizer
    model = get_model(model_path, quant_type)
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 117, in get_model
    model = get_model_from_base(model_name, req_torch_dtype, type)
  File "E:\ComfyUI\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 76, in get_model_from_base
    from optimum.quanto import qfloat8, qint8, qint4, quantize, freeze
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\__init__.py", line 18, in <module>
    from .library import *
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\__init__.py", line 15, in <module>
    from .extensions import *
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\extensions\__init__.py", line 17, in <module>
    from .cpp import *
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\extensions\cpp\__init__.py", line 19, in <module>
    from ..extension import Extension
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\quanto\library\extensions\extension.py", line 7, in <module>
    from torch.utils.cpp_extension import load
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\cpp_extension.py", line 10, in <module>
    import setuptools
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\__init__.py", line 8, in <module>
    import _distutils_hack.override  # noqa: F401
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\_distutils_hack\override.py", line 1, in <module>
    __import__('_distutils_hack').do_override()
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\_distutils_hack\__init__.py", line 77, in do_override
    ensure_local_distutils()
  File "C:\Users\K\AppData\Local\Programs\Python\Python310\lib\site-packages\_distutils_hack\__init__.py", line 64, in ensure_local_distutils
    assert '_distutils' in core.__file__, core.__file__
AssertionError: C:\Users\K\AppData\Local\Programs\Python\Python310\lib\distutils\core.py

@Myka88 Myka88 added the bug Something isn't working label Sep 19, 2024
alpertunga-bile (Owner) commented Sep 20, 2024

Hello @Myka88, thanks for reporting. There seems to be a problem with the optimum-quanto package. I am using version 0.2.4; you can check yours with the command pip show optimum-quanto.

In previous versions of this package, the required torch version was >= 2.2.0, and the installation script checks against this version. It seems they updated the requirement to >= 2.4.0, but this was not reflected in the installation script, which may be causing the problem.

I may have missed this issue since my torch package is currently at version 2.4.0+cu124. Could you check the versions of your optimum-quanto and torch packages and update them to the latest versions? If the problem persists after the updates, please let me know.

Edit: I upgraded the packages used by the repository to the latest versions and checked with the workflow. The node seems to be working.
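As a side note, the version check suggested above can also be done from Python with the standard library; a minimal sketch (package names taken from this thread):

```python
from importlib.metadata import version, PackageNotFoundError

# Print installed versions of the packages discussed in this thread.
for pkg in ("optimum-quanto", "torch", "setuptools"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```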

Myka88 (Author) commented Sep 21, 2024

Hello @alpertunga-bile, thank you for your quick response to my issue.

I've checked optimum-quanto, and I also have version 0.2.4.
My current torch version is 2.4.1+cu124.
Both seem to be the latest versions.

E:\ComfyUI_windows_portable\ComfyUI>python -m pip show optimum-quanto
Name: optimum-quanto
Version: 0.2.4
Summary: A pytorch quantization backend for optimum.
Home-page: https://github.com/huggingface/optimum-quanto
Author: David Corvoysier
Author-email:
License: Apache-2.0
Location: c:\users\k\appdata\local\programs\python\python310\lib\site-packages
Requires: ninja, numpy, safetensors, torch
Required-by:

E:\ComfyUI_windows_portable\ComfyUI>python -m pip show torch
Name: torch
Version: 2.4.1+cu124
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: c:\users\k\appdata\local\programs\python\python310\lib\site-packages
Requires: filelock, fsspec, jinja2, networkx, sympy, typing-extensions
Required-by: accelerate, bitsandbytes, clip-interrogator, fairscale, kornia, lpips, open_clip_torch, optimum, optimum-quanto, peft, pytorch-lightning, sentence-transformers, spandrel, stanza, timm, torchaudio, torchmetrics, torchsde, torchvision, transparent-background, ultralytics, ultralytics-thop, xformers

Perhaps I should downgrade the torch version to 2.0.4+cu124? I will try and see if something changes.

Regarding the node requirements, and because I had ComfyUI installed in a virtual environment, I reinstalled it completely with the original package that has the python_embeded folder. It still gives me the same issue.

I did a git pull in the prompt-generator-comfyui folder and had 3 files updated, but got the same error:

got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
!!! Exception during processing !!! E:\ComfyUI_windows_portable\python_embeded\python311.zip\distutils\core.pyc
Traceback (most recent call last):
  File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 279, in generate
    generator = Generator(model_path, is_accelerate, quantize)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 53, in __init__
    self.model, self.tokenizer = get_model_tokenizer(
                                 ^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 149, in get_model_tokenizer
    model = get_model(model_path, quant_type)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 117, in get_model
    model = get_model_from_base(model_name, req_torch_dtype, type)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 76, in get_model_from_base
    from optimum.quanto import qfloat8, qint8, qint4, quantize, freeze
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\optimum\quanto\__init__.py", line 18, in <module>
    from .library import *
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\optimum\quanto\library\__init__.py", line 15, in <module>
    from .extensions import *
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\optimum\quanto\library\extensions\__init__.py", line 17, in <module>
    from .cpp import *
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\optimum\quanto\library\extensions\cpp\__init__.py", line 19, in <module>
    from ..extension import Extension
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\optimum\quanto\library\extensions\extension.py", line 7, in <module>
    from torch.utils.cpp_extension import load
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\cpp_extension.py", line 10, in <module>
    import setuptools
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\setuptools\__init__.py", line 22, in <module>
    import _distutils_hack.override  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\_distutils_hack\override.py", line 1, in <module>
    __import__('_distutils_hack').do_override()
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\_distutils_hack\__init__.py", line 90, in do_override
    ensure_local_distutils()
  File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\_distutils_hack\__init__.py", line 77, in ensure_local_distutils
    assert '_distutils' in core.__file__, core.__file__
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: E:\ComfyUI_windows_portable\python_embeded\python311.zip\distutils\core.pyc

alpertunga-bile (Owner) commented Sep 21, 2024

I checked with clean installations of both the manual and portable versions, and the node seems to be working.

After searching, it seems the issue is not caused by the required packages or the node. I found that this is a bug in the setuptools package. Some suggest that to overcome this issue, you can run this command before starting ComfyUI (link to source):

set SETUPTOOLS_USE_DISTUTILS=stdlib
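For clarity, set is the Windows cmd.exe syntax and only affects the current console session; the variable must be set before Python imports setuptools. A sketch showing both syntaxes (the export line is the POSIX equivalent for manual installs; the python main.py launch command is illustrative):

```shell
# Windows (cmd.exe) - run in the same console, then launch ComfyUI:
#   set SETUPTOOLS_USE_DISTUTILS=stdlib
#   python main.py
# POSIX shells (manual installs on Linux/macOS) use export instead:
export SETUPTOOLS_USE_DISTUTILS=stdlib
echo "$SETUPTOOLS_USE_DISTUTILS"
```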
