Description
C:\Users\rtm\Documents>gputopia-worker-opencl-win-64.exe --ln_url silentmeadow15054@getalby.com --force_layers 33 --test_model TheBloke/CodeLlama-7B-Instruct-GGUF:Q4_K_M --debug --main_gpu 0 --tensor_split 0,1,0,1,0,1,0,1,0,1
2023-10-07 10:52:12,482 - ai_worker.main - DEBUG - no nvidia: NVML Shared Library Not Found
{'auth_key': '',
'cl_driver_version': 'OpenCL 3.0 ',
'cl_gpus': [{'clock': 2400,
'clock_unit': 'mhz',
'memory': 8319.0,
'name': 'Intel(R) Arc(TM) A750 Graphics',
'uuid': None},
{'clock': 2400,
'clock_unit': 'mhz',
'memory': 8319.0,
'name': 'Intel(R) Arc(TM) A750 Graphics',
'uuid': None},
{'clock': 2400,
'clock_unit': 'mhz',
'memory': 8319.0,
'name': 'Intel(R) Arc(TM) A750 Graphics',
'uuid': None},
{'clock': 2400,
'clock_unit': 'mhz',
'memory': 8319.0,
'name': 'Intel(R) Arc(TM) A750 Graphics',
'uuid': None},
{'clock': 2400,
'clock_unit': 'mhz',
'memory': 8319.0,
'name': 'Intel(R) Arc(TM) A750 Graphics',
'uuid': None}],
'cpu_count': 16,
'disk_space': 428351,
'ln_address': 'silentmeadow15054@getalby.com',
'ln_url': 'silentmeadow15054@getalby.com',
'nv_driver_version': None,
'nv_gpu_count': None,
'nv_gpus': [],
'vram': 5996646400,
'web_gpus': [],
'worker_id': '',
'worker_version': '0.1.9'}
2023-10-07 10:52:13,051 - ai_worker.main - DEBUG - loading model: TheBloke/CodeLlama-7B-Instruct-GGUF:Q4_K_M
ggml_opencl: selecting platform: 'Intel(R) OpenCL Graphics'
ggml_opencl: selecting device: 'Intel(R) Arc(TM) A750 Graphics'
ggml_opencl: device FP16 support: true
Traceback (most recent call last):
File "ai_worker\__main__.py", line 4, in <module>
File "ai_worker\main.py", line 385, in main
File "asyncio\runners.py", line 190, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 653, in run_until_complete
File "ai_worker\main.py", line 144, in run
File "ai_worker\main.py", line 114, in test_model
File "ai_worker\main.py", line 201, in load_model
File "llama_cpp\server\app.py", line 342, in create_app
File "llama_cpp\llama.py", line 312, in __init__
IndexError: invalid index
[12344] Failed to execute script 'main' due to unhandled exception!
Passing --tensor_split does not work either; the worker crashes as shown above.
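A possible explanation (an assumption, not confirmed from the traceback alone): the IndexError raised inside llama_cpp's Llama constructor is consistent with a tensor_split list that is longer than the number of devices the backend will index. The invocation above passes 10 split values (0,1,0,1,0,1,0,1,0,1) while OpenCL reported only 5 GPUs. The sketch below is a hypothetical pre-flight check, not part of gputopia-worker or llama-cpp-python:

```python
def check_tensor_split(tensor_split, device_count):
    """Hypothetical validator: reject a tensor_split list longer than the
    detected device count (which would index past the backend's device
    array), and pad a shorter list with zeros."""
    if len(tensor_split) > device_count:
        raise ValueError(
            f"tensor_split has {len(tensor_split)} entries "
            f"but only {device_count} devices were detected"
        )
    return list(tensor_split) + [0.0] * (device_count - len(tensor_split))

# The failing invocation: 10 split values, 5 reported OpenCL devices.
try:
    check_tensor_split([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], 5)
except ValueError as e:
    print("rejected:", e)
```

If this is the cause, trimming --tensor_split to one value per detected device (e.g. five entries here) would be worth testing before digging further into llama.py.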
