Description
OS and version
openEuler 24.03 (LTS-SP2)
Python environment where the tool is installed
The Python environment inside the Docker container
Python version
3.11
AISBench version
3.0.20251103
AISBench command
ais_bench --models vllm_api_stream_chat --datasets textvqa_gen --mode perf --debug --num-prompts 4
Model config file or custom config file contents
from ais_bench.benchmark.openicl.icl_prompt_template import PromptTemplate
from ais_bench.benchmark.openicl.icl_retriever import ZeroRetriever
from ais_bench.benchmark.openicl.icl_inferencer import GenInferencer
from ais_bench.benchmark.datasets import TEXTVQADataset, TEXTEvaluator, math_postprocess_v2

textvqa_reader_cfg = dict(
    input_columns=['question'],
    output_column='answer'
)

textvqa_infer_cfg = dict(
    prompt_template=dict(
        type=PromptTemplate,
        template={'type': "image_text", 'data': ['image_url', 'text'], 'prompt': " Answer the question using a single word or phrase."}
        # template=dict(
        #     round=[
        #         dict(role="HUMAN", prompt_mm={
        #             "text": {"type": "text", "text": "{question} Answer the question using a single word or phrase."},
        #             "image": {"type": "image_url", "image_url": {"url": "file://{image}"}},
        #         })
        #     ]
        # )
    ),
    retriever=dict(type=ZeroRetriever),
    inferencer=dict(type=GenInferencer)
)

textvqa_eval_cfg = dict(
    evaluator=dict(type=TEXTEvaluator)
)

textvqa_datasets = [
    dict(
        abbr='textvqa',
        type=TEXTVQADataset,
        # path='ais_bench/datasets/textvqa/textvqa_json/textvqa_val.jsonl',  # dataset path; relative paths resolve against the source root; absolute paths are also supported
        path='/home/datasets/textvqa/textvqa_json/textvqa_val.jsonl',
        reader_cfg=textvqa_reader_cfg,
        infer_cfg=textvqa_infer_cfg,
        eval_cfg=textvqa_eval_cfg
    )
]
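For reference, the commented-out round-based template in the config above already nests the image URL the way the OpenAI chat format expects: each image part carries an "image_url" dict with a "url" key, rather than a bare path string. A sketch of that shape (copied from the commented lines; whether AISBench's prompt_mm syntax accepts it as-is is an assumption):

```python
# Sketch of the nested multimodal template shape (taken from the commented-out
# template in the config above). The key point is that "image_url" is a dict,
# not a plain string path.
template = dict(
    round=[
        dict(role="HUMAN", prompt_mm={
            "text": {"type": "text",
                     "text": "{question} Answer the question using a single word or phrase."},
            # nested {"url": ...} dict, matching the OpenAI image content-part shape
            "image": {"type": "image_url",
                      "image_url": {"url": "file://{image}"}},
        })
    ]
)
```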
Expected behavior
The test runs correctly
Actual behavior
(APIServer pid=9993) INFO: 127.0.0.1:60358 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] Error in preprocessing prompt inputs
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] Traceback (most recent call last):
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/openai/serving_chat.py", line 239, in create_chat_completion
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ) = await self._preprocess_chat(
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/openai/serving_engine.py", line 764, in _preprocess_chat
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] conversation, mm_data_future, mm_uuids = parse_chat_messages_futures(
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/chat_utils.py", line 1544, in parse_chat_messages_futures
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] sub_messages = _parse_chat_message_content(
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/chat_utils.py", line 1447, in _parse_chat_message_content
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] result = _parse_chat_message_content_parts(
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/chat_utils.py", line 1318, in _parse_chat_message_content_parts
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] parse_res = _parse_chat_message_content_part(
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/chat_utils.py", line 1359, in _parse_chat_message_content_part
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] part_type, content = _parse_chat_message_content_mm_part(part)
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/chat_utils.py", line 1229, in _parse_chat_message_content_mm_part
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] content = MM_PARSER_MAP[part_type](part)
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/vllm-workspace/vllm/vllm/entrypoints/chat_utils.py", line 1185, in <lambda>
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] "image_url": lambda part: _ImageParser(part)
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] File "/usr/local/python3.11.13/lib/python3.11/site-packages/pydantic/type_adapter.py", line 421, in validate_python
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] return self.validator.validate_python(
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatCompletionContentPartImageParam
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] image_url
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] Input should be a valid dictionary [type=dict_type, input_value='/home/l00838764/datasets...es/831bcec304a17054.jpg', input_type=str]
(APIServer pid=9993) ERROR 02-03 02:21:38 [serving_chat.py:263] For further information visit https://errors.pydantic.dev/2.11/v/dict_type
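The ValidationError pins down the cause: the request serialized "image_url" as a bare path string, while vLLM's ChatCompletionContentPartImageParam requires a nested dict with a "url" key. A minimal sketch of the two shapes (my own check for illustration, not AISBench or vLLM code; the paths are made up):

```python
# Shape vLLM rejects: "image_url" is a plain string (input_type=str in the error)
bad_part = {"type": "image_url", "image_url": "/home/datasets/example.jpg"}

# Shape vLLM accepts: "image_url" is a dict carrying the "url" key
good_part = {"type": "image_url", "image_url": {"url": "file:///home/datasets/example.jpg"}}

def is_valid_image_part(part: dict) -> bool:
    """Rough check mirroring the validated shape of an image content part."""
    return (
        part.get("type") == "image_url"
        and isinstance(part.get("image_url"), dict)
        and isinstance(part["image_url"].get("url"), str)
    )

print(is_valid_image_part(bad_part))   # False
print(is_valid_image_part(good_part))  # True
```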
Pre-submission checklist
- I have read the quick-start guide in the main documentation; it does not solve the problem
- I have searched the FAQ; this is not a duplicate
- I have searched existing issues; this is not a duplicate
- I have updated to the latest version and the problem persists