[Bug] qwen3_32b_awq model does not work after offline convert; direct online conversion works fine #3518


Open · 3 tasks done · KevinLiuMY opened this issue May 5, 2025 · 5 comments


@KevinLiuMY

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the issue lacks environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve it, which reduces the likelihood of feedback.

Describe the bug

The model is Qwen3 32B AWQ, Qwen's official quantized release.

If I run convert first and then serve the converted model, it does not work: the output is unintelligible characters and periods.

But serving the checkpoint directly, with online conversion, works fine.

Reproduction

lmdeploy convert --tp 4 qwen3_32b Qwen3-32B-AWQ

lmdeploy serve api_server --model-name qwen3_32b --model-format awq --tp 4 --api-keys 123456 --log-level INFO --cache-max-entry-count 0.92 --max-batch-size 4 --max-concurrent-requests 2 --quant-policy 8 --reasoning-parser deepseek-r1 ./workspace
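
For contrast, a sketch of the invocation that works for the reporter: point api_server directly at the original HF checkpoint so conversion happens online, keeping the same flags (Qwen3-32B-AWQ is assumed to be the local model directory passed to convert above):

lmdeploy serve api_server Qwen3-32B-AWQ --model-name qwen3_32b --model-format awq --tp 4 --api-keys 123456 --log-level INFO --cache-max-entry-count 0.92 --max-batch-size 4 --max-concurrent-requests 2 --quant-policy 8 --reasoning-parser deepseek-r1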

Environment

lmdeploy check_env
sys.platform: linux
Python: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3: Tesla T10
CUDA_HOME: /usr/local/cuda-12.4
NVCC: Cuda compilation tools, release 12.4, V12.4.131
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.6.0+cu124
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 12.4
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=2236df1770800ffea5697b11b0bb0d910b2e59e1, CUDA_VERSION=12.4, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.6.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.21.0+cu124
LMDeploy: 0.8.0+
transformers: 4.51.3
gradio: Not Found
fastapi: 0.115.12
pydantic: 2.11.4
triton: 3.2.0
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PHB     PHB     SYS     0,2,4,6,8,10    0               N/A
GPU1    PHB      X      PHB     SYS     0,2,4,6,8,10    0               N/A
GPU2    PHB     PHB      X      SYS     0,2,4,6,8,10    0               N/A
GPU3    SYS     SYS     SYS      X      1,3,5,7,9,11    1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks


The model is Qwen3 32B AWQ, from the official Qwen release.

Error traceback

@lvhan028
Collaborator

lvhan028 commented May 6, 2025

A converted model makes it hard to express model-parallel strategies, and as models grow larger (especially large MoE models) the number of files after conversion explodes, which puts heavy pressure on I/O. For these reasons, we deprecated lmdeploy convert as of 0.8.0.
In the next release, we will remove it from the CLI.

@KevinLiuMY
Author

KevinLiuMY commented May 6, 2025

Thanks for your reply, @lvhan028.

If future versions always perform online conversion, then GPU 0 will always take up some extra memory. Is that by design, or is it a bug? #3395

For multi-GPU setups with limited VRAM, this is very unfriendly.
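
One way to observe the asymmetric allocation (a plain nvidia-smi query, not an lmdeploy feature) is to run the following while the server is up:

nvidia-smi --query-gpu=index,memory.used --format=csv

If GPU 0 reports noticeably more memory.used than GPUs 1-3 under --tp 4, the extra allocation described above is present.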

@lvhan028
Collaborator

lvhan028 commented May 6, 2025

I had forgotten about that.
@irexyc @lzhangzz, any ideas on how to address the extra memory usage on GPU 0?

@irexyc
Collaborator

irexyc commented May 6, 2025

@KevinLiuMY
Author

@irexyc

Confirmed that it works; memory is used as expected.
Do you need feedback after long-term operation? In my use case concurrency is low, so the coverage may be insufficient.
