ONNX FP8 quantization prints an incorrect log message #828

@byun1016

Description

Before submitting an issue, please make sure it hasn't been already addressed by searching through the existing and past issues.

Describe the bug

In modelopt/onnx/quantization/fp8.py, the FP8 quantization path logs:

  • logger.info(f"Starting INT8 quantization with '{calibration_method}' calibration")

The message says "INT8" even though this is the FP8 code path, so running FP8 quantization prints a misleading log line.
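For reference, a minimal sketch of what the corrected line would look like. The helper name `log_quant_start` and the standalone logger setup are illustrative, not the actual function in fp8.py; only the f-string mirrors the quoted line, with "INT8" replaced by "FP8":

```python
import logging

logger = logging.getLogger(__name__)

def log_quant_start(calibration_method: str) -> str:
    # Hypothetical helper: the FP8 path should announce FP8, not INT8.
    msg = f"Starting FP8 quantization with '{calibration_method}' calibration"
    logger.info(msg)
    return msg
```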

Steps/Code to reproduce bug

  • ?

Expected behavior

  • When running FP8 quantization, the log message should say "FP8", not "INT8".

Who can help?

  • ?

System information

  • Container used (if applicable): ?
  • OS (e.g., Ubuntu 22.04, CentOS 7, Windows 10): ?
  • CPU architecture (x86_64, aarch64): ?
  • GPU name (e.g. H100, A100, L40S): ?
  • GPU memory size: ?
  • Number of GPUs: ?
  • Library versions (if applicable):
    • Python: ?
    • ModelOpt version or commit hash: ?
    • CUDA: ?
    • PyTorch: ?
    • Transformers: ?
    • TensorRT-LLM: ?
    • ONNXRuntime: ?
    • TensorRT: ?
  • Any other details that may help: ?
