
Conversation

@namgyu-youn
Contributor

@namgyu-youn namgyu-youn commented Nov 30, 2025

Summary:
Build a benchmark module for affine (base) vs. HQQ quantization. Neither workflow requires a calibration flow, which means there is no need to compute element-wise ops over calibration data. A HQQ unit test is also added, because previously there was only an e2e test.
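For context, HQQ stays calibration-free because it refines quantization parameters against the weights alone, via a half-quadratic proximal solver. A minimal sketch of that update (names and hyperparameters are illustrative, adapted from the HQQ formulation, not torchao's actual implementation):

```python
import torch

def shrink_lp(x, beta, p=0.7):
    # Proximal (shrinkage) operator for an l_p error norm with p < 1.
    return torch.sign(x) * torch.relu(
        torch.abs(x) - (1.0 / beta) * torch.abs(x).clamp_min(1e-8).pow(p - 1)
    )

@torch.no_grad()
def hqq_refine_zero(w, scale, zero, qmin=0, qmax=15, iters=20, beta=1e4, kappa=1.01):
    # Uses only the weights w: no activations, hence no calibration data.
    for _ in range(iters):
        w_q = torch.clamp(torch.round(w / scale + zero), qmin, qmax)
        w_e = shrink_lp(w - (w_q - zero) * scale, beta)  # sparsified quant error
        zero = torch.mean(w_q - (w - w_e) / scale, dim=-1, keepdim=True)
        beta *= kappa  # anneal the penalty
    return zero
```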

Test Plan:

test/quantization/test_quant_primitives.py
benchmarks/benchmark_intx.py

PERF result:

> python benchmarks/benchmark_intx.py --model_id Qwen/Qwen3-14B --limit 50

| Method | Size (GB) | Comp | Quant (s) | Fwd (ms) | Tok/s | Mem (GB) | Accuracy |
|---|---|---|---|---|---|---|---|
| INT8-INT4 | 9.465 | 1.73 | 0.8 | 404.58 | 3.5 | 18.87 | 0.88 |
| INT8-INT4-HQQ | 9.465 | 1.73 | 12.6 | 403.42 | 3.5 | 26.35 | 0.91 |
  • Size(GB): Model size after quantization
  • Comp: Compression ratio vs. the original model size
  • Quant(s): Quantization overhead (time spent quantizing)
  • Fwd(ms): Forward-pass latency
  • Tok/s: Throughput, i.e., real token-generation speed, including dequantization
  • Mem(GB): Peak memory consumption during quantization
  • Accuracy: Accuracy on the GSM8K task
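For readers reproducing these columns outside the script, a rough sketch of how they could be measured with stock PyTorch (this only loosely mirrors `benchmarks/benchmark_intx.py`; the helper is hypothetical, and packed int4 buffers may not be fully reflected by `element_size()`):

```python
import time
import torch

def measure(model, example_inputs, quantize_fn):
    torch.cuda.reset_peak_memory_stats()
    t0 = time.perf_counter()
    quantize_fn(model)                                   # Quant(s)
    quant_s = time.perf_counter() - t0
    mem_gb = torch.cuda.max_memory_allocated() / 1e9     # Mem(GB)

    size_gb = sum(                                       # Size(GB)
        p.numel() * p.element_size() for p in model.parameters()
    ) / 1e9

    torch.cuda.synchronize()
    t0 = time.perf_counter()
    with torch.no_grad():
        model(example_inputs)                            # Fwd(ms), incl. dequant
    torch.cuda.synchronize()
    fwd_ms = (time.perf_counter() - t0) * 1e3
    return {"quant_s": quant_s, "mem_gb": mem_gb, "size_gb": size_gb, "fwd_ms": fwd_ms}
```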

Related Issue/PR: #3156

@pytorch-bot

pytorch-bot bot commented Nov 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3403

Note: Links to docs will display an error until the docs builds have been completed.

❗ 2 Active SEVs

There are 2 currently active SEVs. If your PR is affected, please view them below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Nov 30, 2025
@pytorch-bot

pytorch-bot bot commented Nov 30, 2025

❌ 🤖 pytorchbot command failed:

Got EOF while in a quoted string
Try `@pytorchbot --help` for more info.

@namgyu-youn
Copy link
Contributor Author

@pytorchbot label "topic: performance"

@pytorch-bot bot added the topic: performance label Nov 30, 2025
```diff
 @property
 def max_gen_toks(self):
-    return 50
+    return 512
```
Contributor Author

@namgyu-youn namgyu-youn Nov 30, 2025


This is updated because GSM8K is a CoT-reasoning (long-context) task.
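In context, the override looks roughly like this (the subclass name is hypothetical, and the import path for `TransformerEvalWrapper` is an assumption; the PR edits the property in place rather than subclassing):

```python
from torchao._models._eval import TransformerEvalWrapper  # import path is an assumption

class LongGenEvalWrapper(TransformerEvalWrapper):
    @property
    def max_gen_toks(self):
        # GSM8K answers include chain-of-thought, so a 50-token budget truncates them
        return 512
```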

@namgyu-youn
Contributor Author

namgyu-youn commented Nov 30, 2025

cc @metascroy can you take a look at this? After this PR I will move on to E2E support for SINQ, as we discussed in #3156 (comment); I feel we can then add SINQ to this e2e benchmark.

@metascroy
Contributor

Thanks! The unit test looks good to me. I'll let @jainapurva review the benchmark script, as I'm less familiar with that part of the codebase and with whatever coding standards are usually followed there.

@jerryzh168
Contributor

intx is targeting mobile, right? So testing performance on a server doesn't seem helpful?

@namgyu-youn
Contributor Author

namgyu-youn commented Dec 3, 2025

intx is targeting mobile, right? So testing performance on a server doesn't seem helpful?

@jerryzh168 Even though we don't test on an AP device, testing performance is still helpful because the only existing HQQ benchmarks are outdated CUDA ones.

For the future plan, I only used up-to-date functions like TransformerEvalWrapper and version=2 in this PR. Ideally, we can go with a comprehensive perf comparison of calibration-free (e.g., basic affine, HQQ) vs. calibration-based (e.g., AWQ, SmoothQuant) methods, I guess.
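As a rough illustration of what "calibration-free" means in practice, both flows reduce to a single in-place `quantize_` call with no data pass. A minimal sketch, assuming torchao's `quantize_` entry point and `Int4WeightOnlyConfig` (the exact config names and arguments used by `benchmarks/benchmark_intx.py` may differ):

```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, Int4WeightOnlyConfig

model = nn.Sequential(nn.Linear(4096, 4096)).to(torch.bfloat16).cuda()
# One call, no calibration data; version=2 selects the up-to-date config path
# mentioned above (assumption: exposed as a constructor argument).
quantize_(model, Int4WeightOnlyConfig(group_size=128, version=2))
```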
