Add awq activation fp8 support in loss compute #1873
base: bdellabe/awq-w4a8
Conversation
Summary of Changes

Hello @Bluedyson, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the AWQ modifier by integrating FP8 activation quantization support, which allows simulating FP8 quantization error directly in the loss calculation. The change adds new configuration options, implements validation that enforces float-type activation quantization, and updates the core scaling computation to reflect FP8 precision. The goal is to improve the accuracy of loss calculations in quantized models and expand the capabilities of the AWQ modifier.
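The mechanism the summary describes, injecting a simulated FP8 round-trip into the loss comparison, can be sketched roughly as below. This is a minimal illustration, not the PR's actual code: the function names, the loss signature, and the per-tensor dynamic scaling are assumptions, and it presumes a PyTorch build with `torch.float8_e4m3fn` support.

```python
# Hedged sketch only; `pseudo_quantize_fp8` and `awq_loss` are hypothetical names.
import torch

def pseudo_quantize_fp8(x: torch.Tensor) -> torch.Tensor:
    """Quantize-dequantize `x` through float8_e4m3fn to simulate FP8 error."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    # Per-tensor dynamic scale mapping the max magnitude onto the FP8 range
    # (an assumption; the PR may scale differently).
    scale = x.abs().max().clamp(min=1e-12) / finfo.max
    x_fp8 = (x / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return x_fp8.to(x.dtype) * scale

def awq_loss(fp16_output: torch.Tensor, int_w_output: torch.Tensor,
             simulate_fp8_activations: bool = False) -> torch.Tensor:
    """MSE between original and quantized-weight outputs; optionally passes
    the quantized-weight output through a simulated FP8 activation step."""
    if simulate_fp8_activations:
        int_w_output = pseudo_quantize_fp8(int_w_output)
    return (fp16_output - int_w_output).float().pow(2).mean()
```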
Code Review
This pull request introduces support for FP8 activation quantization within the AWQ modifier. The changes primarily involve updating the validation logic to accommodate activation quantization settings and modifying the loss computation in `_compute_best_scale` to simulate FP8 precision loss. Additionally, new tests are included to verify the new configuration options. My review focuses on strengthening the validation logic to prevent non-deterministic behavior when multiple, differing activation quantization configurations are provided.
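The non-determinism concern can be made concrete with a sketch like the following. It is hypothetical, not the PR's code: the helper name and the shape of `resolved_configs` are assumptions, though the `input_activations` and `type` fields mirror compressed-tensors' `QuantizationArgs`.

```python
# Hedged sketch of the kind of check the review asks for: if several schemes
# carry input-activation settings, they must agree (otherwise picking one is
# order-dependent) and must be a float type for FP8 simulation.
def validate_activation_args(resolved_configs):
    act_args = [
        cfg.input_activations
        for cfg in resolved_configs
        if cfg.input_activations is not None
    ]
    if not act_args:
        return None
    first = act_args[0]
    if any(args != first for args in act_args[1:]):
        raise ValueError(
            "AWQ received multiple differing input-activation quantization "
            "configs; resolving them would be non-deterministic."
        )
    if first.type != "float":
        raise ValueError(
            "AWQ activation loss simulation requires a float (FP8) "
            f"activation type, got {first.type!r}."
        )
    return first
```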
Force-pushed from bb9391a to 1fdbfa9.
Signed-off-by: Bluedyson <97047955+Bluedyson@users.noreply.github.com>
Force-pushed from 1fdbfa9 to 245df35.
@Bluedyson thanks for this! I will take a look after our release this week.
SUMMARY:
Introduced FP8 quantization error into loss calculation with new config options, validation checks, and tests.
See #1657 (comment)
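The thread does not show the final names of the new config options, so any usage example is necessarily speculative. As one hedged sketch, a recipe on top of the W4A8 base branch might look like this, where the `scheme="W4A8"` preset is an assumption inferred from the base branch name rather than confirmed by this PR:

```python
# Hypothetical usage sketch. `scheme="W4A8"` (int4 weights, fp8 activations)
# is inferred from the base branch `bdellabe/awq-w4a8`, not confirmed here;
# the import path matches llm-compressor's existing AWQModifier.
from llmcompressor.modifiers.awq import AWQModifier

recipe = [
    AWQModifier(
        targets=["Linear"],   # apply AWQ to linear layers
        scheme="W4A8",        # assumed preset: 4-bit weights, FP8 activations
        ignore=["lm_head"],   # leave the output head unquantized
    ),
]
```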
TEST PLAN:
I'll add general dataset test results to this PR shortly.
Tracking: