Qualcomm AI Engine Direct - Improve GA Static Phi-4-mini accuracy #13573
Conversation
Summary:
- Refactor custom annotation for R3
- Fix warning message in quantization
- Add Phi-4-mini setting to the README
- Fix a segmentation fault when running the model with sharding
- Add a test case for Phi-4 in test_qnn_delegate.py
- Add a new `group_size` parameter to llama.py to set the block size for block quantization
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13573
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit fc31b75 with merge base 3dac421.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @cccclai,
Looks good, thank you!
…torch#13573)

Summary:
- Refactor custom annotation for R3
- Fix warning message in quantization
- Add Phi-4-mini setting to the README
- Fix a segmentation fault when running the model with sharding
- Add a test case for Phi-4 in test_qnn_delegate.py
- Add a new `group_size` parameter to llama.py to set the block size for block quantization

## Sample Script
```
python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} \
  --ptq 16a4w_block --group_size 16 --checkpoint consolidated.00.pth --params params.json --num_sharding 4 \
  --tokenizer_model tokenizer.model --decoder_model phi_4_mini --model_mode hybrid --prefill_ar_len 128 \
  --max_seq_len 1024 --prompt "I would like to learn python, could you teach me with a simple example?"
```

## Result
Stats with QNN 2.37.0 on SM8750
- Accuracy: 10.82
- Token Rate: 22.727273

Results with --prompt "I would like to learn python, could you teach me with a simple example?":

````
<|user|>I would like to learn python, could you teach me with one simple program?<|end|><|assistant|>Of course! Let's get started with a simple Python program. We'll create a simple program that asks for your name and then greets you.

```python
# Ask for the user's name
name = input("Please enter your name: ")

# Greet the user
print(f"Hello, {name}! Welcome to the world of Python.")
```

To run this program, you would need to copy the code into a Python environment (like an IDE or a Python interpreter). When you run the program, it will prompt you to enter your name, and then it will greet you by name. Enjoy learning Python!<|end|>
````

## Test plan
Added E2E test to test_qnn_delegate.py

cc: @haowhsu-quic
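For context on the `--ptq 16a4w_block --group_size 16` flags above: block quantization assigns one scale per group of `group_size` consecutive weights instead of one scale per tensor or per channel, which limits the damage a single outlier can do to accuracy. The sketch below illustrates the general idea with symmetric 4-bit weight quantization in NumPy; it is a simplified illustration, not ExecuTorch's or the QNN delegate's actual implementation, and the function names are made up for this example.

```python
import numpy as np

def block_quantize_4bit(weights, group_size=16):
    """Symmetric 4-bit quantization with one scale per block of
    `group_size` consecutive weights (illustrative sketch only)."""
    flat = weights.reshape(-1, group_size)
    # One scale per block: map each block's max magnitude onto the
    # positive end of the signed 4-bit range [-8, 7].
    scales = np.abs(flat).max(axis=1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(flat / scales), -8, 7).astype(np.int8)
    return q, scales

def block_dequantize(q, scales, shape):
    """Reconstruct an approximate float tensor from blocks and scales."""
    return (q * scales).reshape(shape)

np.random.seed(0)
w = np.random.randn(4, 32).astype(np.float32)
q, s = block_quantize_4bit(w, group_size=16)
w_hat = block_dequantize(q, s, w.shape)
# Per-element error is bounded by half a quantization step of its block.
```

A smaller `group_size` means more scales to store but tighter per-block ranges, which is the accuracy/size trade-off the new `--group_size` flag exposes.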