Merged
125 changes: 125 additions & 0 deletions docs/qnn_backend/aot_execute.rst
@@ -0,0 +1,125 @@
QNN AOT Execution Flow
================================================================

.. note::
   Please refer to the `Environment Setup <setup_env.html>`_ documentation to configure the QNN and Hexagon SDK environments before proceeding.

This document explains the main execution flow of QNN AOT (Ahead-of-Time) inference. The implementation is designed to fully leverage the offline compilation capabilities of the Qualcomm QNN framework to achieve efficient inference of fully integer-quantized Large Language Models (LLMs) on mobile devices, which is the de facto workflow for LLM execution on the Hexagon NPU.

Specifically, our implementation employs a W4A16 quantization scheme. The Key-Value (KV) Cache is quantized to ``uint8``, and the linear weights are quantized using Low-Power Blockwise Quantization (LPBQ).
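As a toy illustration of the blockwise idea that LPBQ builds on (this is a simplified sketch, not Qualcomm's actual LPBQ algorithm), each block of weights shares a single scale derived from the block's maximum magnitude:

.. code-block:: python

   import numpy as np

   def blockwise_quantize_w4(weights, block_size=32):
       """Toy blockwise 4-bit symmetric quantization (illustrative only)."""
       flat = weights.reshape(-1, block_size)
       # One scale per block: map the block's max magnitude to the int4 limit (7).
       scales = np.abs(flat).max(axis=1, keepdims=True) / 7.0
       scales = np.where(scales == 0, 1.0, scales)
       q = np.clip(np.round(flat / scales), -8, 7).astype(np.int8)
       return q, scales

   def blockwise_dequantize(q, scales, shape):
       return (q.astype(np.float32) * scales).reshape(shape)

   w = np.random.randn(4, 64).astype(np.float32)
   q, s = blockwise_quantize_w4(w)
   w_hat = blockwise_dequantize(q, s, w.shape)
   max_err = float(np.abs(w - w_hat).max())  # bounded by half the largest scale

Because each block adapts its scale locally, the rounding error of any value is at most half of that block's scale, which is what makes small block sizes attractive for 4-bit weights.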

The implementation of this module was inspired by the `PyTorch ExecuTorch`_ project, in particular its `Hybrid Execution Mode`_ for the Qualcomm backend, for which we are grateful.

.. _PyTorch ExecuTorch: https://pytorch.org/executorch/
.. _Hybrid Execution Mode: https://github.com/pytorch/executorch/blob/main/examples/qualcomm/oss_scripts/llama/README.md

Overall Flow
----------------------------------------------------------------

The QNN AOT execution flow is mainly divided into three stages:

1. **Model Quantization and Export (Python)**: On the host machine, a Python script quantizes the pre-trained floating-point model and exports the quantized weights as a ``.safetensors`` file, which is then converted to the MLLM format (``.mllm``).
2. **Offline Compilation (C++)**: On the host machine, a C++ compiler program loads the ``.mllm`` file, invokes the QNN toolchain for model compilation, graph optimization, and quantization-parameter adjustment, and finally generates a QNN context binary.
3. **On-Device Execution (C++)**: On the target device (e.g., a mobile phone), the AOT runner program loads the pre-compiled context binary and executes inference.


Detailed Steps
----------------------------------------------------------------

Taking ``qwen3_qnn_aot`` as an example, the detailed steps are as follows.

1. **Model Quantization and Export**

First, we need to run a Python script on the host to quantize the model and export it as a ``.safetensors`` file.

.. code-block:: shell

   cd ./pymllm/backends/qualcomm/transformers/qwen3
   python train.py --model_path "/your/qwen3/model/path/" --max_length 1024 --num_samples 128 --output_dir "/path/to/output"

This step generates a key file:

* ``model.safetensors``: The quantized model file, saved in the specified output directory.
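
To sanity-check an export, the file can be opened with the ``safetensors`` Python package; the snippet below writes a tiny stand-in file first, and in practice you would point ``load_file`` at the real ``model.safetensors``:

.. code-block:: python

   import numpy as np
   from safetensors.numpy import save_file, load_file

   # Tiny stand-in file; replace the path with the export from the
   # quantization step to inspect the real model.
   save_file({"w_q": np.zeros((2, 4), dtype=np.uint8)}, "demo.safetensors")

   tensors = load_file("demo.safetensors")
   for name, t in tensors.items():
       print(name, t.dtype, t.shape)

For a fully quantized export you should see integer dtypes (e.g. ``uint8``) on the listed tensors rather than ``float32``.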

Next, convert the exported ``.safetensors`` model to the MLLM format (``.mllm``) using the ``mllm-convertor`` script.

.. code-block:: shell

   pip install pymllm

   mllm-convertor --input_path /path/to/output/model.safetensors --output_path /path/to/output/qwen3_1.7b.mllm

This will generate the ``qwen3_1.7b.mllm`` file, which will be used in the subsequent compilation step.

2. **Offline Compilation to Generate QNN Context**

Next, we use a C++ compiler program (``compile.cpp``) on the host to generate the QNN context. This process invokes the QNN SDK to convert the MLLM IR into a QNN-supported format and performs optimizations.

Compile and run the ``compile`` program:

.. code-block:: shell

   # In the mllm-v2 project root directory
   python task.py tasks/build_x86_qnn_aot.yaml

   # Run the compiler program
   ./build-qnn-aot/bin/mllm-qwen3-aot-sha-c \
       -m /path/to/output/qwen3_1.7b.mllm \
       -c ./examples/qwen3_qnn_aot/config_1.7B.json \
       --aot_config ./examples/qwen3_qnn_aot/qnn_aot_cfg_1.7B.json


This program reads the ``.mllm`` model file and the quantization recipe, and finally generates a QNN context binary file named ``qwen3-1.7B-lpbq-sha.bin``. This file contains all the information needed to execute inference on the target device.

.. note::
   The ``HtpSignedPd`` option in ``qnn_aot_cfg_1.7B.json`` specifies ``QNN_HTP_DEVICE_CONFIG_OPTION_SIGNEDPD`` during QNN initialization, which may cause an "Unsupported config option 2" error with older QNN versions. It is recommended to change the option in the JSON file to ``HtpUnsignedPd``.

3. **On-Device AOT Inference**

Finally, we push the generated ``qwen3-1.7B-lpbq-sha.bin`` file and other resources like the tokenizer to the target device. The on-device AOT runner program (``aot_run.cpp``) will load this binary file and execute inference.

Compile and run the ``aot_run`` program:

.. code-block:: shell

   # Cross-compile the aot_run program for the target device (e.g., Android)
   python task.py tasks/build_android_qnn.yaml

   # Push the compiled context file to the device
   adb push qwen3-1.7B-lpbq-sha.bin /data/local/tmp/

   # Push QNN libraries and op packages
   ANDR_LIB=$QNN_SDK_ROOT/lib/aarch64-android
   OP_PATH=mllm/backends/qnn/custom-op-package/LLaMAPackage/build

   adb push $ANDR_LIB/libQnnHtp.so /data/local/tmp
   adb push $ANDR_LIB/libQnnHtpV75Stub.so /data/local/tmp
   adb push $ANDR_LIB/libQnnHtpPrepare.so /data/local/tmp
   adb push $ANDR_LIB/libQnnHtpProfilingReader.so /data/local/tmp
   adb push $ANDR_LIB/libQnnHtpOptraceProfilingReader.so /data/local/tmp
   adb push $ANDR_LIB/libQnnHtpV75CalculatorStub.so /data/local/tmp
   adb push $QNN_SDK_ROOT/lib/hexagon-v75/unsigned/libQnnHtpV75Skel.so /data/local/tmp
   adb push $ANDR_LIB/libQnnSystem.so /data/local/tmp

   adb push $OP_PATH/aarch64-android/libQnnLLaMAPackage.so /data/local/tmp/libQnnLLaMAPackage_CPU.so
   adb push $OP_PATH/hexagon-v75/libQnnLLaMAPackage.so /data/local/tmp/libQnnLLaMAPackage_HTP.so

   # Push the mllm runner and its libraries to the device
   adb push build-android-arm64-v8a-qnn/bin/*.so /data/local/tmp
   adb push build-android-arm64-v8a-qnn/bin/mllm-qwen3-aot-runner /data/local/tmp

   # Execute on the device
   adb shell "cd /data/local/tmp && export LD_LIBRARY_PATH=. && \
       ./mllm-qwen3-aot-runner -m qwen3-1.7B-lpbq-sha.bin \
       -t qwen3-tokenizer.json -c config_1.7B.json --ar_len 32"

The AOT runner loads the ``.bin`` file to initialize the QNN context, then takes input tokens, runs model inference, and emits the next token, repeating this loop to realize autoregressive text generation.

Hybrid Mode Explanation
----------------------------------------------------------------

Our QNN AOT implementation adopts a hybrid mode similar to ExecuTorch's to optimize the efficiency of prompt processing (prefill) and token generation (decode).

* **Prefill Phase**: When processing the user's input (Prompt) for the first time, the model calculates and caches the Key-Value (KV) states for all input tokens at once. This phase is computationally intensive but is performed only once.
* **Decode Phase**: When generating subsequent tokens, the model takes only the previously generated token as input and uses the cached KV state for computation. This process is computationally light and fast, suitable for token-by-token generation.

In this way, we combine the advantages of batch processing and stream processing to improve overall throughput while ensuring low latency.
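
The interplay of the two phases can be sketched as follows; ``prefill_forward`` and ``decode_forward`` are hypothetical stand-ins for the actual QNN graph invocations:

.. code-block:: python

   def generate(prompt_tokens, max_new_tokens, prefill_forward, decode_forward):
       """Illustrative hybrid prefill/decode loop (not the actual runner code).

       prefill_forward(tokens) -> (next_token, kv_cache)
       decode_forward(token, kv_cache) -> (next_token, kv_cache)
       """
       # Prefill: one batched pass over the whole prompt populates the KV cache.
       next_token, kv_cache = prefill_forward(prompt_tokens)
       output = [next_token]
       # Decode: token-by-token generation reusing the cached KV states.
       for _ in range(max_new_tokens - 1):
           next_token, kv_cache = decode_forward(next_token, kv_cache)
           output.append(next_token)
       return output

In the real implementation, the prefill and decode phases are typically served by two differently shaped QNN graphs compiled into the same context binary, which is why the hybrid split is decided at compile time.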
2 changes: 1 addition & 1 deletion docs/qnn_backend/index.rst
@@ -6,4 +6,4 @@ QNN Backend

setup_env
core_design
qnn_model_convert
aot_execute
4 changes: 4 additions & 0 deletions docs/qnn_backend/setup_env.rst
@@ -98,6 +98,10 @@ Compilation Commands

This will build the necessary QNN op packages for both AArch64 and HVX v75 targets.

.. note::
   The Hexagon tools version in the Makefile may change. If compilation fails, please update the version number in the Makefile accordingly.


Development Tips
----------------

5 changes: 4 additions & 1 deletion examples/llama_qnn_aot/compile.cpp
@@ -17,6 +17,9 @@ MLLM_MAIN({
auto& model_path = Argparse::add<std::string>("-m|--model_path").help("Model file path.");
auto& model_cfg_path = Argparse::add<std::string>("-c|--config").help("Model config file path.");
auto& qnn_aot_cfg_files = Argparse::add<std::string>("-aot_cfg|--aot_config").help("AOT Config file path.");
auto& qnn_env_path = Argparse::add<std::string>("-qnn_env|--qnn_env_path")
.def("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/")
.help("QNN AOT Environment path.");

Argparse::parse(argc, argv);

@@ -47,7 +50,7 @@ MLLM_MAIN({
model.load(params);

// Create Qnn AOT Model
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/",
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv(qnn_env_path.get(),
mllm::qnn::aot::parseQcomTargetMachineFromJSONFile(qnn_aot_cfg_files.get()));

// Model length 32.
5 changes: 4 additions & 1 deletion examples/llama_qnn_aot/compile_sha.cpp
@@ -25,6 +25,9 @@ MLLM_MAIN({
auto& model_path = Argparse::add<std::string>("-m|--model_path").help("Model file path.");
auto& model_cfg_path = Argparse::add<std::string>("-c|--config").help("Model config file path.");
auto& qnn_aot_cfg_files = Argparse::add<std::string>("-aot_cfg|--aot_config").help("AOT Config file path.");
auto& qnn_env_path = Argparse::add<std::string>("-qnn_env|--qnn_env_path")
.def("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/")
.help("QNN AOT Environment path.");

Argparse::parse(argc, argv);

@@ -73,7 +76,7 @@ MLLM_MAIN({
model.load(params);

// Create Qnn AOT Model
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/",
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv(qnn_env_path.get(),
mllm::qnn::aot::parseQcomTargetMachineFromJSONFile(qnn_aot_cfg_files.get()));

// Model length 32.
5 changes: 4 additions & 1 deletion examples/qwen2_qnn_aot/compile.cpp
@@ -17,6 +17,9 @@ MLLM_MAIN({
auto& model_path = Argparse::add<std::string>("-m|--model_path").help("Model file path.");
auto& model_cfg_path = Argparse::add<std::string>("-c|--config").help("Model config file path.");
auto& qnn_aot_cfg_files = Argparse::add<std::string>("-aot_cfg|--aot_config").help("AOT Config file path.");
auto& qnn_env_path = Argparse::add<std::string>("-qnn_env|--qnn_env_path")
.def("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/")
.help("QNN AOT Environment path.");

Argparse::parse(argc, argv);

@@ -47,7 +50,7 @@ MLLM_MAIN({
model.load(params);

// Create Qnn AOT Model
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/",
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv(qnn_env_path.get(),
mllm::qnn::aot::parseQcomTargetMachineFromJSONFile(qnn_aot_cfg_files.get()));

// Model length 32.
5 changes: 4 additions & 1 deletion examples/qwen2_qnn_aot/compile_sha.cpp
@@ -25,6 +25,9 @@ MLLM_MAIN({
auto& model_path = Argparse::add<std::string>("-m|--model_path").help("Model file path.");
auto& model_cfg_path = Argparse::add<std::string>("-c|--config").help("Model config file path.");
auto& qnn_aot_cfg_files = Argparse::add<std::string>("-aot_cfg|--aot_config").help("AOT Config file path.");
auto& qnn_env_path = Argparse::add<std::string>("-qnn_env|--qnn_env_path")
.def("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/")
.help("QNN AOT Environment path.");

Argparse::parse(argc, argv);

@@ -73,7 +76,7 @@ MLLM_MAIN({
model.load(params);

// Create Qnn AOT Model
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/",
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv(qnn_env_path.get(),
mllm::qnn::aot::parseQcomTargetMachineFromJSONFile(qnn_aot_cfg_files.get()));

// Model length 32.
4 changes: 0 additions & 4 deletions examples/qwen3_qnn_aot/aot_run.cpp
@@ -43,10 +43,6 @@ MLLM_MAIN({

auto input_tensor = tokenizer.convertMessage({.prompt = prompt_text});

// DBG:
mllm::print(input_tensor["sequence"].shape());
mllm::print(input_tensor["sequence"]);

Runner runner(config, &tokenizer);
if (!runner.load()) {
std::cerr << "Failed to load model\n";
5 changes: 4 additions & 1 deletion examples/qwen3_qnn_aot/compile.cpp
@@ -17,6 +17,9 @@ MLLM_MAIN({
auto& model_path = Argparse::add<std::string>("-m|--model_path").help("Model file path.");
auto& model_cfg_path = Argparse::add<std::string>("-c|--config").help("Model config file path.");
auto& qnn_aot_cfg_files = Argparse::add<std::string>("-aot_cfg|--aot_config").help("AOT Config file path.");
auto& qnn_env_path = Argparse::add<std::string>("-qnn_env|--qnn_env_path")
.def("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/")
.help("QNN AOT Environment path.");

Argparse::parse(argc, argv);

@@ -47,7 +50,7 @@ MLLM_MAIN({
model.load(params);

// Create Qnn AOT Model
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/",
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv(qnn_env_path.get(),
mllm::qnn::aot::parseQcomTargetMachineFromJSONFile(qnn_aot_cfg_files.get()));

// Model length 32.
5 changes: 4 additions & 1 deletion examples/qwen3_qnn_aot/compile_sha.cpp
@@ -25,6 +25,9 @@ MLLM_MAIN({
auto& model_path = Argparse::add<std::string>("-m|--model_path").help("Model file path.");
auto& model_cfg_path = Argparse::add<std::string>("-c|--config").help("Model config file path.");
auto& qnn_aot_cfg_files = Argparse::add<std::string>("-aot_cfg|--aot_config").help("AOT Config file path.");
auto& qnn_env_path = Argparse::add<std::string>("-qnn_env|--qnn_env_path")
.def("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/")
.help("QNN AOT Environment path.");

Argparse::parse(argc, argv);

@@ -73,7 +76,7 @@ MLLM_MAIN({
model.load(params);

// Create Qnn AOT Model
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv("/opt/qcom/aistack/qairt/2.41.0.251128/lib/x86_64-linux-clang/",
auto qnn_aot_env = mllm::qnn::aot::QnnAOTEnv(qnn_env_path.get(),
mllm::qnn::aot::parseQcomTargetMachineFromJSONFile(qnn_aot_cfg_files.get()));

// Model length 32.
20 changes: 5 additions & 15 deletions mllm/CMakeLists.txt
@@ -56,17 +56,6 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "Clang" OR CMAKE_CXX_COMPILER_ID STREQUAL "App
endif()
endif()

# FIXME: @oreomaker Need to remove comma features in slice!
# Suppress comma-subscript warnings (deprecated C++ feature that will be removed in C++26)
# This flag is only available in Clang 13+ and GCC 10+
if(CMAKE_CXX_COMPILER_ID STREQUAL "Clang" OR CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
target_compile_options(MllmRT PUBLIC -Wno-comma-subscript)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "10.0")
target_compile_options(MllmRT PUBLIC -Wno-comma-subscript)
endif()
endif()

# ONLY APPLE CAN DO !
# Processing OpenMP
if(MLLM_KERNEL_USE_THREADS AND MLLM_KERNEL_THREADS_VENDOR_OPENMP)
@@ -125,16 +114,17 @@ if(MLLM_BUILD_OPENCL_BACKEND)
)
endif()

if(MLLM_QUALCOMM_QNN_AOT_ON_X86_ENABLE OR MLLM_BUILD_QNN_BACKEND)
add_subdirectory(backends/qnn)
endif()

# add definition before including qnn
if(MLLM_QUALCOMM_QNN_AOT_ON_X86_ENABLE)
add_compile_definitions(
MLLM_QUALCOMM_QNN_AOT_ON_X86_ENABLE
)
endif()

if(MLLM_QUALCOMM_QNN_AOT_ON_X86_ENABLE OR MLLM_BUILD_QNN_BACKEND)
add_subdirectory(backends/qnn)
endif()

if(MLLM_BUILD_QNN_BACKEND)
add_compile_definitions(
MLLM_QNN_BACKEND
4 changes: 0 additions & 4 deletions mllm/backends/qnn/QNNModel.cpp
@@ -134,8 +134,6 @@ ModelError_t QNNModel::loadGraphTensorInfo(const Qnn_Tensor_t* inputTensors, uin

outputTensorWrappers_.push_back(wrapper);
tensorWrapperMap_[tensorName] = wrapper;
// Record QNN output order (index in outputTensorWrappers_)
qnnOutputNameToIndex_[tensorName] = static_cast<int>(outputTensorWrappers_.size() - 1);
}

MLLM_INFO("QNNModel::loadGraphTensorInfo() loaded {} input tensors and {} output tensors for graph: {}", numInputTensors,
@@ -182,8 +180,6 @@ ModelError_t QNNModel::addTensorWrapper(const std::shared_ptr<QNNTensorWrapper>&
inputTensorWrappers_.push_back(tensorWrapper);
} else if (QNN_TENSOR_GET_TYPE(nativeTensor) == QNN_TENSOR_TYPE_APP_READ) {
outputTensorWrappers_.push_back(tensorWrapper);
// Record QNN output order (index in outputTensorWrappers_)
qnnOutputNameToIndex_[tensorName] = static_cast<int>(outputTensorWrappers_.size() - 1);
}

return MODEL_NO_ERROR;
19 changes: 0 additions & 19 deletions mllm/backends/qnn/QNNModel.hpp
@@ -76,21 +76,6 @@ class QNNModel {

std::map<std::string, std::vector<std::string>> getOutputTensorMap() { return modelOutputTensorMap_; }

// Set expected output order (MLLM order)
void setExpectedOutputOrder(const std::vector<std::string>& expectedOrder) { expectedOutputOrder_ = expectedOrder; }

// Get expected output order
[[nodiscard]] const std::vector<std::string>& getExpectedOutputOrder() const { return expectedOutputOrder_; }

// Get QNN output index by tensor name
[[nodiscard]] int getQnnOutputIndex(const std::string& tensorName) const {
auto it = qnnOutputNameToIndex_.find(tensorName);
if (it != qnnOutputNameToIndex_.end()) {
return it->second;
}
return -1; // Not found
}

// Load input/output tensor information from existing graph
ModelError_t loadGraphTensorInfo(const Qnn_Tensor_t* inputTensors, uint32_t numInputTensors,
const Qnn_Tensor_t* outputTensors, uint32_t numOutputTensors);
@@ -118,10 +103,6 @@ class QNNModel {

std::map<std::string, std::vector<std::string>> modelOutputTensorMap_;

// Output order mapping: MLLM expected order and QNN actual order
std::vector<std::string> expectedOutputOrder_; // MLLM expected output order (tensor names)
std::map<std::string, int> qnnOutputNameToIndex_; // QNN output tensor name -> index in outputTensorWrappers_

// Storage for node string parameters to ensure lifetime
struct NodeStringStorage {
std::string name;
7 changes: 6 additions & 1 deletion mllm/backends/qnn/QNNUtils.cpp
@@ -455,7 +455,9 @@ std::shared_ptr<QNNTensorWrapper> QNNTensorWrapper::create(const std::string& na
// it will be allocated to QNN shared buffer via QNNTensorWrapper::alloc() later
MLLM_RT_ASSERT(!name.empty());
// in AOT case, the tensor is all on CPU (TODO: handle this)
// if (type != QNN_TENSOR_TYPE_STATIC) { MLLM_RT_ASSERT(tensor.device() == kQNN); }
#ifndef MLLM_QUALCOMM_QNN_AOT_ON_X86_ENABLE
if (type != QNN_TENSOR_TYPE_STATIC) { MLLM_RT_ASSERT(tensor.device() == kQNN); }
#endif

Qnn_DataType_t dataType = mllmDataTypeToQnnDataType(tensor.dtype());

@@ -466,6 +468,9 @@

tensorWrapper->dataContainer_ = tensor;

// when passed allocated tensor, mark isAlloc_ = true
if (!tensor.isNil()) tensorWrapper->isAlloc_ = true;

return tensorWrapper;
}
