
Conversation

edgchen1

  1. Use CMAKE_C_COMPILER_ARCHITECTURE_ID to determine the target processor when the compiler is MSVC. The existing logic only handles the Visual Studio generator; a case was added to support other generators such as Ninja.

  2. Replace existing usages of CMAKE_SYSTEM_PROCESSOR with CPUINFO_TARGET_PROCESSOR. One of these changes was required to build for ARM64EC with vcpkg (where it was hitting the warning below).

cpuinfo/CMakeLists.txt

Lines 91 to 97 in 8a92100

IF(NOT CMAKE_SYSTEM_PROCESSOR)
  IF(NOT IOS)
    MESSAGE(WARNING
      "Target processor architecture is not specified. "
      "cpuinfo will compile, but cpuinfo_initialize() will always fail.")
    SET(CPUINFO_SUPPORTED_PLATFORM FALSE)
  ENDIF()

Context: CMAKE_SYSTEM_PROCESSOR is known to have issues on Windows. See this issue. This comment suggests using CMAKE_<LANG>_COMPILER_ARCHITECTURE_ID.
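For illustration, a minimal CMake sketch of the approach, assuming the CPUINFO_TARGET_PROCESSOR variable this PR introduces and the architecture ID values MSVC typically reports; the mapping shown is illustrative, not the literal patch:

```cmake
# Prefer the compiler's architecture ID over CMAKE_SYSTEM_PROCESSOR when the
# compiler is MSVC, since CMake populates it for any generator
# (Visual Studio, Ninja, ...).
IF(MSVC AND CMAKE_C_COMPILER_ARCHITECTURE_ID)
  # Typical values include "X86", "x64", "ARM64", and "ARM64EC".
  IF(CMAKE_C_COMPILER_ARCHITECTURE_ID MATCHES "^ARM64(EC)?$")
    SET(CPUINFO_TARGET_PROCESSOR "arm64")
  ELSEIF(CMAKE_C_COMPILER_ARCHITECTURE_ID STREQUAL "x64")
    SET(CPUINFO_TARGET_PROCESSOR "AMD64")
  ELSE()
    SET(CPUINFO_TARGET_PROCESSOR "${CMAKE_C_COMPILER_ARCHITECTURE_ID}")
  ENDIF()
ELSE()
  SET(CPUINFO_TARGET_PROCESSOR "${CMAKE_SYSTEM_PROCESSOR}")
ENDIF()
```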

@meta-cla meta-cla bot added the cla signed label Aug 27, 2025
@edgchen1 edgchen1 marked this pull request as draft August 28, 2025 20:29
@edgchen1 edgchen1 marked this pull request as ready for review August 29, 2025 22:25
edgchen1 added a commit to microsoft/onnxruntime that referenced this pull request Aug 29, 2025
### Description

Re-enable cpuinfo for ARM64EC build and fix `CPUIDINFO_ARCH_ARM` so it
is actually used.

Patch cpuinfo to support vcpkg ARM64EC build. See
pytorch/cpuinfo#324.

### Motivation and Context

Fix for workaround in #25831.
preetha-intel added a commit to intel/onnxruntime that referenced this pull request Sep 1, 2025
* [CPU] Optimize GQA attention bias application for FP16 (microsoft#25871)

### Description

When using the attention bias input for the GQA op with FP16, platforms
that don't natively support FP16 math need a cast to fp32, so a temporary
buffer must be created to hold the fp32 values. The issue was that this
temporary buffer was being allocated and deallocated inside a loop, once
for every token being processed. Refactored the implementation so that
the allocation takes place only once.

Phi model throughput increased by 15%.
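The refactor follows a standard buffer-hoisting pattern; a simplified C++ sketch with hypothetical names (not the actual ORT code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the refactor: the fp32 scratch buffer is allocated once,
// outside the token loop, instead of being allocated and freed on
// every iteration.
void ApplyAttentionBias(const uint16_t* bias_fp16, std::size_t bias_len,
                        std::size_t num_tokens) {
  std::vector<float> scratch(bias_len);  // one allocation, reused per token
  for (std::size_t t = 0; t < num_tokens; ++t) {
    for (std::size_t i = 0; i < bias_len; ++i) {
      // Placeholder for the real fp16 -> fp32 conversion routine.
      scratch[i] = static_cast<float>(bias_fp16[i]);
    }
    // ... apply scratch to the attention scores for token t ...
  }
}
```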

* Fixes for DynamicQuantizeMatMul and Attention3D tests (microsoft#25814)

### Description
This change fixes correctness issues in two areas that were causing
failures in onnxruntime_test_all:

- DynamicQuantizeMatMul.WithConstantBInputs
- AttentionTest.Attention3DDefault
- AttentionTest.Attention3DWithPastAndPresentQkMatmul

What was wrong and how it’s fixed
1) DynamicQuantizeMatMul.WithConstantBInputs
- Root cause: The Kleidi dynamic quantization GEMM path could be
selected even when the B scales contained invalid values (zero,
negative, or non-finite). That violates kernel assumptions and can lead
to incorrect results.
- Fix: In
`onnxruntime/contrib_ops/cpu/quantization/dynamic_quantize_matmul.cc`,
we now explicitly validate that all B scales are finite and strictly
positive before enabling the Kleidi/MLAS dynamic path. If any scale is
invalid, we disable that path.
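A C++ sketch of that validation (assumed shape, not the literal ORT code):

```cpp
#include <cmath>
#include <cstddef>

// The Kleidi dynamic-quantization path is only enabled when every B scale
// is finite and strictly positive.
bool CanUseDynamicQuantPath(const float* b_scales, std::size_t count) {
  for (std::size_t i = 0; i < count; ++i) {
    if (!std::isfinite(b_scales[i]) || b_scales[i] <= 0.0f) {
      return false;  // zero, negative, NaN, or inf: use the default path
    }
  }
  return true;
}
```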

2) Attention tests (Attention3DDefault,
Attention3DWithPastAndPresentQkMatmul)
- Root causes in
`onnxruntime/core/mlas/lib/kleidiai/sgemm_kleidiai.cpp`:
  - Incorrect handling of GEMM corner cases for alpha/beta and K==0
(e.g., not respecting C = beta*C when alpha==0 or K==0).
  - Unnecessary or premature fallbacks for small shapes.
- Fixes:
  - Add early-outs for degenerate sizes: if M==0 or N==0, return handled.
  - Correctly implement alpha/beta semantics, as sketched below:
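For reference, the standard SGEMM contract is C = alpha*A*B + beta*C; the following C++ sketch shows the usual corner-case handling under assumed names (not the literal MLAS code):

```cpp
#include <cstddef>

// Standard SGEMM corner cases for C = alpha*A*B + beta*C. When alpha == 0
// or K == 0, the A*B term is empty but beta must still be applied to C;
// M == 0 or N == 0 means C has no elements at all.
bool HandleGemmCornerCases(std::size_t M, std::size_t N, std::size_t K,
                           float alpha, float beta,
                           float* C, std::size_t ldc) {
  if (M == 0 || N == 0) {
    return true;  // nothing to compute or scale
  }
  if (alpha == 0.0f || K == 0) {
    for (std::size_t i = 0; i < M; ++i) {
      for (std::size_t j = 0; j < N; ++j) {
        // beta == 0 overwrites C (avoids propagating NaNs from old C).
        C[i * ldc + j] = (beta == 0.0f) ? 0.0f : beta * C[i * ldc + j];
      }
    }
    return true;  // handled: C = beta*C
  }
  return false;  // general case: dispatch to the optimized kernel
}
```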

---------

Signed-off-by: Jonathan Clohessy <jonathan.clohessy@arm.com>

* Fix MoE CPP tests (microsoft#25877)

This change adds a test skip for the QMoE CPU tests when running on the
TensorRT or CUDA EP.
The QMoE kernel also had a memory-overwrite bug in the accumulation
step; fixing it brought the Python tests back to passing.

* [c++] Eliminate dynamic initialization of static Ort::Global<void>::api_ (microsoft#25741)

### Description

Delay the call to `OrtGetApiBase()` until the first call to
`Ort::GetApi()` so that `OrtGetApiBase()` is typically called after
dynamic library loading.

### Motivation and Context

When ORT_API_MANUAL_INIT is not defined (which is the default), the
static `Ort::Global<void>::api_` has a dynamic initializer that calls
`OrtGetApiBase()->GetApi(ORT_API_VERSION)`. This dynamic initialization
can cause problems when it interacts with other global/static
initialization. On Windows in particular, it can also cause deadlocks
when used in a dynamic library if `OrtGetApiBase()->GetApi()` attempts to
load any other libraries.

* Replace the templated `Global<void>::api_` with an inline static
initialized to nullptr.
* `Ort::GetApi()` now calls `detail::Global::GetApi()` which calls
`detail::Global::DefaultInit()` if initialization is needed.
* When `ORT_API_MANUAL_INIT` is defined, `DefaultInit()` returns
nullptr, which will eventually cause the program to crash. The callers
have violated the initialization contract by not calling one of the
`Ort::InitApi` overloads.
* When `ORT_API_MANUAL_INIT` is not defined, `DefaultInit()` uses a
function-level static to compute the result of
`OrtGetApiBase()->GetApi(ORT_API_VERSION)` once and return it.
* `Ort::Global<void>` has been replaced with a non-templated type and
moved inside a `detail` namespace. Since the `Global<void>` object was
documented as internal, these changes are believed to be non-breaking,
as they do not impact a public API. The public APIs `Ort::InitApi()`
and `Ort::InitApi(const OrtApi*)` remain unchanged.
* Add `#pragma detect_mismatch` to surface issues with compilation units
that disagree on how ORT_API_MANUAL_INIT is defined. (MSVC only.)
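A condensed C++ sketch of the pattern described above (simplified, not the verbatim header code):

```cpp
#include <onnxruntime_c_api.h>

namespace Ort {
namespace detail {
struct Global {
  // Inline static initialized to nullptr: no dynamic initializer runs.
  static inline const OrtApi* api_ = nullptr;

  static const OrtApi* DefaultInit() {
#ifdef ORT_API_MANUAL_INIT
    return nullptr;  // caller must invoke an Ort::InitApi overload first
#else
    // Function-level static: OrtGetApiBase()->GetApi() runs once, on first
    // use, typically well after dynamic library loading has completed.
    static const OrtApi* api = OrtGetApiBase()->GetApi(ORT_API_VERSION);
    return api;
#endif
  }

  static const OrtApi& GetApi() {
    if (api_ == nullptr) api_ = DefaultInit();
    return *api_;  // crashes here if the manual-init contract was violated
  }
};
}  // namespace detail

inline const OrtApi& GetApi() { return detail::Global::GetApi(); }
}  // namespace Ort
```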

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Python GPU IO bindings for NVIDIA (microsoft#25776)

### Description
1. A small change to use the shared allocator in the Python bindings.
2. Remove FP64 support from the EP.


### Motivation and Context

The Python GPU IO binding is necessary for performance. This change
enables the shared allocator for GPU allocations.
FP64 was actually running as FP32 inference, so it was removed to align
with TensorRT RTX support.

---------

Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>

* [CANN] Add an `enable_cann_subgraph` feature parameter (microsoft#25867)

### Description

Add an `enable_cann_subgraph` feature parameter. This parameter controls
whether graph splitting is performed and can help quickly identify
issues in certain scenarios.

* [EP ABI] Add OpAttr_GetTensorAttributeAsOrtValue and replace the existing Node_GetTensorAttributeAsOrtValue (microsoft#25886)

### Description
Replace `Node_GetTensorAttributeAsOrtValue` with
`OpAttr_GetTensorAttributeAsOrtValue`.
Change the API signature to make it one of the `OpAttr` interfaces
instead of the `OrtNode` interface.

The original API was added
[here](microsoft#25566).

* Language bindings for model compatibility API (microsoft#25878)

### Description
This change builds on top of microsoft#25841 and adds the scaffolding
necessary to call into this API from C++ / C# / Python.

### Motivation and Context
microsoft#25454 talks more about the broader notion of precompiled model
compatibility. This change is directed at app developers whose apps may
want to determine if a particular precompiled model (e.g. on a server
somewhere) is compatible with the device where the application is
running. There is functionality in `OrtEpFactory` for making this
determination, which was exposed as a C API in microsoft#25841, and this change
makes the API more broadly available in other languages.

### Testing and Validation
Introduced new unit test cases across each language, and verified that
the API was being called and returned the correct result for the default
CPU EP.

---------

Co-authored-by: Aditya Rastogi <adityar@ntdev.microsoft.com>

* [QNN-EP] Introduce Level1 Transformer into qnn.preprocess (microsoft#25883)

### Description
- Introduce Level1 Transformer into qnn.preprocess to support various optimizations.

### Motivation and Context
- This change brings in several useful optimizations such as `ConvBnFusion` and `ConstantFolding`, which are part of
`TransformerLevel::Level1` and can benefit QNN EP.
- The goal is to optimize the ONNX model before quantization by integrating these passes into the Python tooling workflow.

* [QNN EP] Minor fix weight name missing when not valid QDQ node group (microsoft#25887)

### Description
Minor fix for a weight name going missing when a node group is not a valid QDQ node group.

### Motivation and Context
Some quantized models fail QDQ node group validation, so their weights won't be folded as initializers. QNN EP then failed to handle the resulting dynamic weights due to the transpose op input-name lookup. This change makes sure we process the weight tensor before adding transposes.

* Add custom ops library_path to EP metadata (microsoft#25830)

## Summary
Adds EP metadata library path support to enable custom ops DLL
registration with proper path resolution.

## Changes
- Added `library_path` metadata key to EP metadata infrastructure
- Pass resolved library path directly to `EpLibraryProviderBridge`
constructor
- Simplified implementation per reviewer feedback (removed virtual
method complexity)
- Added `#include <utility>` for std::move compliance

## Purpose
Enables downstream applications (like onnxruntime-genai) to resolve
relative custom ops library paths using EP metadata, improving DLL
registration reliability.

## Files Modified
- `plugin_ep/ep_factory_provider_bridge.h`
- `plugin_ep/ep_library.h` 
- `plugin_ep/ep_library_plugin.h`
- `plugin_ep/ep_library_provider_bridge.cc`
- `plugin_ep/ep_library_provider_bridge.h`
- `utils.cc`

* [OVEP] OpenVINO EP Features and bug-fixes for ORT-1.23  (microsoft#25884)

### Description
This update introduces multiple improvements, fixes, and feature enhancements to the OpenVINO Execution Provider (OVEP) and related components in ONNX Runtime:

#### Configuration & Properties

- Updated load_config mapping to act as a passthrough to OpenVINO properties.
- Added support for providing layout information to inputs/outputs in OpenVINO.

#### Inference & Tensor Handling

- Improved OVInferRequest::SetTensor to correctly handle cached binding shape mismatches.
- Added support for self-detecting on-the-fly bfloat16 → float16 conversion.
- Fixed issues with input ONNX models when used with shared execution contexts.

#### Model Handling & Operator Support

- Fixed model copying behavior for QDQ stripping.
- Updated operator support status for OpenVINO 2025.2.

#### Platform & Integration Fixes

- Applied multiple PSU Lora fixes and related updates.
- Resolved filename confusion issues with wrapped OVIRs in EPCtx.
- Enabled memory-mapped native binaries for OpenVINO 2025.3.

#### Quality & Maintenance

- Addressed linting issues.
- Fixed coverage gaps in OVEP.
- Added a new test script for OpenVINO with ORT ABI integration.

---------

Co-authored-by: Ankit Maheshkar <ankit.maheshkar@intel.com>
Co-authored-by: Ryan Metcalfe <107415876+RyanMetcalfeInt8@users.noreply.github.com>
Co-authored-by: Klimenko, Mikhail <mikhail.klimenko@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Garth Long <garth.long@intel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: MayureshV1 <47039074+MayureshV1@users.noreply.github.com>
Co-authored-by: Eric Crawford <eric.r.crawford@intel.com>
Co-authored-by: jatinwadhwa921 <110383850+jatinwadhwa921@users.noreply.github.com>
Co-authored-by: Vishnudas Thaniel S <vishnudas.thaniel.s@intel.com>
Co-authored-by: Javier Martinez <javier.e.martinez@intel.com>

* [java] Auto EP and compile model support (microsoft#25131)

### Description
Java API for compile model and EP discovery APIs. Roughly equivalent to
the C# version in microsoft#24604.

cc: @skottmckay.

I haven't quite got the CMake configured, so the Java tests for the EP
registration only run when the ONNX Runtime shared provider support is
built, but everything else works. I expect that to be a quick fix, but
I'm not sure under what conditions it should be built or how we should
handle it, so I don't know where/when to plumb it through.

### Motivation and Context
API parity for Java.

* Add error handling to extract_nuget_files.ps1 (microsoft#25866)

### Description
1. Check the process exit code when running 7z.exe. Previously, errors
were silently ignored.
2. Add the snld20 flag to the 7z.exe commands, which is needed for
compatibility with the latest 7z release.

* [Fix] illegal memory access in GetInputIndices with optional inputs (microsoft#25881)

### Description
Fix illegal memory access in GetInputIndices with optional inputs

### Motivation and Context
When an input is optional, its ValueInfo may be nullptr. 
The current implementation directly calls InputValueInfo->GetName(), leading to illegal memory access.

Update logic to skip optional inputs when valueInfo is nullptr .
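A minimal C++ sketch of the guard, with a hypothetical stand-in type:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for the EP ABI value-info type.
struct ValueInfo {
  std::string name;
  const std::string& GetName() const { return name; }
};

// Sketch of the fix: optional inputs appear as nullptr entries, so they
// must be skipped before dereferencing.
std::vector<std::string> CollectInputNames(
    const std::vector<const ValueInfo*>& inputs) {
  std::vector<std::string> names;
  for (const ValueInfo* vi : inputs) {
    if (vi == nullptr) {
      continue;  // optional input not provided: skip instead of crashing
    }
    names.push_back(vi->GetName());
  }
  return names;
}
```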

* Re-enable cpuinfo for ARM64EC (microsoft#25863)

### Description

Re-enable cpuinfo for ARM64EC build and fix `CPUIDINFO_ARCH_ARM` so it
is actually used.

Patch cpuinfo to support vcpkg ARM64EC build. See
pytorch/cpuinfo#324.

### Motivation and Context

Fix for workaround in microsoft#25831.

---------

Signed-off-by: Jonathan Clohessy <jonathan.clohessy@arm.com>
Co-authored-by: derdeljan-msft <derdeljan@microsoft.com>
Co-authored-by: Jonathan Clohessy <jonathan.clohessy@arm.com>
Co-authored-by: Akshay Sonawane <111780983+apsonawane@users.noreply.github.com>
Co-authored-by: Christopher Warrington <chwarr@microsoft.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Ishwar Raut <iraut@nvidia.com>
Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>
Co-authored-by: Xinpeng Dou <15529241576@163.com>
Co-authored-by: Chi Lo <54722500+chilo-ms@users.noreply.github.com>
Co-authored-by: adrastogi <aditya.rastogi@microsoft.com>
Co-authored-by: Aditya Rastogi <adityar@ntdev.microsoft.com>
Co-authored-by: qti-hungjuiw <hungjuiw@qti.qualcomm.com>
Co-authored-by: qti-yuduo <yuduow@qti.qualcomm.com>
Co-authored-by: Pradeep Sakhamoori <psakhamoori@microsoft.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: Ankit Maheshkar <ankit.maheshkar@intel.com>
Co-authored-by: Ryan Metcalfe <107415876+RyanMetcalfeInt8@users.noreply.github.com>
Co-authored-by: Klimenko, Mikhail <mikhail.klimenko@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Garth Long <garth.long@intel.com>
Co-authored-by: MayureshV1 <47039074+MayureshV1@users.noreply.github.com>
Co-authored-by: Eric Crawford <eric.r.crawford@intel.com>
Co-authored-by: jatinwadhwa921 <110383850+jatinwadhwa921@users.noreply.github.com>
Co-authored-by: Vishnudas Thaniel S <vishnudas.thaniel.s@intel.com>
Co-authored-by: Javier Martinez <javier.e.martinez@intel.com>
Co-authored-by: Adam Pocock <adam.pocock@oracle.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: mingyue <131847423+mingyueliuh@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Jaswanth51 pushed a commit to intel/onnxruntime that referenced this pull request Sep 3, 2025