

@taimur-10x

This PR extends the existing RISC-V Vector (RVV) floating-point support introduced in PR #15075 by adding new kernels.

Summary

  • Adds a BF16 RVV flag to ggml-cpu/CMakeLists.txt to enable the zvfbfwma extension.
  • Adds six new kernels for floating-point operations.
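With a flag of that kind, a cross-compiled RISC-V build opting into the extension might be configured like the sketch below. The option name GGML_RVV_BF16 and the toolchain-file name are hypothetical placeholders; the real option name is whatever this PR defines in ggml-cpu/CMakeLists.txt (GGML_RVV is the existing RVV switch).

```shell
# Sketch only: GGML_RVV_BF16 and riscv64-toolchain.cmake are placeholder
# names; substitute the flag actually added in ggml-cpu/CMakeLists.txt.
cmake -B build \
  -DCMAKE_TOOLCHAIN_FILE=riscv64-toolchain.cmake \
  -DGGML_RVV=ON \
  -DGGML_RVV_BF16=ON
cmake --build build
```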

Newly Added Kernels

  • ggml_vec_dot_bf16
  • ggml_vec_mad_f16
  • ggml_vec_scale_f16
  • ggml_vec_dot_f16_unroll
  • ggml_cpu_bf16_to_fp32
  • ggml_cpu_fp16_to_fp32

Testing

Kernels were functionally tested under QEMU at VLEN = 128, 256, 512, and 1024 bits across a range of input sizes.
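A user-mode QEMU invocation of that shape might look like the following sketch. The CPU property names (v, vlen, zvfbfwma) follow QEMU's rv64 CPU options, but zvfbfwma requires a QEMU build recent enough to expose it, and the test binary name here is a placeholder.

```shell
# Sketch: run a RISC-V test binary under user-mode QEMU at VLEN = 256 bits
# with the vector and zvfbfwma extensions enabled. ./test-kernels is a
# hypothetical binary name; adjust properties to your QEMU version.
qemu-riscv64 -cpu rv64,v=true,vlen=256,zvfbfwma=true ./test-kernels
```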

@taimur-10x taimur-10x changed the title [RISC-V] Extend support for RVV floating-point kernels ggml-cpu: extend support for RVV floating-point kernels Nov 17, 2025
@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Nov 17, 2025