RVXV: Generate Verification Infrastructure for Custom RISC-V AI Instructions #83
jyrj started this conversation in Show and Tell your RISC-V Work
Hi everyone,
I'm Jayaraj, working from Santa Cruz, California. I've been building a tool called RVXV that generates verification infrastructure for custom RISC-V AI vector instructions from YAML specifications. I started it as a hobby project.
The Problem
If you're adding custom vector instructions to a RISC-V core (think INT8 dot products for inference, BF16 FMA for training, quantized MAC for edge AI), you need to build a pile of verification infrastructure for each one: a simulator reference model, self-checking tests, assertions, coverage, and documentation.
Most of this is repetitive boilerplate -- the same vector loop structure, the same mask/vstart handling, the same MATCH/MASK encoding logic -- just with different opcodes and element types. Writing it by hand for every new instruction is tedious and error-prone.
What RVXV Does
You write one YAML spec describing your instruction's encoding, operands, and semantics. From that single spec, RVXV generates 13 files: Spike C++ extensions, self-checking assembly tests (directed + random with golden values), SVA assertion modules, coverage models, and markdown documentation.
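For illustration, a spec for an INT8 dot-product instruction might look roughly like this. The field names here are hypothetical, written for this post; check the repo for RVXV's actual schema:

```yaml
# Hypothetical RVXV-style spec (field names are illustrative only)
name: vdot8
description: INT8 dot product with 32-bit accumulate
encoding:
  opcode: CUSTOM_0      # custom-0 major opcode
  funct3: 0b000
  funct6: 0b000001
operands:
  vd:  vector_reg       # destination / accumulator
  vs1: vector_reg
  vs2: vector_reg
  vm:  mask_bit
semantics:
  operation: dot_product
  element_type: int8
  accumulator_type: int32
```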
Architecture
```mermaid
graph LR
    A["YAML Spec"] --> B["Parser &<br/>Validator"]
    B --> C["Instruction IR"]
    C --> D["Semantics<br/>Engine"]
    C --> E["Spike C++<br/>Generator"]
    C --> F["Assembly Test<br/>Generator"]
    C --> G["SVA Assertion<br/>Generator"]
    C --> H["Doc<br/>Generator"]
    D --> |"golden values"| F
    style A fill:#4a90d9,color:#fff
    style C fill:#e8a838,color:#fff
    style D fill:#50b050,color:#fff
```

The semantics engine is a Python reference model that computes bit-accurate golden values using a standalone numeric library (FP8 E4M3/E5M2, BFloat16, INT4, MX block formats). These golden values end up embedded in the generated assembly tests, so they're self-checking.
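To give a flavor of what "bit-accurate golden values" means here, below is a minimal sketch of two such helpers, written from scratch for this post rather than taken from RVXV's numeric library (which may round differently, e.g. round-to-nearest-even for BF16):

```python
import struct

def bf16_trunc(x: float) -> float:
    """Round a float32 value to BFloat16 by truncating the low 16 bits
    (round-toward-zero; a production model would likely use RNE)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

def int8_dot(a, b) -> int:
    """Golden value for an INT8 dot product accumulated in a 32-bit register,
    wrapping on overflow like two's-complement hardware."""
    acc = sum(int(x) * int(y) for x, y in zip(a, b)) & 0xFFFFFFFF
    return acc - (1 << 32) if acc & (1 << 31) else acc

print(int8_dot([127, 127, 127, 127], [127, 127, 127, 127]))  # 64516
print(bf16_trunc(1.2))                                       # 1.1953125
```

The point is that these values are computed once on the host and baked into the generated assembly as expected results, so the test needs no external checker.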
What Actually Works (v0.1.0)
Being honest about where things stand:
- Generated Spike C++ passes a compiler syntax check (`g++ -fsyntax-only`)
- Generated assembly tests build with `riscv64-linux-gnu-gcc -march=rv64gcv`

What Hasn't Been Proven Yet
Supported Operations
dot_product, fma, mac, multiply, add, fused_exp, convert, compare, outer_product, reduction_sum, reduction_max -- covering the common AI instruction patterns across INT8, INT4, FP8 (E4M3/E5M2), BFloat16, and standard integer/float types.
Links
```shell
git clone --recurse-submodules https://github.com/jyrj/rvxv.git && cd rvxv && pip install -e .
```

This is v0.1.0. I'd welcome feedback on the generated code quality, especially from anyone who works with Spike extensions or SVA in their verification flow. If you spot issues or have suggestions, feel free to open an issue on the repo or reach out at jayarajevur@gmail.com.
Tags:
AI/ML, Design Tools, Software, Hardware, ISA