
Fix KWS NODE_MEMREQ instance header underallocation #95

Open
goyalpalak18 wants to merge 1 commit into eembc:main from goyalpalak18:fix/kws-memreq-instance-size

Conversation

@goyalpalak18

While looking at the recent ABF memory patch (commit edc44cf), I noticed that the same under-allocation bug exists in the KWS component.

The Bug

In src/ee_kws.c, the NODE_MEMREQ handler computes the instance size with a magic number: sizeof(mfcc_instance_t) + 8. The + 8 was probably meant to cover two 32-bit pointers, but it ignores the chunk_idx field entirely and breaks on 64-bit platforms, where pointers are 8 bytes. There is even a /* TODO : justift this */ comment sitting right next to it.
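
To make the mismatch concrete, here is a minimal, self-contained sketch. The structs are hypothetical stand-ins (the real definitions live in the AudioMark sources); only the layout relationship described above, an embedded mfcc_instance_t followed by two pointers and a chunk_idx, matters for the illustration.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real AudioMark types; the field names are
 * invented and only the layout relationship matters here. */
typedef struct { uint8_t state[256]; } mfcc_instance_t;

typedef struct {
    mfcc_instance_t mfcc;   /* embedded as the first member              */
    int16_t *p_input;       /* pointer: 4 bytes on ILP32, 8 on LP64      */
    int8_t  *p_output;      /* pointer: 4 bytes on ILP32, 8 on LP64      */
    uint32_t chunk_idx;     /* not covered by the "+ 8" at all           */
} kws_instance_t;

int main(void)
{
    size_t allocated = sizeof(mfcc_instance_t) + 8; /* current NODE_MEMREQ math  */
    size_t required  = sizeof(kws_instance_t);      /* what actually gets stored */

    /* On an LP64 target the two pointers alone need 16 bytes, plus
     * chunk_idx and alignment padding, so allocated < required. */
    printf("allocated=%zu required=%zu shortfall=%zu\n",
           allocated, required,
           required > allocated ? required - allocated : (size_t)0);
    return 0;
}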

Because of this, the kws_instance_t struct overflows its heap block. The overflow eats into the 12-byte CMSIS vectorized-read safety padding, leading to undefined behavior. On 64-bit RISC-V evaluation targets it can silently corrupt heap metadata or cause sliding-window drift that invalidates benchmark scores entirely.
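
For rough numbers on an LP64 target (estimates based on the layout assumed above, not measured from the real headers):

/* Back-of-the-envelope, LP64, assuming two pointers + chunk_idx follow the
 * embedded mfcc_instance_t:
 *   extra state beyond mfcc_instance_t : 16 B (pointers) + 4 B (chunk_idx)
 *                                        + ~4 B alignment padding ~= 24 B
 *   extra bytes actually requested     : 8 B
 *   shortfall                          : ~16 B, more than the 12 B CMSIS
 *                                        guard region can absorb.
 */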

The Fix

I mirrored the ABF fix by replacing the manual calculation with a standard sizeof(kws_instance_t). Since the struct already includes mfcc_instance_t as its first member, this naturally accounts for all pointers, chunk_idx, and any compiler-inserted alignment padding across all architectures and pointer widths.

// Before:
uint32_t size = (3 * 4) // See note above
                + sizeof(mfcc_instance_t)
                + 8; /* TODO : justift this */

// After:
uint32_t size = (3 * 4) // See note above
                + sizeof(kws_instance_t);

This restores the full CMSIS 12-byte guard padding, eliminates the undefined behavior risk, and brings KWS up to the same correctness baseline as ABF.
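
If reviewers want these assumptions pinned down in the source, a C11 compile-time check could sit next to the NODE_MEMREQ handler in ee_kws.c. This is only a sketch, not part of the patch; the member name mfcc is hypothetical and would need to match the real first member of kws_instance_t.

#include <assert.h>  /* static_assert (C11) */
#include <stddef.h>  /* offsetof */

/* Sketch: documents the assumptions behind using sizeof(kws_instance_t).
 * Assumes the real kws_instance_t / mfcc_instance_t definitions are in scope. */
static_assert(offsetof(kws_instance_t, mfcc) == 0,
              "mfcc_instance_t must be the first member of kws_instance_t");
static_assert(sizeof(kws_instance_t) >= sizeof(mfcc_instance_t) + 8,
              "sizeof(kws_instance_t) must cover at least the old allocation");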

Signed-off-by: goyalpalak18 <goyalpalak1806@gmail.com>
@goyalpalak18
Author

Hey @llefaucheur @joseph-yiu, could you take a look at this when you have a chance? Thanks!
