
fix(training): WASM contrastive loss + NAPI optimizer step#339

Merged
ruvnet merged 1 commit into main from fix/wasm-training-pipeline on Apr 7, 2026

Conversation


@ruvnet ruvnet commented Apr 7, 2026

Summary

Fixes training pipeline issues across WASM and NAPI bindings (ADR-145).

  • WASM: WasmInfoNCELoss::compute() failed on Float32Array[] input — replaced serde_wasm_bindgen deserialization with explicit js_sys::Float32Array conversion
  • NAPI: Added stepInPlace() to all 3 optimizers (SGD, Adam, AdamW) for zero-copy in-place parameter mutation via AsMut<[f32]>. Documented that step() returns a new array.
  • LoRA: Confirmed B=0 initialization is correct LoRA design (Hu et al. 2021) — no code change, documented in ADR-145

Test plan

  • cargo check -p ruvector-attention-wasm --target wasm32-unknown-unknown passes
  • cargo check -p ruvector-attention-node passes
  • npm publish triggered by next release tag (v-prefixed)

🤖 Generated with claude-flow

…tep semantics

ADR-145: Fix training pipeline issues across WASM and NAPI bindings.

WASM (ruvector-attention-wasm):
- Replace serde_wasm_bindgen deserialization of negatives param with
  explicit js_sys::Float32Array conversion. TypedArrays don't
  deserialize via serde — use js_sys::Array iteration instead.
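
  The conversion pattern can be sketched roughly as follows. Only the
  Float32Array[] handling is taken from this PR; the struct field, method
  signature, and placeholder loss body are illustrative assumptions, not
  the actual ruvector-attention-wasm code:

  ```rust
  use js_sys::{Array, Float32Array};
  use wasm_bindgen::prelude::*;
  use wasm_bindgen::JsCast;

  #[wasm_bindgen]
  pub struct WasmInfoNCELoss {
      temperature: f32, // assumed field, for illustration only
  }

  #[wasm_bindgen]
  impl WasmInfoNCELoss {
      /// `negatives` arrives from JS as Float32Array[]. TypedArrays do not
      /// round-trip through serde_wasm_bindgen, so instead of deserializing,
      /// iterate the outer js_sys::Array and cast each element explicitly.
      pub fn compute(&self, anchor: &Float32Array, negatives: &Array) -> Result<f32, JsValue> {
          let negatives: Vec<Vec<f32>> = negatives
              .iter()
              .map(|v| v.dyn_into::<Float32Array>().map(|a| a.to_vec()))
              .collect::<Result<_, JsValue>>()?;
          let _anchor = anchor.to_vec();
          let _ = (&negatives, self.temperature);
          Ok(0.0) // placeholder: actual InfoNCE computation elided
      }
  }
  ```

  A malformed element (e.g. a plain number in the array) now surfaces as a
  JS exception from the failed cast rather than a serde deserialization error.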

NAPI (ruvector-attention-node):
- Add stepInPlace() to SGD, Adam, AdamW optimizers for zero-copy
  in-place parameter mutation via Float32Array's AsMut<[f32]>
- Document that step() returns a NEW array (callers must use return)
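
  The two step semantics can be illustrated with a plain-Rust sketch; the
  napi-rs binding layer is omitted, and the SGD learning-rate update shown
  here is an assumed minimal example, not the crate's actual optimizer code.
  In the real binding, `params` would be a napi Float32Array mutated through
  its AsMut<[f32]> impl:

  ```rust
  /// Copying variant: returns a NEW vector; the caller's buffer is untouched,
  /// so callers must use the return value.
  fn step(params: &[f32], grads: &[f32], lr: f32) -> Vec<f32> {
      params.iter().zip(grads).map(|(p, g)| p - lr * g).collect()
  }

  /// Zero-copy variant: mutates the caller's buffer in place, returns nothing.
  fn step_in_place(params: &mut [f32], grads: &[f32], lr: f32) {
      for (p, g) in params.iter_mut().zip(grads) {
          *p -= lr * g;
      }
  }

  fn main() {
      let grads = [1.0_f32, 2.0];

      // step(): original stays frozen, update lives in the return value.
      let frozen = [1.0_f32, 1.0];
      let updated = step(&frozen, &grads, 0.1);
      assert_eq!(frozen, [1.0, 1.0]);
      assert_eq!(updated, vec![0.9, 0.8]);

      // stepInPlace(): the buffer itself is updated.
      let mut live = [1.0_f32, 1.0];
      step_in_place(&mut live, &grads, 0.1);
      assert_eq!(live, [0.9, 0.8]);
  }
  ```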

Note: LoRA B=0 initialization in learning-wasm is correct by design
(Hu et al. 2021) — documented in ADR-145, no code change needed.
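
The B=0 point is easy to verify numerically: LoRA computes y = (W + B·A)x,
and with B zeroed at init the product B·A vanishes, so the adapted model is
exactly the frozen base model at step 0. A minimal sketch with illustrative
dimensions and values (rank 1, 2x2 base weight), not the learning-wasm code:

```rust
/// LoRA forward y = (W + B·A) x with rank-1 adapters.
/// B is zeroed at init (Hu et al. 2021); A is initialized randomly.
fn lora_forward(w: &[[f32; 2]; 2], b: &[[f32; 1]; 2], a: &[[f32; 2]; 1], x: &[f32; 2]) -> [f32; 2] {
    let mut y = [0.0_f32; 2];
    for i in 0..2 {
        for j in 0..2 {
            // delta_ij = B[i][0] * A[0][j] (single rank component)
            let delta = b[i][0] * a[0][j];
            y[i] += (w[i][j] + delta) * x[j];
        }
    }
    y
}

fn main() {
    let w = [[1.0, 2.0], [3.0, 4.0]];
    let b = [[0.0], [0.0]]; // B = 0 at init
    let a = [[0.7, -0.3]];  // A nonzero, as if randomly initialized
    let x = [1.0, 1.0];
    // B = 0 makes the adapter term vanish: output equals the base W·x.
    assert_eq!(lora_forward(&w, &b, &a, &x), [3.0, 7.0]);
}
```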

Co-Authored-By: claude-flow <ruv@ruv.net>
@ruvnet ruvnet merged commit 3e67c72 into main on Apr 7, 2026
