13 changes: 11 additions & 2 deletions README.md
@@ -1,11 +1,20 @@
# π RuView

<p align="center">
<a href="https://ruvnet.github.io/RuView/">
<a href="https://x.com/rUv/status/2037556932802761004">
<img src="assets/ruview-small-gemini.jpg" alt="RuView - WiFi DensePose" width="100%">
</a>
</p>

> **Alpha Software** — This project is under active development. APIs, firmware behavior, and documentation may change. Known limitations:
> - Multi-node person counting may show identical output regardless of the number of people (#249)
> - Training pipeline on MM-Fi dataset may plateau at low PCK (#318) — hyperparameter tuning in progress
> - No pre-trained model weights are provided; training from scratch is required
> - ESP32-C3 and original ESP32 are not supported (single-core, insufficient for CSI DSP)
> - Single ESP32 deployments have limited spatial resolution
>
> Contributions and bug reports welcome at [Issues](https://github.com/ruvnet/RuView/issues).

## See through walls with WiFi + AI

**Perceive the world through signals.** No cameras. No wearables. No Internet. Just physics.
@@ -14,7 +23,7 @@

Instead of relying on cameras or cloud models, it observes whatever signals exist in a space (WiFi, radio waves across the spectrum, motion patterns, vibration, sound, or other sensory inputs) and builds an understanding of what is happening locally.

Built on top of [RuVector](https://github.com/ruvnet/ruvector/), the project became widely known for its implementation of WiFi DensePose — a sensing technique first explored in academic research such as Carnegie Mellon University's *DensePose From WiFi* work. That research demonstrated that WiFi signals can be used to reconstruct human pose.
Built on top of the [RuVector](https://github.com/ruvnet/ruvector/) self-learning vector memory system and [Cognitum.One](https://Cognitum.One), the project became widely known for its implementation of WiFi DensePose — a sensing technique first explored in academic research such as Carnegie Mellon University's *DensePose From WiFi* work. That research demonstrated that WiFi signals can be used to reconstruct human pose.

RuView extends that concept into a practical edge system. By analyzing Channel State Information (CSI) disturbances caused by human movement, RuView reconstructs body position, breathing rate, heart rate, and presence in real time using physics-based signal processing and machine learning.

151 changes: 151 additions & 0 deletions docs/adr/ADR-067-ruvector-v2.0.5-upgrade.md
@@ -0,0 +1,151 @@
# ADR-067: RuVector v2.0.4 to v2.0.5 Upgrade + New Crate Adoption

**Status:** Proposed
**Date:** 2026-03-23
**Deciders:** @ruvnet
**Related:** ADR-016 (RuVector training pipeline integration), ADR-017 (RuVector signal + MAT integration), ADR-029 (RuvSense multistatic sensing)

## Context

RuView currently pins all five core RuVector crates at **v2.0.4** (from crates.io) plus a vendored `ruvector-crv` v0.1.1 and optional `ruvector-gnn` v2.0.5. The upstream RuVector workspace has moved to **v2.0.5** with meaningful improvements to the crates we depend on, and has introduced new crates that could benefit RuView's detection pipeline.

### Current Integration Map

| RuView Module | RuVector Crate | Current Version | Purpose |
|---------------|----------------|-----------------|---------|
| `signal/subcarrier.rs` | ruvector-mincut | 2.0.4 | Graph min-cut subcarrier partitioning |
| `signal/spectrogram.rs` | ruvector-attn-mincut | 2.0.4 | Attention-gated spectrogram denoising |
| `signal/bvp.rs` | ruvector-attention | 2.0.4 | Attention-weighted BVP aggregation |
| `signal/fresnel.rs` | ruvector-solver | 2.0.4 | Fresnel geometry estimation |
| `mat/triangulation.rs` | ruvector-solver | 2.0.4 | TDoA survivor localization |
| `mat/breathing.rs` | ruvector-temporal-tensor | 2.0.4 | Tiered compressed breathing buffer |
| `mat/heartbeat.rs` | ruvector-temporal-tensor | 2.0.4 | Tiered compressed heartbeat spectrogram |
| `viewpoint/*` (4 files) | ruvector-attention | 2.0.4 | Cross-viewpoint fusion with geometric bias |
| `crv/` (optional) | ruvector-crv | 0.1.1 (vendored) | CRV protocol integration |
| `crv/` (optional) | ruvector-gnn | 2.0.5 | GNN graph topology |

### What Changed Upstream (v2.0.4 → v2.0.5 → HEAD)

**ruvector-mincut:**
- Flat capacity matrix + allocation reuse — **10-30% faster** for all min-cut operations
- Tier 2-3 Dynamic MinCut (ADR-124): Gomory-Hu tree construction for fast global min-cut, incremental edge insert/delete without full recomputation
- Source-anchored canonical min-cut with SHA-256 witness hashing
- Fixed: unsafe indexing removed, WASM Node.js panic from `std::time`

**ruvector-attention / ruvector-attn-mincut:**
- Migrated to workspace versioning (no API changes)
- Documentation improvements

**ruvector-temporal-tensor:**
- Formatting fixes only (no API changes)

**ruvector-gnn:**
- Panic replaced with `Result` in `MultiHeadAttention` and `RuvectorLayer` constructors (breaking improvement — safer)
- Bumped to v2.0.5

**sona (new — Self-Optimizing Neural Architecture):**
- v0.1.6 → v0.1.8: state persistence (`loadState`/`saveState`), trajectory counter fix
- Micro-LoRA and Base-LoRA for instant and background learning
- EWC++ (Elastic Weight Consolidation) to prevent catastrophic forgetting
- ReasoningBank pattern extraction and similarity search
- WASM support for edge devices

**ruvector-coherence (new):**
- Spectral coherence scoring for graph index health
- Fiedler eigenvalue estimation, effective resistance sampling
- HNSW health monitoring with alerts
- Batch evaluation of attention mechanism quality

**ruvector-core (new):**
- ONNX embedding support for real semantic embeddings
- HNSW index with SIMD-accelerated distance metrics
- Quantization (4-32x memory reduction)
- Arena allocator for cache-optimized operations

## Decision

### Phase 1: Version Bump (Low Risk)

Bump the 5 core crates from v2.0.4 to v2.0.5 in the workspace `Cargo.toml`:

```toml
ruvector-mincut = "2.0.5" # was 2.0.4 — 10-30% faster, safer
ruvector-attn-mincut = "2.0.5" # was 2.0.4 — workspace versioning
ruvector-temporal-tensor = "2.0.5" # was 2.0.4 — fmt only
ruvector-solver = "2.0.5" # was 2.0.4 — workspace versioning
ruvector-attention = "2.0.5" # was 2.0.4 — workspace versioning
```

**Expected impact:** The mincut performance improvement directly benefits `signal/subcarrier.rs`, which runs subcarrier graph partitioning every tick; 10-30% faster partitioning reduces per-frame CPU cost.

### Phase 2: Add ruvector-coherence (Medium Value)

Add `ruvector-coherence` with `spectral` feature to `wifi-densepose-ruvector`:

**Use case:** Replace or augment the custom phase coherence logic in `viewpoint/coherence.rs` with spectral graph coherence scoring. The current implementation uses phasor magnitude for phase coherence — spectral Fiedler estimation would provide a more robust measure of multi-node CSI consistency, especially for detecting when a node's signal quality degrades.

**Integration point:** `viewpoint/coherence.rs` — add `SpectralCoherenceScore` as a secondary coherence metric alongside existing phase phasor coherence. Use spectral gap estimation to detect structural changes in the multi-node CSI graph (e.g., a node dropping out or a new reflector appearing).
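
The intuition can be shown without the `ruvector-coherence` API (whose exact interface is not reproduced here): the Fiedler value — the second-smallest eigenvalue of the graph Laplacian — drops toward zero as the multi-node CSI graph approaches disconnection, so it works as a structural-health score. A self-contained sketch using shifted power iteration, standing in for the crate's spectral estimator:

```rust
/// Estimate the Fiedler value (algebraic connectivity) of a small graph
/// from its Laplacian matrix via shifted power iteration. Illustrative
/// stand-in for ruvector-coherence's spectral scoring, not its code.
fn fiedler_value(laplacian: &[Vec<f64>]) -> f64 {
    let n = laplacian.len();
    // Shift s = 2 * max degree bounds the Laplacian spectrum, so the
    // largest deflated eigenvalue of (s*I - L) corresponds to lambda_2.
    let s = 2.0 * laplacian.iter().enumerate().map(|(i, row)| row[i]).fold(0.0, f64::max);
    // Start orthogonal to the all-ones kernel of L (symmetric ramp sums to 0).
    let mut v: Vec<f64> = (0..n).map(|i| i as f64 - (n as f64 - 1.0) / 2.0).collect();
    for _ in 0..500 {
        // w = (s*I - L) v
        let mut w = vec![0.0; n];
        for i in 0..n {
            w[i] = s * v[i];
            for j in 0..n {
                w[i] -= laplacian[i][j] * v[j];
            }
        }
        // Re-project out the all-ones eigenvector (eigenvalue s).
        let mean = w.iter().sum::<f64>() / n as f64;
        for x in w.iter_mut() { *x -= mean; }
        let norm = w.iter().map(|x| x * x).sum::<f64>().sqrt();
        if norm < 1e-12 { return 0.0; } // degenerate iterate
        for x in w.iter_mut() { *x /= norm; }
        v = w;
    }
    // Rayleigh quotient v^T L v of the (unit-norm) iterate gives lambda_2.
    let mut lv = vec![0.0; n];
    for i in 0..n {
        for j in 0..n {
            lv[i] += laplacian[i][j] * v[j];
        }
    }
    v.iter().zip(&lv).map(|(a, b)| a * b).sum()
}

fn main() {
    // Path graph 0-1-2: Laplacian eigenvalues are {0, 1, 3}.
    let path = vec![
        vec![1.0, -1.0, 0.0],
        vec![-1.0, 2.0, -1.0],
        vec![0.0, -1.0, 1.0],
    ];
    println!("Fiedler value of P3 = {:.3}", fiedler_value(&path)); // ~1.000
}
```

A node dropping out of the multi-node CSI graph would push this value toward zero, which is the structural-change signal the secondary metric is meant to catch.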

### Phase 3: Add SONA for Adaptive Learning (High Value)

Replace the logistic regression adaptive classifier in the sensing server with a SONA-backed learning engine:

**Current state:** The sensing server's adaptive training (`POST /api/v1/adaptive/train`) uses a hand-rolled logistic regression on 15 CSI features. It requires explicit labeled recordings and provides no cross-session persistence.

**Proposed improvement:** Use `sona::SonaEngine` to:
1. **Learn from implicit feedback** — trajectory tracking on person-count decisions (was the count stable? did the user correct it?)
2. **Persist across sessions** — `saveState()`/`loadState()` replaces the current `adaptive_model.json`
3. **Pattern matching** — `find_patterns()` enables "this CSI signature looks like room X where we learned Y"
4. **Prevent forgetting** — EWC++ ensures learning in a new room doesn't overwrite patterns from previous rooms

**Integration point:** New `adaptive_sona.rs` module in `wifi-densepose-sensing-server`, behind a `sona` feature flag. The existing logistic regression remains the default.
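
One way to keep the logistic regression as the default while making the backend swappable is a small trait behind the feature flag. This is a sketch only: the field names, the trait shape, and the commented-out SONA wiring are assumptions, and `sona::SonaEngine`'s real API should be checked against the crate before implementing:

```rust
/// Backend abstraction: the existing logistic regression stays the
/// default; a SONA-backed engine can slot in behind a feature flag.
trait AdaptiveBackend {
    fn train(&mut self, features: &[f64], label: f64);
    fn predict(&self, features: &[f64]) -> f64;
}

/// Hand-rolled logistic regression over CSI features (default path).
struct LogisticBackend {
    weights: Vec<f64>,
    bias: f64,
    lr: f64,
}

impl LogisticBackend {
    fn new(n_features: usize) -> Self {
        Self { weights: vec![0.0; n_features], bias: 0.0, lr: 0.1 }
    }
}

impl AdaptiveBackend for LogisticBackend {
    fn train(&mut self, features: &[f64], label: f64) {
        let err = self.predict(features) - label; // gradient of log-loss
        for (w, x) in self.weights.iter_mut().zip(features) {
            *w -= self.lr * err * x;
        }
        self.bias -= self.lr * err;
    }

    fn predict(&self, features: &[f64]) -> f64 {
        let z: f64 = self.bias
            + self.weights.iter().zip(features).map(|(w, x)| w * x).sum::<f64>();
        1.0 / (1.0 + (-z).exp()) // sigmoid -> probability of "person present"
    }
}

// #[cfg(feature = "sona")]
// struct SonaBackend { /* wraps sona::SonaEngine; API to be confirmed */ }

fn main() {
    let mut backend = LogisticBackend::new(2);
    // Toy separable data standing in for labeled CSI feature vectors.
    for _ in 0..200 {
        backend.train(&[1.0, 0.5], 1.0);
        backend.train(&[-1.0, -0.5], 0.0);
    }
    println!("p(person) = {:.3}", backend.predict(&[1.0, 0.5]));
}
```

With this shape, the server selects the backend once at startup and the rest of the adaptive pipeline is backend-agnostic.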

### Phase 4: Evaluate ruvector-core for CSI Embeddings (Exploratory)

**Current state:** The person detection pipeline uses hand-crafted features (variance, change_points, motion_band_power, spectral_power) with fixed normalization ranges.

**Potential:** Use `ruvector-core`'s ONNX embedding support to generate learned CSI embeddings that capture room geometry, person count, and activity patterns in a single vector. This would enable:
- Similarity search: "is this CSI frame similar to known 2-person patterns?"
- Transfer learning: embeddings learned in one room partially transfer to similar rooms
- Quantized storage: 4-32x memory reduction for pattern databases

**Status:** Exploratory — requires training data collection and embedding model design. Not a near-term target.
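
The similarity-search idea reduces to nearest-neighbor lookup over embedding vectors. A minimal sketch with plain cosine similarity — the vectors and labels here are toy stand-ins; in the proposal they would come from ruvector-core's ONNX embedder and quantized pattern store:

```rust
/// Cosine similarity between two CSI embedding vectors.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

/// Return the label of the most similar stored pattern, answering
/// "is this CSI frame similar to known 2-person patterns?"
fn nearest_label<'a>(query: &[f64], db: &'a [(Vec<f64>, &'a str)]) -> &'a str {
    db.iter()
        .max_by(|(a, _), (b, _)| {
            cosine(query, a).partial_cmp(&cosine(query, b)).unwrap()
        })
        .map(|(_, label)| *label)
        .unwrap()
}

fn main() {
    let db = vec![
        (vec![1.0, 0.0, 0.2], "empty-room"),
        (vec![0.1, 1.0, 0.8], "two-person"),
    ];
    // A query frame close to the two-person pattern.
    println!("{}", nearest_label(&[0.0, 0.9, 0.7], &db)); // two-person
}
```

The open research question is not the lookup but the embedding model itself: whether learned CSI embeddings cluster by person count and transfer across rooms.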

## Consequences

### Positive
- **Phase 1:** Free 10-30% performance gain in subcarrier partitioning. Security fixes (unsafe indexing, WASM panic). Zero API changes required.
- **Phase 2:** More robust multi-node coherence detection. Helps with the "flickering persons" issue (#292) by providing a second opinion on signal quality.
- **Phase 3:** Fundamentally improves the adaptive learning pipeline. Users no longer need to manually record labeled data — the system learns from ongoing use.
- **Phase 4:** Path toward real ML-based detection instead of heuristic thresholds.

### Negative
- **Phase 1:** Minimal risk — semver minor bump, no API breaks.
- **Phase 2:** Adds a dependency. Spectral computation has O(n) cost per tick for Fiedler estimation (n = number of subcarriers, typically 56-128). Acceptable.
- **Phase 3:** SONA adds ~200KB to the binary. The learning loop needs careful tuning to avoid adapting to noise.
- **Phase 4:** Requires significant research and training data. Not guaranteed to outperform tuned heuristics for WiFi CSI.

### Risks
- `ruvector-gnn` v2.0.5 changed constructors from panic to `Result` — any existing `crv` feature users need to handle the `Result`. Our vendored `ruvector-crv` may need updates.
- SONA's WASM support is experimental — keep it behind a feature flag until validated.

## Implementation Plan

| Phase | Scope | Effort | Priority |
|-------|-------|--------|----------|
| 1 | Bump 5 crates to v2.0.5 | 1 hour | High — free perf + security |
| 2 | Add ruvector-coherence | 1 day | Medium — improves multi-node stability |
| 3 | SONA adaptive learning | 3 days | Medium — replaces manual training workflow |
| 4 | CSI embeddings via ruvector-core | 1-2 weeks | Low — exploratory research |

## Vendor Submodule

The `vendor/ruvector` git submodule has been updated from commit `f8f2c60` (v2.0.4 era) to `51a3557` (latest `origin/main`). This provides local reference for the full upstream source when developing Phases 2-4.

## References

- Upstream repo: https://github.com/ruvnet/ruvector
- ADR-124 (Dynamic MinCut): `vendor/ruvector/docs/adr/ADR-124*.md`
- SONA docs: `vendor/ruvector/crates/sona/src/lib.rs`
- ruvector-coherence spectral: `vendor/ruvector/crates/ruvector-coherence/src/spectral.rs`
- ruvector-core embeddings: `vendor/ruvector/crates/ruvector-core/src/embeddings.rs`
182 changes: 182 additions & 0 deletions docs/adr/ADR-068-per-node-state-pipeline.md
@@ -0,0 +1,182 @@
# ADR-068: Per-Node State Pipeline for Multi-Node Sensing

| Field | Value |
|------------|-------------------------------------|
| Status | Accepted |
| Date | 2026-03-27 |
| Authors | rUv, claude-flow |
| Drivers | #249, #237, #276, #282 |
| Supersedes | — |

## Context

The sensing server (`wifi-densepose-sensing-server`) was originally designed for
single-node operation. When multiple ESP32 nodes send CSI frames simultaneously,
all data is mixed into a single shared pipeline:

- **One** `frame_history` VecDeque for all nodes
- **One** `smoothed_person_score` / `smoothed_motion` / vital sign buffers
- **One** baseline and debounce state

This means the classification, person count, and vital signs reported to the UI
are an uncontrolled aggregate of all nodes' data. The result: the detection
window shows identical output regardless of how many nodes are deployed, where
people stand, or how many people are in the room (#249 — 24 comments, the most
reported issue).

### Root Cause Verified

Investigation of `AppStateInner` (main.rs lines 279-367) confirmed:

| Shared field | Impact |
|---------------------------|--------------------------------------------|
| `frame_history` | Temporal analysis mixes all nodes' CSI data |
| `smoothed_person_score` | Person count aggregates all nodes |
| `smoothed_motion` | Motion classification undifferentiated |
| `smoothed_hr` / `br` | Vital signs are global, not per-node |
| `baseline_motion` | Adaptive baseline learned from mixed data |
| `debounce_counter` | All nodes share debounce state |

## Decision

Introduce **per-node state tracking** via a `HashMap<u8, NodeState>` in
`AppStateInner`. Each ESP32 node (identified by its `node_id` byte) gets an
independent sensing pipeline with its own temporal history, smoothing buffers,
baseline, and classification state.

### Architecture

```
┌─────────────────────────────────────────┐
UDP frames │ AppStateInner │
───────────► │ │
node_id=1 ──► │ node_states: HashMap<u8, NodeState> │
node_id=2 ──► │ ├── 1: NodeState { frame_history, │
node_id=3 ──► │ │ smoothed_motion, vitals, ... }│
│ ├── 2: NodeState { ... } │
│ └── 3: NodeState { ... } │
│ │
│ ┌── Per-Node Pipeline ──┐ │
│ │ extract_features() │ │
│ │ smooth_and_classify() │ │
│ │ smooth_vitals() │ │
│ │ score_to_person_count()│ │
│ └────────────────────────┘ │
│ │
│ ┌── Multi-Node Fusion ──┐ │
│ │ Aggregate person count │ │
│ │ Per-node classification│ │
│ │ All-nodes WebSocket msg│ │
│ └────────────────────────┘ │
│ │
│ ──► WebSocket broadcast (sensing_update) │
└─────────────────────────────────────────┘
```

### NodeState Struct

```rust
struct NodeState {
    // Temporal CSI history for feature extraction
    frame_history: VecDeque<Vec<f64>>,
    // Person-count smoothing
    smoothed_person_score: f64,
    prev_person_count: usize,
    // Motion classification and debounce
    smoothed_motion: f64,
    current_motion_level: String,
    debounce_counter: u32,
    debounce_candidate: String,
    // Adaptive baseline
    baseline_motion: f64,
    baseline_frames: u64,
    // Vital-sign smoothing (heart rate / breathing rate + confidences)
    smoothed_hr: f64,
    smoothed_br: f64,
    smoothed_hr_conf: f64,
    smoothed_br_conf: f64,
    hr_buffer: VecDeque<f64>,
    br_buffer: VecDeque<f64>,
    rssi_history: VecDeque<f64>,
    vital_detector: VitalSignDetector,
    latest_vitals: VitalSigns,
    // Liveness and edge-computed vitals
    last_frame_time: Option<std::time::Instant>,
    edge_vitals: Option<Esp32VitalsPacket>,
}
```
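
Per-frame dispatch then becomes a `HashMap` entry lookup keyed by `node_id`, so each node's pipeline only ever touches its own state. A trimmed-down sketch (the real `NodeState` carries the full buffers listed above; `frames_seen` is illustrative only):

```rust
use std::collections::HashMap;
use std::time::Instant;

/// Trimmed-down NodeState for illustration.
#[derive(Default)]
struct NodeState {
    frames_seen: u64,
    last_frame_time: Option<Instant>,
}

/// Each node_id gets (or lazily creates) its own pipeline state, so
/// nodes never share history or smoothing buffers.
fn handle_frame(states: &mut HashMap<u8, NodeState>, node_id: u8) {
    let state = states.entry(node_id).or_default();
    state.frames_seen += 1;
    state.last_frame_time = Some(Instant::now());
    // extract_features() / smooth_and_classify() / smooth_vitals()
    // would run here against `state` only.
}

fn main() {
    let mut states: HashMap<u8, NodeState> = HashMap::new();
    handle_frame(&mut states, 1);
    handle_frame(&mut states, 2);
    handle_frame(&mut states, 1);
    println!("nodes tracked: {}", states.len()); // 2
}
```
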

### Multi-Node Aggregation

- **Person count**: Sum of per-node `prev_person_count` for active nodes
(seen within last 10 seconds).
- **Classification**: Per-node classification included in `SensingUpdate.nodes`.
- **Vital signs**: Per-node vital signs; UI can render per-node or aggregate.
- **Signal field**: Generated from the most-recently-updated node's features.
- **Stale nodes**: Nodes with no frame for >10 seconds are excluded from
aggregation and marked offline (consistent with PR #300).
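
The person-count rule above can be sketched directly (a simplified `NodeState` with only the two relevant fields; the real struct is the full one defined earlier):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

const STALE_AFTER: Duration = Duration::from_secs(10);

struct NodeState {
    prev_person_count: usize,
    last_frame_time: Option<Instant>,
}

/// Sum per-node person counts over nodes seen within the staleness
/// window; stale nodes are excluded from the aggregate (and would be
/// marked offline in the WebSocket payload).
fn aggregate_person_count(states: &HashMap<u8, NodeState>, now: Instant) -> usize {
    states
        .values()
        .filter(|s| matches!(s.last_frame_time, Some(t) if now.duration_since(t) < STALE_AFTER))
        .map(|s| s.prev_person_count)
        .sum()
}

fn main() {
    let t0 = Instant::now();
    let mut states = HashMap::new();
    // Two fresh nodes and one stale node (no frame for 20 s at query time).
    let fresh = t0 + Duration::from_secs(15);
    states.insert(1u8, NodeState { prev_person_count: 1, last_frame_time: Some(fresh) });
    states.insert(2u8, NodeState { prev_person_count: 2, last_frame_time: Some(fresh) });
    states.insert(3u8, NodeState { prev_person_count: 5, last_frame_time: Some(t0) });
    let now = t0 + Duration::from_secs(20);
    println!("estimated_persons = {}", aggregate_person_count(&states, now)); // 3
}
```
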

### Backward Compatibility

- The simulated data path (`simulated_data_task`) continues using global state.
- Single-node deployments behave identically (HashMap has one entry).
- The WebSocket message format (`sensing_update`) remains the same but the
`nodes` array now contains all active nodes, and `estimated_persons` reflects
the cross-node aggregate.
- The edge vitals path (#323 fix) also uses per-node state.

## Scaling Characteristics

| Nodes | Per-Node Memory | Total Overhead | Notes |
|-------|----------------|----------------|-------|
| 1 | ~50 KB | ~50 KB | Identical to current |
| 3 | ~50 KB | ~150 KB | Typical home setup |
| 10 | ~50 KB | ~500 KB | Small office |
| 50 | ~50 KB | ~2.5 MB | Building floor |
| 100 | ~50 KB | ~5 MB | Large deployment |
| 256 | ~50 KB | ~12.8 MB | Max (u8 node_id) |

Memory is dominated by `frame_history` (100 frames x ~500 bytes each = ~50 KB
per node). This scales linearly and fits comfortably in server memory even at
256 nodes.

## QEMU Validation

The existing QEMU swarm infrastructure (ADR-062, `scripts/qemu_swarm.py`)
supports multi-node simulation with configurable topologies:

- `star`: Central coordinator + sensor nodes
- `mesh`: Fully connected peer network
- `line`: Sequential chain
- `ring`: Circular topology

Each QEMU instance runs with a unique `node_id` via NVS provisioning. The
swarm health validator (`scripts/swarm_health.py`) checks per-node UART output.

Validation plan:
1. QEMU swarm with 3-5 nodes in mesh topology
2. Verify server produces distinct per-node classifications
3. Verify aggregate person count reflects multi-node contributions
4. Verify stale-node eviction after timeout

## Consequences

### Positive
- Each node's CSI data is processed independently — no cross-contamination
- Person count scales with the number of deployed nodes
- Vital signs are per-node, enabling room-level health monitoring
- Foundation for spatial localization (per-node positions + triangulation)
- Scales to 256 nodes with <13 MB memory overhead

### Negative
- Slightly more memory per node (~50 KB each)
- The `smooth_and_classify_node` function duplicates some logic from the global version
- Per-node `VitalSignDetector` instances add CPU cost proportional to node count

### Risks
- Node ID collisions (mitigated by NVS persistence since v0.5.0)
- HashMap growth without cleanup (mitigated by stale-node eviction)

## References

- Issue #249: Detection window same regardless (24 comments)
- Issue #237: Same display for 0/1/2 people (12 comments)
- Issue #276: Only one can be detected (8 comments)
- Issue #282: Detection fail (5 comments)
- PR #295: Hysteresis smoothing (partial mitigation)
- PR #300: ESP32 offline detection after 5s
- ADR-062: QEMU Swarm Configurator
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -185,7 +185,7 @@ package-dir = {"" = "."}

[tool.setuptools.packages.find]
where = ["."]
include = ["src*"]
include = ["wifi_densepose*", "src*"]
exclude = ["tests*", "docs*", "scripts*"]

[tool.setuptools.package-data]