Summary
Introduce an epistemic memory system — the inverse of knowledge primitives — that models temporal decay of grounded knowledge and influences agentic action decisions before grounding. As memory of a concept decays from higher depths to lower depths, confidence in grounded concepts is affected, creating a natural pressure to re-validate stale knowledge.
Problem Statement
VRE currently treats knowledge as static. Once a concept is grounded to D3, it remains at D3 indefinitely — there is no mechanism to express that the agent's confidence in that grounding may have degraded over time. An agent that grounded Permission to D3 six months ago with no subsequent exercise has the same epistemic standing as one that grounded it moments ago.
This is epistemically dishonest in practice. Real knowledge fades. Constraints change. An agent that hasn't exercised a concept recently should have less confidence in its grounding, and this reduced confidence should influence whether the agent proceeds with an action or seeks re-validation.
Additionally, there is currently no pre-grounding signal. The agent must perform a full check() to know anything about its epistemic state. Epistemic memory would provide a fast, local signal — "I remember knowing this well" vs. "this feels stale" — that informs whether to act, re-check, or pause.
Proposed Solution
A depth-based memory model that mirrors the knowledge primitive structure but operates on a temporal axis:
Memory Depths (Inverse of Knowledge Depths)
Memory decays from higher depths to lower depths over time:
- A concept grounded at D3 today has full memory at D3
- After a period of non-exercise, memory decays to D2 — the agent remembers what it can do but is less certain about constraints
- Further decay to D1 — the agent remembers what it is but not capabilities or constraints
- Eventually D0 — the agent knows the concept exists but little else
Decay is driven by non-exercise (time since `last_exercised` from #35), not by calendar time alone. A concept that is frequently grounded refreshes its memory; one that is never exercised decays.
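As a sketch only, the decay rule above could look like the following. The `MemoryDepth` enum, the step-based drop, and the 30-day interval are illustrative assumptions, not the proposed design:

```python
from enum import IntEnum


class MemoryDepth(IntEnum):
    """Hypothetical memory depths mirroring knowledge depths D0-D3."""
    D0 = 0  # knows the concept exists, little else
    D1 = 1  # remembers what it is
    D2 = 2  # remembers capabilities, less certain about constraints
    D3 = 3  # full memory of the grounding


def memory_depth(grounded_depth: int, elapsed: float,
                 decay_interval: float = 30 * 86400) -> MemoryDepth:
    """Step-based decay: drop one depth level per decay_interval
    (seconds) of non-exercise, never below D0. `elapsed` is time
    since last_exercised; re-grounding resets it to zero."""
    steps = int(max(0.0, elapsed) // decay_interval)
    return MemoryDepth(max(0, grounded_depth - steps))
```

For example, a concept grounded at D3 and not exercised for ~70 days would decay two steps to D1 under these assumed parameters.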
Confidence Impact
Epistemic memory directly relates to the per-primitive confidence metric (#2). As memory depth decays:
- Confidence in the concept's grounding decreases proportionally
- Policy evaluation may reference memory-adjusted confidence
- The agent can use memory state as a pre-grounding heuristic: "my memory of this is stale — I should re-check before acting"
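One way to make "decreases proportionally" concrete is to scale the per-primitive confidence (#2) by the fraction of grounded depth still retained in memory. This is a minimal sketch under that assumption; the actual relationship between memory and confidence is listed as an open question below:

```python
def memory_adjusted_confidence(base_confidence: float,
                               memory_depth: int,
                               grounded_depth: int) -> float:
    """Scale confidence by retained memory depth. Proportional
    scaling is an illustrative choice, not the proposed formula."""
    if grounded_depth <= 0:
        return base_confidence  # nothing grounded, nothing to decay
    retention = memory_depth / grounded_depth
    return base_confidence * retention
```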
Agentic Action Influence
Before calling vre.check(), an agent can consult epistemic memory:
- Fresh memory (high depth): proceed with confidence, grounding is likely to succeed
- Stale memory (decayed depth): re-validate — ground again and potentially trigger learning if the graph has changed
- No memory: treat as novel — expect gaps, prepare for learning
This creates a natural feedback loop: acting reinforces memory, inaction lets it decay, and decay drives re-validation.
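The three-way routing above can be sketched as a small pre-grounding helper. The function name, the `None`-for-no-memory convention, and the depth threshold are assumptions for illustration:

```python
from typing import Optional


def pre_grounding_action(memory_depth: Optional[int],
                         required_depth: int = 3) -> str:
    """Advise the agent before a full vre.check(); does not
    override the grounding contract. Thresholds are illustrative."""
    if memory_depth is None:
        return "learn"       # no memory: treat as novel, expect gaps
    if memory_depth >= required_depth:
        return "proceed"     # fresh memory: grounding likely to succeed
    return "revalidate"      # stale memory: ground again, may trigger learning
```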
VRE Design Alignment
- Epistemic honesty: CLAUDE.md Section 2.2 requires that the agent never appear more knowledgeable than it is. Static grounding violates this in spirit when knowledge is stale. Memory decay makes staleness explicit.
- Intentional learning: Section 7 states learning must be intentional, not opportunistic. Memory decay creates intentional re-validation pressure rather than relying on random failures to surface gaps.
- Separate meta-epistemic system: Section 7 specifies that automated learning may be handled by a separate meta-epistemic system. Epistemic memory operates outside the core grounding loop — it advises the agent but does not override VRE's grounding contract.
- Knowledge provenance alignment: Section 7.2 establishes provenance with timestamps. Memory decay leverages these timestamps as the temporal signal for staleness.
Acceptance Criteria
- Epistemic memory model with per-concept memory depth that decays over time
- Decay driven by `last_exercised` and related metrics from Add aggregate usage metrics to Primitive nodes #35
- Memory depth influences per-primitive confidence score (Implement Per-Primitive Confidence-Based Epistemic Metric #2)
- API for agents to consult memory state before grounding (fast, local, no graph traversal required)
- Grounding operations refresh memory (reset decay)
- Configurable decay rates (different domains may have different staleness thresholds)
- Memory state is inspectable — surfaced in traces and visible in the workstation (VRE Workstation — interactive graph exploration UI #36)
- Unit tests for decay progression and confidence impact
Open Questions
- What is the right decay function — linear, exponential, step-based (discrete depth drops)?
- Should decay rates be global, per-concept, or per-depth-level?
- Should memory be persisted (survives agent restarts) or ephemeral (resets each session)?
- How does memory interact with provenance source? Should `authored` knowledge decay slower than `learned`?
- Should memory decay trigger automatic re-grounding, or only advise the agent to re-ground?
- What is the relationship between memory and the confidence metric (Implement Per-Primitive Confidence-Based Epistemic Metric #2) — is memory an input to confidence, or are they the same thing computed differently?
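To make the decay-function question concrete, the three candidate shapes could be compared side by side. These are throwaway illustrations, not proposals; parameter names are hypothetical:

```python
import math


def linear_decay(depth: int, elapsed: float, rate: float) -> float:
    """Effective depth falls linearly with non-exercise time."""
    return max(0.0, depth - rate * elapsed)


def exponential_decay(depth: int, elapsed: float, half_life: float) -> float:
    """Effective depth halves every half_life units of non-exercise."""
    return depth * math.exp(-math.log(2) * elapsed / half_life)


def step_decay(depth: int, elapsed: float, interval: float) -> int:
    """Discrete drops: one depth level lost per interval elapsed."""
    return max(0, depth - int(elapsed // interval))
```

Step decay maps cleanly onto discrete memory depths; linear and exponential would require rounding or a continuous notion of depth.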
Dependencies
- Implement Per-Primitive Confidence-Based Epistemic Metric #2 (Epistemic confidence metric) — memory decay feeds into confidence scoring
- Add aggregate usage metrics to Primitive nodes #35 (Aggregate usage metrics) — `last_exercised`, `grounding_count`, etc. provide the temporal signal for decay