
UI Context

GD2BK1NG edited this page Jan 27, 2026 · 1 revision


Structured representation of the user interface and interaction environment

The UI Context subsystem maintains a cognitive representation of the current user interface.
It provides the kernel with a structured understanding of:

  • layout
  • hierarchy
  • affordances
  • interactive elements
  • navigation paths
  • user state

It is the artificial analogue of the brain’s visuospatial and contextual awareness systems.


🧠 Purpose

UI Context answers the question:

“What does the interface look like right now, and how can we interact with it?”

It provides:

  • DOM structure
  • element metadata
  • interaction affordances
  • navigation cues
  • contextual tags
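The items above can be pictured as one snapshot record that the kernel queries. The following is a minimal sketch under assumed names (`UIElement`, `UIContext`, and the field shapes are illustrative, not the actual kernel types):

```python
from dataclasses import dataclass, field


@dataclass
class UIElement:
    """One node in the modeled interface tree (assumed shape)."""
    id: str
    role: str                       # e.g. "button", "input", "link"
    label: str = ""
    children: list["UIElement"] = field(default_factory=list)


@dataclass
class UIContext:
    """Snapshot the kernel queries: DOM structure plus derived metadata."""
    root: UIElement                                             # DOM structure
    affordances: dict[str, str] = field(default_factory=dict)   # element id -> affordance
    tags: set[str] = field(default_factory=set)                 # contextual tags
    nav_hints: list[str] = field(default_factory=list)          # navigation cues
```

A fresh `UIContext` starts with only structure; the responsibilities below progressively fill in the derived fields.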

🔍 Responsibilities

1. UI State Modeling

Maintains a structured model of:

  • pages
  • components
  • widgets
  • forms
  • navigation flows
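One common way to maintain such a model is to keep the page/component/widget hierarchy as a tree and index it for fast lookup. A minimal sketch, assuming elements arrive as nested dicts with `id`, `role`, and `children` keys (the snapshot shape and ids here are hypothetical):

```python
# Hypothetical snapshot of a page containing a search form.
snapshot = {
    "id": "page:home", "role": "page", "children": [
        {"id": "form:search", "role": "form", "children": [
            {"id": "input:q", "role": "input", "children": []},
            {"id": "btn:go", "role": "button", "children": []},
        ]},
    ],
}


def index_by_id(node, index=None):
    """Walk the component tree depth-first and build an
    id -> node lookup table for O(1) element access."""
    if index is None:
        index = {}
    index[node["id"]] = node
    for child in node.get("children", []):
        index_by_id(child, index)
    return index
```

With the index in hand, other subsystems can resolve an element id to its node without re-walking the tree on every query.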

2. Affordance Detection

Identifies:

  • clickable elements
  • input fields
  • scrollable regions
  • actionable components
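Affordance detection can be sketched as a role-based classifier over the element tree. The role sets and the `scrollable` flag below are assumptions for illustration; a real detector would also inspect event handlers, ARIA attributes, and computed styles:

```python
CLICK_ROLES = {"button", "link"}             # assumed clickable roles
TYPE_ROLES = {"input", "textarea", "select"}  # assumed text-entry roles


def detect_affordances(node, found=None):
    """Classify each element by the interaction it affords,
    returning an id -> affordance map ("click", "type", "scroll")."""
    if found is None:
        found = {}
    role = node.get("role", "")
    if role in CLICK_ROLES:
        found[node["id"]] = "click"
    elif role in TYPE_ROLES:
        found[node["id"]] = "type"
    elif node.get("scrollable"):
        found[node["id"]] = "scroll"
    for child in node.get("children", []):
        detect_affordances(child, found)
    return found
```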

3. Contextual Tagging

Adds semantic labels:

  • “checkout page”
  • “search results”
  • “login form”
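A simple way to produce such labels is a rule set over which elements are present. The rules and element ids below are illustrative assumptions, not a fixed vocabulary:

```python
def tag_context(element_ids):
    """Rule-based tagger: infer semantic page labels from
    which elements the current UI contains."""
    ids = set(element_ids)
    tags = set()
    if {"input:username", "input:password"} <= ids:
        tags.add("login form")
    if "input:search" in ids and "list:results" in ids:
        tags.add("search results")
    if "btn:place-order" in ids:
        tags.add("checkout page")
    return tags
```

Downstream lobes can then reason about "the login form" instead of raw element ids.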

4. Navigation Support

Provides:

  • next‑step predictions
  • UI traversal paths
  • element relationships
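Traversal paths can be sketched as shortest-path search over a graph whose nodes are UI elements and whose edges are "interacting with A reveals B". This is a minimal breadth-first sketch; the graph shape and element ids are hypothetical:

```python
from collections import deque


def traversal_path(ui_graph, start, goal):
    """Breadth-first search over the UI graph; returns the shortest
    sequence of elements from start to goal, or None if unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in ui_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

The resulting path doubles as a next-step prediction: the second element in the path is the recommended next interaction.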

5. Integration with World‑Model

Maps UI elements to world‑model entities.
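One minimal form of this mapping is to bind elements to entities through a shared reference key. The `data_ref` field and both record shapes below are assumptions for illustration; real binding would draw on richer cues (labels, position, history):

```python
def bind_entities(ui_elements, entities):
    """Link UI elements to world-model entities via a shared
    reference key, returning an element id -> entity id map."""
    bindings = {}
    for el in ui_elements:
        ref = el.get("data_ref")        # hypothetical reference attribute
        if ref in entities:
            bindings[el["id"]] = ref
    return bindings
```

Once bound, acting on a UI element (e.g. a product's "add to cart" button) can be recorded against the corresponding world-model entity.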


🧩 Inputs

  • Perception Lobe events
  • DOM snapshots
  • browser subsystem signals
  • Nav Lobe queries

📤 Outputs

  • UI structure graphs
  • affordance maps
  • contextual metadata
  • navigation hints

🔗 Interactions

  • Nav Lobe — supplies it with traversal data for route planning
  • Planning Lobe — grounds plan generation in the current interface state
  • Action Lobe — receives actionable element targets from it
  • World‑Model Runtime — keeps UI entities synchronized with world state
  • ThoughtStream — records UI context changes

🧭 Why It Matters

UI Context gives Syntra Kernel:

  • spatial awareness
  • interaction intelligence
  • contextual grounding
  • navigation capability

It is the system’s cognitive map of the interface.
