Autocomplete suggestions appear briefly but vanish before the user can press Tab to accept. Two interacting timing issues make the feature unusable in practice, especially in terminal apps.
Problem
What happens: The suggestion overlay flashes for a fraction of a second, then disappears. Even if the user reacts quickly, Tab-accept rarely works because the suggestion is already being regenerated.
Expected: Suggestions should remain visible and Tab-acceptable for a reasonable duration (3-5 seconds).
Steps to reproduce:
Open Terminal (or any app) with autocomplete enabled
Type some text and wait for a suggestion to appear
Try to press Tab to accept — the overlay disappears before you can react
Version / platform: v0.51.19, macOS (Apple Silicon)
Root cause analysis
Two issues compound each other in src/openhuman/autocomplete/core/engine.rs:
1. Overlay TTL too short (1100ms)
The overlay auto-dismiss TTL defaults to 1100ms (config/schema/autocomplete.rs:47). This means the ghost text badge disappears after barely 1 second, which is not enough time to read the suggestion and decide to press Tab.
2. No ready-state grace period — refresh cycle immediately invalidates suggestions
The engine loop runs every 24ms (line 251) with a 120ms debounce (line 152). After a suggestion reaches ready state (line 712), the very next refresh cycle (~144ms later):
Re-queries AX focus context (focused_text_context_verbose() at line 511)
Compares to stored context (short-circuit at line 581)
In Terminal, the AX text changes constantly (prompt redraws, cursor blink, shell output), so the short-circuit always fails
Engine moves from ready → capturing_context → generating (line 619)
The suggestion object isn't cleared during generation, but:
The overlay is not re-shown because last_overlay_signature dedup (line 719) prevents the same suggestion from triggering a second overlay
Even if the user presses Tab during generating, validate_focused_target() may fail because focus metadata is being updated
Net effect: The suggestion is visible for at most min(overlay_ttl, time_to_next_refresh) ≈ 120-144ms in practice — far too short for any human to react.
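The window above can be computed directly from the numbers in this analysis. `effective_visibility_ms` is a hypothetical helper for illustration, not engine code:

```rust
use std::cmp::min;

// Hypothetical helper (not engine code): the longest a suggestion stays
// Tab-acceptable before the next refresh cycle invalidates it.
fn effective_visibility_ms(overlay_ttl_ms: u32, tick_ms: u32, debounce_ms: u32) -> u32 {
    // The next refresh fires when the debounce expires, observed on a loop tick.
    let time_to_next_refresh_ms = debounce_ms + tick_ms; // ~144ms with the defaults
    min(overlay_ttl_ms, time_to_next_refresh_ms)
}
```

Note that raising the TTL alone does not widen this window, which is why Fix 2 below is needed in addition to Fix 1.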
The interacting timeline
t=0ms refresh() completes, phase → "ready", overlay shown (TTL=1100ms)
t=24ms loop tick: try_accept_via_tab() — user hasn't reacted yet
t=48ms loop tick: try_accept_via_tab() — still no Tab
...
t=144ms debounce expires, refresh() called again
t=145ms AX context re-queried — Terminal text changed slightly
t=146ms short-circuit fails (context != stored context)
t=147ms phase → "generating", overlay still visible but counting down
t=300ms generation completes, NEW suggestion (or same text, different context)
t=301ms phase → "ready" again BUT last_overlay_signature matches → overlay NOT re-shown
t=1100ms overlay TTL expires, badge hides — user never had a real chance
Solution
Fix 1: Increase default overlay TTL to 3000-5000ms
In src/openhuman/config/schema/autocomplete.rs:47:
```rust
fn default_overlay_ttl_ms() -> u32 {
    3000 // was 1100
}
```
Fix 2: Add a ready-state grace period
When the engine enters ready state, don't re-query context for a configurable grace period (e.g., 2-3 seconds). This prevents the refresh cycle from immediately invalidating the suggestion:
```rust
// In the main loop, after refresh completes with phase == "ready":
if state.phase == "ready" {
    // Hold the suggestion visible — skip refresh for grace_period_ms
    last_refresh = Instant::now() + Duration::from_millis(grace_period_ms);
}
```
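A self-contained sketch of the same idea, with a hypothetical `GraceGate` type (the name and shape are illustrative, not the engine's actual state):

```rust
use std::time::{Duration, Instant};

// Hypothetical grace-period gate: suppress context re-queries while a
// suggestion is freshly ready, so the refresh cycle cannot invalidate it.
struct GraceGate {
    ready_since: Option<Instant>,
    grace: Duration,
}

impl GraceGate {
    fn new(grace_period_ms: u64) -> Self {
        Self { ready_since: None, grace: Duration::from_millis(grace_period_ms) }
    }

    // Call when the engine enters the "ready" phase.
    fn mark_ready(&mut self) {
        self.ready_since = Some(Instant::now());
    }

    // True while refresh should be skipped.
    fn in_grace(&self) -> bool {
        self.ready_since.map_or(false, |t| t.elapsed() < self.grace)
    }
}
```

A Tab press (or any real user keystroke in the focused field) should reset the gate so genuinely stale suggestions still get replaced.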
Fix 3: Clear overlay signature dedup when context changes
When context changes but the generated suggestion text is the same, the dedup guard at line 719 prevents re-showing the overlay. The signature should include a generation counter or timestamp to allow re-display:
```rust
let ready_signature = format!("ready:{}:{}:{}", app_name, suggestion, generation_id);
```
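To make the behavior concrete, here is a minimal sketch of the dedup guard with a generation counter folded into the signature (`OverlayDedup` and `should_show` are hypothetical names):

```rust
// Hypothetical dedup guard: the overlay is re-shown only when the
// signature — which now includes a generation counter — changes.
struct OverlayDedup {
    last_signature: Option<String>,
}

impl OverlayDedup {
    fn should_show(&mut self, app_name: &str, suggestion: &str, generation_id: u64) -> bool {
        let sig = format!("ready:{}:{}:{}", app_name, suggestion, generation_id);
        if self.last_signature.as_deref() == Some(sig.as_str()) {
            return false; // same generation already displayed
        }
        self.last_signature = Some(sig);
        true
    }
}
```

With the counter in place, a regenerated suggestion with identical text still triggers a fresh overlay, which restarts the TTL countdown.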
Fix 4: Expose overlay_ttl_ms in set_style RPC
The autocomplete_set_style RPC already accepts overlay_ttl_ms in its schema but it's not included in the params list, making it impossible to tune at runtime without a config file change.
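The wiring is a one-line merge once the param is plumbed through. A minimal sketch, assuming a config struct and an optional RPC override (names here are illustrative, not the real schema):

```rust
// Hypothetical style config; only overlay_ttl_ms is taken from this issue.
struct StyleConfig {
    overlay_ttl_ms: u32,
}

// An overlay_ttl_ms supplied via the RPC overrides the config default;
// absence leaves the configured value untouched.
fn apply_set_style(cfg: &mut StyleConfig, rpc_overlay_ttl_ms: Option<u32>) {
    if let Some(ttl) = rpc_overlay_ttl_ms {
        cfg.overlay_ttl_ms = ttl;
    }
}
```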
Acceptance criteria
Repro gone — Suggestions remain visible and Tab-acceptable for at least 3 seconds in Terminal
Regression safety — Autocomplete still works correctly in non-terminal apps (text editors, browsers)
Configurable — overlay TTL and grace period are tunable via config/RPC
Related
src/openhuman/autocomplete/core/engine.rs — main loop (lines 140-253), refresh (lines 478-735), short-circuit (lines 578-588), overlay display (lines 719-731)
src/openhuman/autocomplete/core/overlay.rs — overlay badge display + dedup (lines 42-50)
src/openhuman/config/schema/autocomplete.rs — default TTL (line 47: 1100ms)