What Already Exists
The Processor/UI split is intentional and already clean. The Processor owns everything stateful — queue, settings, providers, LLM processing, extraction, heartbeats. The UI (both Web and TUI) owns nothing except how things look and feel. It reacts to events fired by the Processor via `Ei_Interface` callbacks and sends user input back upstream.
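To make the seam concrete, here's a minimal sketch of what that callback arrangement looks like. All names here (`Processor`, `EiInterface`, `on_event`, the event names) are illustrative stand-ins, not the real API:

```python
# Hypothetical sketch of the Processor/UI seam: the Processor owns state,
# attached UIs own nothing and just react to fired events.
from typing import Callable

class EiInterface:
    """A UI hands over a callback; the Processor fires it. The UI holds no state."""
    def __init__(self, on_event: Callable[[str, dict], None]):
        self.on_event = on_event

class Processor:
    """Owns everything stateful: queue, settings, providers, heartbeats."""
    def __init__(self):
        self.queue: list[dict] = []
        self.interfaces: list[EiInterface] = []

    def attach(self, ui: EiInterface) -> None:
        self.interfaces.append(ui)

    def enqueue(self, item: dict) -> None:
        self.queue.append(item)            # state changes live here...
        self._emit("queue_updated", {"depth": len(self.queue)})

    def _emit(self, name: str, payload: dict) -> None:
        for ui in self.interfaces:         # ...and every attached UI just reacts
            ui.on_event(name, payload)
```

Cutting along the seam then means replacing the in-process `attach` with a network transport, without either side noticing.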
The TUI was built second and is actually more disciplined about this than the Web app — some logic that Web took on itself got moved into the Processor before TUI was written, so TUI consumes it more purely.
This means we're not starting from scratch. The seam already exists. We just haven't cut along it yet.
What's Missing
We've never run the Processor and the UI on separate machines. That's the whole ticket.
The dream: leave Ei running headless on one always-on box. Every other device — SteamDeck, work laptop, phone — connects as a dumb UI client. One shared state. No more open-it-on-every-device-to-sync shuffle.
Acceptance Criteria
NOTE: Command structure is not prescriptive — research best practices before implementing. These are illustrative.
Server Mode (Headless)
- Run `ei --server 0.0.0.0:11576 --API_KEY=$EI_API_KEY` on one machine to start the Processor loop with no UI, accepting network traffic as its input/output channel.
Client Modes (Head-Only)
- Run `ei --client IP_OR_DNS:11576 --API_KEY=$EI_API_KEY` on other machines to start the TUI connected to the remote Processor — no local queue, no local heartbeats, just UI.
- On https://ei.flare576.com, be able to choose "self-host" and provide a URL + API Key to point the Web UI at a self-hosted Processor instead of the default backend.
The TUI client and the self-hosted Web UI are the same protocol, different UIs.
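Since the note above says the flag shapes are not prescriptive, here's one possible CLI surface as a sketch, using Python's `argparse` — flag names and defaults are assumptions:

```python
# Illustrative CLI surface only -- the acceptance criteria say the real
# command structure should be researched before implementing.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="ei")
    mode = p.add_mutually_exclusive_group()
    mode.add_argument("--server", metavar="BIND:PORT",
                      help="run the headless Processor, accept network clients")
    mode.add_argument("--client", metavar="HOST:PORT",
                      help="run the TUI only, attached to a remote Processor")
    p.add_argument("--API_KEY", dest="api_key",
                   help="shared secret for the network channel")
    return p
```

With neither flag given, the default could stay today's behavior: local Processor plus local UI in one process.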
Complexities
So. Forking. Many.
Does this communicate over websockets? REST? Do the clients poll? Is there a reverse connection? How do events propagate back to clients? Does flare576.com need CORS changes? Does --server also serve the Web UI on 80/443? How does this feel over a network with latency? Does anyone besides me actually want this?
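Whatever answers those transport questions get (WebSockets, raw TCP, SSE), the framing underneath can stay dumb. A sketch of one option — newline-delimited JSON, with the Processor's events and the client's inputs using the same shape; message fields here are made up:

```python
# Transport-agnostic framing sketch: one JSON object per line, symmetric
# in both directions. The field names ("event", "data") are assumptions.
import json

def encode_event(name: str, payload: dict) -> bytes:
    """Processor -> client: serialize one event as a single line."""
    return (json.dumps({"event": name, "data": payload}) + "\n").encode()

def decode_stream(buf: bytes):
    """Client side: split a received chunk back into (name, payload) events."""
    for line in buf.splitlines():
        if line.strip():
            msg = json.loads(line)
            yield msg["event"], msg["data"]
```

The nice property is that this rides equally well inside a WebSocket message, an HTTP long-poll body, or a plain socket — so the transport question can be deferred.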
Not picking this up soon. Writing it down so we can update it as we learn more.