
feat(orchestrator): C5 telemetry SSE source + dashboard server #47

Merged — martinjms merged 1 commit into main from feat/c5-telemetry-sse on May 2, 2026
Conversation

@martinjms (Collaborator)

Implements #25 — orchestrator component C5 (read-side observability plane).

What this does

Two new modules:

  • `telemetry.rs` — `TelemetrySource` builds `TelemetrySnapshot` from live state (`DriverPool` snapshot + `CommandDispatcher` log slice + `StatsCollector` per-cluster latest), broadcasts to subscribers via `tokio::broadcast`.
  • `sse_server.rs` — minimal HTTP/SSE server (no new deps; raw TCP + HTTP/1.1) on `GET /telemetry/stream`. Each connection gets a task that subscribes to the broadcast channel and writes `data: <json>\n\n` events.
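The `data: <json>\n\n` framing the server writes can be sketched as follows (the function name is illustrative, not the module's actual API; the SSE rule is that each payload line gets a `data: ` prefix and a blank line terminates the event):

```rust
// Frame a JSON payload as a single SSE event. Multi-line payloads become
// multiple `data:` lines; the trailing blank line closes the event.
fn sse_event(json: &str) -> String {
    let mut out = String::new();
    for line in json.lines() {
        out.push_str("data: ");
        out.push_str(line);
        out.push('\n');
    }
    out.push('\n'); // blank line ends the event
    out
}

fn main() {
    let event = sse_event("{\"drivers\":[]}");
    assert_eq!(event, "data: {\"drivers\":[]}\n\n");
    println!("{event}");
}
```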

Wire format uses serializable types (no `Instant` on the wire). Internal `Instant` values are converted to ms-age fields relative to snapshot time so subscribers don't depend on the orchestrator's monotonic clock.
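The `Instant`-to-ms-age conversion can be sketched like this (struct and field names here are illustrative, not the crate's actual types; the point is that the wire carries only an age relative to the snapshot moment):

```rust
use std::time::{Duration, Instant};

// Internal state keeps a monotonic Instant; the wire type carries only a
// millisecond age measured from when the snapshot was taken.
struct DriverState {
    last_heartbeat: Instant,
}

struct DriverWire {
    last_heartbeat_age_ms: u64,
}

fn to_wire(state: &DriverState, snapshot_at: Instant) -> DriverWire {
    DriverWire {
        // Saturating: a heartbeat recorded "after" snapshot_at maps to 0
        // instead of panicking or going negative.
        last_heartbeat_age_ms: snapshot_at
            .saturating_duration_since(state.last_heartbeat)
            .as_millis() as u64,
    }
}

fn main() {
    let hb = Instant::now();
    let snap = hb + Duration::from_millis(250);
    let wire = to_wire(&DriverState { last_heartbeat: hb }, snap);
    assert_eq!(wire.last_heartbeat_age_ms, 250);
}
```

Because both sides of the subtraction come from the orchestrator's own clock, subscribers never need to compare against it.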

Supporting changes

  • `DriverPool::snapshot()` — returns `Vec` so the source can embed fleet state
  • `StatsCollector::latest_per_cluster()` — returns `HashMap<String, ClusterStats>` for per-cluster wire summary
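A plausible shape for `latest_per_cluster()` (the real `ClusterStats` fields aren't shown in this PR, so the struct below is illustrative): fold an ordered stats log into a map keyed by cluster, where later entries overwrite earlier ones so the map ends up holding the most recent sample per cluster.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct ClusterStats {
    seq: u64,    // monotonically increasing per collection pass (illustrative)
    p50_ms: f64, // illustrative latency field
}

// Insert in log order; HashMap::insert replaces prior values, so the
// last-written entry for each cluster key wins.
fn latest_per_cluster(log: &[(String, ClusterStats)]) -> HashMap<String, ClusterStats> {
    let mut out = HashMap::new();
    for (cluster, stats) in log {
        out.insert(cluster.clone(), stats.clone());
    }
    out
}

fn main() {
    let log = vec![
        ("eu".to_string(), ClusterStats { seq: 1, p50_ms: 12.0 }),
        ("us".to_string(), ClusterStats { seq: 2, p50_ms: 9.0 }),
        ("eu".to_string(), ClusterStats { seq: 3, p50_ms: 8.5 }),
    ];
    let latest = latest_per_cluster(&log);
    assert_eq!(latest["eu"].seq, 3);
    assert_eq!(latest.len(), 2);
}
```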

Out of scope (deliberate follow-up)

  • The binary `orchestrator-cli` that consumes the SSE stream and renders a terminal dashboard. The acceptance test for "CLI connects, renders, reconnects" is exercised by a generic SSE client (raw `TcpStream` + HTTP/1.1) that verifies the server-side reconnect tolerance — the CLI binary lands when there's a real terminal UX to ship.

Test plan

  • `cargo test -p arcane-swarm-orchestrator` — 52 passed, 0 failed (4 new C5 + 48 prior)
  • `cargo clippy -p arcane-swarm-orchestrator --all-targets -- -D warnings` clean
  • `cargo fmt --all -- --check` clean

Acceptance tests covered

All 4 from issue #25:

  • `sse_stream_emits_valid_json_events` (raw HTTP/SSE client parses `data:` lines, deserializes `TelemetrySnapshot`)
  • `cli_connects_renders_and_reconnects` (drop + reconnect via fresh TcpStream)
  • `multiple_subscribers_each_receive_events` (two concurrent SSE clients)
  • `stream_continues_across_command_activity` (submit a command, verify it appears in the next snapshot's `recent_commands`)
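The test-side client these cases rely on reduces to reading lines off a buffered stream and accumulating `data:` payloads until a blank line closes the event. A sketch of that parsing step (function name is hypothetical; written over any `BufRead`, so it works on a `TcpStream` wrapped in `BufReader` just as well as on in-memory bytes):

```rust
use std::io::BufRead;

// Read one SSE event: collect `data:` payload lines, return when the
// blank event-terminator line arrives. Returns None on EOF with no data.
fn read_event<R: BufRead>(reader: &mut R) -> Option<String> {
    let mut payload = String::new();
    loop {
        let mut line = String::new();
        if reader.read_line(&mut line).ok()? == 0 {
            // EOF: emit a partial event if one was in progress.
            return if payload.is_empty() { None } else { Some(payload) };
        }
        let line = line.trim_end_matches(['\r', '\n']);
        if line.is_empty() {
            if !payload.is_empty() {
                return Some(payload);
            }
        } else if let Some(data) = line.strip_prefix("data: ") {
            payload.push_str(data);
        }
    }
}

fn main() {
    let raw: &[u8] = b"data: {\"recent_commands\":[]}\n\ndata: {}\n\n";
    let mut reader = std::io::BufReader::new(raw);
    assert_eq!(read_event(&mut reader).as_deref(), Some("{\"recent_commands\":[]}"));
    assert_eq!(read_event(&mut reader).as_deref(), Some("{}"));
    assert_eq!(read_event(&mut reader), None);
}
```

Reconnect tolerance then falls out naturally: drop the `TcpStream`, open a fresh one, re-send the `GET`, and call `read_event` again.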

Closes #25.

🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
martinjms merged commit 87533da into main on May 2, 2026 (1 check passed)
martinjms deleted the feat/c5-telemetry-sse branch on May 2, 2026 at 16:26

Development

Successfully merging this pull request may close: Orchestrator C5: telemetry SSE source + dashboard CLI