…p wiring

The two changes together close the gap that made yesterday's AWS run silently pass with 0 entities loaded.

**Orchestrator (arcane-swarm-orchestrator):**
- `CommandDispatcher::submit` now OWNS the per-driver allocation for `SetPlayers`. The controller submits an aggregate target; the dispatcher divides it across active drivers (sorted by `DriverId` for determinism) and spreads the remainder across the first `rem` drivers. The sum exactly equals the submitted target.
- Other commands (`SetSpawnDelayMs`, `Stop`) keep broadcast semantics.
- 4 new unit tests cover the even split, remainder distribution, the headline 13,500/12 = 1,125 case, and broadcast semantics for non-`SetPlayers` commands.

**Driver (arcane-swarm):**
- `run_orchestrated_mode` is now a WS↔TCP bridge. It picks a free localhost port, runs the existing `run_control_mode` on that port, connects to the orchestrator via WS, and translates inbound `OrchestratorCommand`s into the existing TCP control protocol (`SET_PLAYERS N` / `QUIT`). This makes orchestrator commands actually spawn real players via the well-tested control-mode machinery.
- `main.rs` spawns the bridge alongside `run_control_mode` when `--orchestrator-url` is set. `cfg.players` defaults to 0 on the control-mode side; the bridge pushes the real initial count once WS registration completes.
- The wire-protocol unit tests plus a new e2e test verify that a controller-submitted `SetPlayers` reaches the local TCP control server byte-for-byte.

**Out of scope (deliberate, called out so it's visible):**
- `SetSpawnDelayMs` is recorded in `OrchestratedState` but not yet pushed to TCP (no `SET_SPAWN_DELAY` in the existing control protocol). The headline workload sets a single `spawn_delay_ms` at the first phase and never changes it, so this is fine for the first AWS validation.
- Bursts still happen on the driver autonomously via `--burst-*` flags. Moving burst scheduling into the orchestrator (so the controller can issue "burst now" commands) is a follow-up; it's the long-term architecture Martin called out.
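The per-driver split described above can be sketched as a pure function. This is a sketch under assumptions: the function name `allocate_players` is hypothetical, and the real `CommandDispatcher::submit` operates on driver IDs rather than a bare count; only the even-split, remainder-first, sum-preserving behavior comes from the PR.

```rust
/// Split an aggregate player target across `driver_count` drivers:
/// an even base share for everyone, plus one extra player for each
/// of the first `rem` drivers (the caller is assumed to have sorted
/// drivers by DriverId so the split is deterministic).
fn allocate_players(total: u64, driver_count: usize) -> Vec<u64> {
    if driver_count == 0 {
        return Vec::new();
    }
    let base = total / driver_count as u64;
    let rem = (total % driver_count as u64) as usize;
    (0..driver_count)
        .map(|i| if i < rem { base + 1 } else { base })
        .collect()
}

fn main() {
    // Headline case: 13,500 players across 12 drivers -> 1,125 each.
    let split = allocate_players(13_500, 12);
    assert!(split.iter().all(|&n| n == 1_125));

    // Remainder case: the extra players go to the first `rem` drivers,
    // and the sum always equals the submitted target.
    let split = allocate_players(10, 3);
    assert_eq!(split, vec![4, 3, 3]);
    assert_eq!(split.iter().sum::<u64>(), 10);
}
```

The invariant worth testing is the last assertion: whatever the remainder, the allocations sum exactly to the submitted target, so no players are silently dropped.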
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
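The bridge's command translation described above can be sketched as a pure mapping. Names here are hypothetical stand-ins (the real `OrchestratorCommand` lives in the orchestrator crate, and the newline framing is an assumption about the line-based TCP server); only the `SET_PLAYERS N` / `QUIT` wire verbs and the deliberately untranslated `SetSpawnDelayMs` come from the PR.

```rust
/// Hypothetical mirror of the orchestrator's command enum.
enum OrchestratorCommand {
    SetPlayers(u64),
    SetSpawnDelayMs(u64),
    Stop,
}

/// Translate an inbound WS command into a line of the existing TCP
/// control protocol, or None for commands the protocol can't carry yet.
fn to_control_line(cmd: &OrchestratorCommand) -> Option<String> {
    match cmd {
        OrchestratorCommand::SetPlayers(n) => Some(format!("SET_PLAYERS {}\n", n)),
        OrchestratorCommand::Stop => Some("QUIT\n".to_string()),
        // Recorded in OrchestratedState but not yet pushed to TCP:
        // the existing control protocol has no SET_SPAWN_DELAY verb.
        OrchestratorCommand::SetSpawnDelayMs(_) => None,
    }
}

fn main() {
    assert_eq!(
        to_control_line(&OrchestratorCommand::SetPlayers(1_125)),
        Some("SET_PLAYERS 1125\n".to_string())
    );
    assert_eq!(
        to_control_line(&OrchestratorCommand::Stop),
        Some("QUIT\n".to_string())
    );
    assert_eq!(to_control_line(&OrchestratorCommand::SetSpawnDelayMs(50)), None);
}
```

Keeping the translation a pure function is what makes the byte-for-byte e2e check cheap: the WS and TCP plumbing can be tested separately from the mapping itself.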
Closes the gap that made yesterday's AWS run silently pass with 0 entities loaded.
Two changes that go together
Orchestrator —
`CommandDispatcher::submit` now OWNS per-driver allocation for `SetPlayers`. The controller submits an aggregate target; the dispatcher divides it across active drivers (sorted by `DriverId` for determinism) and spreads the remainder across the first `rem` drivers. The sum exactly equals the submitted target. Other commands (`SetSpawnDelayMs`, `Stop`) keep broadcast semantics.

Driver —
`run_orchestrated_mode` is now a WS↔TCP bridge. It picks a free localhost port, runs the existing `run_control_mode` on that port, connects to the orchestrator via WS, and translates inbound `OrchestratorCommand`s into the existing TCP control protocol (`SET_PLAYERS N` / `QUIT`). Real players spawn via the well-tested control-mode machinery.

Test status
- `cargo test` — 65 + 21 + 2 + 2 = 90 tests pass across both crates
- `cargo clippy --all-targets -- -D warnings` clean
- `cargo fmt --all -- --check` clean
- A new e2e test (`driver_bridge_relays_set_players_into_local_control_protocol`) verifies the controller→orchestrator→WS→bridge→TCP round-trip byte-for-byte

🤖 Generated with Claude Code