fix(backend): only touch NNTP pool when provider list changes #210
Merged
Conversation
Saving any configuration was triggering `UpdateConfig` on the NNTP pool, because `poolConfigChanged` also returned true for `ConnectionPool` `MinConnections`/`HealthCheckInterval` changes (which can't be applied to a running `nntppool.Client` anyway), and a stale nil-pool recovery path in `Manager.UpdateConfig` replaced both pool fields without closing whatever was there, leaking a distinct verify pool.

- Narrow `poolConfigChanged` to provider-only signals: match servers by canonical `providerKey` (so reordering is a no-op), delegate field comparison to `pool.ServerConfigChanged` so the rule lives in one place, and stop comparing `ConnectionPool` construction options.
- Export `pool.ServerConfigChanged` and `pool.ProviderKey` so the backend reuses the canonical predicate.
- Make `Manager.UpdateConfig`'s nil-pool recovery safe: build clients in place under the lock (no unlock/relock race) and close any leftover upload/verify pools before overwriting.
- Convert `applyConfigChanges`' pool branch to a switch and only create a new pool when `poolManager` is nil and servers are configured.

Provider edits continue to flow through `AddProvider`/`RemoveProvider` via the existing `diffProviders` path; the pool is never destroyed and recreated.
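The provider-only predicate described above can be sketched roughly as follows. This is a minimal, self-contained illustration: the `ServerConfig` type, its fields, and the key format are stand-ins, not this repo's actual types.

```go
package main

import (
	"fmt"
	"strings"
)

// ServerConfig is an illustrative stand-in for the real provider config.
type ServerConfig struct {
	Host     string
	Port     int
	Username string
	MaxConns int
}

// providerKey canonically identifies a provider, so reordering the
// server list is not treated as a change (hypothetical key shape).
func providerKey(s ServerConfig) string {
	return strings.ToLower(fmt.Sprintf("%s:%d:%s", s.Host, s.Port, s.Username))
}

// serverConfigChanged compares only fields a running pool could apply;
// construction-time options are deliberately ignored.
func serverConfigChanged(a, b ServerConfig) bool {
	return a.MaxConns != b.MaxConns
}

// poolConfigChanged reports whether the provider list differs in a way
// that requires touching the running pool.
func poolConfigChanged(oldList, newList []ServerConfig) bool {
	if len(oldList) != len(newList) {
		return true
	}
	oldByKey := make(map[string]ServerConfig, len(oldList))
	for _, s := range oldList {
		oldByKey[providerKey(s)] = s
	}
	for _, s := range newList {
		prev, ok := oldByKey[providerKey(s)]
		if !ok || serverConfigChanged(prev, s) {
			return true
		}
	}
	return false
}

func main() {
	a := ServerConfig{Host: "news.example.com", Port: 563, Username: "u1", MaxConns: 10}
	b := ServerConfig{Host: "backup.example.com", Port: 563, Username: "u2", MaxConns: 5}

	// Reordering the same providers is a no-op.
	fmt.Println(poolConfigChanged([]ServerConfig{a, b}, []ServerConfig{b, a}))

	// Changing an applicable field on one provider is a real change.
	b2 := b
	b2.MaxConns = 8
	fmt.Println(poolConfigChanged([]ServerConfig{a, b}, []ServerConfig{a, b2}))
}
```

The key-based map lookup is what makes reordering free: only membership and per-provider field deltas matter, never list position.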
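The `diffProviders` path that keeps the running client alive can be sketched as a plain key-set reconciliation. The function shape below is hypothetical; the real code diffs full provider configs, but the add/remove split is the essential idea.

```go
package main

import "fmt"

// diffProviders computes which provider keys to add and which to remove,
// so the running pool can be mutated via AddProvider/RemoveProvider
// instead of being destroyed and recreated (illustrative shape only).
func diffProviders(oldKeys, newKeys []string) (added, removed []string) {
	oldSet := make(map[string]bool, len(oldKeys))
	for _, k := range oldKeys {
		oldSet[k] = true
	}
	newSet := make(map[string]bool, len(newKeys))
	for _, k := range newKeys {
		newSet[k] = true
		if !oldSet[k] {
			added = append(added, k)
		}
	}
	for _, k := range oldKeys {
		if !newSet[k] {
			removed = append(removed, k)
		}
	}
	return added, removed
}

func main() {
	oldKeys := []string{"primary", "backup"}
	newKeys := []string{"primary", "block-account"}
	added, removed := diffProviders(oldKeys, newKeys)
	fmt.Println(added, removed)
}
```

An edit to an existing provider shows up as a remove followed by an add for the same key, which matches the log sequence described in the test plan.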
Summary
Saving any configuration was unnecessarily invoking `UpdateConfig` on the NNTP pool, and a stale recovery path in `Manager.UpdateConfig` could leak a distinct verify pool. This restricts pool churn to genuine provider-list changes and hardens the recovery path.

What changed

- `internal/backend/config.go`: `poolConfigChanged` now matches servers by canonical `providerKey` (reordering is a no-op) and delegates field comparison to `pool.ServerConfigChanged`, so the "what counts as a provider change" rule lives in one place. Removed the `ConnectionPool.MinConnections`/`HealthCheckInterval` comparison: those are `nntppool.NewClient` construction options that can't be applied to a running client by `AddProvider`/`RemoveProvider`, so including them only caused wasted updates. `applyConfigChanges` is now a switch and only calls `pool.New` when `poolManager == nil` and servers are configured.
- `internal/pool/manager.go`: exported `ServerConfigChanged` and `ProviderKey`. Fixed `UpdateConfig`'s nil-pool recovery: it used to unlock, call `New`, relock, and overwrite both pool fields without closing whatever was there (leaking any distinct verify pool, and racing with a concurrent `Close`). It now builds the clients in place under the lock and closes any leftover upload/verify pools before overwriting.

Why

Provider changes go through `nntppool`'s `AddProvider`/`RemoveProvider`, never a destroy-and-recreate of the running client. The diff path in `Manager.UpdateConfig` already does this; this PR just stops calling it for unrelated saves.

Reviewer notes

- `ConnectionPool.MinConnections` and `HealthCheckInterval` now require an app restart to take effect. Documented in a comment on `poolConfigChanged`.
- `pool.New` is called exactly once; that's expected, the manager is `nil` at that point.

Test plan

- `go build ./...` and `go vet ./internal/backend/... ./internal/pool/...` are green (verified locally).
- Save an unrelated setting (e.g. `Posting.MaxRetries`) → logs show `Pool configuration unchanged, skipping pool update` and the running `nntppool.Client` is preserved.
- Add a provider → logs show `Added provider to pool` for the new provider only; no `pool.New` invocation.
- Edit a provider → logs show `Removed provider` then `Added provider` for the same key.
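The hardened nil-pool recovery can be illustrated with a toy manager. Everything here is a stand-in (the `client` type replaces `nntppool.Client`, and real construction can fail); the point is the shape of the fix: build in place under the lock, and close any leftover pool before overwriting it.

```go
package main

import (
	"fmt"
	"sync"
)

// client is an illustrative stand-in for nntppool.Client.
type client struct {
	name   string
	closed bool
}

func (c *client) Close() { c.closed = true }

// Manager holds separate upload and verify pools, as in the PR.
type Manager struct {
	mu     sync.Mutex
	upload *client
	verify *client
}

// UpdateConfig sketches only the recovery path: when a pool is missing,
// rebuild both clients without ever releasing the lock (no unlock/relock
// window), and close whatever is still set so a distinct verify pool
// cannot leak when it is overwritten.
func (m *Manager) UpdateConfig() {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.upload == nil || m.verify == nil {
		if m.upload != nil {
			m.upload.Close()
		}
		if m.verify != nil {
			m.verify.Close()
		}
		m.upload = &client{name: "upload"}
		m.verify = &client{name: "verify"}
	}
}

func main() {
	// Simulate the stale state: the upload pool was lost, but a distinct
	// verify pool is still alive.
	leftover := &client{name: "stale-verify"}
	m := &Manager{verify: leftover}

	m.UpdateConfig()

	fmt.Println(leftover.closed)              // the leftover pool was closed, not leaked
	fmt.Println(m.upload != nil, m.verify != nil) // both pools were rebuilt
}
```

Holding the lock for the whole rebuild is what removes the race: no concurrent `Close` can slip in between constructing the clients and storing them.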