Problem
The OpenVox Server environment cache is per-process, meaning each Pod builds its cache lazily on first catalog compilation. This causes slow first-compile times for each environment after Pod (re)starts, especially with many environments or large codebases.
There is no upstream API to preload all environments into cache at once.
Proposal
Add an optional sidecar container that warms the environment cache after the server becomes ready. The sidecar would:
- Wait for the OpenVox Server readiness probe to pass
- Query `GET /puppet/v3/environments` to discover all environments
- Request a dummy catalog compilation per environment to populate the cache
- Either exit (with `restartPolicy: Never` on the sidecar; requires K8s 1.28+ native sidecars) or go idle
Since the cache is per-process, this must run locally in the same Pod via localhost.
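The steps above could be sketched roughly like this in the Pod spec. This is a minimal sketch, not a working manifest: the container names, warmer image, port 8140, the use of the status endpoint as a local readiness check, and the dummy node name are all assumptions, client-cert auth flags are omitted, and `jq` is assumed available in the warmer image.

```yaml
# Sketch only -- names, image, and TLS/auth details are assumptions.
spec:
  containers:
    - name: openvox-server
      # ... existing server container ...
    - name: cache-warmer          # "go idle" variant; the exit-once variant
      image: curlimages/curl      #   would rely on K8s 1.28+ native sidecars
      command: ["/bin/sh", "-c"]
      args:
        - |
          # 1. Wait until the server answers locally (same Pod, so localhost)
          until curl -ksf https://localhost:8140/status/v1/simple >/dev/null; do
            sleep 5
          done
          # 2. Discover environments (client-cert auth flags omitted)
          envs=$(curl -ksf https://localhost:8140/puppet/v3/environments \
                 | jq -r '.environments | keys[]')
          # 3. One dummy catalog compile per environment to populate the cache
          for env in $envs; do
            curl -ksf "https://localhost:8140/puppet/v3/catalog/cache-warmup.internal?environment=$env" >/dev/null
          done
          # 4. Go idle so the container does not restart-loop
          sleep infinity
```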
Alternatives considered
- `postStart` lifecycle hook: has hard timeout constraints, not suitable for many environments
- initContainer: the server is not running yet during the init phase, so not viable
- External Job/CronJob: would hit the Service and warm only a single Pod per request (load-balancer distribution), so unreliable
- Upstream API: an `environment-cache` warm endpoint in OpenVox Server itself would be the cleanest long-term solution
Open questions
- Should the sidecar also re-warm after a `DELETE /puppet-admin-api/v1/environment-cache` flush (e.g. triggered by a code deploy)?
- Should this be opt-in via the `OpenvoxConfig` CR or a separate mechanism?
- Node name for dummy catalog compilation -- does it matter? (e.g. `cache-warmup.internal`)
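For concreteness, the per-environment dummy compilation request might be built like this. The base URL and node name are placeholder assumptions; the `/puppet/v3/catalog/<node>?environment=<env>` shape follows the standard Puppet v3 HTTP API.

```shell
# Build the catalog warm-up URL for one environment.
# base_url and node_name are placeholders, not chart defaults.
base_url="https://localhost:8140"
node_name="cache-warmup.internal"

warm_url() {
  printf '%s/puppet/v3/catalog/%s?environment=%s\n' "$base_url" "$node_name" "$1"
}

warm_url production
# → https://localhost:8140/puppet/v3/catalog/cache-warmup.internal?environment=production
```

In practice the sidecar would pass this URL to `curl` (with the appropriate client certificate) once per discovered environment.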