feat: cache warming sidecar for OpenVox Server pods #113

@slauger

Description

Problem

The OpenVox Server environment cache is per-process, so each Pod builds its cache lazily on the first catalog compilation for each environment. As a result, the first compile per environment after a Pod (re)start is slow, especially with many environments or large codebases.

There is no upstream API to preload all environments into cache at once.

Proposal

Add an optional sidecar container that warms the environment cache after the server becomes ready. The sidecar would:

  1. Wait for the OpenVox Server readiness probe to pass
  2. Query GET /puppet/v3/environments to discover all environments
  3. Request a dummy catalog compilation per environment to populate the cache
  4. Either exit (with restartPolicy: Never on the sidecar, requires K8s 1.28+ native sidecars) or go idle

Since the cache is per-process, this must run locally in the same Pod via localhost.
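The warm-up flow above could be sketched roughly as follows. Only the endpoint paths (`/puppet/v3/environments`, `/puppet/v3/catalog/<node>`) come from the Puppet HTTP API; the status URL, node name, timeouts, and error handling are assumptions, and TLS/client-certificate auth (which a real server would require) is elided:

```python
# Hypothetical cache-warming loop for the proposed sidecar (a sketch,
# not the implementation). Endpoint paths follow the Puppet HTTP API;
# base URL, node name, timeouts, and TLS handling are assumptions.
import json
import time
import urllib.error
import urllib.request


def parse_environment_names(body: bytes) -> list[str]:
    """Extract environment names from a GET /puppet/v3/environments
    response, whose JSON maps names to settings under "environments"."""
    return sorted(json.loads(body)["environments"].keys())


def wait_until_ready(status_url: str, timeout: float = 300.0) -> None:
    """Poll a status endpoint until it answers 200, mirroring the Pod's
    readiness probe (step 1)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(status_url, timeout=5) as resp:
                if resp.status == 200:
                    return
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(2)
    raise TimeoutError(f"server at {status_url} never became ready")


def warm_all(base_url: str, node: str = "cache-warmup.internal") -> list[str]:
    """Discover environments (step 2) and request one dummy catalog per
    environment to populate the cache (step 3). Returns the environments
    that compiled without an HTTP error."""
    with urllib.request.urlopen(f"{base_url}/puppet/v3/environments") as resp:
        environments = parse_environment_names(resp.read())
    warmed = []
    for env in environments:
        url = f"{base_url}/puppet/v3/catalog/{node}?environment={env}"
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    warmed.append(env)
        except urllib.error.HTTPError:
            pass  # one failed dummy compile should not block the rest
    return warmed


# Example usage (not executed here) against the server in the same Pod:
# wait_until_ready("https://localhost:8140/status/v1/simple")
# warm_all("https://localhost:8140")
```

After `warm_all` returns, the sidecar would either exit or go idle per step 4.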

Alternatives considered

  • postStart lifecycle hook: subject to hard timeout constraints, so not suitable with many environments
  • initContainer: the server is not running yet during the init phase, so not viable
  • External Job/CronJob: requests would go through the Service, so each request warms only whichever single Pod the load balancer picks; unreliable
  • Upstream API: an environment-cache warm endpoint in OpenVox Server itself would be the cleanest long-term solution

Open questions

  • Should the sidecar also re-warm after a DELETE /puppet-admin-api/v1/environment-cache flush (e.g. triggered by code deploy)?
  • Should this be opt-in via the OpenvoxConfig CR or a separate mechanism?
  • Does the node name used for the dummy catalog compilation matter (e.g. cache-warmup.internal)?

Metadata

Labels: idea (Just an Idea)