feat!: common chart v2.0.0 #242
Open
Glenn-Terjesen wants to merge 67 commits into main from
Conversation
…s eviction of unhealthy pods

- Add unhealthyPodEvictionPolicy: AlwaysAllow to prevent unhealthy pods from blocking node drains during cluster upgrades
- Fix forceReplicas > 1 getting minAvailable 0% (all pods evictable)
- Fix replicas=1 with HPA getting minAvailable 0% despite 2+ pods running
- PDB logic now checks effective replicas: forceReplicas, HPA min replicas, or configured replicas

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…nnotations

- Add missing seccompProfile to cron pod securityContext (matches deployment)
- Move postgres proxy outside container loop in cron to prevent duplicate sidecars
- Support per-ingress annotations when using ingresses list

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…l Secrets

- Upgrade cloud-sql-proxy from v1 (1.33.16) to v2 (2.21.2)
- Update image, executable name, and CLI flags for v2
- Fix memoryLimt typo in helpers, deprecate memoryLimit (limit = request)
- Replace configmap-based connection config with External Secrets
- Add postgres.instances to configure Secret Manager keys for instance connection names
- Support multiple SQL databases via indexed CSQL_PROXY_INSTANCE_CONNECTION_NAME_N env vars
- Deprecate postgres.connectionConfig with fail message
- Fix deployment.prometheus.path not falling back to default (#225)

BREAKING CHANGE: postgres.connectionConfig is removed in favor of postgres.instances. Users must migrate their connection config from Kubernetes ConfigMaps to Secret Manager keys via External Secrets (e.g. postgres.instances: [PGINSTANCES]).
Add deployment.cpuUtilization as the preferred location for HPA CPU target utilization. Falls back to top-level cpuUtilization for backwards compatibility, then to default 100%.
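A minimal values sketch of the lookup order described above — the `deployment.cpuUtilization` key is from this change, while the surrounding structure of the values file is an assumption:

```yaml
# Preferred: HPA CPU target utilization under deployment
deployment:
  cpuUtilization: 80   # percent; HPA averageUtilization target

# Deprecated fallback, still honored for backwards compatibility:
# cpuUtilization: 80
# If neither is set, the chart defaults to 100%.
```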
- Add StartupCPUBoost CRD resource (enabled by default, 50% increase)
- Boost targets pods by app label, reverts when pod becomes Ready
- Lower default HPA cpuUtilization from 100% to 70% (best practice when startup CPU spikes are handled by the boost operator)
- Requires kube-startup-cpu-boost operator installed in the cluster
When container.probes.startup.path is set, the startup probe uses httpGet instead of tcpSocket. This enables custom startup health checks for apps with long-running startup tasks like cache warming.
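A hedged values sketch — the `container.probes.startup.path` key is from this change; the probe path itself is a hypothetical example:

```yaml
container:
  probes:
    startup:
      path: /health/startup   # hypothetical endpoint; when set, the startup
                              # probe uses httpGet instead of tcpSocket
```

Useful when readiness on a TCP port would report healthy before cache warming or other long startup tasks finish.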
Setting grpc: true now uses native K8s gRPC probes with internalPort by default. No need to manually set probe ports for each probe. Removes fallback to exec-based grpc_health_probe which required the binary in the container image.
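A minimal sketch of the flag as described above; whether `grpc` sits at the top level of the values file is an assumption based on the wording "Setting grpc: true":

```yaml
# Switches all probes to native K8s gRPC checks against service.internalPort.
# No grpc_health_probe binary in the image and no per-probe port config needed.
grpc: true
```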
…ssName (#126) Replace kubernetes.io/ingress.class annotation (deprecated since K8s 1.18) with spec.ingressClassName. Defaults to "traefik", configurable via ingress.ingressClassName or per-ingress ingressClassName field.
Add appId field matching GoogleCloudApplication metadata.id. Falls back to shortname for backwards compatibility. Adds new "appId" label to all resources alongside existing "shortname" label.
Add hpa.metrics list for appending custom metrics (Pods, External, Object) alongside the default CPU utilization metric. Supports Prometheus/GMP gauges, Pub/Sub queue depth, and any Cloud Monitoring metric. The existing hpa.spec override for full control is preserved.
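A sketch of an `hpa.metrics` entry, assuming entries follow the native `autoscaling/v2` metric schema (the source does not confirm the pass-through shape); the Pub/Sub subscription name is hypothetical:

```yaml
hpa:
  metrics:
    - type: External
      external:
        metric:
          # Cloud Monitoring metric name in the external-metrics adapter convention
          name: pubsub.googleapis.com|subscription|num_undelivered_messages
          selector:
            matchLabels:
              resource.labels.subscription_id: my-subscription   # hypothetical
        target:
          type: AverageValue
          averageValue: "100"   # scale up when backlog per pod exceeds 100
```

These entries are appended alongside the default CPU utilization metric; `hpa.spec` still overrides everything when full control is needed.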
- Remove unused grpcexecprobes helper (native K8s gRPC probes are now default)
- Remove shortname value entirely — appId is now required with fail message for migration
- Remove top-level cpuUtilization fallback, use only deployment.cpuUtilization
- Fix readiness probe comment typo (said "liveness")
- Rename shortname to appId in all fixtures and test values

BREAKING CHANGE: shortname is removed. Use appId instead.
…theus metrics

- Remove container.replicas, container.maxReplicas, container.forceReplicas, container.minAvailable, container.terminationGracePeriodSeconds — these are now only under deployment.* where they belong
- Remove container.* fallbacks from deployment.yaml, hpa.yaml, pdb.yaml
- Enable prometheus metrics on Cloud SQL proxy v2 (--http-port=9801 --prometheus) exposing metrics at :9801/metrics for monitoring proxy health and connections
- Update all tests and fixtures to use deployment.* for scaling fields

BREAKING CHANGE: container.replicas, container.maxReplicas, container.forceReplicas, container.minAvailable, and container.terminationGracePeriodSeconds are removed. Use deployment.replicas, deployment.maxReplicas, etc. instead.
When postgres is enabled, adds prometheus.io/scrape-sql-proxy, prometheus.io/sql-proxy-port (9801), and prometheus.io/sql-proxy-path (/metrics) annotations to pods. Allows configuring Prometheus to scrape the SQL proxy sidecar alongside the main application.
Tests were still asserting livenessProbe.exec.command from the removed grpcexecprobes helper. Updated to assert livenessProbe.grpc which is the native K8s gRPC probe now used by default.
Add the StartupCPUBoost Helm chart to the CI kind cluster setup so the StartupCPUBoost CRD is available during helm install tests.
…repo Copy the local common chart into each example's charts/ directory before running helm template. This tests examples against the current branch's chart instead of the published version.
…-backend

- Add values-kub-ent-tst.yaml and values-kub-ent-prd.yaml for grpc-app example so CI can validate all environments
- Add postgres.instances to typical-backend example (required by v2 proxy)
Add values.schema.json matching v2 values structure. Validates values on helm install/upgrade/template/lint to catch typos and unknown properties early. Updated from PR #222 for v2 changes: appId instead of shortname, scaling fields under deployment only, postgres.instances, hpa.metrics, startupCPUBoost, ingressClassName, startup probe path.
Run unit tests, kind cluster install tests, and example validation against both Helm 3 and Helm 4 using matrix strategy.
…isabled When CPU boost is off, Java startup CPU spikes can trigger unnecessary HPA scale-ups. A 120s stabilization window gives pods time to finish startup before HPA acts on the elevated CPU. When CPU boost is enabled the window is not needed since startup spikes are handled by the boost.
…ionWindowSeconds Defaults to 120s when startupCPUBoost is disabled. Tune to match your application's typical startup time (e.g. 60s for a fast app, 300s for a heavy Spring Boot app with cache warming).
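Whatever the chart-side key is named (its name is truncated in the commit title above), the rendered object carries the standard `autoscaling/v2` behavior stanza. Tuning it for a slow-starting app would look like this on the HPA side:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
spec:
  behavior:
    scaleUp:
      # Match your app's typical startup time; the chart defaults this
      # to 120 when startupCPUBoost is disabled.
      stabilizationWindowSeconds: 300   # e.g. heavy Spring Boot app with cache warming
```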
When CPU boost is enabled and no explicit cpuLimit is set, the CPU limit is automatically set to 130% of the CPU request. This gives the boost operator a ceiling to work within. Explicit cpuLimit always takes precedence.
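A small worked example of the 130% rule, using the `container.cpu` / `container.cpuLimit` keys mentioned in this PR (nesting assumed):

```yaml
container:
  cpu: 0.5            # CPU request
  # cpuLimit unset → with startupCPUBoost enabled, the chart derives
  # limit = 0.5 * 1.3 = 0.65, giving the boost operator headroom
  # cpuLimit: 1.0     # an explicit limit always takes precedence
```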
Memory limit is now always equal to memory request. The previous 1.2x multiplier and memoryLimit override are removed. container.memoryLimit is deprecated with a note to use container.memory instead. BREAKING CHANGE: container.memoryLimit is removed. Memory limit now always equals memory request. Set container.memory to the value you need.
HPA is now always enabled (unless forceReplicas is set). The Deployment spec never emits replicas — HPA controls pod count in all environments.

Default minReplicas by environment:
- sbx/dev/tst: 1 (scales down to single pod in low traffic)
- prd: 2 (HA by default)

deployment.replicas overrides the default minReplicas for any env. PDB protection follows minReplicas: 0% when minReplicas=1, 50% when >=2. This prevents the v1 bug where helm upgrade would reset HPA-managed replica counts back to the configured value.

BREAKING CHANGE: HPA is now enabled by default in all environments. Use deployment.forceReplicas to opt out of HPA.
deployment.replicas is removed. Use deployment.minReplicas to set the HPA minimum replica count, or deployment.forceReplicas to disable HPA. Since HPA is always enabled, the Deployment spec never emits replicas — HPA controls the pod count. This prevents helm upgrade from resetting HPA-managed replica counts. BREAKING CHANGE: deployment.replicas is removed. Use deployment.minReplicas (sets HPA minimum) or deployment.forceReplicas (disables HPA).
…db.minAvailable

- Default minReplicas to 2 in all environments (no more env-aware branching)
- Remove pdb.minAvailable — use deployment.minAvailable instead (one place to configure)
- Change maxSurge and maxUnavailable defaults from 25% to 1 (works correctly with 2 replicas)
- PDB automatically 50% when minReplicas >= 2, 0% when minReplicas = 1

BREAKING CHANGE: pdb.minAvailable is removed. Use deployment.minAvailable instead. Default minReplicas is now 2 in all environments (was 1 in dev/tst). Default maxSurge and maxUnavailable changed from 25% to 1.
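A values sketch of the two scaling modes after this change, using the `deployment.*` keys from this PR (exact defaults per the commits above):

```yaml
# HPA-managed (the default): set bounds, never a fixed replica count
deployment:
  minReplicas: 2    # HPA minimum; default 2 in all environments
  maxReplicas: 6
  # minAvailable is derived for the PDB: 50% when minReplicas >= 2,
  # 0% when minReplicas = 1

# ...or opt out of HPA entirely with a fixed count:
# deployment:
#   forceReplicas: 3
```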
# Conflicts:
#	.github/workflows/pull-request.yml
#	charts/common/Chart.yaml
#	charts/common/templates/_helpers.tpl
#	examples/common/cronjob/Chart.yaml
#	examples/common/grpc-app/Chart.yaml
#	examples/common/multi-container/Chart.yaml
#	examples/common/multi-deploy/Chart.yaml
#	examples/common/simple-app/Chart.yaml
#	examples/common/typical-backend/Chart.yaml
#	examples/common/typical-frontend/Chart.yaml
Replace the split credential model (ExternalSecrets for proxy connection
names + Terraform-created K8s secrets for credentials) with a unified
approach where secretKeyPrefix is the single contract between Terraform
and Helm.
Given a prefix (default PG), the chart derives all Secret Manager keys
({prefix}INSTANCES, {prefix}USER, {prefix}PASSWORD) and fetches
everything via ExternalSecrets. The simplest case is just
`postgres.enabled: true`.
Changes:
- postgres.instances now takes objects with secretKeyPrefix instead of
raw Secret Manager key names
- New sql-credentials ExternalSecret fetches {prefix}USER and
{prefix}PASSWORD from Secret Manager
- Chart generates {prefix}HOST=localhost and {prefix}PORT=5432+index
as env vars (no longer fetched from Secret Manager)
- Proxy command adds --port=5432 for deterministic port assignment
- postgres.termTimeout renamed to postgres.maxSigtermDelay to match
the Cloud SQL Proxy v2 flag
- Removed v1 compatibility tests and deprecated fields
(connectionConfig, memoryLimit)
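A values sketch of the unified model described above — `postgres.enabled` and `secretKeyPrefix` are from this commit; the second prefix is a hypothetical illustration:

```yaml
postgres:
  enabled: true                    # simplest case: default PG prefix
  # Explicit form, e.g. for multiple instances:
  # instances:
  #   - secretKeyPrefix: PG        # derives PGINSTANCES, PGUSER, PGPASSWORD
  #   - secretKeyPrefix: AUDITPG   # hypothetical second instance
```

Per the commit, the chart then fetches `{prefix}USER` / `{prefix}PASSWORD` via ExternalSecrets and injects `{prefix}HOST=localhost` and `{prefix}PORT=5432+index` itself, so nothing host/port-related lives in Secret Manager.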
Claude Code skill that automates migrating Helm values files from common chart v1 to v2. Covers all breaking changes: shortname to appId, scaling field moves, postgres secretKeyPrefix integration, memoryLimit removal, and configmap.toEnv removal.
Default container.cpu raised from 0.1 to 0.3 and container.memory from 16 to 512 (Mi). The previous defaults were stub values that would OOMKill any JVM app before it finished booting; the JVM alone needs ~150-250 MiB before app code runs, and ~90% of Entur services are Spring Boot. Existing values that omit container.cpu / container.memory will see 3x CPU and 32x memory requests on next deploy. Override down for non-JVM workloads (sidecars, small Go services, static frontends). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
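For workloads lighter than the JVM-oriented defaults, overriding back down is a one-stanza change (values shown are hypothetical picks for a small Go service):

```yaml
# New chart defaults: cpu: 0.3, memory: 512 (Mi) — sized for Spring Boot.
container:
  cpu: 0.1     # small Go service / sidecar / static frontend
  memory: 64   # Mi; memory limit always equals the request in v2
```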
The pull_request branches filter was matching the wrong direction (base instead of head), so the workflow never triggered on release-please PRs. Switch to filtering base on main and gating the job with an if: condition on github.head_ref.

Also fix:
- checkout uses github.head_ref instead of github.ref (which is the ephemeral merge ref on pull_request events)
- drop redundant git switch (checkout already lands on the branch)
- move the printf below the VERSION export
- replace inline shell substitution in jq and yq filters with --arg / strenv for safer interpolation
- quote $CUR_CHART in find
- add a concurrency group to avoid races with release-please regenerations

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Add README.md.gotmpl per example so the narrative survives helm-docs regeneration. Each template renders the chart header and badges, then custom what / when / key-values sections, then the auto-generated requirements + values tables. The narrative explains each example's purpose in plain language, with cross-references between them (e.g. cronjob points to multi-deploy for event-driven work; multi-container points to multi-deploy as the preferred alternative when processes can run independently). A short gloss of "ingress" is included where relevant. Also fix a TZx -> TZ typo in typical-frontend/values.yaml so the configmap example matches the standard timezone env var. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Common Chart v2.0.0
Upgrade guide
See UPGRADE.md for the full migration guide.
Paste this into Claude Code, Copilot, Cursor, or any AI coding agent from your application's repo:
Breaking Changes
| Removed / changed | Replacement |
| --- | --- |
| `shortname` | `appId` (matches `metadata.id`) |
| `container.replicas` | `deployment.minReplicas` |
| `deployment.replicas` | `deployment.minReplicas` |
| `container.maxReplicas` | `deployment.maxReplicas` |
| `container.forceReplicas` | `deployment.forceReplicas` |
| `container.minAvailable` | `deployment.minAvailable` |
| `container.memoryLimit` | removed (memory limit always equals request) |
| `pdb.minAvailable` | `deployment.minAvailable` |
| `postgres.connectionConfig` | `postgres.enabled: true` with `secretKeyPrefix` integration via External Secrets |
| `postgres.instances: [PGINSTANCES]` | `postgres.instances: [{secretKeyPrefix: PG}]`, or `enabled: true` for the default `PG` prefix |
| `postgres.memoryLimit` | `postgres.memory` |
| `postgres.termTimeout` | `postgres.maxSigtermDelay` |
| `kubernetes.io/ingress.class` annotation | `ingress.ingressClassName` |
| `container.cpu: 0.1` (default) | `container.cpu: 0.3` |
| `container.memory: 16` (default) | `container.memory: 512` |

What's New
- **Postgres `secretKeyPrefix` integration** — The `secretKeyPrefix` is now the single contract between the Helm chart and the `entur/terraform-google-sql-db` Terraform module. Given a prefix (default `PG`), the chart derives all Secret Manager keys (`{prefix}INSTANCES`, `{prefix}USER`, `{prefix}PASSWORD`) and fetches everything via External Secrets. The simplest case is just `postgres.enabled: true`. Multiple instances are supported via the `instances` list. The chart generates `{prefix}HOST=localhost` and `{prefix}PORT=5432+index` as env vars. Terraform-created K8s secrets are no longer needed.
- **JVM-friendly resource defaults** — `container.cpu` raised from `0.1` to `0.3` and `container.memory` from `16` to `512` (Mi). The old defaults were stub values that would OOMKill any JVM app before it finished booting. ~90% of Entur services are Spring Boot, so the new defaults match the common case. Lighter workloads (sidecars, small Go services, static frontends) should override down.
- **HPA always enabled** — HPA runs in all environments (not just prd). Default `minReplicas: 2` everywhere. The Deployment spec never emits `replicas`, so `helm upgrade` can't reset HPA-managed pod counts. Use `forceReplicas` to opt out.
- **PDB fixes** — `unhealthyPodEvictionPolicy: AlwaysAllow` prevents unhealthy pods from blocking cluster upgrades. `forceReplicas > 1` now correctly gets PDB protection.
- **Cloud SQL Proxy v2** — Upgraded to v2 (2.21.2). Prometheus metrics on port 9801. Configurable shutdown delay via `postgres.maxSigtermDelay`.
- **GKE Startup CPU Boost (alpha)** — Optional (`deployment.startupCPUBoost.enabled`). Temporarily increases CPU during startup and reverts when the pod is Ready. Auto-sets the CPU limit to 1.3x the request. NB: not production-ready yet!
- **Native gRPC probes** — `grpc: true` now uses K8s native gRPC probes with `service.internalPort`. No need for the `/bin/grpc_health_probe` binary or manual port config.
- **Custom HPA metrics** — `hpa.metrics` list for Pods/External/Object metrics alongside the default CPU metric. ScaleUp stabilization window (120s default) when CPU boost is disabled.
- **JSON Schema validation** — `values.schema.json` catches typos and unknown properties on `helm lint`. IDE autocompletion in VS Code and JetBrains.
- **Helm 3 + 4** — CI tests all run against both Helm v3.20.0 and v4.1.3.
Other Improvements

- Startup probe `path` for httpGet (Allow path for startup probe #237)
- `deployment.cpuUtilization` (default 70%) replaces top-level `cpuUtilization` (HPA averageUtilization should be placed under deployment #221)
- Per-ingress `annotations` and `ingressClassName` support
- Cron pod `seccompProfile`, fixed postgres proxy placement
- `maxSurge: 1`, `maxUnavailable: 1` rolling-update defaults
- Fixed `memoryLimt` typo in the postgres proxy helper
- Removed the `grpcexecprobes` helper
- Per-example `README.md.gotmpl` templates with hand-written narrative (purpose, when to use, key values) that survives `helm-docs` regeneration; values tables remain auto-generated

CI

- `helm lint` added to example validation
- `kube-startup-cpu-boost` operator installed in the kind cluster
- `helm-docs` workflow now triggers correctly on release-please PRs (was filtering the wrong branch direction); concurrency group added to avoid races with release-please regenerations

Closes
Closes #101, closes #126, closes #195, closes #221, closes #225, closes #235, closes #237
Generated with Claude Code using Claude Opus 4.6