diff --git a/.claude/skills/upgrade-common-chart/SKILL.md b/.claude/skills/upgrade-common-chart/SKILL.md new file mode 100644 index 0000000..6490cba --- /dev/null +++ b/.claude/skills/upgrade-common-chart/SKILL.md @@ -0,0 +1,146 @@ +--- +name: upgrade-common-chart +description: > + Upgrade Entur common Helm chart dependency from v1 to v2. Use this skill when + the user wants to migrate their Helm values files to the v2 common chart, asks + about upgrading to common chart v2, mentions "common chart upgrade", or has + schema validation errors after bumping the common chart version. Also trigger + when the user mentions deprecated fields like shortname, container.replicas, + connectionConfig, memoryLimit, or postgres.instances with raw string values. +--- + +# Upgrade Entur Common Helm Chart (v1 to v2) + +You are upgrading a Helm chart that depends on `entur/common` from v1 to v2. This is a breaking change that requires migrating values files and updating the chart dependency. + +## Step 1: Understand the project + +Find all relevant files: +1. `Chart.yaml` — contains the `common` dependency version to update +2. All `values*.yaml` files — contain the values to migrate (check `env/` subdirectories too) +3. Any `values-kub-ent-*.yaml` files — environment-specific overrides + +Read each file before making changes. The common chart is typically referenced as a dependency under the `common:` key in values files. + +## Step 2: Update Chart.yaml + +Bump the common chart dependency version to `2.0.0`: + +```yaml +dependencies: + - name: common + version: 2.0.0 + repository: "https://entur.github.io/helm-charts" +``` + +## Step 3: Apply migrations to every values file + +Work through each migration in order. Skip any that don't apply to the file. 
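+For orientation, here is a hypothetical v1 values file that exercises several of the migrations below (names and values are illustrative, not taken from a real project):
+
+```yaml
+# Hypothetical v1 values file; each deprecated field maps to a step below
+common:
+  shortname: myapp            # 3.1: rename to appId
+  container:
+    replicas: 2               # 3.2: move to deployment.minReplicas
+    maxReplicas: 5            # 3.2: move to deployment.maxReplicas
+    memory: 512
+    memoryLimit: 1024         # 3.3: remove; memory sets both request and limit
+  postgres:
+    enabled: true
+    connectionConfig: myapp-psql-connection  # 3.4: remove
+  configmap:
+    enabled: true
+    toEnv: true               # 3.5: remove
+```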
+ +### 3.1 Rename `shortname` to `appId` + +```yaml +# Before +common: + shortname: myapp + +# After +common: + appId: myapp +``` + +### 3.2 Move scaling fields from `container.*` to `deployment.*` + +| Removed (v1) | Replacement (v2) | +|---|---| +| `container.replicas` | `deployment.minReplicas` | +| `deployment.replicas` | `deployment.minReplicas` | +| `container.maxReplicas` | `deployment.maxReplicas` | +| `container.forceReplicas` | `deployment.forceReplicas` | +| `container.minAvailable` | `deployment.minAvailable` | +| `container.terminationGracePeriodSeconds` | `deployment.terminationGracePeriodSeconds` | + +HPA is now always enabled (unless `forceReplicas` is set). The Deployment never emits a `replicas` field — HPA controls pod count. To pin replicas, use `deployment.forceReplicas`. + +### 3.3 Remove `container.memoryLimit` and `postgres.memoryLimit` + +Memory limit now always equals memory request. Remove `memoryLimit` and set `memory` to the value you need for both. + +### 3.4 Migrate postgres configuration + +This is the most significant change. The postgres integration now uses `secretKeyPrefix` as the contract with the `entur/terraform-google-sql-db` Terraform module. + +**Remove deprecated fields:** `postgres.connectionConfig`, `postgres.memoryLimit`, `postgres.termTimeout` + +**Migrate `postgres.instances`:** Items changed from raw Secret Manager key names (strings) to objects with `secretKeyPrefix`. When `enabled: true` with no `instances`, the chart defaults to `[{secretKeyPrefix: PG}]`. 
+ +```yaml +# v1 +common: + postgres: + enabled: true + connectionConfig: my-app-psql-connection + +# v2 (simplest — default PG prefix) +common: + postgres: + enabled: true + +# v2 (explicit prefix) +common: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + +# v2 (multiple instances) +common: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + - secretKeyPrefix: ANALYTICS_PG +``` + +If `postgres.termTimeout` was set, rename it to `postgres.maxSigtermDelay` (maps to the Cloud SQL Proxy v2 `--max-sigterm-delay` flag). + +### 3.5 Remove `configmap.toEnv` + +The configmap is automatically mounted via `envFrom` when `configmap.enabled: true`. + +```yaml +# v1 +common: + configmap: + enabled: true + toEnv: true + +# v2 +common: + configmap: + enabled: true +``` + +### 3.6 Update ingress if using `ingress.class` + +The `kubernetes.io/ingress.class` annotation is removed. Ingress now uses `spec.ingressClassName` (defaults to `traefik`). If you had a custom `ingress.class` annotation, use `ingress.ingressClassName` instead. + +### 3.7 Update gRPC probe configuration + +If using gRPC, explicit `probes.*.grpc.port` settings are no longer needed — they default to `service.internalPort`. Remove them unless you need a non-default port. + +## Step 4: Verify + +Run these commands and fix any issues: + +```bash +helm dependency update +helm lint . -f env/values-kub-ent-dev.yaml +helm template . -f env/values-kub-ent-dev.yaml +``` + +If lint reports unknown properties, you likely missed a renamed or removed field. Check the migration steps above. + +## Step 5: Summary + +After completing all changes, provide the user with a summary of what was changed, organized by file. Mention any fields that were removed or renamed. 
diff --git a/.github/workflows/helm-docs.yml b/.github/workflows/helm-docs.yml index d21e243..57bbf78 100644 --- a/.github/workflows/helm-docs.yml +++ b/.github/workflows/helm-docs.yml @@ -2,13 +2,17 @@ name: helm-docs-and-examples-update on: pull_request: - branches: - - "release-please--branches--**" + branches: [main] workflow_dispatch: +concurrency: + group: helm-docs-${{ github.head_ref || github.ref_name }} + cancel-in-progress: true + jobs: helm-doc-example-update: name: Update helm chart versions in examples and docs + if: github.event_name == 'workflow_dispatch' || startsWith(github.head_ref, 'release-please--branches--') runs-on: ubuntu-24.04 permissions: contents: write @@ -16,40 +20,30 @@ jobs: - name: Checkout source code uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 with: - ref: ${{ github.ref }} + ref: ${{ github.head_ref }} fetch-depth: 0 - - name: Add helm-docs common changes to release branch - env: - RELEASE_BRANCH: ${{ github.ref_name }} + - name: Update examples and regenerate docs run: | CUR_CHART="common" # TODO get from release-please manifest output - - printf "Updating Helm chart %s documentation for version %s\n" $CUR_CHART $VERSION git config --global user.email "actions@github.com" git config --global user.name "GitHub Actions" - git switch $RELEASE_BRANCH - # get version from release-please-manifest.json - export VERSION=$(jq -r '.["charts/'"$CUR_CHART"'"]' .github/release-please-manifest.json) - printf "Version: %s\n" $VERSION + export VERSION=$(jq -r --arg c "$CUR_CHART" '.["charts/" + $c]' .github/release-please-manifest.json) + printf "Updating Helm chart %s documentation for version %s\n" "$CUR_CHART" "$VERSION" - # Update the version in examples directory - all_charts=$(find ./examples/$CUR_CHART -name Chart.yaml) + all_charts=$(find "./examples/$CUR_CHART" -name Chart.yaml) for chart in $all_charts; do - yq -e -i '(.dependencies[] | select(.name == "'$CUR_CHART'") | .version) = env(VERSION)' "${chart}" 
+ CUR_CHART="$CUR_CHART" yq -e -i '(.dependencies[] | select(.name == strenv(CUR_CHART)) | .version) = env(VERSION)' "${chart}" done - # Install and run helm-docs go install github.com/norwoodj/helm-docs/cmd/helm-docs@37d3055fece566105cf8cff7c17b7b2355a01677 # 1.14.2 - export PATH=${PATH}:`go env GOPATH`/bin + export PATH=${PATH}:$(go env GOPATH)/bin helm-docs - if [ -n "$(git status --porcelain '*.md')" ]; then - git add \*README.md - git add \*Chart.yaml + if [ -n "$(git status --porcelain '*.md' '*Chart.yaml')" ]; then + git add \*README.md \*Chart.yaml git commit -m "docs: update Helm chart documentation" git push else echo "Helm versions are up to date" - exit 0 fi diff --git a/.github/workflows/pull-request.yml b/.github/workflows/pull-request.yml index 7eee049..8465afd 100644 --- a/.github/workflows/pull-request.yml +++ b/.github/workflows/pull-request.yml @@ -13,39 +13,62 @@ jobs: uses: entur/gha-meta/.github/workflows/verify-pr.yml@v1 unittest-common-chart: - uses: entur/gha-helm/.github/workflows/unittest.yml@v1 - with: - chart: charts/common + name: unittest (helm ${{ matrix.helm-version }}) + runs-on: ubuntu-24.04 + strategy: + matrix: + helm-version: [v3.20.0, v4.1.3] + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Set up Helm + uses: azure/setup-helm@dda3372f752e03dde6b3237bc9431cdc2f7a02a2 # v5.0.0 + with: + version: ${{ matrix.helm-version }} + + - name: Install helm-unittest plugin + run: helm plugin install https://github.com/helm-unittest/helm-unittest.git || helm plugin install --verify=false https://github.com/helm-unittest/helm-unittest.git + + - name: Run unit tests + run: helm unittest ./charts/common helm-install-test: - name: helm install + name: helm install (helm ${{ matrix.helm-version }}) runs-on: ubuntu-24.04 needs: unittest-common-chart + strategy: + matrix: + helm-version: [v3.20.0, v4.1.3] steps: - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 - name: Set 
up Helm uses: azure/setup-helm@dda3372f752e03dde6b3237bc9431cdc2f7a02a2 # v5.0.0 + with: + version: ${{ matrix.helm-version }} - name: Create kind cluster uses: helm/kind-action@ef37e7f390d99f746eb8b610417061a60e82a6cc # v1.14.0 with: - node_image: kindest/node:v1.32.3 + node_image: kindest/node:v1.35.1 - - name: Configure metrics and VPA + - name: Configure metrics, VPA and StartupCPUBoost run: | helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/ + helm repo add cowboysysop https://cowboysysop.github.io/charts/ + helm repo add kube-startup-cpu-boost https://google.github.io/kube-startup-cpu-boost helm repo update helm install --set args={--kubelet-insecure-tls} metrics-server metrics-server/metrics-server --namespace kube-system - helm repo add cowboysysop https://cowboysysop.github.io/charts/ helm -n kube-system install vertical-pod-autoscaler cowboysysop/vertical-pod-autoscaler + helm install kube-startup-cpu-boost kube-startup-cpu-boost/kube-startup-cpu-boost --namespace kube-startup-cpu-boost-system --create-namespace --wait --timeout 5m0s - name: Install helm chart run: | - helm install --generate-name --dependency-update --wait --timeout 5m0s charts/common --values fixture/helm/ci/values-ci-tests.yaml - helm install --generate-name --dependency-update --wait --timeout 5m0s charts/common --values fixture/helm/ci/values-ci-cronjob-tests.yaml + helm install ci-deployment --dependency-update --wait --timeout 5m0s charts/common --values fixture/helm/ci/values-ci-tests.yaml + helm install ci-cronjob --dependency-update --wait --timeout 5m0s charts/common --values fixture/helm/ci/values-ci-cronjob-tests.yaml validate-examples: + name: examples (${{ matrix.example }}/${{ matrix.env }}/helm ${{ matrix.helm-version }}) runs-on: ubuntu-24.04 strategy: matrix: @@ -56,8 +79,11 @@ jobs: typical-frontend, multi-container, multi-deploy, + cronjob, + grpc-app, ] env: [dev, tst, prd] + helm-version: [v3.20.0, v4.1.3] steps: - name: Check out the 
repo uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 @@ -67,8 +93,13 @@ jobs: - name: Set up Helm uses: azure/setup-helm@dda3372f752e03dde6b3237bc9431cdc2f7a02a2 # v5.0.0 + with: + version: ${{ matrix.helm-version }} - name: Helm verify examples working-directory: examples/common/${{ matrix.example }} run: | - helm template --dependency-update . -f env/values-kub-ent-${{ matrix.env }}.yaml + mkdir -p charts + cp -r ../../../charts/common charts/common + helm lint . -f env/values-kub-ent-${{ matrix.env }}.yaml + helm template . -f env/values-kub-ent-${{ matrix.env }}.yaml diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..f494b1b --- /dev/null +++ b/.gitignore @@ -0,0 +1 @@ +.tool-versions diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 0000000..f394d00 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,130 @@ +# AGENTS.md — Entur Helm Charts + +## Project Overview + +This repository contains Entur's opinionated Helm charts for deploying applications to Kubernetes. The primary chart is `common` (in `charts/common/`), which provides a convention-over-configuration approach for Spring Boot apps and general workloads. Example charts in `examples/common/` demonstrate usage patterns. + +Compatible with Helm 3 and Helm 4. 
+ +## Repository Structure + +``` +charts/common/ # The main Helm chart (source of truth) + templates/ # Kubernetes resource templates + tests/ # helm-unittest test suites + tests/values/ # Shared test values + values.yaml # Default values (heavily documented) + values.schema.json # JSON Schema for values validation +examples/common/ # 7 example charts showing usage patterns +fixture/helm/ # Fixture values for template rendering validation +.github/workflows/ # CI/CD (PR checks, release, docs generation) +``` + +## Tools + +- **`helm`** — render templates, manage dependencies, package charts +- **`helm unittest`** — run YAML-based unit tests (plugin: helm-unittest) +- **`helm template`** — render and inspect template output with fixture values +- **`helm lint`** — validate chart structure and values against JSON Schema +- **`helm-docs`** — auto-generate README.md from values.yaml comments and Chart.yaml description +- **`gh`** — GitHub CLI for issues, PRs, releases, and CI status + +## Development Commands + +```bash +# Run unit tests (primary validation step — always run after any template/values change) +helm unittest ./charts/common + +# Run a single test file +helm unittest ./charts/common -f tests/pdb_test.yaml + +# Lint chart with schema validation +helm lint charts/common -f fixture/helm/values-minimal.yaml + +# Render templates with fixture values to verify output +helm template charts/common -f fixture/helm/values-minimal.yaml +helm template charts/common -f fixture/helm/values-cron.yaml +helm template charts/common -f fixture/helm/values-secrets.yaml +helm template charts/common -f fixture/helm/values-postgres.yaml +helm template charts/common -f fixture/helm/values-postgres-multi.yaml + +# Render a single template +helm template test charts/common -f fixture/helm/values-minimal.yaml --show-only templates/pdb.yaml + +# Render with value overrides (useful for testing specific scenarios) +helm template test charts/common -f fixture/helm/values-minimal.yaml 
--set env=prd --set deployment.minReplicas=3 + +# Regenerate chart documentation (README.md files) — run after changing values.yaml or Chart.yaml +helm-docs + +# Update dependencies for example charts after version bump +for chart in examples/common/*/; do helm dependency update "$chart"; done + +# View GitHub issues +gh issue view --repo entur/helm-charts +gh issue view --repo entur/helm-charts --comments +``` + +## Testing + +- **Framework**: [helm-unittest](https://github.com/helm-unittest/helm-unittest) — YAML-based declarative assertions +- **Test location**: `charts/common/tests/*_test.yaml` +- **Shared test values**: `charts/common/tests/values/common-test-values.yaml` +- **Snapshots**: `charts/common/tests/__snapshot__/` +- **Always run `helm unittest ./charts/common` after modifying any template or values** +- Tests cover: deployment, service, ingress, HPA, PDB, VPA, configmap, secrets, cron, sql-proxy, sql-credentials + +## Key Conventions + +### Commit Messages +Uses [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/): +- `feat!:` — breaking change (major version bump) +- `feat:` — new feature (minor version bump) +- `fix:` — bug fix (patch version bump) +- `docs:` / `ci:` / `chore:` — no version bump + +### Chart Design Principles +- HPA is always enabled (`minReplicas` defaults to 2 in `prd`, 1 elsewhere; use `forceReplicas` to opt out) +- Memory limit always equals memory request +- Security: non-root, no privilege escalation, drop all capabilities, seccompProfile RuntimeDefault +- Traffic types must be explicit: `api`, `public`, or `http2` +- Template helpers are in `charts/common/templates/_helpers.tpl` +- Deprecated values use `fail` to give clear migration messages +- Scaling fields (minReplicas, maxReplicas, forceReplicas, minAvailable) belong under `deployment.*` only — not `container.*` +- Container-specific fields (cpu, memory, image, probes, env, ports, lifecycle) belong under `container.*` + +### Values Patterns +- Required fields: `app`, `appId`, `team`, `env`,
`container.image` +- Environment values: `sbx`, `dev`, `tst`, `prd` +- Single container: use `container:` key +- Multiple containers: use `containers:` list +- Environment-specific overrides go in `env/values-kub-ent-{dev,tst,prd}.yaml` +- Postgres/Cloud SQL: `postgres.enabled: true` defaults to `secretKeyPrefix: PG`. For multiple instances use `postgres.instances: [{secretKeyPrefix: PG}, {secretKeyPrefix: ANALYTICS_PG}]`. The `secretKeyPrefix` is the contract with the `entur/terraform-google-sql-db` Terraform module — it derives Secret Manager keys `{prefix}INSTANCES`, `{prefix}USER`, `{prefix}PASSWORD` +- gRPC: set `grpc: true` — native K8s gRPC probes are used automatically with `service.internalPort` +- Custom HPA metrics: use `hpa.metrics` list to add Pods/External/Object metrics alongside default CPU + +### Documentation +- `README.md` files in `charts/` and `examples/` are **auto-generated** by `helm-docs` — never edit them manually +- To update documentation: edit `values.yaml` comments (use `# --` prefix for helm-docs) or `Chart.yaml` description, then run `helm-docs` +- After any change to `values.yaml`, `Chart.yaml`, or `values.schema.json`: run `helm-docs` to regenerate README.md files +- `values.schema.json` must be updated manually when adding/removing/renaming values fields + +## CI/CD + +- **PR checks**: lint, unit tests (Helm 3 + 4), kind cluster install tests (Helm 3 + 4), example validation (Helm 3 + 4) +- **Release**: automated via release-please with semantic versioning; tags like `common-v2.0.0` +- **Docs**: auto-generated on release branches via helm-docs workflow +- **Ownership**: `@entur/team-plattform` (see CODEOWNERS) + +## Important Notes + +- `README.md` files in charts are **auto-generated** by helm-docs — do not edit them manually; edit `values.yaml` comments or `Chart.yaml` description instead +- Example charts pin their dependency on `common` — update both `Chart.yaml` version and run `helm dependency update` when changing +- The 
chart supports both Deployment and CronJob workloads (mutually exclusive via `deployment.enabled` / `cron.enabled`) +- Fixture values in `fixture/helm/` are used for CI template rendering validation +- `shortname` is removed — use `appId` (matches GoogleCloudApplication `metadata.id`) +- `postgres.connectionConfig` is removed — use `postgres.enabled: true` (defaults to `secretKeyPrefix: PG`) +- `postgres.termTimeout` is removed — use `postgres.maxSigtermDelay` +- `deployment.replicas` is removed — use `deployment.minReplicas` (HPA controls pod count) +- `container.memoryLimit` is removed — memory limit always equals memory request +- `pdb.minAvailable` is removed — use `deployment.minAvailable` diff --git a/CLAUDE.md b/CLAUDE.md new file mode 120000 index 0000000..47dc3e3 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1 @@ +AGENTS.md \ No newline at end of file diff --git a/README.md b/README.md index 1e410ce..09fb574 100644 --- a/README.md +++ b/README.md @@ -10,6 +10,43 @@ Browse our [examples/common](./examples/common) folder for quick examples to get Full documentation on each chart you can find in the [charts/](./charts/) folders. +## IDE Support + +The chart includes a [JSON Schema](./charts/common/values.schema.json) for `values.yaml` validation. This gives you autocompletion, inline documentation, and error highlighting in your IDE. + +### VS Code + +Install the [YAML extension](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml) and add this to your `values.yaml`: + +```yaml +# yaml-language-server: $schema=https://raw.githubusercontent.com/entur/helm-charts/main/charts/common/values.schema.json +common: + app: my-app + ... 
+``` + +Or configure it in `.vscode/settings.json`: + +```json +{ + "yaml.schemas": { + "https://raw.githubusercontent.com/entur/helm-charts/main/charts/common/values.schema.json": ["values*.yaml", "env/values*.yaml"] + } +} +``` + +### JetBrains (IntelliJ / GoLand) + +Go to **Settings → Languages & Frameworks → Schemas and DTDs → JSON Schema Mappings**, add a new mapping: +- **Schema URL**: `https://raw.githubusercontent.com/entur/helm-charts/main/charts/common/values.schema.json` +- **File path pattern**: `values*.yaml` + +Note: Since the chart is used as a dependency, your values are nested under `common:`. The schema validates the properties under that key. + +## Upgrading + +See [UPGRADE.md](UPGRADE.md) for migration guides between major versions. + ## Contributing For guidance on how to contribute, see our [contribution documentation](CONTRIBUTING.md). diff --git a/UPGRADE.md b/UPGRADE.md new file mode 100644 index 0000000..359c9ad --- /dev/null +++ b/UPGRADE.md @@ -0,0 +1,235 @@ +# Upgrading to common chart v2 + +This guide covers all breaking changes and required migration steps when upgrading from v1 to v2 of the `common` Helm chart. + +## Prerequisites + +- Helm 3.x or Helm 4.x +- External Secrets Operator installed (required if using `postgres` or `secrets`) +- Optionally: [kube-startup-cpu-boost](https://github.com/google/kube-startup-cpu-boost) operator for CPU boost feature + +## Breaking Changes + +### 1. `shortname` renamed to `appId` + +The `shortname` field is removed. Use `appId` instead. This aligns with the GoogleCloudApplication [`metadata.id`](https://github.com/entur/tf-gcp-apps/blob/main/docs/manifests/GoogleCloudApplication.md) field. + +```yaml +# v1 +common: + shortname: myapp + +# v2 +common: + appId: myapp +``` + +The Kubernetes label `shortname` is still emitted for backwards compatibility alongside the new `appId` label. + +### 2. 
Scaling fields replaced + +Scaling fields have been removed from `container.*` and consolidated under `deployment.*`. Additionally, `deployment.replicas` is replaced by `deployment.minReplicas`. + +**HPA is now always enabled** (unless `forceReplicas` is set). The Deployment spec never emits `replicas` — HPA controls the pod count in all environments. This prevents the v1 bug where `helm upgrade` would reset HPA-managed replica counts. + +| Removed (v1) | Replacement (v2) | +|---|---| +| `container.replicas` | `deployment.minReplicas` | +| `deployment.replicas` | `deployment.minReplicas` | +| `container.maxReplicas` | `deployment.maxReplicas` | +| `container.forceReplicas` | `deployment.forceReplicas` | +| `container.minAvailable` | `deployment.minAvailable` | +| `container.terminationGracePeriodSeconds` | `deployment.terminationGracePeriodSeconds` | + +Default `minReplicas` by environment: +- `sbx`/`dev`/`tst`: **1** (scales down to single pod in low traffic) +- `prd`: **2** (HA by default) + +```yaml +# v1 +common: + container: + replicas: 2 + maxReplicas: 5 + +# v2 +common: + deployment: + minReplicas: 2 # HPA minimum + maxReplicas: 5 # HPA maximum +``` + +To disable HPA and fix a specific replica count, use `forceReplicas`: + +```yaml +common: + deployment: + forceReplicas: 3 # disables HPA, fixed at 3 pods +``` + +### 3. `container.memoryLimit` removed + +Memory limit is now always equal to memory request. The 1.2x multiplier and `memoryLimit` override are removed. Set `container.memory` to the value you need for both request and limit. + +```yaml +# v1 +common: + container: + memory: 512 + memoryLimit: 1024 + +# v2 +common: + container: + memory: 1024 # sets both request and limit +``` + +### 4. Cloud SQL Proxy — `secretKeyPrefix` integration + +The postgres integration now uses `secretKeyPrefix` as the single contract with the `entur/terraform-google-sql-db` Terraform module. 
Given a prefix, the chart derives all Secret Manager key names and fetches everything via External Secrets. Terraform-created Kubernetes secrets are no longer needed. + +**`postgres.instances` format changed.** Items are now objects with `secretKeyPrefix` instead of raw Secret Manager key names. When `enabled: true` with no `instances`, the chart defaults to `[{secretKeyPrefix: PG}]`. + +**`postgres.connectionConfig`, `postgres.memoryLimit`, and `postgres.termTimeout` are removed.** The `termTimeout` field is replaced by `postgres.maxSigtermDelay` to match the Cloud SQL Proxy v2 flag name. + +```yaml +# v1 +common: + postgres: + enabled: true + connectionConfig: my-app-psql-connection + +# v2 (simplest — uses default PG prefix) +common: + postgres: + enabled: true + +# v2 (explicit prefix) +common: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + +# v2 (multiple instances) +common: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + - secretKeyPrefix: ANALYTICS_PG +``` + +**What changed:** + +- Credentials (`{prefix}USER`, `{prefix}PASSWORD`) are now fetched from Secret Manager via External Secrets, not from a Terraform-created Kubernetes secret. +- The chart generates `{prefix}HOST=localhost` and `{prefix}PORT=5432+index` as environment variables. +- A new `sql-credentials` ExternalSecret is created alongside the existing `sql-proxy` ExternalSecret. +- `credentialsSecret` still works as an escape hatch for custom credential sources. +- `postgres.maxSigtermDelay` replaces `postgres.termTimeout` — controls the delay before the proxy begins shutdown after SIGTERM (default `30s`). + +**Migration steps:** + +1. Replace `instances: [PGINSTANCES]` with `instances: [{secretKeyPrefix: PG}]`, or simply use `enabled: true` for the default `PG` prefix. +2. Ensure `{prefix}USER`, `{prefix}PASSWORD`, and `{prefix}INSTANCES` exist in Secret Manager (the `entur/terraform-google-sql-db` module creates these). +3. 
Optionally set `create_kubernetes_resources: false` in your Terraform module — the chart no longer uses Terraform-created Kubernetes secrets. +4. For multiple databases, list each Terraform module's `secret_key_prefix` as a separate entry in `instances`. + +### 5. `ingress.class` annotation replaced with `spec.ingressClassName` + +The deprecated `kubernetes.io/ingress.class` annotation is removed. Ingress now uses `spec.ingressClassName` (defaults to `traefik`). + +### 6. `configmap.toEnv` is removed + +If you get a schema error like `configmap.toEnv is no longer valid in v2`, switch to `container.envFrom` to mount the configmap as environment variables: + +```yaml +# v1 +common: + configmap: + enabled: true + toEnv: true + data: + MY_CONFIG: "value" + +# v2 +common: + configmap: + enabled: true + data: + MY_CONFIG: "value" + container: + envFrom: + - configMapRef: + name: my-app # matches your release name +``` + +Note: The configmap is automatically mounted via `envFrom` when `configmap.enabled: true`. You only need explicit `envFrom` if you renamed the configmap or need additional control. + +## New Features (no action required) + +### HPA always enabled +- HPA is now enabled in all environments, not just `prd`. Default `minReplicas` is 1 for sbx/dev/tst and 2 for prd. +- When `startupCPUBoost` is disabled, a 120s scaleUp stabilization window prevents startup CPU spikes from triggering unnecessary scale-ups. Tune via `hpa.stabilizationWindowSeconds` to match your app's startup time. + +### PDB improvements +- `unhealthyPodEvictionPolicy: AlwaysAllow` prevents unhealthy pods from blocking cluster upgrades. +- PDB now correctly protects pods when `forceReplicas > 1` (was incorrectly set to 0%). + +### GKE Startup CPU Boost +- Disabled by default. Enable with `deployment.startupCPUBoost.enabled: true` (requires the operator installed in the cluster). +- Temporarily increases CPU by 50% during startup, reverts when pod becomes Ready. 
+- When enabled, a CPU limit of 1.3x the CPU request is automatically set. +- HPA default `cpuUtilization` lowered from 100% to 70%. + +### gRPC native probes +- Setting `grpc: true` now uses native Kubernetes gRPC probes by default, using `service.internalPort`. +- No longer requires the `/bin/grpc_health_probe` binary in your container image. +- No need to set `probes.liveness.grpc.port` etc. — ports default to `service.internalPort`. + +### Startup probe path +- `container.probes.startup.path` — when set, the startup probe switches from `tcpSocket` to `httpGet`. + +### Custom HPA metrics +- `hpa.metrics` — append custom metrics (Pods, External, Object) alongside default CPU scaling. +- `deployment.cpuUtilization` — set HPA CPU target (default 70%). +- `hpa.stabilizationWindowSeconds` — tune scaleUp delay (default 120s when CPU boost is disabled). + +### Per-ingress annotations and ingressClassName +- Each entry in `ingresses` list can now have its own `annotations` and `ingressClassName`. + +### Cloud SQL Proxy v2 features +- Prometheus metrics exposed on port 9801 (`/metrics`). +- Support for multiple databases via `postgres.instances` list. +- `postgres.maxSigtermDelay` — configurable shutdown delay (default `30s`). 
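+
+Several of the new tunables above can be combined in one values file. This is a hedged sketch: the exact shape of `hpa.metrics` entries is an assumption, treated here as literal `autoscaling/v2` MetricSpec objects per the Pods/External/Object note above.
+
+```yaml
+common:
+  deployment:
+    cpuUtilization: 60                # HPA CPU target, overriding the 70% default
+  hpa:
+    stabilizationWindowSeconds: 180   # cover a slow JVM startup
+    metrics:
+      - type: Pods                    # assumed MetricSpec shape
+        pods:
+          metric:
+            name: http_requests_per_second
+          target:
+            type: AverageValue
+            averageValue: "100"
+  postgres:
+    enabled: true
+    maxSigtermDelay: 60s              # proxy shutdown delay, default 30s
+```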
+ +## Quick Migration Checklist + +- [ ] Rename `shortname` to `appId` in all values files +- [ ] Replace `container.replicas` / `deployment.replicas` → `deployment.minReplicas` +- [ ] Replace `container.maxReplicas` → `deployment.maxReplicas` +- [ ] Replace `container.forceReplicas` → `deployment.forceReplicas` +- [ ] Replace `container.minAvailable` → `deployment.minAvailable` +- [ ] Replace `container.terminationGracePeriodSeconds` → `deployment.terminationGracePeriodSeconds` +- [ ] Remove `container.memoryLimit` / `postgres.memoryLimit` — set `memory` to the value you need +- [ ] Replace `postgres.connectionConfig` with `postgres.enabled: true` or `postgres.instances: [{secretKeyPrefix: PG}]` +- [ ] Replace `postgres.termTimeout` with `postgres.maxSigtermDelay` (if set) +- [ ] If using gRPC: remove explicit `probes.*.grpc.port` settings (now defaults to `service.internalPort`) +- [ ] Update `Chart.yaml` dependency version to v2 +- [ ] Run `helm dependency update` +- [ ] Run `helm lint . -f env/values-kub-ent-dev.yaml` to catch unknown properties and schema errors +- [ ] Run `helm template . -f env/values-kub-ent-dev.yaml` to verify rendered output + +## Automated upgrade with an AI coding agent + +Paste this prompt into Claude Code, Copilot, Cursor, or any AI coding agent from **your application's repo**: + +``` +Upgrade the Entur common Helm chart dependency from v1 to v2. + +Read the upgrade skill and follow its instructions: + https://raw.githubusercontent.com/entur/helm-charts/main/.claude/skills/upgrade-common-chart/SKILL.md + +Apply all migration steps to every values file in this repository. +Run `helm dependency update` and `helm lint` to verify. 
+``` diff --git a/charts/common/Chart.yaml b/charts/common/Chart.yaml index dbf4dd6..7613c9e 100644 --- a/charts/common/Chart.yaml +++ b/charts/common/Chart.yaml @@ -1,5 +1,6 @@ apiVersion: v2 name: common +icon: https://avatars.githubusercontent.com/u/23213604?s=96&v=4 description: > A Helm chart for Entur's Kubernetes workloads @@ -9,22 +10,26 @@ description: > * Defaults typically match a properly configured Spring Boot project - * Automatic HA with HPA and PDB in `prd` + * Automatic HA with HPA and PDB in all environments (minReplicas: 2 by default) - * Enforces explicit setting of important aspecs such as traffic type + * Enforces explicit setting of important aspects such as traffic type * Rule based safety net, a chart that breaks business rules will fail with a helpful message - * Convention based automatic limit configuration. Cpu is 5x request, and memory is +20%. + * JSON Schema validation catches typos and unknown properties on `helm lint` + + * Compatible with Helm 3 and Helm 4 ## Take full control - * Most properties can be overriden to your specific needs. + * Most properties can be overridden to your specific needs. * Read the values.yaml file to get template documentation. + * See [UPGRADE.md](../../UPGRADE.md) for migration guides between major versions. + ### Fully customize `container.probes.spec` and `hpa.spec` with literal Kubernetes configuration
@@ -80,5 +85,5 @@ description: >
type: application -version: 1.22.0 +version: 2.0.0 appVersion: 0.0.1 diff --git a/charts/common/README.md b/charts/common/README.md index 2f6ba23..8969e0d 100644 --- a/charts/common/README.md +++ b/charts/common/README.md @@ -1,21 +1,23 @@ # common -![Version: 1.21.1](https://img.shields.io/badge/Version-1.21.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 2.0.0](https://img.shields.io/badge/Version-2.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) A Helm chart for Entur's Kubernetes workloads ## Highlighted features * Defaults typically match a properly configured Spring Boot project -* Automatic HA with HPA and PDB in `prd` +* Automatic HA with HPA and PDB in all environments (minReplicas: 2 by default) * Enforces explicit setting of important aspects such as traffic type * Rule based safety net, a chart that breaks business rules will fail with a helpful message -* Convention based automatic limit configuration. Cpu is 5x request, and memory is +20%. +* JSON Schema validation catches typos and unknown properties on `helm lint` +* Compatible with Helm 3 and Helm 4 ## Take full control * Most properties can be overridden to your specific needs. * Read the values.yaml file to get template documentation. +* See [UPGRADE.md](../../UPGRADE.md) for migration guides between major versions. ### Fully customize `container.probes.spec` and `hpa.spec` with literal Kubernetes configuration
@@ -70,21 +72,19 @@ common: | Key | Type | Default | Description | |-----|------|---------|-------------| | app | string | `nil` | Application name, typically on the form `the-application` | +| appId | string | `nil` | App ID from GoogleCloudApplication `metadata.id`. Max 10 alphanumeric characters. See https://github.com/entur/tf-gcp-apps/blob/main/docs/manifests/GoogleCloudApplication.md | | configmap.data | object | `{}` | Set data for configmap | | configmap.enabled | bool | false | Enable or disable the configmap | | container.args | string | `nil` | Optionally set the arguments that will be passed to the command, e.g. ["arg1","arg2"]. | | container.command | string | `nil` | Optionally set the command that will run in the pod. If not set, the entrypoint for the container-image is used (recommended for most Java-apps). | -| container.cpu | float | 0.1 | Set CPU without any unit. 100m is 0.1 | +| container.cpu | float | 0.3 | Set CPU request without any unit. 100m is 0.1. Default is sized for JVM/Spring Boot apps; lighter workloads (sidecars, small Go services, static frontends) should override down. | | container.cpuLimit | float | `5 x cpu` | Set CPU limit without any unit. 
100m is 0.1 | | container.env | list | `[]` | Specify `env` entries for your container | | container.envFrom | list | `[]` | Attach secrets and configmaps to your `env` | -| container.forceReplicas | int | `nil` | Force replicas disables autoscaling and PDB, if set to 1 it will use Recreate strategy | | container.labels | object | `{}` | Add labels to your pods | | container.lifecycle | object | `{}` | Set pod lifecycle handlers | -| container.maxReplicas | int | `nil` | Set the maxReplicas for your HPA | -| container.memory | int | 16 | Set memory without any unit, `Mi` is inferred | -| container.memoryLimit | string | `1.2 * memory` | Set memory limit without any unit, `Mi` is inferred | -| container.minAvailable | string | 50% | Set the minimal available replicas, used by PDB | +| container.memory | int | 512 | Set memory request without any unit, `Mi` is inferred. Memory limit always equals request. Default is sized for JVM/Spring Boot apps (the JVM alone needs ~150–250 MiB before app code runs); lighter workloads should override down. | +| container.memoryLimit | string | `nil` | @deprecated memoryLimit is removed. Memory limit is now always equal to memory request. Use `container.memory` instead. | | container.name | string | .app | Name of container | | container.probes.enabled | bool | `true` | Enable or disable probes | | container.probes.liveness.failureThreshold | int | 6 | Set the failure threshold | @@ -102,65 +102,71 @@ common: | container.probes.spec | string | `nil` | Override with k8s spec for custom probes | | container.probes.startup.failureThreshold | int | 300 | Set the failure threshold | | container.probes.startup.grpc | string | `nil` | Specify grpc probes for a port. Needs `port` child stanza | +| container.probes.startup.path | string | `nil` | Set the path for startup probe. If set, uses httpGet instead of tcpSocket. Useful when startup includes long-running tasks like cache warming. 
| | container.probes.startup.periodSeconds | int | 1 | Set the period of checking | | container.prometheus.enabled | bool | `false` | Enable or disable Prometheus | | container.prometheus.path | string | /actuator/prometheus | Set the path for scraping metrics | | container.prometheus.port | int | service.internalPort | Set the port for prometheus scraping | -| container.replicas | int | `nil` | Set the target replica count, if equal to 1 the PDB minAvailable will be set to 100% | -| container.terminationGracePeriodSeconds | int | `nil` | Override pod terminationGracePeriodSeconds (default 30s). | | container.uid | int | 1000 | Set the uid that your user runs with | | container.volumeMounts | list | `[]` | Configure volume mounts, accepts kubernetes syntax | | container.volumes | list | `[]` | Configure volume, accepts kubernetes syntax | | containers | list | `[]` | Takes a list of `container` entries, you must add a `name` field for each entry | | cron.activeDeadlineSeconds | int | `nil` | Active deadline seconds for the job, default 24 hours (86300s) | | cron.concurrencyPolicy | string | Forbid | Concurrency policy | -| cron.enabled | bool | `false` | Generate a CronJob resource. Requires `cron.schedule` to be set. Set `deployment.enabled: false` if you only want a CronJob. | +| cron.enabled | bool | `false` | Enable or disable the cron job | | cron.failedJobsHistoryLimit | int | 1 | Failed jobs history limit | | cron.labels | object | `{}` | Add labels to your pods | | cron.restartPolicy | string | OnFailure | Override pod restartPolicy (default OnFailure). | -| cron.schedule | string | `nil` | Required crontab schedule `* * * * *` | +| cron.schedule | string | `nil` | Required crontab schedule `* * * * *` | | cron.serviceAccountName | string | application | Override pod serviceAccountName (default application). 
| | cron.successfulJobsHistoryLimit | int | 1 | Successful jobs history limit | | cron.suspend | string | false | Suspend flag | | cron.terminationGracePeriodSeconds | int | false | Override pod terminationGracePeriodSeconds (default 30s). | | cron.volumes | list | `[]` | Configure volume, accepts kubernetes syntax | -| deployment.enabled | bool | `true` | Generate a Deployment resource | -| deployment.forceReplicas | int | `nil` | Force replicas disables autoscaling and PDB, if set to 1 it will use Recreate strategy | +| deployment.cpuUtilization | string | 70 | Set the target CPU average utilization (%) for HPA scaling. With startupCPUBoost enabled, 70% is a good default. Without it, 100% may be needed for Java apps with heavy startup CPU usage. | +| deployment.enabled | bool | `true` | Enable or disable the deployment | +| deployment.forceReplicas | int | `nil` | Force a fixed replica count, disables HPA and PDB. If set to 1 it will use Recreate strategy. | | deployment.labels | object | `{}` | Add labels to your pods | -| deployment.maxReplicas | string | 10 | Set the max replica count | -| deployment.maxSurge | string | 25% | Limit max surge for rolling updates (default 25%). Not in use when using forceReplicas. | -| deployment.maxUnavailable | string | 25% | Limit max unavailable for rolling updates (default 25%). Not in use when using forceReplicas. | -| deployment.minAvailable | string | 50% | Set minimum available % | +| deployment.maxReplicas | int | 10 | Set the max replica count for HPA | +| deployment.maxSurge | string | 1 | Limit max surge for rolling updates. Accepts an integer (pod count) or a string percentage (e.g. "25%"). Not in use when using forceReplicas. | +| deployment.maxUnavailable | string | 1 | Limit max unavailable for rolling updates. Accepts an integer (pod count) or a string percentage (e.g. "25%"). Not in use when using forceReplicas. 
| +| deployment.minAvailable | string | 50% | Set minimum available % for PDB | | deployment.minReadySeconds | int | 0 | See https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds | -| deployment.replicas | string | container.replicas | Set the target replica count | +| deployment.minReplicas | int | 2 | Set the minimum replica count for HPA. | | deployment.serviceAccountName | string | application | Override pod serviceAccountName (default application). | +| deployment.startupCPUBoost.enabled | bool | `false` | Enable GKE Startup CPU Boost to temporarily increase CPU during pod startup. Requires the kube-startup-cpu-boost operator installed in the cluster. Boost is reverted when the pod becomes Ready. When enabled, a CPU limit of 1.3x the CPU request is automatically set (unless `container.cpuLimit` is explicitly configured). | +| deployment.startupCPUBoost.percentageIncrease | int | 50 | Percentage to increase CPU requests during startup | | deployment.terminationGracePeriodSeconds | int | `nil` | Override pod terminationGracePeriodSeconds (default 30s). | | deployment.volumes | list | `[]` | Configure volume, accepts kubernetes syntax | | env | string | `nil` | The current env, override in your `values-kub-ent-$env.yaml` files to `dev`, `tst` or `prd` | | grpc | bool | `false` | Enable gRPC which will add an annotation and use grpc probes | -| hpa.spec | object | `{}` | Custom spec for HPA, inherits `scaleTargetRef` and min/max replicas. ps: Reason why we have set 100% cpu as default is because the java applications are resource hogs during startup. If you have good startupProbe/readinessProbes in place you can lower the cpu average utilization to ie 50/60%. - Or scale on other (custom) metrics. | +| hpa.metrics | list | [] | Additional HPA metrics appended alongside the default CPU metric. Accepts standard `autoscaling/v2` metric entries (Pods, Object, External). 
Use for scaling on custom metrics from Cloud Monitoring, Prometheus (GMP), or Pub/Sub. When multiple metrics are specified, HPA picks the one demanding the most replicas. | +| hpa.spec | object | `{}` | Full custom spec for HPA, replaces default metrics and min/max replicas. Inherits `scaleTargetRef`. | +| hpa.stabilizationWindowSeconds | int | 120 | Seconds to wait before scaling up after a metric spike. Only applied when startupCPUBoost is disabled, to avoid scaling on startup CPU spikes. Tune this to match your application's typical startup time (e.g. 60s for a fast app, 300s for a heavy Spring Boot app with cache warming). | | ingress.annotations | object | `{}` | Optionally set annotations for the ingress | | ingress.enabled | bool | `true` | Enable or disable the ingress | | ingress.host | string | `nil` | Set the host name, do this in your `values-kub-ent-$env.yaml` files | -| ingress.trafficType | string | `nil` | Set the traffic type (`api`,`public` or `http2` for gRPC) | +| ingress.ingressClassName | string | traefik | Set the IngressClass name. Uses `spec.ingressClassName` (replaces the deprecated `kubernetes.io/ingress.class` annotation). | +| ingress.trafficType | string | `nil` | Set the traffic type (`api`,`public` or `http2` for gRPC). Note: changing this value will cause a couple of minutes of downtime while the ingress controller reconciles. | | ingresses | list | `[]` | Specify a list of `ingress` specs | | initContainers | list | `[]` | See: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ | | labels | object | `{ app shortname team common:version environment }` | Specify additional labels for every resource | -| pdb.minAvailable | string | 50% | Set minimum available %, this overrides pdb setting minAvailable in deployment/container | -| postgres.connectionConfig | string | `nil` | Override name for connection configmap. This must at least contain `INSTANCES`. 
| +| pdb | object | `{}` | | | postgres.cpu | float | 0.05 | Configure cpu request for proxy | | postgres.cpuLimit | float | `nil` | Configure optional cpu limit for proxy | -| postgres.credentialsSecret | string | `nil` | Override name for credentials secret. This must at least contain `PGUSER` and `PGPASSWORD`. | -| postgres.enabled | bool | false | Enable or disable the proxy | +| postgres.credentialsSecret | string | `nil` | Override the Kubernetes secret name for credentials. Bypasses the ExternalSecret for credentials; the proxy ExternalSecret is still created. The secret must contain the expected env vars (e.g. `PGUSER`, `PGPASSWORD`). | +| postgres.enabled | bool | false | Enable or disable the Cloud SQL proxy v2 sidecar | +| postgres.instances | list | [] | List of database connections keyed by Terraform `secret_key_prefix`. Each entry derives Secret Manager keys: `{prefix}INSTANCES`, `{prefix}USER`, `{prefix}PASSWORD`. The chart generates `{prefix}HOST=localhost` and `{prefix}PORT=5432+index`. When empty and `enabled: true`, defaults to `[{secretKeyPrefix: PG}]`. | +| postgres.maxSigtermDelay | string | 30s | Override the max-sigterm-delay for the Cloud SQL Proxy. Adds a delay before the proxy begins shutdown after receiving SIGTERM, useful for allowing load balancers to deregister the pod. 
| | postgres.memory | int | 16 | Configure memory request for proxy without units, `Mi` inferred | -| postgres.memoryLimit | int | 16 | Configure memoryLimit for proxy without units, `Mi` inferred | | releaseName | string | `nil` | Override release name, useful for multiple deployments | | secrets | object | `{}` | Add externalSecret to sync secrets from secret manager | | service.annotations | object | `{}` | Optionally set annotations for the service | | service.enabled | bool | `true` | Enable or disable the service | | service.externalPort | int | 80 | Set the external port for your service | | service.internalPort | int | 8080 | Set the internal port for your service | -| shortname | string | `nil` | `id` for GCP 2.0, typically on the form `theapp`. Max 10 characters | | team | string | `nil` | Your team name, without a `team-` prefix | | vpa.enabled | bool | `true` | Enable Vertical Pod Autoscaler to get resource requirement and limit recommendations | +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/charts/common/templates/_helpers.tpl b/charts/common/templates/_helpers.tpl index 760a4ac..5084b2d 100644 --- a/charts/common/templates/_helpers.tpl +++ b/charts/common/templates/_helpers.tpl @@ -4,7 +4,11 @@ {{- define "labels" }} app: {{ empty .Values.releaseName | ternary .Release.Name .Values.releaseName }} -shortname: {{ .Values.shortname }} +{{- if and (not .Values.appId) .Values.shortname }} + {{- fail "shortname is deprecated. Use appId instead." 
}} +{{- end }} +appId: {{ .Values.appId }} +shortname: {{ .Values.appId }} team: {{ .Values.team }} common: {{ .Chart.Version }} environment: {{ .Values.env }} @@ -53,12 +57,11 @@ resources: limits: {{- if .cpuLimit }} cpu: "{{ .cpuLimit| float64 }}" + {{- else if .startupCPUBoostEnabled }} + {{- /* When CPU boost is enabled, set limit to 1.3x request so the boost operator has a ceiling to work within */}} + cpu: "{{ printf "%.2f" (divf (mulf .cpu 13) 10) }}" {{- end }} - {{- if .memoryLimit }} - memory: "{{ .memoryLimit }}Mi" - {{- else }} - memory: "{{ (div (mul .memory 6) 5) }}Mi" - {{- end }} + memory: "{{ .memory }}Mi" {{- if .ephemeralStorageLimit }} ephemeral-storage: "{{ .ephemeralStorageLimit }}" {{- end }} @@ -71,13 +74,26 @@ resources: {{- end }} {{- define "environment" }} +{{- $postgresInstances := list }} +{{- if .postgres.enabled }} + {{- $postgresInstances = .postgres.instances | default list }} + {{- if eq (len $postgresInstances) 0 }} + {{- $postgresInstances = list (dict "secretKeyPrefix" "PG") }} + {{- end }} +{{- end }} env: - name: COMMON_ENV value: {{ .envLabel }} + {{- range $i, $inst := $postgresInstances }} + - name: {{ $inst.secretKeyPrefix }}HOST + value: "localhost" + - name: {{ $inst.secretKeyPrefix }}PORT + value: "{{ $inst.port | default (add 5432 $i) }}" + {{- end }} {{- if .env }} {{- toYaml .env | nindent 2 }} {{ end }} -{{- if or .envFrom .configmap.enabled .postgres.enabled .secrets}} +{{- if or .envFrom .configmap.enabled (gt (len $postgresInstances) 0) .secrets}} envFrom: {{- if .envFrom }} {{- toYaml .envFrom | nindent 2 }} @@ -86,12 +102,12 @@ envFrom: - configMapRef: name: {{ .releaseName }} {{- end }} - {{- if .postgres.enabled }} + {{- if gt (len $postgresInstances) 0 }} - secretRef: {{- if .postgres.credentialsSecret }} name: {{ .postgres.credentialsSecret }} {{- else }} - name: {{ .app }}-psql-credentials + name: {{ .releaseName }}-sql-credentials {{- end }} {{- end }} {{- if .secrets }} @@ -121,8 +137,14 @@ 
readinessProbe: failureThreshold: {{ .probes.readiness.failureThreshold | default 6 }} periodSeconds: {{ .probes.readiness.periodSeconds | default 5 }} startupProbe: + {{- if .probes.startup.path }} + httpGet: + path: {{ .probes.startup.path }} + port: {{ .probes.startup.port | default .internalPort }} + {{- else }} tcpSocket: port: {{ .probes.startup.port | default .internalPort }} + {{- end }} failureThreshold: {{ .probes.startup.failureThreshold | default 300 }} periodSeconds: {{ .probes.startup.periodSeconds | default 1 }} {{- end }} @@ -130,60 +152,40 @@ startupProbe: {{- define "grpcprobes" }} startupProbe: grpc: - port: {{ .probes.startup.grpc.port | default .internalPort }} + port: {{ ((.probes.startup).grpc).port | default .internalPort }} initialDelaySeconds: 10 failureThreshold: 30 periodSeconds: 10 readinessProbe: grpc: - port: {{ .probes.readiness.grpc.port | default .internalPort }} + port: {{ ((.probes.readiness).grpc).port | default .internalPort }} initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 5 livenessProbe: grpc: - port: {{ .probes.liveness.grpc.port | default .internalPort }} + port: {{ ((.probes.liveness).grpc).port | default .internalPort }} initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 5 {{- end }} -{{- define "grpcexecprobes" }} -startupProbe: - exec: - command: ["/bin/grpc_health_probe", "-addr=:{{ .internalPort }}", "-service=ready"] - initialDelaySeconds: 10 - failureThreshold: 30 - periodSeconds: 10 -readinessProbe: - exec: - command: ["/bin/grpc_health_probe", "-addr=:{{ .internalPort }}", "-service=ready"] - initialDelaySeconds: 10 - periodSeconds: 10 - timeoutSeconds: 5 -livenessProbe: - exec: - command: ["/bin/grpc_health_probe", "-addr=:{{ .internalPort }}", "-service=health"] - initialDelaySeconds: 10 - periodSeconds: 10 - timeoutSeconds: 5 -{{- end }} - {{- define "gcloud_sql_proxy" }} - name: "{{ .app }}-sql-proxy" - image: eu.gcr.io/cloudsql-docker/gce-proxy:1.33.16 + image: 
gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.21.2 command: - - "/cloud_sql_proxy" - - "-verbose=false" - - "-log_debug_stdout=true" - - "-structured_logs=true" - - "-term_timeout={{ .postgres.termTimeout | default "30s" }}" + - "/cloud-sql-proxy" + - "--structured-logs" + - "--max-sigterm-delay={{ .postgres.maxSigtermDelay | default "30s" }}" + - "--http-port=9801" + - "--prometheus" + - "--port=5432" + ports: + - name: metrics + containerPort: 9801 + protocol: TCP envFrom: - - configMapRef: - {{- if .postgres.connectionConfig }} - name: {{ .postgres.connectionConfig }} - {{- else }} - name: {{ .app }}-psql-connection - {{- end }} + - secretRef: + name: {{ .releaseName }}-sql-proxy securityContext: runAsNonRoot: true allowPrivilegeEscalation: false @@ -196,21 +198,21 @@ livenessProbe: {{- if .postgres.cpuLimit }} cpu: "{{ .postgres.cpuLimit }}" {{- end }} - {{- if .postgres.memoryLimt }} - memory: "{{ .postgres.memoryLimit }}Mi" - {{- else }} memory: "{{ .postgres.memory }}Mi" - {{- end }} requests: cpu: "{{ .postgres.cpu }}" memory: "{{ .postgres.memory }}Mi" {{- end }} {{- define "hpa.enabled" -}} - {{- if and (not .forceReplicas) (or (eq "prd" .env) .maxReplicas .hpa.spec.minReplicas) -}} - {{- printf "true" -}} - {{- else -}} + {{- if .forceReplicas -}} {{- printf "false" -}} + {{- else -}} + {{- printf "true" -}} {{- end -}} {{- end -}} +{{- define "hpa.minReplicas" -}} + {{- .replicas | default 2 -}} +{{- end -}} + diff --git a/charts/common/templates/cron.yaml b/charts/common/templates/cron.yaml index def2e8f..b031fcd 100644 --- a/charts/common/templates/cron.yaml +++ b/charts/common/templates/cron.yaml @@ -1,7 +1,7 @@ {{- /* Rules */}} {{- $env := .Values.env | required ".Values.common.env is required." -}} {{- $app := .Values.app | required ".Values.common.app is required." -}} -{{- $shortname := .Values.shortname | required ".Values.common.shortname is required." -}} +{{- $appId := .Values.appId | required ".Values.common.appId is required." 
-}} {{- $team := .Values.team | required ".Values.common.team is required." -}} {{- $releaseName := include "name" . -}} @@ -9,6 +9,7 @@ {{- $configmap := .Values.configmap -}} {{- $postgres := .Values.postgres -}} {{- $secrets := .Values.secrets -}} +{{- $startupCPUBoostEnabled := ((.Values.deployment).startupCPUBoost).enabled -}} {{- $cronjob := .Values.cron -}} {{- if $cronjob.enabled -}} {{- /* YAML Spec */}} @@ -52,6 +53,9 @@ spec: spec: serviceAccountName: {{ .Values.cron.serviceAccountName | default "application" }} containers: + {{- if $postgres.enabled }} + {{- include "gcloud_sql_proxy" (dict "postgres" $postgres "app" $app "releaseName" $releaseName) | indent 12 }} + {{- end }} {{ range $containers }} {{- $image := .image | required ".Values.common.container.image is required." -}} {{- printf "\n " -}} @@ -65,7 +69,7 @@ spec: args: {{ toYaml .args | nindent 14 }} {{- end }} {{- include "environment" (dict "releaseName" $releaseName "app" $app "envLabel" $env "env" .env "envFrom" .envFrom "configmap" $configmap "postgres" $postgres "secrets" $secrets) | nindent 14 -}} - {{- include "resources" . | nindent 14 }} + {{- include "resources" (merge (dict "startupCPUBoostEnabled" $startupCPUBoostEnabled) .) | nindent 14 }} {{- include "securitycontext" . 
| nindent 14 }} {{- if .volumeMounts }} volumeMounts: @@ -75,9 +79,6 @@ spec: lifecycle: {{- toYaml .lifecycle | nindent 16 }} {{- end }} - {{- if $postgres.enabled }} - {{- include "gcloud_sql_proxy" (dict "postgres" $postgres "app" $app) | indent 12 }} - {{- end }} {{- end }} {{- if $cronjob.volumes }} volumes: @@ -92,4 +93,6 @@ spec: runAsNonRoot: true runAsUser: {{ .Values.container.uid }} fsGroup: {{ .Values.container.uid }} + seccompProfile: + type: RuntimeDefault {{- end -}} \ No newline at end of file diff --git a/charts/common/templates/deployment.yaml b/charts/common/templates/deployment.yaml index b4ea0f1..019c853 100644 --- a/charts/common/templates/deployment.yaml +++ b/charts/common/templates/deployment.yaml @@ -1,7 +1,7 @@ {{- /* Rules */}} {{- $env := .Values.env | required ".Values.common.env is required." -}} {{- $app := .Values.app | required ".Values.common.app is required." -}} -{{- $shortname := .Values.shortname | required ".Values.common.shortname is required." -}} +{{- $appId := .Values.appId | required ".Values.common.appId is required." -}} {{- $team := .Values.team | required ".Values.common.team is required." -}} {{- $releaseName := include "name" . 
-}} @@ -11,18 +11,16 @@ {{- $secrets := .Values.secrets -}} {{- $internalPort := .Values.service.internalPort -}} {{- $grpc := .Values.grpc -}} -{{- $labels := .Values.deployment.labels | default .Values.container.labels }} -{{- $volumes := .Values.deployment.volumes | default .Values.container.volumes }} -{{- $enabled := .Values.deployment.enabled | default .Values.container.enabled }} +{{- $labels := .Values.deployment.labels }} +{{- $volumes := .Values.deployment.volumes }} +{{- $enabled := .Values.deployment.enabled }} {{- $prometheus := .Values.deployment.prometheus | default .Values.container.prometheus }} -{{- $prometheusPath := (.Values.deployment.prometheus).path | default .Values.container.prometheus.path }} -{{- $replicas := .Values.deployment.replicas | default .Values.container.replicas }} -{{- $forceReplicas := .Values.deployment.forceReplicas | default .Values.container.forceReplicas }} -{{- $maxReplicas := .Values.deployment.maxReplicas | default .Values.container.maxReplicas }} -{{- $terminationGracePeriodSeconds := .Values.deployment.terminationGracePeriodSeconds | default .Values.container.terminationGracePeriodSeconds }} -{{- $maxSurge := .Values.deployment.maxSurge | default "25%" }} -{{- $maxUnavailable := .Values.deployment.maxUnavailable | default "25%" }} -{{- $hpa := .Values.hpa | default dict }} +{{- $prometheusPath := (.Values.deployment.prometheus).path | default .Values.container.prometheus.path | default "/actuator/prometheus" }} +{{- $forceReplicas := .Values.deployment.forceReplicas }} +{{- $terminationGracePeriodSeconds := .Values.deployment.terminationGracePeriodSeconds }} +{{- $maxSurge := .Values.deployment.maxSurge | default 1 }} +{{- $maxUnavailable := .Values.deployment.maxUnavailable | default 1 }} +{{- $startupCPUBoostEnabled := ((.Values.deployment).startupCPUBoost).enabled }} {{- if $enabled }} {{- /* YAML Spec */}} apiVersion: apps/v1 @@ -41,11 +39,8 @@ spec: minReadySeconds: {{ .Values.deployment.minReadySeconds }} {{- 
if $forceReplicas }} replicas: {{ $forceReplicas }} - {{- else if eq (include "hpa.enabled" (dict "env" $env "forceReplicas" $forceReplicas "maxReplicas" $maxReplicas "hpa" $hpa)) "true" }} - # intentionally skip replicas - {{- else }} - replicas: {{ $replicas }} {{- end }} + {{- /* When HPA is enabled (no forceReplicas), replicas is omitted so HPA controls the pod count */}} strategy: {{- if (eq (int $forceReplicas) 1) }} type: Recreate @@ -65,6 +60,11 @@ spec: prometheus.io/path: "{{ $prometheusPath }}" prometheus.io/port: "{{ $prometheus.port | default .Values.service.internalPort | required "Must set deployment.prometheus.port" }}" {{- end }} + {{- if $postgres.enabled }} + prometheus.io/scrape-sql-proxy: "true" + prometheus.io/sql-proxy-port: "9801" + prometheus.io/sql-proxy-path: "/metrics" + {{- end }} labels: {{- include "labels" . |trim| nindent 8 }} {{- if $labels }} @@ -75,7 +75,7 @@ spec: containers: {{- if $postgres.enabled }} - {{- include "gcloud_sql_proxy" (dict "postgres" $postgres "app" $app) | indent 8 }} + {{- include "gcloud_sql_proxy" (dict "postgres" $postgres "app" $app "releaseName" $releaseName) | indent 8 }} {{- end }} {{ range $containers }} {{- $image := .image | required ".Values.common.container.image is required." -}} @@ -97,14 +97,10 @@ spec: - containerPort: {{ $internalPort }} protocol: TCP {{- end }} - {{- include "resources" . | nindent 10 }} + {{- include "resources" (merge (dict "startupCPUBoostEnabled" $startupCPUBoostEnabled) .) | nindent 10 }} {{- include "securitycontext" . 
| nindent 10 }} {{- if or $grpc .grpc }} - {{- if .probes.liveness.grpc }} - {{- include "grpcprobes" (dict "internalPort" $internalPort "probes" .probes) | nindent 10 -}} - {{- else }} - {{- include "grpcexecprobes" (dict "internalPort" $internalPort) | nindent 10 -}} - {{- end }} + {{- include "grpcprobes" (dict "internalPort" $internalPort "probes" .probes) | nindent 10 -}} {{- else if (and (ne .probes.enabled false) .probes.spec) }} {{ toYaml .probes.spec | nindent 10 }} {{- else if ne .probes.enabled false }} diff --git a/charts/common/templates/hpa.yaml b/charts/common/templates/hpa.yaml index 02c27ae..49fadce 100644 --- a/charts/common/templates/hpa.yaml +++ b/charts/common/templates/hpa.yaml @@ -1,16 +1,16 @@ {{- /* Rules */}} {{- $env := .Values.env | required ".Values.common.env is required." -}} {{- $releaseName := include "name" . -}} -{{- $replicas := .Values.deployment.replicas | default .Values.container.replicas }} -{{- $forceReplicas := .Values.deployment.forceReplicas | default .Values.container.forceReplicas }} -{{- $maxReplicas := .Values.deployment.maxReplicas | default .Values.container.maxReplicas }} +{{- $minReplicas := .Values.deployment.minReplicas }} +{{- $forceReplicas := .Values.deployment.forceReplicas }} +{{- $maxReplicas := .Values.deployment.maxReplicas }} {{- $hpa := .Values.hpa | default dict }} -{{- $cpuUtilization := .Values.cpuUtilization | default 100 }} +{{- $cpuUtilization := .Values.deployment.cpuUtilization | default 70 }} -{{- if eq (include "hpa.enabled" (dict "env" $env "forceReplicas" $forceReplicas "maxReplicas" $maxReplicas "hpa" $hpa)) "true" }} +{{- if eq (include "hpa.enabled" (dict "forceReplicas" $forceReplicas)) "true" }} {{- /* Rules */}} {{- if (eq 1 (int $maxReplicas)) }} - {{- required ".Values.common.container.maxReplicas must be more than 1." .Values.error -}} + {{- required ".Values.common.deployment.maxReplicas must be more than 1." 
.Values.error -}} {{- else }} {{- /* YAML Spec */}} @@ -34,8 +34,16 @@ spec: target: type: Utilization averageUtilization: {{ $cpuUtilization }} + {{- range ((.Values.hpa).metrics) }} + - {{ toYaml . | nindent 4 }} + {{- end }} + {{- if not ((.Values.deployment).startupCPUBoost).enabled }} + behavior: + scaleUp: + stabilizationWindowSeconds: {{ ((.Values.hpa).stabilizationWindowSeconds) | default 120 }} + {{- end }} maxReplicas: {{ $maxReplicas | default 10 }} - minReplicas: {{ max 2 $replicas }} + minReplicas: {{ include "hpa.minReplicas" (dict "replicas" $minReplicas) }} {{- end }} scaleTargetRef: apiVersion: apps/v1 diff --git a/charts/common/templates/ingress.yaml b/charts/common/templates/ingress.yaml index 173600a..b9cc2fb 100644 --- a/charts/common/templates/ingress.yaml +++ b/charts/common/templates/ingress.yaml @@ -19,11 +19,13 @@ metadata: traffic-type: {{.trafficType }} annotations: {{- include "annotations" $chart |trim| nindent 4 }} - kubernetes.io/ingress.class: traefik - {{- if $.Values.ingress.annotations }} + {{- if .annotations }} + {{- toYaml .annotations | nindent 4 }} + {{- else if $.Values.ingress.annotations }} {{- toYaml $.Values.ingress.annotations | nindent 4 }} {{- end }} spec: + ingressClassName: {{ .ingressClassName | default $.Values.ingress.ingressClassName | default "traefik" }} rules: {{- if .rules -}} {{ toYaml .rules | nindent 4 }} diff --git a/charts/common/templates/pdb.yaml b/charts/common/templates/pdb.yaml index f5f74f7..ef01b1e 100644 --- a/charts/common/templates/pdb.yaml +++ b/charts/common/templates/pdb.yaml @@ -2,12 +2,26 @@ {{- $env := .Values.env | required ".Values.common.env is required." -}} {{- $releaseName := include "name" . 
-}} {{- $releaseNamespace := .Release.Namespace -}} -{{- $forceReplicas := .Values.deployment.forceReplicas | default .Values.container.forceReplicas -}} -{{- $minAvailable := .Values.deployment.minAvailable | default .Values.container.minAvailable -}} -{{- $minAvailablePDB := .Values.pdb.minAvailable -}} -{{- $replicas := .Values.deployment.replicas | default .Values.container.replicas -}} -{{- $hpa := .Values.hpa | default dict -}} -{{- $maxReplicas := .Values.deployment.maxReplicas | default .Values.container.maxReplicas -}} +{{- $forceReplicas := .Values.deployment.forceReplicas -}} +{{- $minAvailable := .Values.deployment.minAvailable -}} +{{- $minReplicas := .Values.deployment.minReplicas -}} +{{- /* + Determine if PDB should protect pods. + - forceReplicas: exact count, no HPA — protect if > 1 + - HPA: protect if minReplicas > 1 (default 2) + PDB is set to 0% only when effective replicas <= 1. +*/}} +{{- $protected := false -}} +{{- if $forceReplicas -}} + {{- if gt (int $forceReplicas) 1 -}} + {{- $protected = true -}} + {{- end -}} +{{- else -}} + {{- $effectiveMinReplicas := int (include "hpa.minReplicas" (dict "replicas" $minReplicas)) -}} + {{- if gt $effectiveMinReplicas 1 -}} + {{- $protected = true -}} + {{- end -}} +{{- end -}} {{- /* YAML Spec */}} apiVersion: policy/v1 kind: PodDisruptionBudget @@ -19,14 +33,13 @@ metadata: annotations: {{- include "annotations" . 
|trim| nindent 4 }} spec: - {{- if (or (eq (int $replicas) 1) (eq "false" (include "hpa.enabled" (dict "env" $env "forceReplicas" $forceReplicas "maxReplicas" $maxReplicas "hpa" $hpa)))) }} - {{- /* We set PDB even if forceReplicas or replicas = 1 or if hpa is disabled */}} + {{- /* Allow eviction of unhealthy pods regardless of budget to prevent blocking node drains */}} + unhealthyPodEvictionPolicy: AlwaysAllow + {{- if not $protected }} + {{- /* We set PDB even if only 1 effective replica or if hpa is disabled */}} {{- /* This is because helm is not able to delete unknown-previous config. */}} {{- /* In this case we set the minAvailable to 0% so it behaves the same way as a PDB does not exist. */}} minAvailable: 0% - {{- else if ($minAvailablePDB) }} - {{- /* PDB.minAvailable takes precedence over deployment/container.minAvailable */}} - minAvailable: {{ $minAvailablePDB }} {{- else }} minAvailable: {{ $minAvailable | default "50%" }} {{- end }} diff --git a/charts/common/templates/sql-credentials-secret.yaml b/charts/common/templates/sql-credentials-secret.yaml new file mode 100644 index 0000000..7d5e56f --- /dev/null +++ b/charts/common/templates/sql-credentials-secret.yaml @@ -0,0 +1,46 @@ +{{- /* Rules */}} +{{- $chart := . -}} +{{- $releaseNamespace := .Release.Namespace -}} +{{- $releaseName := include "name" . 
-}} +{{- $postgres := .Values.postgres -}} +{{- if and $postgres.enabled (not $postgres.credentialsSecret) }} +{{- $instances := $postgres.instances | default list -}} +{{- if eq (len $instances) 0 }} + {{- $instances = list (dict "secretKeyPrefix" "PG") }} +{{- end }} +{{- /* YAML Spec */}} +apiVersion: external-secrets.io/v1 +kind: ExternalSecret +metadata: + name: {{ $releaseName }}-sql-credentials + namespace: {{ $releaseNamespace }} + labels: + {{- include "labels" $chart |trim| nindent 4 }} + annotations: + timestamp: {{ now | date "2006-01-02T15:04:05" }} + {{- include "annotations" $chart |trim| nindent 4 }} +spec: + data: + {{- range $inst := $instances }} + - remoteRef: + conversionStrategy: Default + decodingStrategy: None + key: {{ $inst.secretKeyPrefix }}USER + version: latest + secretKey: {{ $inst.secretKeyPrefix }}USER + - remoteRef: + conversionStrategy: Default + decodingStrategy: None + key: {{ $inst.secretKeyPrefix }}PASSWORD + version: latest + secretKey: {{ $inst.secretKeyPrefix }}PASSWORD + {{- end }} + refreshInterval: 1h + secretStoreRef: + kind: SecretStore + name: {{ $releaseNamespace }} + target: + creationPolicy: Owner + deletionPolicy: Delete + name: {{ $releaseName }}-sql-credentials +{{- end }} diff --git a/charts/common/templates/sql-proxy-secret.yaml b/charts/common/templates/sql-proxy-secret.yaml new file mode 100644 index 0000000..8968023 --- /dev/null +++ b/charts/common/templates/sql-proxy-secret.yaml @@ -0,0 +1,40 @@ +{{- /* Rules */}} +{{- $chart := . -}} +{{- $releaseNamespace := .Release.Namespace -}} +{{- $releaseName := include "name" . 
-}} +{{- $postgres := .Values.postgres -}} +{{- if $postgres.enabled }} +{{- $instances := $postgres.instances | default list -}} +{{- if eq (len $instances) 0 }} + {{- $instances = list (dict "secretKeyPrefix" "PG") }} +{{- end }} +{{- /* YAML Spec */}} +apiVersion: external-secrets.io/v1 +kind: ExternalSecret +metadata: + name: {{ $releaseName }}-sql-proxy + namespace: {{ $releaseNamespace }} + labels: + {{- include "labels" $chart |trim| nindent 4 }} + annotations: + timestamp: {{ now | date "2006-01-02T15:04:05" }} + {{- include "annotations" $chart |trim| nindent 4 }} +spec: + data: + {{- range $i, $inst := $instances }} + - remoteRef: + conversionStrategy: Default + decodingStrategy: None + key: {{ $inst.secretKeyPrefix }}INSTANCES + version: latest + secretKey: CSQL_PROXY_INSTANCE_CONNECTION_NAME_{{ $i }} + {{- end }} + refreshInterval: 1h + secretStoreRef: + kind: SecretStore + name: {{ $releaseNamespace }} + target: + creationPolicy: Owner + deletionPolicy: Delete + name: {{ $releaseName }}-sql-proxy +{{- end }} diff --git a/charts/common/templates/startup-cpu-boost.yaml b/charts/common/templates/startup-cpu-boost.yaml new file mode 100644 index 0000000..b54c3fd --- /dev/null +++ b/charts/common/templates/startup-cpu-boost.yaml @@ -0,0 +1,28 @@ +{{- $releaseName := include "name" . -}} +{{- $startupCPUBoost := .Values.deployment.startupCPUBoost | default dict -}} +{{- if and .Values.deployment.enabled $startupCPUBoost.enabled }} +apiVersion: autoscaling.x-k8s.io/v1alpha1 +kind: StartupCPUBoost +metadata: + name: {{ $releaseName }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "labels" . |trim| nindent 4 }} + annotations: + {{- include "annotations" . 
|trim| nindent 4 }}
+spec:
+  selector:
+    matchExpressions:
+      - key: app
+        operator: In
+        values: ["{{ $releaseName }}"]
+  resourcePolicy:
+    containerPolicies:
+      - containerName: "*"
+        percentageIncrease:
+          value: {{ $startupCPUBoost.percentageIncrease | default 50 }}
+  durationPolicy:
+    podCondition:
+      type: Ready
+      status: "True"
+{{- end }}
diff --git a/charts/common/tests/cron_test.yaml b/charts/common/tests/cron_test.yaml
index b2ce8b3..92b6956 100644
--- a/charts/common/tests/cron_test.yaml
+++ b/charts/common/tests/cron_test.yaml
@@ -68,31 +68,28 @@ tests:
       - equal:
           path: spec.jobTemplate.spec.template.spec.containers[0].resources.requests.cpu
           value: "0.1"
-  # #TODO mem req/limit should be equal
-  - it: memory limit is 120 percent of request
-    release: {}
+  - it: CPU limit is auto-set to 1.3x request when startupCPUBoost is enabled
     set:
-      containers:
-        - image: img
-          memory: 256
+      deployment:
+        startupCPUBoost:
+          enabled: true
+      container:
+        cpu: 0.2
     asserts:
       - equal:
-          path: spec.jobTemplate.spec.template.spec.containers[0].resources.limits.memory
-          value: 307Mi
-      - equal:
-          path: spec.jobTemplate.spec.template.spec.containers[0].resources.requests.memory
-          value: 256Mi
-  - it: memory limit can be overridden
+          path: spec.jobTemplate.spec.template.spec.containers[0].resources.limits.cpu
+          value: "0.26"
+  # #TODO mem req/limit should be equal
+  - it: memory limit equals memory request
     release: {}
     set:
       containers:
         - image: img
           memory: 256
-          memoryLimit: 1024
     asserts:
       - equal:
           path: spec.jobTemplate.spec.template.spec.containers[0].resources.limits.memory
-          value: 1024Mi
+          value: 256Mi
       - equal:
           path: spec.jobTemplate.spec.template.spec.containers[0].resources.requests.memory
           value: 256Mi
@@ -239,4 +236,50 @@ tests:
     asserts:
       - equal:
           path: spec.jobTemplate.spec.activeDeadlineSeconds
-          value: 1200
\ No newline at end of file
+          value: 1200
+  - it: must enable sidecar if postgres enabled
+    set:
+      postgres:
+        enabled: true
+    asserts:
+      - equal:
+          path:
spec.jobTemplate.spec.template.spec.containers[0].name + value: rudder-test-sql-proxy + - equal: + path: spec.jobTemplate.spec.template.spec.containers[0].envFrom[0].secretRef.name + value: rudder-test-sql-proxy + - contains: + path: spec.jobTemplate.spec.template.spec.containers[0].command + content: "--port=5432" + - it: must mount sql-credentials envFrom if postgres enabled + set: + postgres: + enabled: true + asserts: + - equal: + path: spec.jobTemplate.spec.template.spec.containers[1].envFrom[0].secretRef.name + value: rudder-test-sql-credentials + - it: generates PGHOST and PGPORT env vars in cronjob + set: + postgres: + enabled: true + asserts: + - contains: + path: spec.jobTemplate.spec.template.spec.containers[1].env + content: + name: PGHOST + value: "localhost" + - contains: + path: spec.jobTemplate.spec.template.spec.containers[1].env + content: + name: PGPORT + value: "5432" + - it: credentialsSecret override works in cronjob + set: + postgres: + enabled: true + credentialsSecret: my-cron-creds + asserts: + - equal: + path: spec.jobTemplate.spec.template.spec.containers[1].envFrom[0].secretRef.name + value: my-cron-creds \ No newline at end of file diff --git a/charts/common/tests/deployment_test.yaml b/charts/common/tests/deployment_test.yaml index 47fa305..de06dca 100644 --- a/charts/common/tests/deployment_test.yaml +++ b/charts/common/tests/deployment_test.yaml @@ -48,35 +48,49 @@ tests: - equal: path: spec.template.spec.containers[0].resources.limits.cpu value: "1" - # #TODO mem req/limit should be equal - - it: memory limit is 120 percent of request + - it: CPU limit is auto-set to 1.3x request when startupCPUBoost is enabled set: - containers: - - name: test - image: img - memory: 256 - probes: - enabled: false + deployment: + startupCPUBoost: + enabled: true + container: + cpu: 0.5 asserts: - equal: - path: spec.template.spec.containers[0].resources.limits.memory - value: 307Mi + path: spec.template.spec.containers[0].resources.limits.cpu + 
value: "0.65" - equal: - path: spec.template.spec.containers[0].resources.requests.memory - value: 256Mi - - it: memory limit can be overridden + path: spec.template.spec.containers[0].resources.requests.cpu + value: "0.5" + - it: explicit cpuLimit takes precedence over startupCPUBoost auto-limit + set: + deployment: + startupCPUBoost: + enabled: true + container: + cpu: 0.5 + cpuLimit: 2 + asserts: + - equal: + path: spec.template.spec.containers[0].resources.limits.cpu + value: "2" + - it: no CPU limit when startupCPUBoost is disabled and no cpuLimit set + asserts: + - notExists: + path: spec.template.spec.containers[0].resources.limits.cpu + # #TODO mem req/limit should be equal + - it: memory limit equals memory request set: containers: - name: test image: img memory: 256 - memoryLimit: 1024 probes: enabled: false asserts: - equal: path: spec.template.spec.containers[0].resources.limits.memory - value: 1024Mi + value: 256Mi - equal: path: spec.template.spec.containers[0].resources.requests.memory value: 256Mi @@ -111,15 +125,14 @@ tests: - equal: path: spec.template.metadata.annotations["prometheus.io/port"] value: "8080" - - it: must use 3 replicas if replicas is 3 + - it: replicas field is skipped when HPA is enabled set: env: dev deployment: - replicas: 3 + minReplicas: 3 asserts: - - equal: + - notExists: path: spec.replicas - value: 3 - equal: path: spec.strategy.type value: RollingUpdate @@ -127,7 +140,7 @@ tests: set: env: prd deployment: - replicas: 3 + minReplicas: 3 maxReplicas: 8 asserts: - notExists: @@ -156,7 +169,7 @@ tests: - equal: path: spec.strategy.type value: RollingUpdate - - it: must adopt for gRPC + - it: must adopt for gRPC with native probes set: containers: - name: test @@ -172,7 +185,7 @@ tests: periodSeconds: 1 asserts: - isNotEmpty: - path: spec.template.spec.containers[0].livenessProbe.exec.command + path: spec.template.spec.containers[0].livenessProbe.grpc - it: must adopt for new gRPC probes 1.24 set: containers: @@ -245,20 
+258,13 @@ tests:
     set:
       postgres:
         enabled: true
+        instances:
+          - secretKeyPrefix: PG
         credentialsSecret: my-secret
     asserts:
       - equal:
           path: spec.template.spec.containers[1].envFrom[0].secretRef.name
           value: my-secret
-  - it: override connectionConfig
-    set:
-      postgres:
-        enabled: true
-        connectionConfig: my-config
-    asserts:
-      - equal:
-          path: spec.template.spec.containers[0].envFrom[0].configMapRef.name
-          value: my-config
   - it: command use correct value
     set:
       containers:
@@ -409,18 +415,17 @@ tests:
     asserts:
       - notExists:
           path: spec.template.spec.terminationGracePeriodSeconds
-  - it: must adopt for gRPC
+  - it: must adopt for gRPC with native probes using internalPort
     set:
       grpc: true
       container:
         image: img
     asserts:
       - isNotEmpty:
-          path: spec.template.spec.containers[0].livenessProbe.exec.command
-  - it: terminationGracePeriodSeconds use correct value
+          path: spec.template.spec.containers[0].livenessProbe.grpc
+  - it: terminationGracePeriodSeconds use correct value from deployment
     set:
-      container:
-        image: img
+      deployment:
         terminationGracePeriodSeconds: 60
     asserts:
       - equal:
@@ -431,29 +436,32 @@ tests:
     set:
       postgres:
         enabled: true
+        instances:
+          - secretKeyPrefix: PG
         cpu: 0.1
     asserts:
       - notExists:
           path: spec.template.spec.containers[1].resources.limits.cpu
-  - it: must use 3 replicas if replicas is 3
+  - it: replicas field is skipped when HPA is enabled (v1 compat)
     set:
       env: dev
+      deployment:
+        minReplicas: 3
       container:
         image: testimg
-        replicas: 3
     asserts:
-      - equal:
+      - notExists:
           path: spec.replicas
-          value: 3
       - equal:
           path: spec.strategy.type
           value: RollingUpdate
   - it: must use 1 replica if forceReplicas is 1 and recreate
     set:
       env: prd
+      deployment:
+        forceReplicas: 1
       container:
         image: some
-        forceReplicas: 1
     asserts:
       - equal:
           path: spec.replicas
@@ -464,9 +472,10 @@ tests:
   - it: must use 3 replica if forceReplicas is 3
     set:
       env: prd
+      deployment:
+        forceReplicas: 3
      container:
         image: some
-        forceReplicas: 3
     asserts:
       - equal:
           path: spec.replicas
@@ -505,7 +514,7 @@
tests: asserts: - equal: path: spec.template.spec.containers[1].envFrom[0].secretRef.name - value: rudder-test-psql-credentials + value: rudder-test-sql-credentials - it: must enable sidecar if postgres enabled set: postgres: @@ -514,8 +523,8 @@ tests: - isNotEmpty: path: spec.template.spec.containers[1] - equal: - path: spec.template.spec.containers[0].envFrom[0].configMapRef.name - value: rudder-test-psql-connection + path: spec.template.spec.containers[0].envFrom[0].secretRef.name + value: rudder-test-sql-proxy - it: must enable only one sidecar if postgres enabled for multiple containers set: container: {} @@ -534,8 +543,8 @@ tests: - isNotEmpty: path: spec.template.spec.containers[2] - equal: - path: spec.template.spec.containers[0].envFrom[0].configMapRef.name - value: rudder-test-psql-connection + path: spec.template.spec.containers[0].envFrom[0].secretRef.name + value: rudder-test-sql-proxy - notExists: path: spec.template.spec.containers[3] - it: postgres cpu limit can be overridden @@ -543,6 +552,8 @@ tests: set: postgres: enabled: true + instances: + - secretKeyPrefix: PG cpu: 0.1 cpuLimit: 1 asserts: @@ -552,6 +563,101 @@ tests: - equal: path: spec.template.spec.containers[0].resources.requests.cpu value: "0.1" + - it: generates PGHOST and PGPORT env vars for default instance + set: + postgres: + enabled: true + asserts: + - contains: + path: spec.template.spec.containers[1].env + content: + name: PGHOST + value: "localhost" + - contains: + path: spec.template.spec.containers[1].env + content: + name: PGPORT + value: "5432" + - it: generates sequential ports for multiple instances + set: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + - secretKeyPrefix: ANALYTICS_PG + asserts: + - contains: + path: spec.template.spec.containers[1].env + content: + name: PGPORT + value: "5432" + - contains: + path: spec.template.spec.containers[1].env + content: + name: ANALYTICS_PGPORT + value: "5433" + - contains: + path: 
spec.template.spec.containers[1].env + content: + name: ANALYTICS_PGHOST + value: "localhost" + - it: port override works for same-instance additional user + set: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + - secretKeyPrefix: READONLY_PG + port: 5432 + asserts: + - contains: + path: spec.template.spec.containers[1].env + content: + name: PGPORT + value: "5432" + - contains: + path: spec.template.spec.containers[1].env + content: + name: READONLY_PGPORT + value: "5432" + - it: proxy command includes --port=5432 + set: + postgres: + enabled: true + asserts: + - contains: + path: spec.template.spec.containers[0].command + content: "--port=5432" + - it: prometheus sql-proxy annotations when postgres enabled + set: + postgres: + enabled: true + asserts: + - equal: + path: spec.template.metadata.annotations["prometheus.io/scrape-sql-proxy"] + value: "true" + - equal: + path: spec.template.metadata.annotations["prometheus.io/sql-proxy-port"] + value: "9801" + - equal: + path: spec.template.metadata.annotations["prometheus.io/sql-proxy-path"] + value: "/metrics" + - it: no prometheus sql-proxy annotations when postgres disabled + asserts: + - notExists: + path: spec.template.metadata.annotations["prometheus.io/scrape-sql-proxy"] + - it: no PGHOST or PGPORT env vars when postgres disabled + asserts: + - notContains: + path: spec.template.spec.containers[0].env + content: + name: PGHOST + any: true + - notContains: + path: spec.template.spec.containers[0].env + content: + name: PGPORT + any: true - it: has common runtime env set from env label asserts: - equal: diff --git a/charts/common/tests/deployment_v1_compatibility_test.yaml b/charts/common/tests/deployment_v1_compatibility_test.yaml deleted file mode 100644 index 2af80ba..0000000 --- a/charts/common/tests/deployment_v1_compatibility_test.yaml +++ /dev/null @@ -1,219 +0,0 @@ -suite: test v1 compatibility -values: - - ./values/deployment-v1-values.yaml -templates: - - deployment.yaml -tests: - - 
it: must have labels - set: - env: dev - labels: - custom: label - container: - image: img - labels: - version: 1 - asserts: - - isNotEmpty: - path: metadata.labels - - isNotEmpty: - path: metadata.labels.environment - - isNotEmpty: - path: metadata.labels.custom - - isNotEmpty: - path: spec.template.metadata.labels.common - - it: has common runtime env set from env label - asserts: - - equal: - path: spec.template.spec.containers[0].env[0].value - value: dev - - it: must add to env if listed - set: - container: - image: img - env: - - name: FOO - value: bar - asserts: - - equal: - path: spec.template.spec.containers[0].env[1].value - value: bar - - it: must mount envFrom if configmap is enabled - release: - name: testsuite - set: - configmap: - enabled: true - data: - FOO: bar - asserts: - - equal: - path: spec.template.spec.containers[0].envFrom[0].configMapRef.name - value: testsuite - - it: cpu limit can be set - set: - container: - image: img - cpu: "0.1" - cpuLimit: 1 - asserts: - - equal: - path: spec.template.spec.containers[0].resources.limits.cpu - value: "1" - - equal: - path: spec.template.spec.containers[0].resources.requests.cpu - value: "0.1" - - it: cpu limit can be skipped - set: - container: - image: img - cpu: 0.1 - asserts: - - notExists: - path: spec.template.spec.containers[0].resources.limits.cpu - - equal: - path: spec.template.spec.containers[0].resources.requests.cpu - value: "0.1" - - it: memory limit is 120 percent of request - set: - container: - image: img - memory: 256 - asserts: - - equal: - path: spec.template.spec.containers[0].resources.limits.memory - value: 307Mi - - equal: - path: spec.template.spec.containers[0].resources.requests.memory - value: 256Mi - - it: memory limit can be overridden - set: - container: - image: img - memory: 256 - memoryLimit: 1024 - asserts: - - equal: - path: spec.template.spec.containers[0].resources.limits.memory - value: 1024Mi - - equal: - path: 
spec.template.spec.containers[0].resources.requests.memory - value: 256Mi - - it: must enable prometheus if enabled - set: - container: - image: img - prometheus: - enabled: true - asserts: - - equal: - path: spec.template.metadata.annotations["prometheus.io/scrape"] - value: "true" - - equal: - path: spec.template.metadata.annotations["prometheus.io/path"] - value: "/actuator/prometheus" - - equal: - path: spec.template.metadata.annotations["prometheus.io/port"] - value: "8080" - - it: must mount envFrom if postgres enabled - set: - postgres: - enabled: true - asserts: - - equal: - path: spec.template.spec.containers[1].envFrom[0].secretRef.name - value: testsuite-psql-credentials - - it: must enable sidecar if postgres enabled - set: - postgres: - enabled: true - asserts: - - isNotEmpty: - path: spec.template.spec.containers[1] - - equal: - path: spec.template.spec.containers[0].envFrom[0].configMapRef.name - value: testsuite-psql-connection - - it: postgres cpu limit can be overridden - set: - postgres: - enabled: true - cpu: "0.1" - cpuLimit: "1" - asserts: - - equal: - path: spec.template.spec.containers[0].resources.limits.cpu - value: "1" - - equal: - path: spec.template.spec.containers[0].resources.requests.cpu - value: "0.1" - - it: postgres cpu limit can be cleared - set: - postgres: - enabled: true - cpu: 0.1 - asserts: - - notExists: - path: spec.template.spec.containers[1].resources.limits.cpu - - it: must use 3 replicas if replicas is 3 - set: - env: dev - container: - image: testimg - replicas: 3 - asserts: - - equal: - path: spec.replicas - value: 3 - - equal: - path: spec.strategy.type - value: RollingUpdate - - it: must use 1 replica if forceReplicas is 1 and recreate - set: - env: prd - container: - image: some - forceReplicas: 1 - asserts: - - equal: - path: spec.replicas - value: 1 - - equal: - path: spec.strategy.type - value: Recreate - - it: must use 3 replica if forceReplicas is 3 - set: - env: prd - container: - image: some - 
forceReplicas: 3 - asserts: - - equal: - path: spec.replicas - value: 3 - - equal: - path: spec.strategy.type - value: RollingUpdate - - it: must adopt for gRPC - set: - grpc: true - container: - image: img - asserts: - - isNotEmpty: - path: spec.template.spec.containers[0].livenessProbe.exec.command - - it: terminationGracePeriodSeconds use correct value - set: - container: - image: img - terminationGracePeriodSeconds: 60 - asserts: - - equal: - path: spec.template.spec.terminationGracePeriodSeconds - value: 60 - - it: spec does not contain terminationGracePeriodSeconds if missing from values - set: - container: - image: img - asserts: - - notExists: - path: spec.template.spec.terminationGracePeriodSeconds diff --git a/charts/common/tests/hpa_test.yaml b/charts/common/tests/hpa_test.yaml index 50299f6..387c42d 100644 --- a/charts/common/tests/hpa_test.yaml +++ b/charts/common/tests/hpa_test.yaml @@ -34,31 +34,40 @@ tests: - equal: path: metadata.annotations["meta.helm.sh/release-namespace"] value: NAMESPACE - - it: Should not generate hpa by default + - it: HPA is always generated by default asserts: - hasDocuments: - count: 0 - - it: hpa must be generated if maxReplicas > 1 and must have labels + count: 1 + - it: HPA minReplicas defaults to 2 set: - container: + deployment: + minReplicas: null + asserts: + - equal: + path: spec.minReplicas + value: 2 + - it: hpa must have labels + set: + deployment: maxReplicas: 2 asserts: - isNotEmpty: path: metadata.labels - - it: must have minReplicas 2 if not set + - it: defaults to minReplicas 2 when not set set: env: prd + deployment: + maxReplicas: 5 container: image: img - maxReplicas: 5 asserts: - equal: path: spec.minReplicas value: 2 - - it: use minReplicas from deployment if not set on container + - it: deployment.minReplicas overrides default set: deployment: - replicas: 4 + minReplicas: 4 maxReplicas: 5 containers: - image: img @@ -67,19 +76,77 @@ tests: path: spec.minReplicas value: 4 # #TODO 100% cpu target is set 
as default? - - it: uses 100 % as target cpu + - it: uses 70 % as default target cpu set: - container: + deployment: + maxReplicas: 5 + asserts: + - equal: + path: spec.metrics[0].resource.target.averageUtilization + value: 70 + - it: deployment.cpuUtilization overrides default + set: + deployment: maxReplicas: 5 + cpuUtilization: 60 asserts: - equal: path: spec.metrics[0].resource.target.averageUtilization + value: 60 + - it: adds 120s stabilization window when startupCPUBoost is disabled + set: + env: prd + asserts: + - equal: + path: spec.behavior.scaleUp.stabilizationWindowSeconds + value: 120 + - it: uses custom stabilizationWindowSeconds when set + set: + env: prd + hpa: + stabilizationWindowSeconds: 300 + asserts: + - equal: + path: spec.behavior.scaleUp.stabilizationWindowSeconds + value: 300 + - it: no stabilization window when startupCPUBoost is enabled + set: + env: prd + deployment: + startupCPUBoost: + enabled: true + asserts: + - notExists: + path: spec.behavior + - it: custom metrics are appended alongside CPU metric + set: + env: prd + hpa: + metrics: + - type: Pods + pods: + metric: + name: prometheus.googleapis.com|http_requests|gauge + target: + type: AverageValue + averageValue: 100 + asserts: + - equal: + path: spec.metrics[0].type + value: Resource + - equal: + path: spec.metrics[1].type + value: Pods + - equal: + path: spec.metrics[1].pods.metric.name + value: "prometheus.googleapis.com|http_requests|gauge" + - equal: + path: spec.metrics[1].pods.target.averageValue value: 100 - - it: Must use minimum two replicas in prod, max 10 + - it: defaults to minReplicas 2 and maxReplicas 10 set: env: prd container: - replicas: 1 # even if 1, this must be 2 in prod image: img asserts: - equal: @@ -91,10 +158,11 @@ tests: - it: Must not use hpa if forceReplicas is set set: env: prd - container: - image: some + deployment: maxReplicas: 5 # should not matter forceReplicas: 1 + container: + image: some hpa: spec: maxReplicas: 199 # should not matter @@ 
-102,12 +170,10 @@ tests: asserts: - hasDocuments: count: 0 - - it: Uses maxReplicas from deployment before container settings + - it: Uses maxReplicas from deployment set: deployment: maxReplicas: 99 - containers: - - maxReplicas: 2 asserts: - equal: path: spec.maxReplicas diff --git a/charts/common/tests/ingress_test.yaml b/charts/common/tests/ingress_test.yaml index 44c8b30..9159719 100644 --- a/charts/common/tests/ingress_test.yaml +++ b/charts/common/tests/ingress_test.yaml @@ -45,7 +45,7 @@ tests: path: spec.rules[0].host value: test.dev.entur.io - equal: - path: metadata.annotations["kubernetes.io/ingress.class"] + path: spec.ingressClassName value: "traefik" - equal: path: metadata.labels.traffic-type @@ -98,7 +98,7 @@ tests: set: env: dev app: testsuite - shortname: tstsut + appId: tstsut team: common service: externalPort: 8080 diff --git a/charts/common/tests/pdb_test.yaml b/charts/common/tests/pdb_test.yaml index a66577d..3f22d19 100644 --- a/charts/common/tests/pdb_test.yaml +++ b/charts/common/tests/pdb_test.yaml @@ -34,38 +34,60 @@ tests: - equal: path: metadata.annotations["meta.helm.sh/release-namespace"] value: NAMESPACE - - it: must use default with 2 replicas or more + - it: must always have unhealthyPodEvictionPolicy AlwaysAllow set: env: prd - container: - replicas: 2 + asserts: + - equal: + path: spec.unhealthyPodEvictionPolicy + value: AlwaysAllow + - it: must have unhealthyPodEvictionPolicy in all environments + set: + env: dev + asserts: + - equal: + path: spec.unhealthyPodEvictionPolicy + value: AlwaysAllow + # default minReplicas=2 means PDB is always 50% by default + - it: default minAvailable is 50% with default minReplicas 2 + set: + env: prd + deployment: + minReplicas: 2 asserts: - equal: path: spec.minAvailable value: "50%" - - it: must use default with 2 maxreplicas or more in prod + - it: default minAvailable is 50% in dev (minReplicas defaults to 2) set: - env: prd - maxReplicas: 3 + env: dev asserts: - equal: path: 
spec.minAvailable value: "50%" - - it: use minAvailable from container if not set on pdb + - it: default minAvailable is 50% in tst (minReplicas defaults to 2) + set: + env: tst + asserts: + - equal: + path: spec.minAvailable + value: "50%" + # deployment.minAvailable overrides the default + - it: deployment.minAvailable overrides default 50% set: env: prd - container: - replicas: 2 + deployment: + minReplicas: 2 minAvailable: 27% asserts: - equal: path: spec.minAvailable value: "27%" - - it: use minAvailable from deployment if not set on pdb or container + - it: deployment.minAvailable works with containers list set: env: prd deployment: - replicas: 2 + minReplicas: 2 minAvailable: 26% containers: - image: app @@ -73,66 +95,75 @@ tests: - equal: path: spec.minAvailable value: "26%" - - it: check for minAvailable on deployment before container + - it: deployment.minAvailable 25% with containers list set: env: prd deployment: - replicas: 2 - minAvailable: 30% - container: - replicas: 2 - minAvailable: 50% + minReplicas: 2 + minAvailable: 25% containers: - image: app asserts: - equal: path: spec.minAvailable - value: "30%" - - it: use minAvailable from pdb if not set on pdb or container + value: "25%" + - it: deployment.minAvailable 30% with containers list set: env: prd - pdb: - minAvailable: 25% - container: - replicas: 2 + deployment: + minReplicas: 2 + minAvailable: 30% containers: - image: app asserts: - equal: path: spec.minAvailable - value: "25%" - - it: if container Replicas is set to 1, minAvailable must be 0% + value: "30%" + # minReplicas=1: single pod, PDB should be 0% + - it: minAvailable is 0% when minReplicas is 1 in prd set: env: prd + deployment: + minReplicas: 1 container: image: some - replicas: 1 asserts: - equal: path: spec.minAvailable value: "0%" - - it: if deployment Replicas is set to 1, minAvailable must be 0% + - it: minAvailable is 0% when minReplicas is 1 in dev set: - env: prd + env: dev deployment: - replicas: 1 + minReplicas: 1 
container: image: some asserts: - equal: path: spec.minAvailable value: "0%" - - it: if container forceReplicas is set to 1, minAvailable must be 0% + - it: minAvailable is 0% when minReplicas is 1 in tst + set: + env: tst + deployment: + minReplicas: 1 + asserts: + - equal: + path: spec.minAvailable + value: "0%" + # forceReplicas=1: single pod, PDB should be 0% + - it: forceReplicas 1 means minAvailable 0% set: env: prd + deployment: + forceReplicas: 1 container: image: some - forceReplicas: 1 asserts: - equal: path: spec.minAvailable value: "0%" - - it: if deployment forceReplicas is set to 1, minAvailable must be 0% + - it: forceReplicas 1 with containers list means minAvailable 0% set: env: prd deployment: @@ -143,37 +174,54 @@ tests: - equal: path: spec.minAvailable value: "0%" - - it: must use pdb if forceReplicas is set to more than 1 + # forceReplicas > 1: multiple pods, PDB should protect + - it: forceReplicas 2 means minAvailable 50% set: env: prd + deployment: + forceReplicas: 2 container: image: some + asserts: + - equal: + path: spec.minAvailable + value: "50%" + - it: forceReplicas 2 with containers list means minAvailable 50% + set: + env: prd + deployment: forceReplicas: 2 - replicas: 2 + containers: + - image: some asserts: - - hasDocuments: - count: 1 - - it: must use pdb if forceReplicas is set to more than 1 + - equal: + path: spec.minAvailable + value: "50%" + - it: forceReplicas 3 means minAvailable 50% set: env: prd + deployment: + forceReplicas: 3 container: image: some - forceReplicas: 2 - replicas: 2 asserts: - - hasDocuments: - count: 1 - - it: must use pdb if forceReplicas is set to more than 1 on deployment + - equal: + path: spec.minAvailable + value: "50%" + - it: forceReplicas 3 with custom deployment.minAvailable set: env: prd deployment: - forceReplicas: 2 + forceReplicas: 3 + minAvailable: 33% containers: - image: some asserts: - - hasDocuments: - count: 1 - - it: must use pdb if forceReplicas is set to 1 on deployment + - equal: 
+          path: spec.minAvailable
+          value: "33%"
+  # PDB document always exists (helm can't delete previous config)
+  - it: PDB document exists even when forceReplicas is 1
     set:
       env: prd
       deployment:
@@ -183,13 +231,11 @@ tests:
     asserts:
       - hasDocuments:
           count: 1
-  - it: must use pdb when replicas not set
+  - it: PDB document exists with custom deployment.minAvailable
     set:
       env: prd
       deployment:
         minAvailable: 4
-      container:
-        image: some
     asserts:
       - hasDocuments:
           count: 1
@@ -204,69 +250,63 @@ tests:
       - equal:
           path: metadata.name
           value: override
-  # test env
-  - it: must use default with 2 maxReplicas or more
+  # deployment.minAvailable works across environments
+  - it: default minAvailable is 50% in tst when only maxReplicas is set
     set:
       env: tst
-      container:
+      deployment:
         maxReplicas: 5
     asserts:
       - equal:
           path: spec.minAvailable
           value: "50%"
-  - it: must use default with 2 maxreplicas or more in tst
+  - it: deployment.minAvailable custom value in tst
     set:
       env: tst
-      container:
-        maxReplicas: 3
-    asserts:
-      - equal:
-          path: spec.minAvailable
-          value: "50%"
-  - it: use minAvailable from container if not set on pdb in tst
-    set:
-      env: tst
-      container:
+      deployment:
         maxReplicas: 2
         minAvailable: 27%
     asserts:
       - equal:
           path: spec.minAvailable
           value: "27%"
-  - it: minAvailable equals zero even if minAvailable is set on container if only one pod in tst
+  - it: default minReplicas 2 means PDB is 50% without explicit config in tst
     set:
       env: tst
-      container:
-        minAvailable: 24%
+      deployment:
+        minReplicas: null
     asserts:
       - equal:
           path: spec.minAvailable
-          value: "0%"
-  - it: minAvailable equals zero even if minAvailable is set on container if only one pod in tst
+          value: "50%"
+  - it: deployment.minAvailable custom value in dev
     set:
-      env: tst
+      env: dev
       deployment:
+        minReplicas: 2
         minAvailable: 24%
     asserts:
       - equal:
           path: spec.minAvailable
-          value: "0%"
-  - it: minAvailable equals zero even if minAvailable is set on container if only one pod in dev
+          value: "24%"
+  - it: deployment.minAvailable custom
value in dev with maxReplicas set: env: dev - container: - minAvailable: 24% + deployment: + maxReplicas: 2 + minAvailable: 29% asserts: - equal: path: spec.minAvailable - value: "0%" - - it: use minAvailable from container if not set on pdb in dev + value: "29%" + - it: minReplicas 2 in dev means minAvailable 50% set: env: dev + deployment: + minReplicas: 2 container: - maxReplicas: 2 - minAvailable: 29% + image: some asserts: - equal: path: spec.minAvailable - value: "29%" + value: "50%" diff --git a/charts/common/tests/sql_credentials_test.yaml b/charts/common/tests/sql_credentials_test.yaml new file mode 100644 index 0000000..97ca554 --- /dev/null +++ b/charts/common/tests/sql_credentials_test.yaml @@ -0,0 +1,113 @@ +suite: test sql-credentials ExternalSecret +values: + - ./values/common-test-values.yaml +templates: + - sql-credentials-secret.yaml +tests: + - it: not created when postgres disabled + asserts: + - hasDocuments: + count: 0 + - it: created with default PG prefix when enabled with no instances + set: + postgres: + enabled: true + asserts: + - hasDocuments: + count: 1 + - equal: + path: spec.data[0].remoteRef.key + value: PGUSER + - equal: + path: spec.data[0].secretKey + value: PGUSER + - equal: + path: spec.data[1].remoteRef.key + value: PGPASSWORD + - equal: + path: spec.data[1].secretKey + value: PGPASSWORD + - it: created with explicit instance prefix + set: + postgres: + enabled: true + instances: + - secretKeyPrefix: MYAPP_PG + asserts: + - equal: + path: spec.data[0].remoteRef.key + value: MYAPP_PGUSER + - equal: + path: spec.data[0].secretKey + value: MYAPP_PGUSER + - equal: + path: spec.data[1].remoteRef.key + value: MYAPP_PGPASSWORD + - equal: + path: spec.data[1].secretKey + value: MYAPP_PGPASSWORD + - it: multiple instances produce entries for each prefix + set: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + - secretKeyPrefix: ANALYTICS_PG + asserts: + - equal: + path: spec.data[0].remoteRef.key + value: PGUSER + - 
equal: + path: spec.data[1].remoteRef.key + value: PGPASSWORD + - equal: + path: spec.data[2].remoteRef.key + value: ANALYTICS_PGUSER + - equal: + path: spec.data[3].remoteRef.key + value: ANALYTICS_PGPASSWORD + - it: not created when credentialsSecret is set + set: + postgres: + enabled: true + credentialsSecret: my-custom-creds + asserts: + - hasDocuments: + count: 0 + - it: has correct metadata + set: + postgres: + enabled: true + asserts: + - equal: + path: metadata.name + value: rudder-test-sql-credentials + - equal: + path: spec.target.name + value: rudder-test-sql-credentials + - equal: + path: spec.secretStoreRef.kind + value: SecretStore + - equal: + path: spec.refreshInterval + value: 1h + - equal: + path: spec.target.creationPolicy + value: Owner + - equal: + path: spec.target.deletionPolicy + value: Delete + - it: uses decodingStrategy None and default conversion strategy + set: + postgres: + enabled: true + asserts: + - equal: + path: spec.data[0].remoteRef.decodingStrategy + value: None + - equal: + path: spec.data[0].remoteRef.conversionStrategy + value: Default + - equal: + path: spec.data[0].remoteRef.version + value: latest diff --git a/charts/common/tests/sql_proxy_test.yaml b/charts/common/tests/sql_proxy_test.yaml new file mode 100644 index 0000000..69f5cc7 --- /dev/null +++ b/charts/common/tests/sql_proxy_test.yaml @@ -0,0 +1,104 @@ +suite: test sql-proxy ExternalSecret +values: + - ./values/common-test-values.yaml +templates: + - sql-proxy-secret.yaml +tests: + - it: not created when postgres disabled + asserts: + - hasDocuments: + count: 0 + - it: created with default PG prefix when enabled with no instances + set: + postgres: + enabled: true + asserts: + - hasDocuments: + count: 1 + - equal: + path: spec.data[0].remoteRef.key + value: PGINSTANCES + - equal: + path: spec.data[0].secretKey + value: CSQL_PROXY_INSTANCE_CONNECTION_NAME_0 + - it: created with explicit instance prefix + set: + postgres: + enabled: true + instances: + - 
secretKeyPrefix: MYAPP_PG + asserts: + - equal: + path: spec.data[0].remoteRef.key + value: MYAPP_PGINSTANCES + - equal: + path: spec.data[0].secretKey + value: CSQL_PROXY_INSTANCE_CONNECTION_NAME_0 + - it: multiple instances get sequential indices + set: + postgres: + enabled: true + instances: + - secretKeyPrefix: PG + - secretKeyPrefix: ANALYTICS_PG + asserts: + - equal: + path: spec.data[0].remoteRef.key + value: PGINSTANCES + - equal: + path: spec.data[0].secretKey + value: CSQL_PROXY_INSTANCE_CONNECTION_NAME_0 + - equal: + path: spec.data[1].remoteRef.key + value: ANALYTICS_PGINSTANCES + - equal: + path: spec.data[1].secretKey + value: CSQL_PROXY_INSTANCE_CONNECTION_NAME_1 + - it: has correct metadata + set: + postgres: + enabled: true + asserts: + - equal: + path: metadata.name + value: rudder-test-sql-proxy + - equal: + path: spec.target.name + value: rudder-test-sql-proxy + - equal: + path: spec.secretStoreRef.kind + value: SecretStore + - equal: + path: spec.refreshInterval + value: 1h + - equal: + path: spec.target.creationPolicy + value: Owner + - equal: + path: spec.target.deletionPolicy + value: Delete + - it: uses decodingStrategy None and default conversion strategy + set: + postgres: + enabled: true + asserts: + - equal: + path: spec.data[0].remoteRef.decodingStrategy + value: None + - equal: + path: spec.data[0].remoteRef.conversionStrategy + value: Default + - equal: + path: spec.data[0].remoteRef.version + value: latest + - it: still created when credentialsSecret is set + set: + postgres: + enabled: true + credentialsSecret: my-custom-creds + asserts: + - hasDocuments: + count: 1 + - equal: + path: spec.data[0].remoteRef.key + value: PGINSTANCES diff --git a/charts/common/tests/values/common-test-values.yaml b/charts/common/tests/values/common-test-values.yaml index 0f4ed10..84fbecf 100644 --- a/charts/common/tests/values/common-test-values.yaml +++ b/charts/common/tests/values/common-test-values.yaml @@ -1,7 +1,6 @@ app: rudder-test 
releaseName: rudder-test env: dev -appname: rudder-test team: platform ingress: host: test.dev.entur.io @@ -9,11 +8,12 @@ ingress: service: externalPort: 8080 internalPort: 8080 +deployment: + minReplicas: 2 container: image: img memory: 768 cpu: 0.1 - replicas: 2 probes: liveness: path: /actuator/health @@ -24,4 +24,4 @@ container: periodSeconds: 1 prometheus: enabled: true -shortname: rudder +appId: rudder diff --git a/charts/common/tests/values/deployment-v1-values.yaml b/charts/common/tests/values/deployment-v1-values.yaml deleted file mode 100644 index 9548814..0000000 --- a/charts/common/tests/values/deployment-v1-values.yaml +++ /dev/null @@ -1,9 +0,0 @@ -env: dev -app: testsuite -shortname: tstsut -team: common -ingress: - host: test.dev.entur.io - trafficType: public -container: - image: img diff --git a/charts/common/values.schema.json b/charts/common/values.schema.json new file mode 100644 index 0000000..0d9fe8d --- /dev/null +++ b/charts/common/values.schema.json @@ -0,0 +1,516 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema#", + "additionalProperties": false, + "properties": { + "global": { + "description": "Global values accessible from any chart or subchart.", + "type": "object" + }, + "app": { + "description": "Application name, typically on the form `the-application`", + "type": ["string", "null"] + }, + "appId": { + "description": "App ID from GoogleCloudApplication metadata.id. 
Max 10 alphanumeric characters.", + "type": ["string", "null"], + "maxLength": 10 + }, + "team": { + "description": "Your team name, without a `team-` prefix", + "type": ["string", "null"] + }, + "env": { + "description": "The current env: `sbx`, `dev`, `tst` or `prd`", + "type": "string", + "enum": ["sbx", "dev", "tst", "prd"] + }, + "releaseName": { + "description": "Override release name, useful for multiple deployments", + "type": ["string", "null"] + }, + "labels": { + "description": "Specify additional labels for every resource", + "type": "object" + }, + "ingress": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable or disable the ingress", + "type": "boolean" + }, + "host": { + "description": "Set the host name", + "type": ["string", "null"] + }, + "trafficType": { + "description": "Set the traffic type (`api`, `public` or `http2` for gRPC)", + "type": ["string", "null"], + "enum": ["api", "public", "http2", null] + }, + "ingressClassName": { + "description": "Set the IngressClass name (default: traefik)", + "type": ["string", "null"] + }, + "annotations": { + "description": "Optionally set annotations for the ingress", + "type": "object" + }, + "rules": { + "description": "K8s spec for ingress rules", + "items": { "type": "object" }, + "type": "array" + } + }, + "type": "object" + }, + "ingresses": { + "description": "Specify a list of ingress specs", + "items": { "$ref": "#/properties/ingress" }, + "type": "array" + }, + "grpc": { + "description": "Enable gRPC, which will use native K8s gRPC probes with service.internalPort", + "type": "boolean" + }, + "deployment": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable or disable the deployment", + "type": "boolean" + }, + "labels": { + "description": "Add labels to your pods", + "type": "object" + }, + "volumes": { + "description": "Configure volume, accepts kubernetes syntax", + "items": { "type": "object" }, + "type": "array" + },
+ "prometheus": { + "$ref": "#/properties/container/properties/prometheus" + }, + "minReplicas": { + "description": "Set the minimum replica count for HPA. Default: 2.", + "type": ["integer", "null"] + }, + "maxReplicas": { + "description": "Set the max replica count for HPA", + "type": ["integer", "null"] + }, + "forceReplicas": { + "description": "Force a fixed replica count, disables HPA and PDB. If set to 1 it will use Recreate strategy", + "type": ["integer", "null"] + }, + "terminationGracePeriodSeconds": { + "description": "Override pod terminationGracePeriodSeconds (default 30s)", + "type": ["integer", "null"] + }, + "minAvailable": { + "description": "Set minimum available % for PDB", + "type": ["string", "integer", "null"] + }, + "maxSurge": { + "description": "Limit max surge for rolling updates, as a pod count or string percentage (default 1)", + "type": ["integer", "string", "null"] + }, + "maxUnavailable": { + "description": "Limit max unavailable for rolling updates, as a pod count or string percentage (default 1)", + "type": ["integer", "string", "null"] + }, + "serviceAccountName": { + "description": "Override pod serviceAccountName (default application)", + "type": ["string", "null"] + }, + "cpuUtilization": { + "description": "Set the target CPU average utilization (%) for HPA scaling (default 70)", + "type": ["integer", "null"] + }, + "startupCPUBoost": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable GKE Startup CPU Boost. 
Requires the kube-startup-cpu-boost operator.", + "type": "boolean" + }, + "percentageIncrease": { + "description": "Percentage to increase CPU requests during startup", + "type": "integer" + } + }, + "type": "object" + }, + "minReadySeconds": { + "description": "Minimum number of seconds a pod should be ready before considered available", + "type": "integer" + } + }, + "type": "object" + }, + "cron": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable or disable the cron job", + "type": "boolean" + }, + "concurrencyPolicy": { + "description": "Concurrency policy", + "type": ["string", "null"] + }, + "failedJobsHistoryLimit": { + "description": "Failed jobs history limit", + "type": ["integer", "null"] + }, + "schedule": { + "description": "Required crontab schedule", + "type": ["string", "null"] + }, + "successfulJobsHistoryLimit": { + "description": "Successful jobs history limit", + "type": ["integer", "null"] + }, + "suspend": { + "description": "Suspend flag", + "type": ["boolean", "null"] + }, + "labels": { + "description": "Add labels to your pods", + "type": "object" + }, + "volumes": { + "description": "Configure volume, accepts kubernetes syntax", + "items": { "type": "object" }, + "type": "array" + }, + "terminationGracePeriodSeconds": { + "description": "Override pod terminationGracePeriodSeconds (default 30s)", + "type": ["integer", "null"] + }, + "restartPolicy": { + "description": "Override pod restartPolicy (default OnFailure)", + "type": ["string", "null"] + }, + "serviceAccountName": { + "description": "Override pod serviceAccountName (default application)", + "type": ["string", "null"] + }, + "activeDeadlineSeconds": { + "description": "Active deadline seconds for the job, default 86300s (just under 24 hours)", + "type": ["integer", "null"] + } + }, + "type": "object" + }, + "hpa": { + "additionalProperties": false, + "properties": { + "metrics": { + "description": "Additional HPA metrics appended alongside the
default CPU metric (Pods, Object, External)", + "items": { "type": "object" }, + "type": "array" + }, + "stabilizationWindowSeconds": { + "description": "Seconds to wait before scaling up after a metric spike. Only applied when startupCPUBoost is disabled. Tune this to match your application's typical startup time (e.g. 60s for a fast app, 300s for a heavy Spring Boot app with cache warming).", + "type": ["integer", "null"] + }, + "spec": { + "description": "Full custom spec for HPA, replaces default metrics and min/max replicas. Inherits scaleTargetRef.", + "type": "object" + } + }, + "type": "object" + }, + "pdb": { + "description": "PDB is automatically configured based on minReplicas. Use deployment.minAvailable to override the default 50%.", + "type": "object" + }, + "service": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable or disable the service", + "type": "boolean" + }, + "externalPort": { + "description": "Set the external port for your service", + "type": "integer" + }, + "internalPort": { + "description": "Set the internal port for your service", + "type": "integer" + }, + "annotations": { + "description": "Optionally set annotations for the service", + "type": "object" + }, + "ports": { + "description": "Custom ports spec, overrides default port config", + "items": { "type": "object" }, + "type": "array" + } + }, + "type": "object" + }, + "container": { + "additionalProperties": false, + "properties": { + "name": { + "description": "Name of container", + "type": ["string", "null"] + }, + "labels": { + "description": "Add labels to your pods", + "type": "object" + }, + "command": { + "description": "Optionally set the command that will run in the pod", + "type": ["array", "null"], + "items": { "type": "string" } + }, + "args": { + "description": "Optionally set the arguments that will be passed to the command", + "type": ["array", "null"], + "items": { "type": "string" } + }, + "cpu": { + "description": "Set 
CPU without any unit. 100m is 0.1", + "type": ["string", "number"] + }, + "cpuLimit": { + "description": "Set CPU limit without any unit. 100m is 0.1", + "type": ["string", "number", "null"] + }, + "memory": { + "description": "Set memory without any unit, Mi is inferred", + "type": "integer" + }, + "memoryLimit": { + "description": "Deprecated. Memory limit is now always equal to memory request. Use container.memory instead.", + "type": ["integer", "null"] + }, + "uid": { + "description": "Set the uid that your user runs with", + "type": "integer" + }, + "image": { + "description": "Set the image for your container", + "type": ["string", "null"] + }, + "envFrom": { + "description": "Attach secrets and configmaps to your env", + "items": { "type": "object" }, + "type": "array" + }, + "env": { + "description": "Specify env entries for your container", + "items": { "type": "object" }, + "type": "array" + }, + "ephemeralStorage": { + "description": "Set ephemeral storage request", + "type": ["string", "null"] + }, + "ephemeralStorageLimit": { + "description": "Set ephemeral storage limit", + "type": ["string", "null"] + }, + "grpc": { + "description": "Enable gRPC for this specific container", + "type": "boolean" + }, + "ports": { + "description": "Custom container ports, overrides default port config", + "items": { "type": "object" }, + "type": "array" + }, + "prometheus": { + "additionalProperties": false, + "description": "Prometheus scraping configuration", + "properties": { + "enabled": { + "description": "Enable or disable Prometheus", + "type": "boolean" + }, + "path": { + "description": "Set the path for scraping metrics", + "type": "string" + }, + "port": { + "description": "Set the port for prometheus scraping", + "type": ["integer", "null"] + } + }, + "type": "object" + }, + "probes": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable or disable probes", + "type": "boolean" + }, + "spec": { + "description": 
"Override with k8s spec for custom probes", + "type": ["object", "null"] + }, + "liveness": { + "additionalProperties": false, + "properties": { + "path": { "description": "Set the path for liveness probe", "type": "string" }, + "initialDelaySeconds": { "type": "integer" }, + "successThreshold": { "type": "integer" }, + "failureThreshold": { "type": "integer" }, + "periodSeconds": { "type": "integer" }, + "port": { "type": ["integer", "null"] }, + "grpc": { "type": ["object", "null"] } + }, + "type": "object" + }, + "readiness": { + "additionalProperties": false, + "properties": { + "path": { "description": "Set the path for readiness probe", "type": "string" }, + "initialDelaySeconds": { "type": "integer" }, + "successThreshold": { "type": "integer" }, + "failureThreshold": { "type": "integer" }, + "periodSeconds": { "type": "integer" }, + "port": { "type": ["integer", "null"] }, + "grpc": { "type": ["object", "null"] } + }, + "type": "object" + }, + "startup": { + "additionalProperties": false, + "properties": { + "path": { "description": "Set the path for startup probe. 
If set, uses httpGet instead of tcpSocket.", "type": ["string", "null"] }, + "failureThreshold": { "type": "integer" }, + "periodSeconds": { "type": "integer" }, + "port": { "type": ["integer", "null"] }, + "grpc": { "type": ["object", "null"] } + }, + "type": "object" + } + }, + "type": "object" + }, + "volumeMounts": { + "description": "Configure volume mounts, accepts kubernetes syntax", + "items": { "type": "object" }, + "type": "array" + }, + "volumes": { + "description": "Configure volume, accepts kubernetes syntax", + "items": { "type": "object" }, + "type": "array" + }, + "lifecycle": { + "description": "Set pod lifecycle handlers", + "type": "object" + } + }, + "type": "object" + }, + "containers": { + "description": "Takes a list of container entries, you must add a `name` field for each entry", + "items": { "$ref": "#/properties/container" }, + "type": "array" + }, + "initContainers": { + "description": "Takes a list of initContainers", + "items": { "type": "object" }, + "type": "array" + }, + "postgres": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable or disable the Cloud SQL proxy v2 sidecar", + "type": "boolean" + }, + "cpu": { + "description": "Configure cpu request for proxy", + "type": ["string", "number"] + }, + "cpuLimit": { + "description": "Configure optional cpu limit for proxy", + "type": ["string", "number", "null"] + }, + "memory": { + "description": "Configure memory request for proxy without units, Mi inferred", + "type": "integer" + }, + "instances": { + "description": "List of database connections keyed by Terraform secret_key_prefix", + "items": { + "type": "object", + "additionalProperties": false, + "required": ["secretKeyPrefix"], + "properties": { + "secretKeyPrefix": { + "description": "Terraform secret_key_prefix. 
Derives keys: {prefix}INSTANCES, {prefix}USER, {prefix}PASSWORD", + "type": "string" + }, + "port": { + "description": "Override auto-assigned proxy listen port (default: 5432 + index)", + "type": "integer" + } + } + }, + "type": "array" + }, + "credentialsSecret": { + "description": "Override K8s secret for credentials. Bypasses ExternalSecret for credentials.", + "type": ["string", "null"] + }, + "maxSigtermDelay": { + "description": "Override the max-sigterm-delay for the Cloud SQL Proxy (e.g. 30s, 5m).", + "type": ["string", "null"] + } + }, + "type": "object" + }, + "configmap": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable or disable the configmap", + "type": "boolean" + }, + "data": { + "description": "Set data for configmap", + "type": "object" + } + }, + "type": "object" + }, + "secrets": { + "description": "Add externalSecret to sync secrets from secret manager", + "type": "object" + }, + "serviceAccount": { + "additionalProperties": false, + "properties": { + "create": { + "description": "Create a service account for this application", + "type": "boolean" + } + }, + "type": "object" + }, + "vpa": { + "additionalProperties": false, + "properties": { + "enabled": { + "description": "Enable Vertical Pod Autoscaler", + "type": "boolean" + } + }, + "type": "object" + } + }, + "required": ["app", "appId", "env", "team"], + "type": "object" +} diff --git a/charts/common/values.yaml b/charts/common/values.yaml index a6f412e..40e4b35 100644 --- a/charts/common/values.yaml +++ b/charts/common/values.yaml @@ -1,7 +1,7 @@ # -- Application name, typically on the form `the-application` app: -# -- `id` for GCP 2.0, typically on the form `theapp`. Max 10 characters -shortname: +# -- App ID from GoogleCloudApplication `metadata.id`. Max 10 alphanumeric characters. 
See https://github.com/entur/tf-gcp-apps/blob/main/docs/manifests/GoogleCloudApplication.md +appId: # -- Your team name, without a `team-` prefix team: # -- The current env, override in your `values-kub-ent-$env.yaml` files to `dev`, `tst` or `prd` @@ -18,8 +18,11 @@ ingress: enabled: true # -- Set the host name, do this in your `values-kub-ent-$env.yaml` files host: - # -- Set the traffic type (`api`,`public` or `http2` for gRPC) + # -- Set the traffic type (`api`,`public` or `http2` for gRPC). Note: changing this value will cause a couple of minutes of downtime while the ingress controller reconciles. trafficType: + # -- Set the IngressClass name. Uses `spec.ingressClassName` (replaces the deprecated `kubernetes.io/ingress.class` annotation). + # @default -- traefik + ingressClassName: # -- Optionally set annotations for the ingress annotations: {} # rules: # k8s spec for ingress rules @@ -39,28 +42,37 @@ deployment: volumes: [] # -- Prometheus #prometheus: same as container.prometheus stanza - # -- Set the target replica count - # @default -- container.replicas - replicas: - # -- Set the max replica count + # -- (int) Set the minimum replica count for HPA. + # @default -- 2 + minReplicas: + # -- (int) Set the max replica count for HPA # @default -- 10 maxReplicas: - # -- (int) Force replicas disables autoscaling and PDB, if set to 1 it will use Recreate strategy + # -- (int) Force a fixed replica count, disables HPA and PDB. If set to 1 it will use Recreate strategy. forceReplicas: # -- (int) Override pod terminationGracePeriodSeconds (default 30s). terminationGracePeriodSeconds: - # -- (string) Set minimum available % + # -- (string) Set minimum available % for PDB # @default -- 50% minAvailable: - # -- Limit max surge for rolling updates (default 25%). Not in use when using forceReplicas. - # @default -- 25% + # -- Limit max surge for rolling updates. Accepts an integer (pod count) or a string percentage (e.g. "25%"). Not in use when using forceReplicas. 
+ # @default -- 1 maxSurge: - # -- Limit max unavailable for rolling updates (default 25%). Not in use when using forceReplicas. - # @default -- 25% + # -- Limit max unavailable for rolling updates. Accepts an integer (pod count) or a string percentage (e.g. "25%"). Not in use when using forceReplicas. + # @default -- 1 maxUnavailable: # -- Override pod serviceAccountName (default application). # @default -- application serviceAccountName: + # -- Set the target CPU average utilization (%) for HPA scaling. With startupCPUBoost enabled, 70% is a good default. Without it, 100% may be needed for Java apps with heavy startup CPU usage. + # @default -- 70 + cpuUtilization: + startupCPUBoost: + # -- Enable GKE Startup CPU Boost to temporarily increase CPU during pod startup. Requires the kube-startup-cpu-boost operator installed in the cluster. Boost is reverted when the pod becomes Ready. When enabled, a CPU limit of 1.3x the CPU request is automatically set (unless `container.cpuLimit` is explicitly configured). + enabled: false + # -- (int) Percentage to increase CPU requests during startup + # @default -- 50 + percentageIncrease: 50 # -- See https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds # @default -- 0 minReadySeconds: 0 @@ -98,10 +110,47 @@ cron: # -- (int) Active deadline seconds for the job, default 24 hours (86300s) activeDeadlineSeconds: hpa: - # -- Custom spec for HPA, inherits `scaleTargetRef` and min/max replicas. - # ps: Reason why we have set 100% cpu as default is because the java applications are resource hogs during startup. - # If you have good startupProbe/readinessProbes in place you can lower the cpu average utilization to ie 50/60%. - # - Or scale on other (custom) metrics. + # -- Additional HPA metrics appended alongside the default CPU metric. Accepts standard `autoscaling/v2` metric entries (Pods, Object, External). 
+ # Use for scaling on custom metrics from Cloud Monitoring, Prometheus (GMP), or Pub/Sub. When multiple metrics are specified, HPA picks the one demanding the most replicas. + # @default -- [] + metrics: [] + # Pods type (per-pod custom metric, averaged across pods — supports AverageValue only): + # - type: Pods + # pods: + # metric: + # name: prometheus.googleapis.com|http_requests_total|gauge + # target: + # type: AverageValue + # averageValue: 100 + # + # External type (e.g. Pub/Sub queue depth — supports Value and AverageValue): + # - type: External + # external: + # metric: + # name: pubsub.googleapis.com|subscription|num_undelivered_messages + # selector: + # matchLabels: + # resource.labels.subscription_id: my-subscription + # target: + # type: AverageValue + # averageValue: 5 + # + # Object type (metric from another K8s object — supports Value and AverageValue): + # - type: Object + # object: + # metric: + # name: requests-per-second + # describedObject: + # apiVersion: networking.k8s.io/v1 + # kind: Ingress + # name: main-route + # target: + # type: Value + # value: 10k + # -- (int) Seconds to wait before scaling up after a metric spike. Only applied when startupCPUBoost is disabled, to avoid scaling on startup CPU spikes. Tune this to match your application's typical startup time (e.g. 60s for a fast app, 300s for a heavy Spring Boot app with cache warming). + # @default -- 120 + stabilizationWindowSeconds: + # -- Full custom spec for HPA, replaces default metrics and min/max replicas. Inherits `scaleTargetRef`. spec: {} # Example for custom spec where cpu scaling is set to 60%: @@ -115,10 +164,7 @@ hpa: # maxReplicas: 10 # minReplicas: 2 -pdb: - # -- (string) Set minimum available %, this overrides pdb setting minAvailable in deployment/container - # @default -- 50% - minAvailable: +pdb: {} service: # -- Enable or disable the service @@ -145,30 +191,20 @@ container: command: # -- Optionally set the arguments that will be passed to the command, e.g. 
["arg1","arg2"]. args: - # -- Set CPU without any unit. 100m is 0.1 - # @default -- 0.1 - cpu: 0.1 + # -- Set CPU request without any unit. 100m is 0.1. Default is sized for JVM/Spring Boot apps; lighter workloads (sidecars, small Go services, static frontends) should override down. + # @default -- 0.3 + cpu: 0.3 # -- (float) Set CPU limit without any unit. 100m is 0.1 # @default -- `5 x cpu` cpuLimit: - # -- Set memory without any unit, `Mi` is inferred - # @default -- 16 - memory: 16 - # -- Set memory limit without any unit, `Mi` is inferred - # @default -- `1.2 * memory` + # -- Set memory request without any unit, `Mi` is inferred. Memory limit always equals request. Default is sized for JVM/Spring Boot apps (the JVM alone needs ~150–250 MiB before app code runs); lighter workloads should override down. + # @default -- 512 + memory: 512 + # -- @deprecated memoryLimit is removed. Memory limit is now always equal to memory request. Use `container.memory` instead. memoryLimit: # -- Set the uid that your user runs with # @default -- 1000 uid: 1000 - # -- (int) Set the target replica count, if equal to 1 the PDB minAvailable will be set to 100% - replicas: - # -- (int) Force replicas disables autoscaling and PDB, if set to 1 it will use Recreate strategy - forceReplicas: - # -- (string) Set the minimal available replicas, used by PDB - # @default -- 50% - minAvailable: - # -- (int) Set the maxReplicas for your HPA - maxReplicas: # -- Attach secrets and configmaps to your `env` envFrom: [] @@ -216,7 +252,7 @@ container: grpc: # port: 8080 readiness: - # -- Set the path for liveness probe + # -- Set the path for readiness probe # @default -- /actuator/health/readiness path: "/actuator/health/readiness" # -- Set the initial delay for the probe @@ -234,6 +270,8 @@ container: grpc: # port: 8080 startup: + # -- Set the path for startup probe. If set, uses httpGet instead of tcpSocket. Useful when startup includes long-running tasks like cache warming. 
+ path: # -- Set the failure threshold # @default -- 300 failureThreshold: 300 @@ -247,8 +285,6 @@ container: volumeMounts: [] # -- Configure volume, accepts kubernetes syntax volumes: [] - # -- (int) Override pod terminationGracePeriodSeconds (default 30s). - terminationGracePeriodSeconds: # -- Set pod lifecycle handlers lifecycle: {} @@ -257,7 +293,7 @@ container: initContainers: [] postgres: - # -- Enable or disable the proxy + # -- Enable or disable the Cloud SQL proxy v2 sidecar # @default -- false enabled: false # -- Configure cpu request for proxy @@ -268,18 +304,17 @@ postgres: # -- Configure memory request for proxy without units, `Mi` inferred # @default -- 16 memory: 16 - # -- Configure memoryLimit for proxy without units, `Mi` inferred - # @default -- 16 - memoryLimit: 16 - # -- Override name for connection configmap. This must at least contain `INSTANCES`. - connectionConfig: - # -- Override name for credentials secret. This must at least contain `PGUSER` and `PGPASSWORD`. + # -- List of database connections keyed by Terraform `secret_key_prefix`. Each entry derives Secret Manager keys: `{prefix}INSTANCES`, `{prefix}USER`, `{prefix}PASSWORD`. The chart generates `{prefix}HOST=localhost` and `{prefix}PORT=5432+index`. When empty and `enabled: true`, defaults to `[{secretKeyPrefix: PG}]`. + # @default -- [] + instances: [] + # - secretKeyPrefix: PG + # - secretKeyPrefix: ANALYTICS_PG + # port: 6000 + # -- Override the Kubernetes secret name for credentials. Bypasses the ExternalSecret for credentials; the proxy ExternalSecret is still created. The secret must contain the expected env vars (e.g. `PGUSER`, `PGPASSWORD`). credentialsSecret: - # -- Override the term_timeout for the Cloud SQL Proxy. Controls how long the proxy waits for - # existing connections to close after receiving SIGTERM before force-closing them. - # Increase this if your app runs long-lived jobs that need the database during pod termination. 
+ # -- Override the max-sigterm-delay for the Cloud SQL Proxy. Adds a delay before the proxy begins shutdown after receiving SIGTERM, useful for allowing load balancers to deregister the pod. # @default -- 30s - termTimeout: + maxSigtermDelay: configmap: # -- Enable or disable the configmap diff --git a/examples/common/cronjob/Chart.yaml b/examples/common/cronjob/Chart.yaml index 5d92165..6d42631 100644 --- a/examples/common/cronjob/Chart.yaml +++ b/examples/common/cronjob/Chart.yaml @@ -5,5 +5,5 @@ version: 0.0.3 appVersion: "0.0.1" dependencies: - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" diff --git a/examples/common/cronjob/README.md b/examples/common/cronjob/README.md index d21ca8c..b1052db 100644 --- a/examples/common/cronjob/README.md +++ b/examples/common/cronjob/README.md @@ -1,25 +1,48 @@ # cronjob -![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) -A Helm chart for Entur CronJob workloads (no Deployment, Service, or Ingress). +A scheduled job (Kubernetes `CronJob`) instead of a long-running `Deployment`. No service, no ingress, no HPA — just a container that runs on a schedule. + +## What this example shows + +- `deployment.enabled: false` — turn off the long-running deployment. +- `cron.enabled: true` — turn on the CronJob workload (mutually exclusive with `deployment.enabled`). +- `cron.schedule` — standard 5-field cron expression (this example: every 6 hours, in cluster timezone / UTC). +- `container.command` and `container.args` — what the job actually runs. 
+ +## When to use this + +Use this for time-driven work: nightly imports, periodic cleanups, batch processing. If the work is event-driven instead, use a regular `Deployment` with a queue consumer (see the `multi-deploy` example). + +## Key values to know + +- `cron.schedule` — standard cron syntax. +- `container.image` — the same `<+artifacts.primary.image>` placeholder as deployments. The CronJob pulls this image each invocation. +- `container.command` / `container.args` — what to run. If your image already has the right entrypoint, you can omit both. ## Requirements | Repository | Name | Version | |------------|------|---------| -| https://entur.github.io/helm-charts | common | 1.21.1 | +| https://entur.github.io/helm-charts | common | 2.0.0 | ## Values | Key | Type | Default | Description | |-----|------|---------|-------------| | common.app | string | `"my-cronjob"` | | +| common.appId | string | `"mycron"` | | +| common.container.args[0] | string | `"-c"` | | +| common.container.args[1] | string | `"echo 'Hello from CronJob'"` | | +| common.container.command[0] | string | `"/bin/sh"` | | | common.container.image | string | `"<+artifacts.primary.image>"` | | | common.cron.enabled | bool | `true` | | -| common.cron.schedule | string | `"0 */6 * * *"` | Runs every 6 hours | +| common.cron.schedule | string | `"0 */6 * * *"` | | | common.deployment.enabled | bool | `false` | | | common.ingress.enabled | bool | `false` | | | common.service.enabled | bool | `false` | | -| common.shortname | string | `"mycron"` | | | common.team | string | `"example"` | | + +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/examples/common/cronjob/README.md.gotmpl b/examples/common/cronjob/README.md.gotmpl new file mode 100644 index 0000000..a5909d3 --- /dev/null +++ b/examples/common/cronjob/README.md.gotmpl @@ -0,0 +1,28 @@ +{{ template "chart.header" . 
}} + +{{ template "chart.versionBadge" . }} {{ template "chart.appVersionBadge" . }} + +A scheduled job (Kubernetes `CronJob`) instead of a long-running `Deployment`. No service, no ingress, no HPA — just a container that runs on a schedule. + +## What this example shows + +- `deployment.enabled: false` — turn off the long-running deployment. +- `cron.enabled: true` — turn on the CronJob workload (mutually exclusive with `deployment.enabled`). +- `cron.schedule` — standard 5-field cron expression (this example: every 6 hours, in cluster timezone / UTC). +- `container.command` and `container.args` — what the job actually runs. + +## When to use this + +Use this for time-driven work: nightly imports, periodic cleanups, batch processing. If the work is event-driven instead, use a regular `Deployment` with a queue consumer (see the `multi-deploy` example). + +## Key values to know + +- `cron.schedule` — standard cron syntax. +- `container.image` — the same `<+artifacts.primary.image>` placeholder as deployments. The CronJob pulls this image each invocation. +- `container.command` / `container.args` — what to run. If your image already has the right entrypoint, you can omit both. + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "helm-docs.versionFooter" . 
}} diff --git a/examples/common/cronjob/values.yaml b/examples/common/cronjob/values.yaml index d62ac6d..166ff75 100644 --- a/examples/common/cronjob/values.yaml +++ b/examples/common/cronjob/values.yaml @@ -1,6 +1,6 @@ common: app: my-cronjob - shortname: mycron + appId: mycron team: example deployment: enabled: false @@ -13,7 +13,7 @@ common: schedule: "0 */6 * * *" container: image: <+artifacts.primary.image> - command: "['/bin/sh']" + command: ["/bin/sh"] args: - "-c" - "echo 'Hello from CronJob'" diff --git a/examples/common/grpc-app/Chart.yaml b/examples/common/grpc-app/Chart.yaml index dce8f0c..1917bda 100644 --- a/examples/common/grpc-app/Chart.yaml +++ b/examples/common/grpc-app/Chart.yaml @@ -5,5 +5,5 @@ version: 0.0.3 appVersion: "0.0.1" dependencies: - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" diff --git a/examples/common/grpc-app/README.md b/examples/common/grpc-app/README.md index 7e836e7..e2fe63b 100644 --- a/examples/common/grpc-app/README.md +++ b/examples/common/grpc-app/README.md @@ -1,25 +1,43 @@ # grpc-app -![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) -A Helm chart for basic Entur deployments using gRPC +A gRPC service. The chart switches HTTP probes to native Kubernetes gRPC probes and configures the **ingress** (the Kubernetes resource that routes external traffic into your service) and the service itself for end-to-end HTTP/2 (h2c — HTTP/2 cleartext) traffic. + +## What this example shows + +- `grpc: true` — tells the chart to use native gRPC probes (Kubernetes ≥1.24) instead of HTTP probes against `/actuator/health/*`. 
+- `ingress.trafficType: http2` — configures the ingress for end-to-end HTTP/2 instead of HTTP/1. +- `service.annotations.entur.no/internal-http2: "true"` — enables HTTP/2 for in-cluster service-to-service calls too. + +## When to use this + +Use this for pure gRPC services. If your service exposes both gRPC and REST, leave `grpc: false` and configure the HTTP/2 annotation manually on the parts that need it — see the comment in `values.yaml`. + +## Key values to know + +- `grpc: true` — the headline switch. Also wires up the `traefik.ingress.kubernetes.io/service.serversscheme: h2c` annotation automatically. +- `service.internalPort` — the port your gRPC server listens on. The chart wires this into the gRPC probes. +- `ingress.trafficType: http2` — anything else (`api`, `public`) terminates HTTP/2 at the ingress. ## Requirements | Repository | Name | Version | |------------|------|---------| -| https://entur.github.io/helm-charts | common | 1.21.1 | +| https://entur.github.io/helm-charts | common | 2.0.0 | ## Values | Key | Type | Default | Description | |-----|------|---------|-------------| | common.app | string | `"grpc-app"` | | +| common.appId | string | `"grpcapp"` | | | common.container.image | string | `"<+artifacts.primary.image>"` | | | common.grpc | bool | `true` | | | common.ingress.enabled | bool | `true` | | | common.ingress.trafficType | string | `"http2"` | | | common.service.annotations."entur.no/internal-http2" | string | `"true"` | | -| common.shortname | string | `"grpcapp"` | | | common.team | string | `"example"` | | +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/examples/common/grpc-app/README.md.gotmpl b/examples/common/grpc-app/README.md.gotmpl new file mode 100644 index 0000000..e9fc7f9 --- /dev/null +++ b/examples/common/grpc-app/README.md.gotmpl @@ -0,0 +1,27 @@ +{{ template "chart.header" . 
}} + +{{ template "chart.versionBadge" . }} {{ template "chart.appVersionBadge" . }} + +A gRPC service. The chart switches HTTP probes to native Kubernetes gRPC probes and configures the **ingress** (the Kubernetes resource that routes external traffic into your service) and the service itself for end-to-end HTTP/2 (h2c — HTTP/2 cleartext) traffic. + +## What this example shows + +- `grpc: true` — tells the chart to use native gRPC probes (Kubernetes ≥1.24) instead of HTTP probes against `/actuator/health/*`. +- `ingress.trafficType: http2` — configures the ingress for end-to-end HTTP/2 instead of HTTP/1. +- `service.annotations.entur.no/internal-http2: "true"` — enables HTTP/2 for in-cluster service-to-service calls too. + +## When to use this + +Use this for pure gRPC services. If your service exposes both gRPC and REST, leave `grpc: false` and configure the HTTP/2 annotation manually on the parts that need it — see the comment in `values.yaml`. + +## Key values to know + +- `grpc: true` — the headline switch. Also wires up the `traefik.ingress.kubernetes.io/service.serversscheme: h2c` annotation automatically. +- `service.internalPort` — the port your gRPC server listens on. The chart wires this into the gRPC probes. +- `ingress.trafficType: http2` — anything else (`api`, `public`) terminates HTTP/2 at the ingress. + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "helm-docs.versionFooter" . 
}} diff --git a/examples/common/grpc-app/env/values-kub-ent-prd.yaml b/examples/common/grpc-app/env/values-kub-ent-prd.yaml new file mode 100644 index 0000000..157fa4f --- /dev/null +++ b/examples/common/grpc-app/env/values-kub-ent-prd.yaml @@ -0,0 +1,4 @@ +common: + env: prd + ingress: + host: grpc-app.entur.io diff --git a/examples/common/grpc-app/env/values-kub-ent-tst.yaml b/examples/common/grpc-app/env/values-kub-ent-tst.yaml new file mode 100644 index 0000000..af00018 --- /dev/null +++ b/examples/common/grpc-app/env/values-kub-ent-tst.yaml @@ -0,0 +1,4 @@ +common: + env: tst + ingress: + host: grpc-app.tst.entur.io diff --git a/examples/common/grpc-app/values.yaml b/examples/common/grpc-app/values.yaml index 63499bd..c632582 100644 --- a/examples/common/grpc-app/values.yaml +++ b/examples/common/grpc-app/values.yaml @@ -1,6 +1,6 @@ common: app: grpc-app - shortname: grpcapp + appId: grpcapp team: example grpc: true # -- Enable gRPC which will add an annotation and use grpc probes ingress: diff --git a/examples/common/multi-container/Chart.yaml b/examples/common/multi-container/Chart.yaml index ae27e3d..0554af7 100644 --- a/examples/common/multi-container/Chart.yaml +++ b/examples/common/multi-container/Chart.yaml @@ -5,5 +5,5 @@ version: 0.0.3 appVersion: "0.0.1" dependencies: - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" diff --git a/examples/common/multi-container/README.md b/examples/common/multi-container/README.md index ad10689..dab5bd1 100644 --- a/examples/common/multi-container/README.md +++ b/examples/common/multi-container/README.md @@ -1,20 +1,38 @@ # multi-container -![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 
0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) -A Helm chart for multiple containers +A pod with multiple containers running side by side. Same pod, same lifecycle, shared network namespace. + +## What this example shows + +- `containers:` (plural list) instead of `container:` (single map). Each entry in the list becomes one container in the pod. +- Custom probe paths and a long startup timeout (`startup.failureThreshold: 300`). +- Multiple service ports targeting different containers (`service.ports`). +- Prometheus scraping on a non-default path and port. + +## When to use this + +Only put two containers in one pod when they genuinely need to share a lifecycle — typical sidecars are logging shippers, config reloaders, or proxies (e.g. Envoy). If the two processes can run as separate Deployments, prefer that instead (see the `multi-deploy` example) — separate Deployments scale and fail independently. + +## Key values to know + +- `containers:` (list) — each entry follows the same shape as the single `container:` map. +- `service.ports` — list of TCP ports the service exposes. Each port's `targetPort` can route to a different container's `containerPort`. +- `deployment.prometheus.path` / `port` — override scrape config when your metrics endpoint isn't the default `/actuator/prometheus`. 
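+
+A condensed sketch of the shape (the container `name` entries here are illustrative, not this example's exact values; see `values.yaml` for the full list):
+
+```yaml
+common:
+  app: multi-container
+  appId: multcont
+  team: example
+  containers:                # plural list: each entry becomes one container in the pod
+    - name: app              # hypothetical name
+      image: <+artifacts.primary.image>
+      cpu: 1.1
+    - name: sidecar          # hypothetical name
+      image: <+artifacts.primary.image>
+  service:
+    ports:
+      - port: 5001           # port the Service exposes
+        protocol: TCP
+        targetPort: 6001     # routes to a containerPort in one of the containers
+```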
## Requirements | Repository | Name | Version | |------------|------|---------| -| https://entur.github.io/helm-charts | common | 1.21.1 | +| https://entur.github.io/helm-charts | common | 2.0.0 | ## Values | Key | Type | Default | Description | |-----|------|---------|-------------| | common.app | string | `"multi-container"` | | +| common.appId | string | `"multcont"` | | | common.configmap.data.APP1CONF | string | `"yes"` | | | common.configmap.enabled | bool | `true` | | | common.containers[0].cpu | float | `1.1` | | @@ -58,6 +76,7 @@ A Helm chart for multiple containers | common.service.ports[1].port | int | `5001` | | | common.service.ports[1].protocol | string | `"TCP"` | | | common.service.ports[1].targetPort | int | `6001` | | -| common.shortname | string | `"multcont"` | | | common.team | string | `"example"` | | +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/examples/common/multi-container/README.md.gotmpl b/examples/common/multi-container/README.md.gotmpl new file mode 100644 index 0000000..061e01d --- /dev/null +++ b/examples/common/multi-container/README.md.gotmpl @@ -0,0 +1,28 @@ +{{ template "chart.header" . }} + +{{ template "chart.versionBadge" . }} {{ template "chart.appVersionBadge" . }} + +A pod with multiple containers running side by side. Same pod, same lifecycle, shared network namespace. + +## What this example shows + +- `containers:` (plural list) instead of `container:` (single map). Each entry in the list becomes one container in the pod. +- Custom probe paths and a long startup timeout (`startup.failureThreshold: 300`). +- Multiple service ports targeting different containers (`service.ports`). +- Prometheus scraping on a non-default path and port. 
+ +## When to use this + +Only put two containers in one pod when they genuinely need to share a lifecycle — typical sidecars are logging shippers, config reloaders, or proxies (e.g. Envoy). If the two processes can run as separate Deployments, prefer that instead (see the `multi-deploy` example) — separate Deployments scale and fail independently. + +## Key values to know + +- `containers:` (list) — each entry follows the same shape as the single `container:` map. +- `service.ports` — list of TCP ports the service exposes. Each port's `targetPort` can route to a different container's `containerPort`. +- `deployment.prometheus.path` / `port` — override scrape config when your metrics endpoint isn't the default `/actuator/prometheus`. + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "helm-docs.versionFooter" . }} diff --git a/examples/common/multi-container/values.yaml b/examples/common/multi-container/values.yaml index cc04395..cd5c8d7 100644 --- a/examples/common/multi-container/values.yaml +++ b/examples/common/multi-container/values.yaml @@ -1,6 +1,6 @@ common: app: multi-container - shortname: multcont + appId: multcont team: example env: dev diff --git a/examples/common/multi-deploy/Chart.yaml b/examples/common/multi-deploy/Chart.yaml index 8783c3b..f20e3db 100644 --- a/examples/common/multi-deploy/Chart.yaml +++ b/examples/common/multi-deploy/Chart.yaml @@ -5,10 +5,10 @@ version: 0.0.3 appVersion: "0.0.1" dependencies: - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" alias: multi-1 - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" alias: multi-2 diff --git a/examples/common/multi-deploy/README.md b/examples/common/multi-deploy/README.md index de71a67..eb84dba 100644 --- a/examples/common/multi-deploy/README.md +++ b/examples/common/multi-deploy/README.md @@ -1,16 +1,23 @@ # multi-deploy -![Version: 
0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) -A Helm Common Chart example where we deploy the same container image to two different deployments with different environment variables. +A single chart that produces two separate Deployments (and Services, ingresses, etc.) from one container image. The image is the same; what differs is which `CONTAINER_ROLE` env var each Deployment gets. -This way the application code can pick up the environment variable (CONTAINER_ROLE) and act accordingly. +## What this example shows -In the example - one is a basic rest api, and the other is a kafka queue consumer. +- `multi-1` is a REST API — gets `CONTAINER_ROLE=rest-api` and an **ingress** (the Kubernetes resource that routes external HTTP traffic into your service). +- `multi-2` is a Kafka consumer — gets `CONTAINER_ROLE=kafka-consumer` and **no** ingress (workers don't accept inbound traffic). + +The application code reads `CONTAINER_ROLE` at startup and decides which mode to run in. The two deployments scale independently. + +## When to use this + +Use this when one codebase legitimately serves two roles (typically: web API + async worker) and you want to deploy them as separate, independently-scaling workloads — without maintaining two repos or two charts. ## GitHub Actions CD -When using Enturs shared [gha-helm](https://github.com/entur/gha-helm/blob/main/README-deploy.md) reusable workflow we also need define the `image:` path being replaced during deploy. +When using Entur's shared [gha-helm](https://github.com/entur/gha-helm/blob/main/README-deploy.md) reusable workflow, we also need to define the `image:` paths to be replaced during deploy.
```yaml helm-deploy: @@ -22,38 +29,43 @@ helm-deploy: secrets: inherit ``` +## Key values to know + +- `multi-1` / `multi-2` — top-level keys must match the `alias` fields in `Chart.yaml` dependencies. Each key configures one deployment. +- `releaseName` (per alias) — gives each deployment its own Helm release-style name. This is the secret sauce that splits them apart. +- `appId` — typically the **same** for both halves, since they belong to the same logical app. + ## Requirements -| Repository | Name | Version | -| ----------------------------------- | --------------- | ------- | -| https://entur.github.io/helm-charts | multi-1(common) | 1.21.1 | -| https://entur.github.io/helm-charts | multi-2(common) | 1.21.1 | +| Repository | Name | Version | +|------------|------|---------| +| https://entur.github.io/helm-charts | multi-1(common) | 2.0.0 | +| https://entur.github.io/helm-charts | multi-2(common) | 2.0.0 | ## Values -| Key | Type | Default | Description | -| ----------------------------------- | ------ | ------------------------------- | ----------- | -| multi-1.app | string | `"multi-1"` | | -| multi-1.configmap.data.APP1CONF | string | `"yes"` | | -| multi-1.configmap.enabled | bool | `true` | | -| multi-1.container.image | string | `"<+artifacts.primary.image>"` | | -| multi-1.env | string | `"dev"` | | -| multi-1.ingress.trafficType | string | `"public"` | | -| multi-1.releaseName | string | `"multi1"` | | -| multi-1.secrets.auth-credentials[0] | string | `"MNG_AUTH0_INT_CLIENT_ID"` | | -| multi-1.secrets.auth-credentials[1] | string | `"MNG_AUTH0_INT_CLIENT_SECRET"` | | -| multi-1.service.internalPort | int | `9000` | | -| multi-1.shortname | string | `"mult1"` | | -| multi-1.team | string | `"example"` | | -| multi-2.app | string | `"multi-2"` | | -| multi-2.configmap.data.APP2CONF | string | `"yes"` | | -| multi-2.configmap.enabled | bool | `true` | | -| multi-2.container.image | string | `"<+artifacts.primary.image>"` | | -| multi-2.env | string | 
`"dev"` | | -| multi-2.ingress.trafficType | string | `"public"` | | -| multi-2.releaseName | string | `"multi2"` | | -| multi-2.secrets.auth-credentials[0] | string | `"MNG_AUTH0_INT_CLIENT_ID"` | | -| multi-2.secrets.auth-credentials[1] | string | `"MNG_AUTH0_INT_CLIENT_SECRET"` | | -| multi-2.service.internalPort | int | `9000` | | -| multi-2.shortname | string | `"mult2"` | | -| multi-2.team | string | `"example"` | | +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| multi-1.app | string | `"my-rest-api"` | | +| multi-1.appId | string | `"myappid1"` | | +| multi-1.container.env[0].name | string | `"CONTAINER_ROLE"` | | +| multi-1.container.env[0].value | string | `"rest-api"` | | +| multi-1.container.image | string | `"<+artifacts.primary.image>"` | | +| multi-1.ingress.trafficType | string | `"api"` | | +| multi-1.releaseName | string | `"my-rest-api"` | | +| multi-1.secrets.auth-credentials[0] | string | `"MNG_AUTH0_INT_CLIENT_ID"` | | +| multi-1.secrets.auth-credentials[1] | string | `"MNG_AUTH0_INT_CLIENT_SECRET"` | | +| multi-1.team | string | `"team-excellence"` | | +| multi-2.app | string | `"my-kafka-consumer"` | | +| multi-2.appId | string | `"myappid1"` | | +| multi-2.container.env[0].name | string | `"CONTAINER_ROLE"` | | +| multi-2.container.env[0].value | string | `"kafka-consumer"` | | +| multi-2.container.image | string | `"<+artifacts.primary.image>"` | | +| multi-2.ingress.enabled | bool | `false` | | +| multi-2.releaseName | string | `"my-kafka-consumer"` | | +| multi-2.secrets.auth-credentials[0] | string | `"MNG_AUTH0_INT_CLIENT_ID"` | | +| multi-2.secrets.auth-credentials[1] | string | `"MNG_AUTH0_INT_CLIENT_SECRET"` | | +| multi-2.team | string | `"team-excellence"` | | + +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/examples/common/multi-deploy/README.md.gotmpl 
b/examples/common/multi-deploy/README.md.gotmpl new file mode 100644 index 0000000..d2b379d --- /dev/null +++ b/examples/common/multi-deploy/README.md.gotmpl @@ -0,0 +1,42 @@ +{{ template "chart.header" . }} + +{{ template "chart.versionBadge" . }} {{ template "chart.appVersionBadge" . }} + +A single chart that produces two separate Deployments (and Services, ingresses, etc.) from one container image. The image is the same; what differs is which `CONTAINER_ROLE` env var each Deployment gets. + +## What this example shows + +- `multi-1` is a REST API — gets `CONTAINER_ROLE=rest-api` and an **ingress** (the Kubernetes resource that routes external HTTP traffic into your service). +- `multi-2` is a Kafka consumer — gets `CONTAINER_ROLE=kafka-consumer` and **no** ingress (workers don't accept inbound traffic). + +The application code reads `CONTAINER_ROLE` at startup and decides which mode to run in. The two deployments scale independently. + +## When to use this + +Use this when one codebase legitimately serves two roles (typically: web API + async worker) and you want to deploy them as separate, independently-scaling workloads — without maintaining two repos or two charts. + +## GitHub Actions CD + +When using Entur's shared [gha-helm](https://github.com/entur/gha-helm/blob/main/README-deploy.md) reusable workflow, we also need to define the `image:` paths to be replaced during deploy. + +```yaml +helm-deploy: + uses: entur/gha-helm/.github/workflows/deploy.yml@v1 + with: + environment: dev + image: amazing-app:2.3.1 + image_set_path: "multi-1.container.image,multi-2.container.image" + secrets: inherit +``` + +## Key values to know + +- `multi-1` / `multi-2` — top-level keys must match the `alias` fields in `Chart.yaml` dependencies. Each key configures one deployment. +- `releaseName` (per alias) — gives each deployment its own Helm release-style name. This is the secret sauce that splits them apart.
+- `appId` — typically the **same** for both halves, since they belong to the same logical app. + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "helm-docs.versionFooter" . }} diff --git a/examples/common/multi-deploy/values.yaml b/examples/common/multi-deploy/values.yaml index bde24ce..4bbdbbb 100644 --- a/examples/common/multi-deploy/values.yaml +++ b/examples/common/multi-deploy/values.yaml @@ -2,7 +2,7 @@ multi-1: # must match the multi-1 alias in Chart.yaml releaseName: my-rest-api # the secret sauce to divide the two apps into separate releases app: my-rest-api - shortname: myappid1 # appID, let this be the same for both multi-1 and multi-2. + appId: myappid1 # appID, let this be the same for both multi-1 and multi-2. team: team-excellence ingress: @@ -21,7 +21,7 @@ multi-2: # must match the multi-2 alias in Chart.yaml releaseName: my-kafka-consumer # the secret sauce to divide the two apps into separate releases app: my-kafka-consumer - shortname: myappid1 # appID, let this be the same for both multi-1 and multi-2. + appId: myappid1 # appID, let this be the same for both multi-1 and multi-2. 
team: team-excellence ingress: diff --git a/examples/common/simple-app/Chart.yaml b/examples/common/simple-app/Chart.yaml index b9ac0c4..f740e9f 100644 --- a/examples/common/simple-app/Chart.yaml +++ b/examples/common/simple-app/Chart.yaml @@ -5,5 +5,5 @@ version: 0.0.3 appVersion: "0.0.1" dependencies: - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" diff --git a/examples/common/simple-app/README.md b/examples/common/simple-app/README.md index b4ef663..479b408 100644 --- a/examples/common/simple-app/README.md +++ b/examples/common/simple-app/README.md @@ -1,22 +1,40 @@ # simple-app -![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) -A Helm chart for basic Entur deployments +The smallest possible deployment using the Entur common chart. Just the required values, nothing else. + +## What this example shows + +A bare-minimum Spring Boot-style deployment with public ingress. The chart fills in everything else with sensible defaults: an HPA with 2–10 replicas, a PodDisruptionBudget, security contexts, and resource requests/limits. + +## When to use this + +Use this as your starting point when introducing a brand-new service. Copy `values.yaml`, swap the names, and you have a working deployment. + +## Key values to know + +- `app` — the name of your service (becomes the Deployment name). +- `appId` — must match the `metadata.id` of your `GoogleCloudApplication` resource. +- `team` — the owning team slug. +- `container.image` — `<+artifacts.primary.image>` is the placeholder Harness CD substitutes with the actual built image at deploy time. 
+- `ingress.trafficType` — controls the **ingress** (the Kubernetes resource that routes external HTTP traffic into your service). Use `public` for end-user traffic, `api` for traffic behind the Apigee API Gateway. ## Requirements | Repository | Name | Version | |------------|------|---------| -| https://entur.github.io/helm-charts | common | 1.21.1 | +| https://entur.github.io/helm-charts | common | 2.0.0 | ## Values | Key | Type | Default | Description | |-----|------|---------|-------------| | common.app | string | `"simple-app"` | | +| common.appId | string | `"simapp"` | | | common.container.image | string | `"<+artifacts.primary.image>"` | | | common.ingress.trafficType | string | `"public"` | | -| common.shortname | string | `"simapp"` | | | common.team | string | `"example"` | | +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/examples/common/simple-app/README.md.gotmpl b/examples/common/simple-app/README.md.gotmpl new file mode 100644 index 0000000..395b68a --- /dev/null +++ b/examples/common/simple-app/README.md.gotmpl @@ -0,0 +1,27 @@ +{{ template "chart.header" . }} + +{{ template "chart.versionBadge" . }} {{ template "chart.appVersionBadge" . }} + +The smallest possible deployment using the Entur common chart. Just the required values, nothing else. + +## What this example shows + +A bare-minimum Spring Boot-style deployment with public ingress. The chart fills in everything else with sensible defaults: an HPA with 2–10 replicas, a PodDisruptionBudget, security contexts, and resource requests/limits. + +## When to use this + +Use this as your starting point when introducing a brand-new service. Copy `values.yaml`, swap the names, and you have a working deployment. + +## Key values to know + +- `app` — the name of your service (becomes the Deployment name). 
+- `appId` — must match the `metadata.id` of your `GoogleCloudApplication` resource. +- `team` — the owning team slug. +- `container.image` — `<+artifacts.primary.image>` is the placeholder Harness CD substitutes with the actual built image at deploy time. +- `ingress.trafficType` — controls the **ingress** (the Kubernetes resource that routes external HTTP traffic into your service). Use `public` for end-user traffic, `api` for traffic behind the Apigee API Gateway. + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "helm-docs.versionFooter" . }} diff --git a/examples/common/simple-app/env/values-kub-ent-prd.yaml b/examples/common/simple-app/env/values-kub-ent-prd.yaml index 8d25a63..421c942 100644 --- a/examples/common/simple-app/env/values-kub-ent-prd.yaml +++ b/examples/common/simple-app/env/values-kub-ent-prd.yaml @@ -2,8 +2,8 @@ common: env: prd ingress: host: simple-app.entur.io - container: - replicas: 2 + deployment: + minReplicas: 2 maxReplicas: 10 configmap: enabled: true diff --git a/examples/common/simple-app/values.yaml b/examples/common/simple-app/values.yaml index 1ffa35a..b2e5fc9 100644 --- a/examples/common/simple-app/values.yaml +++ b/examples/common/simple-app/values.yaml @@ -1,6 +1,6 @@ common: app: simple-app - shortname: simapp + appId: simapp team: example ingress: trafficType: public diff --git a/examples/common/typical-backend/Chart.yaml b/examples/common/typical-backend/Chart.yaml index 7384add..5ae6b45 100644 --- a/examples/common/typical-backend/Chart.yaml +++ b/examples/common/typical-backend/Chart.yaml @@ -6,5 +6,5 @@ version: 0.0.3 appVersion: "0.0.1" dependencies: - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" diff --git a/examples/common/typical-backend/README.md b/examples/common/typical-backend/README.md index f90631b..6a51c91 100644 --- a/examples/common/typical-backend/README.md +++ 
b/examples/common/typical-backend/README.md @@ -1,20 +1,42 @@ # typical-backend -![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) -A Helm chart for basic Entur deployments +The most common shape of a service at Entur: a Spring Boot backend that talks to a Cloud SQL Postgres database via a sidecar proxy, exposed through the Apigee API Gateway. + +## What this example shows + +- One container running your app. +- A `cloud-sql-proxy` sidecar so your app reaches Postgres on `localhost:5432` instead of going over the public internet. +- An **ingress** (the Kubernetes resource that routes external HTTP traffic into your service) with `trafficType: api` — the URL is only reachable behind the Apigee API Gateway at `https://api.entur.io/yourAPI`. + +## When to use this + +Use this when your service: + +- Is a Spring Boot or other JVM app. +- Needs a Postgres database. +- Exposes an HTTP API that goes through Apigee. + +## Key values to know + +- `container.cpu` / `container.memory` — sized for a JVM. The JVM alone needs ~150–250 MiB before app code runs, so 512 is a reasonable floor. Bump higher if your app needs more heap. +- `postgres.enabled: true` — turns on the Cloud SQL proxy sidecar. The chart auto-derives Secret Manager keys (`PGINSTANCES`, `PGUSER`, `PGPASSWORD`) and exports `PGHOST=localhost`, `PGPORT=5432` to your app container, matching the defaults in [`entur/terraform-google-sql-db`](https://github.com/entur/terraform-google-sql-db). +- `service.internalPort` — the port your app listens on inside the pod. 
Must match what your Dockerfile exposes. +- `deployment.prometheus.enabled` — flip on once you have a `/actuator/prometheus` endpoint. ## Requirements | Repository | Name | Version | |------------|------|---------| -| https://entur.github.io/helm-charts | common | 1.21.1 | +| https://entur.github.io/helm-charts | common | 2.0.0 | ## Values | Key | Type | Default | Description | |-----|------|---------|-------------| | common.app | string | `"typical-backend"` | | +| common.appId | string | `"typbak"` | | | common.container.cpu | float | `0.3` | | | common.container.image | string | `"<+artifacts.primary.image>"` | | | common.container.memory | int | `512` | | @@ -25,6 +47,7 @@ A Helm chart for basic Entur deployments | common.postgres.enabled | bool | `true` | | | common.postgres.memory | int | `32` | | | common.service.internalPort | int | `9000` | | -| common.shortname | string | `"typbak"` | | | common.team | string | `"team-example"` | | +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/examples/common/typical-backend/README.md.gotmpl b/examples/common/typical-backend/README.md.gotmpl new file mode 100644 index 0000000..6a28842 --- /dev/null +++ b/examples/common/typical-backend/README.md.gotmpl @@ -0,0 +1,32 @@ +{{ template "chart.header" . }} + +{{ template "chart.versionBadge" . }} {{ template "chart.appVersionBadge" . }} + +The most common shape of a service at Entur: a Spring Boot backend that talks to a Cloud SQL Postgres database via a sidecar proxy, exposed through the Apigee API Gateway. + +## What this example shows + +- One container running your app. +- A `cloud-sql-proxy` sidecar so your app reaches Postgres on `localhost:5432` instead of going over the public internet. 
+- An **ingress** (the Kubernetes resource that routes external HTTP traffic into your service) with `trafficType: api` — the URL is only reachable behind the Apigee API Gateway at `https://api.entur.io/yourAPI`. + +## When to use this + +Use this when your service: + +- Is a Spring Boot or other JVM app. +- Needs a Postgres database. +- Exposes an HTTP API that goes through Apigee. + +## Key values to know + +- `container.cpu` / `container.memory` — sized for a JVM. The JVM alone needs ~150–250 MiB before app code runs, so 512 is a reasonable floor. Bump higher if your app needs more heap. +- `postgres.enabled: true` — turns on the Cloud SQL proxy sidecar. The chart auto-derives Secret Manager keys (`PGINSTANCES`, `PGUSER`, `PGPASSWORD`) and exports `PGHOST=localhost`, `PGPORT=5432` to your app container, matching the defaults in [`entur/terraform-google-sql-db`](https://github.com/entur/terraform-google-sql-db). +- `service.internalPort` — the port your app listens on inside the pod. Must match what your Dockerfile exposes. +- `deployment.prometheus.enabled` — flip on once you have a `/actuator/prometheus` endpoint. + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "helm-docs.versionFooter" . 
}} diff --git a/examples/common/typical-backend/values.yaml b/examples/common/typical-backend/values.yaml index f761c5a..a5dabf9 100644 --- a/examples/common/typical-backend/values.yaml +++ b/examples/common/typical-backend/values.yaml @@ -1,6 +1,6 @@ common: app: typical-backend - shortname: typbak + appId: typbak team: team-example ingress: enabled: true @@ -14,7 +14,7 @@ common: image: <+artifacts.primary.image> # The CD tool will replace this value with the actual image cpu: 0.3 # Adjust this to your application needs memory: 512 # Adjust this to your application needs, a java application might need more memory to start like 1024 or 2048 or +++ - postgres: # sets up a postgres proxy so you can connect to your postgresql instance via localhost on the pod - enabled: true + postgres: # sets up a Cloud SQL proxy sidecar so you can connect to your postgresql instance via localhost on the pod + enabled: true # defaults to secretKeyPrefix: PG, matching entur/terraform-google-sql-db's default secret_key_prefix cpu: 0.1 # PostgreSQL Proxy setting, this usually is enough for most applications memory: 32 # PostgreSQL Proxy setting, this usually is enough for most applications diff --git a/examples/common/typical-frontend/Chart.yaml b/examples/common/typical-frontend/Chart.yaml index c4f2caa..bda5589 100644 --- a/examples/common/typical-frontend/Chart.yaml +++ b/examples/common/typical-frontend/Chart.yaml @@ -6,5 +6,5 @@ version: 0.0.3 appVersion: "0.0.1" dependencies: - name: common - version: 1.22.0 + version: 2.0.0 repository: "https://entur.github.io/helm-charts" diff --git a/examples/common/typical-frontend/README.md b/examples/common/typical-frontend/README.md index b8f65d7..a378880 100644 --- a/examples/common/typical-frontend/README.md +++ b/examples/common/typical-frontend/README.md @@ -1,32 +1,56 @@ # typical-frontend -![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![Type: 
application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) +![Version: 0.0.3](https://img.shields.io/badge/Version-0.0.3-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) -A Helm chart for basic Entur deployments +A typical frontend or BFF (backend-for-frontend) web app: public ingress, a ConfigMap for non-secret runtime config, and Secret Manager-backed secrets. + +## What this example shows + +- `ingress.trafficType: public` — accepts end-user traffic directly (not behind Apigee). An **ingress** is the Kubernetes resource that routes external HTTP traffic into your service. +- `configmap` — non-sensitive runtime config (e.g. a timezone) baked into env vars. +- `secrets.auth-credentials` — Secret Manager keys pulled into a Kubernetes Secret named `<app>-auth-credentials` via External Secrets. +- `deployment.prometheus.enabled: true` — scrapes `/actuator/prometheus` by default. + +## When to use this + +Use this for any frontend or BFF that: + +- Serves end users directly over HTTPS. +- Needs both runtime config (ConfigMap) and secrets (External Secrets). +- Is not a JVM app (if it is, bump `container.memory` higher). + +## Key values to know + +- `ingress.trafficType: public` — exposes the service to the public internet via the Entur ingress controller. +- `configmap.data` — keys here become env vars in the container. +- `secrets.auth-credentials` — list of Secret Manager keys; the chart turns them into Kubernetes Secret entries with the same names. +- `deployment.minReplicas: 2` — minimum pod count when HPA scales down. 
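+
+Taken together, a minimal sketch of how these values combine, abridged from this example's `values.yaml` (swap in your own app name, team, and secret keys):
+
+```yaml
+common:
+  app: typical-frontend
+  appId: typfro
+  team: example
+  ingress:
+    enabled: true
+    trafficType: public        # end-user traffic, not behind Apigee
+  deployment:
+    prometheus:
+      enabled: true
+    minReplicas: 2             # HPA will not scale below this
+  container:
+    image: <+artifacts.primary.image>   # replaced by the CD tool at deploy time
+  configmap:
+    enabled: true
+    data:
+      TZ: "Europe/Oslo"        # becomes an env var in the container
+  secrets:
+    auth-credentials:          # becomes k8s Secret "typical-frontend-auth-credentials"
+      - MNG_AUTH0_INT_CLIENT_ID
+      - MNG_AUTH0_INT_CLIENT_SECRET
+```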
## Requirements | Repository | Name | Version | |------------|------|---------| -| https://entur.github.io/helm-charts | common | 1.21.1 | +| https://entur.github.io/helm-charts | common | 2.0.0 | ## Values | Key | Type | Default | Description | |-----|------|---------|-------------| | common.app | string | `"typical-frontend"` | | -| common.configmap.data.TZx | string | `"Europe/Oslo"` | | +| common.appId | string | `"typfro"` | | +| common.configmap.data.TZ | string | `"Europe/Oslo"` | | | common.configmap.enabled | bool | `true` | | | common.container.cpu | float | `0.3` | | | common.container.image | string | `"<+artifacts.primary.image>"` | | | common.container.memory | int | `512` | | +| common.deployment.minReplicas | int | `2` | | | common.deployment.prometheus.enabled | bool | `true` | | -| common.deployment.replicas | int | `2` | | | common.ingress.enabled | bool | `true` | | | common.ingress.trafficType | string | `"public"` | | | common.secrets.auth-credentials[0] | string | `"MNG_AUTH0_INT_CLIENT_ID"` | | | common.secrets.auth-credentials[1] | string | `"MNG_AUTH0_INT_CLIENT_SECRET"` | | | common.service.internalPort | int | `9000` | | -| common.shortname | string | `"typfro"` | | | common.team | string | `"example"` | | +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.14.2](https://github.com/norwoodj/helm-docs/releases/v1.14.2) diff --git a/examples/common/typical-frontend/README.md.gotmpl b/examples/common/typical-frontend/README.md.gotmpl new file mode 100644 index 0000000..37853d0 --- /dev/null +++ b/examples/common/typical-frontend/README.md.gotmpl @@ -0,0 +1,33 @@ +{{ template "chart.header" . }} + +{{ template "chart.versionBadge" . }} {{ template "chart.appVersionBadge" . }} + +A typical frontend or BFF (backend-for-frontend) web app: public ingress, a ConfigMap for non-secret runtime config, and Secret Manager-backed secrets. 
+ +## What this example shows + +- `ingress.trafficType: public` — accepts end-user traffic directly (not behind Apigee). An **ingress** is the Kubernetes resource that routes external HTTP traffic into your service. +- `configmap` — non-sensitive runtime config (e.g. a timezone) baked into env vars. +- `secrets.auth-credentials` — Secret Manager keys pulled into a Kubernetes Secret named `<app>-auth-credentials` via External Secrets. +- `deployment.prometheus.enabled: true` — scrapes `/actuator/prometheus` by default. + +## When to use this + +Use this for any frontend or BFF that: + +- Serves end users directly over HTTPS. +- Needs both runtime config (ConfigMap) and secrets (External Secrets). +- Is not a JVM app (if it is, bump `container.memory` higher). + +## Key values to know + +- `ingress.trafficType: public` — exposes the service to the public internet via the Entur ingress controller. +- `configmap.data` — keys here become env vars in the container. +- `secrets.auth-credentials` — list of Secret Manager keys; the chart turns them into Kubernetes Secret entries with the same names. +- `deployment.minReplicas: 2` — minimum pod count when HPA scales down. + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "helm-docs.versionFooter" . 
}} diff --git a/examples/common/typical-frontend/env/values-kub-ent-prd.yaml b/examples/common/typical-frontend/env/values-kub-ent-prd.yaml index 3bb8ff6..5c9bf27 100644 --- a/examples/common/typical-frontend/env/values-kub-ent-prd.yaml +++ b/examples/common/typical-frontend/env/values-kub-ent-prd.yaml @@ -3,8 +3,8 @@ common: ingress: host: typical-frontend.entur.no - container: - replicas: 2 + deployment: + minReplicas: 2 maxReplicas: 10 configmap: enabled: true diff --git a/examples/common/typical-frontend/values.yaml b/examples/common/typical-frontend/values.yaml index fc3b7f6..6a95b3f 100644 --- a/examples/common/typical-frontend/values.yaml +++ b/examples/common/typical-frontend/values.yaml @@ -1,6 +1,6 @@ common: app: typical-frontend - shortname: typfro + appId: typfro team: example ingress: enabled: true @@ -10,7 +10,7 @@ common: deployment: prometheus: enabled: true - replicas: 2 + minReplicas: 2 container: image: <+artifacts.primary.image> cpu: 0.3 @@ -18,7 +18,7 @@ common: configmap: enabled: true data: - TZx: "Europe/Oslo" + TZ: "Europe/Oslo" secrets: auth-credentials: # k8s secret name "typical-frontend-auth-credentials" - MNG_AUTH0_INT_CLIENT_ID diff --git a/fixture/helm/ci/values-ci-cronjob-tests.yaml b/fixture/helm/ci/values-ci-cronjob-tests.yaml index 9c3e661..8f842d1 100644 --- a/fixture/helm/ci/values-ci-cronjob-tests.yaml +++ b/fixture/helm/ci/values-ci-cronjob-tests.yaml @@ -1,5 +1,5 @@ app: mycronjobtest -shortname: mycrntst +appId: mycrntst team: team env: dev @@ -16,14 +16,11 @@ service: container: image: nginxinc/nginx-unprivileged:latest - commmand: ["/bin/sh", "-c", "echo hello world"] + command: ["/bin/sh", "-c", "echo hello world"] labels: version: v1.2.3 cpu: 0.2 memory: 64 - replicas: 1 - maxReplicas: 2 - memoryLimit: 64 envFrom: [] probes: liveness: diff --git a/fixture/helm/ci/values-ci-tests.yaml b/fixture/helm/ci/values-ci-tests.yaml index 7cf1750..eaf6175 100644 --- a/fixture/helm/ci/values-ci-tests.yaml +++ 
b/fixture/helm/ci/values-ci-tests.yaml @@ -1,5 +1,5 @@ app: mytest -shortname: mytst +appId: mytst team: team env: dev @@ -17,15 +17,16 @@ service: externalPort: 8080 internalPort: 8080 +deployment: + minReplicas: 1 + maxReplicas: 2 + container: image: nginxinc/nginx-unprivileged:latest labels: version: v1.2.3 cpu: 0.2 memory: 64 - replicas: 1 - maxReplicas: 2 - memoryLimit: 64 envFrom: [] probes: liveness: diff --git a/fixture/helm/values-cron.yaml b/fixture/helm/values-cron.yaml index c7d8b38..798d86c 100644 --- a/fixture/helm/values-cron.yaml +++ b/fixture/helm/values-cron.yaml @@ -1,5 +1,5 @@ app: my-app -shortname: myapp +appId: myapp team: plattform env: dev diff --git a/fixture/helm/values-minimal.yaml b/fixture/helm/values-minimal.yaml index 79c5975..fa700eb 100644 --- a/fixture/helm/values-minimal.yaml +++ b/fixture/helm/values-minimal.yaml @@ -1,5 +1,5 @@ app: my-app -shortname: myapp +appId: myapp team: mat env: dev diff --git a/fixture/helm/values-postgres-multi.yaml b/fixture/helm/values-postgres-multi.yaml new file mode 100644 index 0000000..1611c96 --- /dev/null +++ b/fixture/helm/values-postgres-multi.yaml @@ -0,0 +1,18 @@ +app: my-app +appId: myapp +team: mat +env: dev + +ingress: + trafficType: api + host: test.dev.entur.io + +container: + name: my-app + image: theimage + +postgres: + enabled: true + instances: + - secretKeyPrefix: PG + - secretKeyPrefix: ANALYTICS_PG diff --git a/fixture/helm/values-postgres.yaml b/fixture/helm/values-postgres.yaml index 9d17b31..e343b7c 100644 --- a/fixture/helm/values-postgres.yaml +++ b/fixture/helm/values-postgres.yaml @@ -1,5 +1,5 @@ app: my-app -shortname: myapp +appId: myapp team: mat env: dev @@ -13,3 +13,5 @@ container: postgres: enabled: true + instances: + - secretKeyPrefix: PG diff --git a/fixture/helm/values-secrets.yaml b/fixture/helm/values-secrets.yaml index 3d07f7b..5c3af75 100644 --- a/fixture/helm/values-secrets.yaml +++ b/fixture/helm/values-secrets.yaml @@ -1,5 +1,5 @@ app: 
my-app-with-secrets -shortname: myapps +appId: myapps team: platform env: dev