User scope update #1432
Conversation
Signed-off-by: red-hat-konflux <konflux@no-reply.konflux-ci.dev>
Creates kustomize overlay for deploying to hcmais01ue1 via app-interface:
- Uses Konflux images from redhat-services-prod/hcm-eng-prod-tenant
- Scales down in-cluster databases (using external RDS from app-interface Phase 2)
- Scales down MinIO (using external S3 from app-interface Phase 2)
- Includes CRDs, RBAC, routes, and all application components
- Patches operator to use Konflux runner image

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Convert kustomize overlay to OpenShift Template format for app-interface SaaS deployment. Split into two templates:
1. template-operator.yaml (CRDs, ClusterRoles, operator deployment)
   - Operator and ambient-runner images
   - Cluster-scoped resources (CRDs, RBAC)
   - Operator deployment and its ConfigMaps
2. template-services.yaml (Application services)
   - Backend, frontend, public-api, ambient-api-server images
   - All deployments, services, routes, configmaps
   - Scales in-cluster services to 0 (minio, postgresql, unleash)

Both templates use an IMAGE_TAG parameter (auto-generated from the git commit SHA) and support Konflux image gating through app-interface. This allows app-interface to use provider: openshift-template with proper parameter substitution, instead of the directory provider, which doesn't run kustomize build.
The objects field must be a YAML array with proper list indicators. The previous version was missing the '-' prefix on array items, causing:
'unable to decode STDIN: json: cannot unmarshal object into Go struct field Template.objects of type []runtime.RawExtension'
Changes:
- Rebuild templates using the Python yaml library for correct formatting
- Objects now properly formatted as a YAML array with '- apiVersion:'
- Add validate.sh script for testing with oc process
- Both templates validated successfully

Generated from kustomize overlay output with proper YAML structure.
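The list-indicator fix above can be sketched with the standard PyYAML API the commit message mentions. The template name and the two example resources below are illustrative placeholders, not the repo's actual generator output:

```python
# Sketch: an OpenShift Template whose `objects` field is a real Python list,
# so yaml.safe_dump emits proper "- apiVersion: ..." array items instead of
# a bare mapping that `oc process` cannot unmarshal into []runtime.RawExtension.
import yaml

template = {
    "apiVersion": "template.openshift.io/v1",
    "kind": "Template",
    "metadata": {"name": "example-services"},          # placeholder name
    "parameters": [{"name": "IMAGE_TAG", "required": True}],
    "objects": [                                       # each manifest is a list item
        {"apiVersion": "v1", "kind": "Service",
         "metadata": {"name": "backend-api"}},
        {"apiVersion": "apps/v1", "kind": "Deployment",
         "metadata": {"name": "backend-api"}},
    ],
}

rendered = yaml.safe_dump(template, sort_keys=False)
print(rendered)
```

Round-tripping the output through `yaml.safe_load` confirms `objects` stays a list, which is the property the broken templates lacked.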
Remove minio, postgresql, unleash, and ambient-api-server-db. Using external RDS and S3 from app-interface.
Removed 12 resources (4 Deployments, 4 Services, 3 PVCs, 1 Secret).
Remaining: ambient-api-server, backend-api, frontend, public-api
Disables OTEL metrics export by commenting out the OTEL_EXPORTER_OTLP_ENDPOINT environment variable in the operator deployment manifests.
The operator was configured to send metrics to otel-collector.ambient-code.svc:4317, but this service does not exist in the cluster, causing repeated gRPC connection failures every 30 seconds with the error:
"failed to upload metrics: context deadline exceeded: rpc error: code = Unavailable desc = name resolver error: produced zero addresses"
With OTEL_EXPORTER_OTLP_ENDPOINT unset, InitMetrics() will skip metrics export and log "metrics export disabled" instead of throwing connection errors.
Changes:
- Comment out OTEL_EXPORTER_OTLP_ENDPOINT in the base operator deployment
- Comment out OTEL_EXPORTER_OTLP_ENDPOINT in the OpenShift template
- Add a clarifying comment about re-enabling when the collector is deployed

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
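The guard this commit relies on follows a common OTel convention (skip export when no endpoint is configured). The operator itself is Go; the function below is a hypothetical Python sketch of the same pattern, not the actual InitMetrics() code:

```python
import logging
import os

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("operator")

def init_metrics() -> bool:
    """Start metrics export only when an OTLP endpoint is configured.

    Mirrors the behavior described above: with OTEL_EXPORTER_OTLP_ENDPOINT
    unset, skip exporter setup instead of dialing a nonexistent collector
    and failing every 30 seconds.
    """
    endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
    if not endpoint:
        log.info("metrics export disabled")
        return False
    log.info("exporting metrics to %s", endpoint)
    # ... exporter/provider setup would go here ...
    return True
```

Commenting the variable out of the manifest flips the function onto its early-return path without any code change in the operator.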
Changes:
- Add oauth-proxy component to frontend deployment (dashboard-ui port on 8443)
- Enable SSL for ambient-api-server RDS connection (db-sslmode=require)
- Set AMBIENT_ENV to 'stage' for ambient-api-server
- Enable OpenShift service-ca for ambient-api-server TLS cert provisioning
- Regenerate templates with new oauth-proxy and api-server patches

This enables:
- Authenticated access to the frontend via OpenShift OAuth
- Secure connections to the external RDS database
- Automatic TLS certificate rotation for ambient-api-server

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Remove postgresql, minio, unleash, and ambient-api-server-db resources from the services template. These services are scaled to 0 via kustomize patches because we use external RDS and S3 instead. Including them in the template causes app-interface to try deploying them, which fails imagePattern validation and wastes resources.
Excluded resources:
- Deployment/postgresql, Service/postgresql
- Deployment/minio, Service/minio, PVC/minio-data
- Deployment/unleash, Service/unleash
- Deployment/ambient-api-server-db, Service/ambient-api-server-db

Template now has 21 service resources (down from 30).
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Switch from custom vault secrets to OpenShift service account-based OAuth:
- Use Red Hat's official ose-oauth-proxy-rhel9 image
- Use the service account token for the cookie secret (no vault needed)
- Enable HTTPS on the OAuth proxy with OpenShift service-ca auto-generated certs
- Add system:auth-delegator ClusterRoleBinding for OAuth delegation
- Add OAuth redirect reference annotation to the frontend ServiceAccount
- Fix service account reference from 'nginx' to 'frontend'
- Add missing NAMESPACE and UPSTREAM_TIMEOUT parameters

Benefits:
- No manual vault secret management
- Automatic TLS cert rotation via service-ca
- Standard OpenShift OAuth integration pattern
- Follows app-interface team recommendations

Files changed:
- frontend-rbac.yaml: Added OAuth annotations and auth-delegator binding
- oauth-proxy component patches: Updated to new configuration
- Templates: Regenerated with OAuth fixes (27 operator, 21 service resources)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
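For reference, the ServiceAccount-as-OAuth-client pattern named in this commit hinges on two pieces: the redirect-reference annotation and the auth-delegator binding. This is a hedged sketch using the names from the commit message; the actual manifests in frontend-rbac.yaml may differ in detail:

```yaml
# Hypothetical sketch of the standard OpenShift SA OAuth wiring.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.frontend: >-
      {"kind":"OAuthRedirectReference","apiVersion":"v1",
       "reference":{"kind":"Route","name":"frontend"}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: frontend-auth-delegator
subjects:
  - kind: ServiceAccount
    name: frontend
    namespace: ambient-code
roleRef:
  kind: ClusterRole
  name: system:auth-delegator
  apiGroup: rbac.authorization.k8s.io
```

The annotation turns the ServiceAccount into an OAuth client whose redirect URI is derived from the named Route, and the binding lets the proxy perform token reviews.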
The RDS credentials secret should not be in the OpenShift template - it's
provided by the external resource provider (terraform) in app-interface.
The namespace's externalResources section already defines:
- provider: rds
  output_resource_name: ambient-code-rds
This automatically creates the secret with the correct RDS credentials.
Including the secret in the template with VAULT_INJECTED placeholders
caused deployment failures.
Changes:
- Excluded ambient-code-rds secret from template generation
- Template now has 20 service resources (down from 21)
- Deployment still references the secret via volumeMount (correct)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
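For context, an app-interface externalResources stanza like the one described above takes roughly this shape. Only `provider: rds` and `output_resource_name: ambient-code-rds` come from this commit message; the account ref and identifier below are placeholders:

```yaml
# Hypothetical app-interface namespace snippet (field values are assumptions
# except provider and output_resource_name, which are quoted above).
externalResources:
  - provider: aws
    provisioner:
      $ref: /aws/example-account/account.yml   # placeholder account path
    resources:
      - provider: rds
        identifier: ambient-code               # placeholder identifier
        output_resource_name: ambient-code-rds
```

The terraform provisioner materializes a Secret named after `output_resource_name`, which is why the template must only reference the secret, never define it.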
Signed-off-by: Chris Mitchell <cmitchel@redhat.com>
Changes GCP service account configuration to align with app-interface deployment, where credentials are provided via Vault.
Changes:
- template-services.yaml: Update backend vertex-credentials secret name from 'ambient-vertex' to 'stage-gcp-creds' (matches the Vault secret)
- template-operator.yaml: Update GOOGLE_APPLICATION_CREDENTIALS path to match the Vault secret key name 'itpc-gcp-hcm-pe-eng.json'

The secret is provided by app-interface via:
path: engineering-productivity/ambient-code/stage-gcp-creds
This allows the backend and operator to use Vertex AI for Claude and Gemini API calls with the service account configured with roles/aiplatform.user permissions.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Chris Mitchell <cmitchel@redhat.com>
Configure the OAuth proxy sidecar to inject the authentication token into forwarded requests, fixing 401 errors on /api/projects endpoints.
Changes:
- Add --pass-access-token=true flag to inject the X-Forwarded-Access-Token header
- Change upstream from frontend-service:3000 to localhost:3000 (correct sidecar pattern)
- Remove --request-logging to reduce log noise

Backend logs showed: tokenSource=none hasAuthHeader=false hasFwdToken=false
The backend expects the X-Forwarded-Access-Token header, which is now injected by the OAuth proxy for all authenticated requests.
Flow:
1. User authenticates via OpenShift OAuth ✓
2. OAuth proxy injects token header ✓ (new)
3. Frontend forwards token to backend API ✓ (fixed)

This resolves the 401 authentication errors while maintaining the working OpenShift OAuth integration.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Removed the '--set-authorization-header=true' option from the configuration.
Changes:
- Use a proper 32-byte cookie secret from Vault instead of the service account token
- Add --pass-access-token to forward the user's OAuth token to the upstream
- Add --scope=user:full to request full user permissions
- Mount stage-cookie-secret at /etc/oauth-cookie

Problem: The OAuth proxy was authenticating users but not forwarding tokens to the Next.js frontend. When the frontend made backend API calls, it had no token to forward, resulting in 401 errors.
Root cause: The service account token (1618 bytes) is too large for the AES cipher when --pass-access-token is enabled, which requires 16/24/32 byte secrets.
Solution: Use a proper 32-byte cookie secret from Vault (matching the UAT config), enabling --pass-access-token to forward the authenticated user's token through the chain: OAuth proxy → Next.js → Backend.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Chris Mitchell <cmitchel@redhat.com>
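The 16/24/32-byte constraint mentioned above corresponds to AES-128/192/256 key sizes. A conforming secret can be generated many ways; this is one common sketch (the helper name is ours, and whether your proxy expects the raw or base64 form should be checked against its docs):

```python
import base64
import secrets

# The proxy encrypts session cookies with AES, so the seed must be a valid
# AES key length: 16, 24, or 32 bytes. A ~1618-byte SA token is not.
AES_KEY_SIZES = {16, 24, 32}

def make_cookie_secret(n_bytes: int = 32) -> str:
    """Return a URL-safe base64 encoding of n_bytes of random key material."""
    if n_bytes not in AES_KEY_SIZES:
        raise ValueError(f"{n_bytes} is not a valid AES key size")
    return base64.urlsafe_b64encode(secrets.token_bytes(n_bytes)).decode()

print(make_cookie_secret())
```

The encoded string can then be stored in Vault and mounted at a path like /etc/oauth-cookie, as this commit does.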
Removed the '--scope=user:full' option from the configuration.
Signed-off-by: Chris Mitchell <cmitchel@redhat.com>
chore: Update konflux deps
📝 Walkthrough
This pull request adds comprehensive Tekton-based CI/CD pipeline definitions for multiple platform components and migrates database secret naming.
Important: Pre-merge checks failed. Please resolve all errors before merging; addressing warnings is optional.
❌ Failed checks (1 error, 2 warnings)
✅ Passed checks (5 passed)
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
components/manifests/overlays/kind/api-server-db-security-patch.yaml (1)
12-25: ⚠️ Potential issue | 🟠 Major: Add resource requests/limits to this patched container.
This overlay leaves the `postgresql` container unbounded; the same pattern also appears on the `init-db` initContainer in `components/manifests/overlays/local-dev/ambient-api-server-init-db-patch.yaml` lines 10-60. Add explicit requests/limits in the patch or the shared base so rendered manifests satisfy cluster policy.
Proposed patch shape:

```diff
   containers:
     - name: postgresql
+      resources:
+        requests:
+          cpu: 100m
+          memory: 256Mi
+        limits:
+          cpu: 500m
+          memory: 512Mi
       securityContext:
```

As per coding guidelines, `components/manifests/**/*.yaml`: “Resource limits/requests required on containers.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/manifests/overlays/kind/api-server-db-security-patch.yaml` around lines 12 - 25, The patched container "postgresql" currently has no resource requests/limits (and the same missing on the "init-db" initContainer pattern), so update the container specs to include a resources block with explicit requests and limits (e.g., resources: { requests: { cpu: "100m", memory: "128Mi" }, limits: { cpu: "500m", memory: "512Mi" } }) to satisfy cluster policy; apply this change in the overlay patch (the manifest that modifies the postgresql container and the init-db initContainer) so the rendered manifests include the resources section for those containers.
🧹 Nitpick comments (1)
components/manifests/overlays/app-interface/namespace-patch.yaml (1)
1-8: Filename says "patch" but this is a full Namespace resource. Functionally fine, just mildly misleading if someone greps for patches. Consider `namespace.yaml` for clarity — purely optional.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/manifests/overlays/app-interface/namespace-patch.yaml` around lines 1 - 8, The file named namespace-patch.yaml contains a full Namespace resource (kind: Namespace, metadata.name: ambient-code) but the filename implies it's a patch; rename the file to namespace.yaml (or another clearer name) so it reflects that it defines a full Namespace resource rather than a patch, ensuring repo searches and tooling aren’t misled by the misleading "patch" suffix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.tekton/ambient-code-ambient-runner-main-push.yaml:
- Around line 89-94: Update the pipeline YAMLs to (1) reconcile the pinned task
digests for task-build-image-index, task-prefetch-dependencies-oci-ta, and
task-buildah-oci-ta so the runner pipelines use the same sha256 digests as the
other component pipelines (replace the stale digests with the canonical ones
used across the repo), (2) add an explicit type: string to the
enable-cache-proxy parameter definition so it matches the other string
parameters, and (3) remove the redundant "Dockerfile".pathChanged() clause from
the CEL trigger (or narrow it to ./components/runners/ambient-runner/** to avoid
repo-root Dockerfile triggers) so the trigger only fires for actual runner
changes.
In @.tekton/ambient-code-operator-main-pull-request.yaml:
- Around line 11-13: The pathChanged patterns use a leading "./" which won't
match repo-relative changed paths; update every pathChanged pattern that looks
like "./components/<name>/***" to remove the leading "./" (i.e.
"./components/operator/***" -> "components/operator/***") across all 12
component PipelineRun YAMLs referenced in the PR so the Tekton Pipelines-as-Code
glob matching will work correctly.
In `@components/manifests/base/platform/ambient-api-server-secrets.yml`:
- Line 5: Update all Secret references in the mpp-openshift overlay to match the
new name ambient-code-rds: in
components/manifests/overlays/mpp-openshift/ambient-api-server.yaml update the
volume's secretName (currently secretName: ambient-api-server-db) to
ambient-code-rds, and in
components/manifests/overlays/mpp-openshift/ambient-api-server-db.yaml update
each secretKeyRef/name entries (currently name: ambient-api-server-db at the
db.user, db.password, db.name refs on lines ~57, ~62, ~67) to ambient-code-rds
so all secretName and secretKeyRef name fields reference the renamed Secret.
In `@components/manifests/base/rbac/frontend-rbac.yaml`:
- Around line 6-7: The OAuth redirect annotation
serviceaccounts.openshift.io/oauth-redirectreference.frontend currently
references the Route name "frontend", which mismatches overlays that define
"frontend-route"; update the annotation's reference name from "frontend" to
"frontend-route" in frontend-rbac.yaml (or alternatively add overlay patches in
app-interface and production to set the annotation to "frontend-route") so the
OAuth redirect reference matches the actual Route name used by the app-interface
and production overlays.
In
`@components/manifests/components/ambient-api-server-db/ambient-api-server-db-json-patch.yaml`:
- Around line 14-24: The manifest patch mounts the ambient-code-rds Secret into
the ambient-api-server-db container (env vars
POSTGRESQL_PASSWORD/POSTGRESQL_DATABASE), which leaks external RDS credentials
into the in-cluster Postgres (postgresql-rhel) even though prod uses external
RDS (--db-sslmode=require); fix by gating or changing the secret: either remove
or disable the ambient-api-server-db component from production deployments, or
create and reference a separate local-bootstrap Secret (e.g.,
ambient-code-local-bootstrap) in the ambient-api-server-db patch so
POSTGRESQL_PASSWORD and POSTGRESQL_DATABASE use the local-secret instead of
ambient-code-rds, and ensure the production overlay does not apply this patch.
In
`@components/manifests/components/oauth-proxy/frontend-oauth-deployment-patch.yaml`:
- Around line 26-35: The oauth-proxy container args removed important flags and
changed cookie secret sourcing; restore the flags --pass-access-token and
--scope=user:full in the args list (the container args block) and change the
cookie secret source from the pod service-account token
(--cookie-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token) to a
mounted shared Secret path used by other overlays (use the same mounted Secret
file path used in the template/production overlays), ensuring both flags and the
shared Secret mount are present in the args and Pod spec for oauth-proxy.
In
`@components/manifests/components/oauth-proxy/frontend-oauth-service-patch.yaml`:
- Around line 2-9: Replace the unsupported OpenShift annotation key so the
service-ca will auto-generate the TLS secret: in the Service resource (kind:
Service, metadata.name: frontend-service) change the annotation key from
service.alpha.openshift.io/serving-cert-secret-name to
service.beta.openshift.io/serving-cert-secret-name while keeping the secret
value frontend-proxy-tls so the OAuth proxy TLS secret is created and mounted
correctly.
In `@components/manifests/overlays/app-interface/operator-config-openshift.yaml`:
- Around line 10-13: The manifest enables Vertex via USE_VERTEX and sets
GOOGLE_APPLICATION_CREDENTIALS to /app/vertex/ambient-code-key.json but the
operator Deployment is missing the corresponding secret volume/volumeMount, so
add a secret volume (secretName: ambient-vertex) and a volumeMount mounting it
at /app/vertex in the operator container spec; ensure the Deployment's container
that reads GOOGLE_APPLICATION_CREDENTIALS includes the volumeMount and the pod
spec includes the secret volume so Google auth can find the key at the
configured path.
In `@components/manifests/overlays/app-interface/route.yaml`:
- Around line 17-19: The TLS termination on the Route is set to termination:
edge while the oauth-proxy is configured with --https-address=:8443
(HTTPS-only), causing a mismatch; fix by making them consistent: either change
the Route tls.termination to reencrypt or passthrough to preserve TLS to the
backend, or modify the oauth-proxy flags to use --http-address=:8443 and clear
--https-address= so it accepts plain HTTP from the edge; update the Route tls
block or the oauth-proxy container args (look for --https-address=:8443 and
--http-address flags in the oauth-proxy sidecar configuration) so both use HTTP
upstream.
In
`@components/manifests/overlays/production/ambient-api-server-jwt-args-patch.yaml`:
- Around line 36-37: The deployment currently uses the insecure flag
"--db-sslmode=require"; update the manifest to enable certificate-verified TLS
by replacing "--db-sslmode=require" with "--db-sslmode=verify-full", add the new
flag "--db-sslrootcert=/etc/rds-ca/rds-ca-bundle.pem" (or chosen mount path) to
the api-server command, and mount the AWS RDS CA bundle into the pod (add a
volume and volumeMount pointing at that path); confirm the api-server
(rh-trex-ai v0.0.25) supports custom root cert configuration and adjust flag
names if necessary before applying.
In `@components/manifests/templates/template-operator.yaml`:
- Around line 944-1115: The ClusterRole named backend-api grants broad
cluster-wide permissions (namespaces create/update/patch/delete,
project.openshift.io projects update/patch, secrets full CRUD,
roles/rolebindings full CRUD, serviceaccounts/token create, pods
deletecollection) and should be tightened: restrict backend-api to only
genuinely cluster-scoped verbs (e.g., tokenreviews, subjectaccessreviews,
namespace create if needed) and remove or narrow
secrets/roles/rolebindings/serviceaccounts/pods permissions; move per-project
permissions (namespaces CRUD, secrets, roles/rolebindings, serviceaccount token
creation, pod deletecollection) into a namespaced Role/RoleBinding created per
ambient-provisioned project namespace by the operator (use the existing
ProjectSettings.groupAccess flow), or scope them to specific resourceNames where
applicable, ensuring least-privilege for the backend-api ClusterRole and leaving
only cluster-wide necessities.
In `@components/manifests/templates/template-services.yaml`:
- Around line 304-327: Add explicit resource requests/limits to the migration
initContainer and set an explicit replicas count for the frontend Deployment:
update the initContainer named "migration" (the init container running
/usr/local/bin/ambient-api-server migrate) to include a resources block with
reasonable requests/limits (e.g., low CPU/memory instead of relying on
LimitRange defaults), and add replicas: 2 to the frontend Deployment spec (the
Deployment that serves the user-facing ingress via oauth-proxy) so it does not
implicitly default to 1.
- Around line 600-737: The template hardcodes stage-only secret names
(stage-gcp-creds and stage-cookie-secret) causing prod mount failures; update
the Deployment spec to either parameterize these secrets (replace
stage-gcp-creds and stage-cookie-secret with template variables like
${GCP_CREDS_SECRET} and ${COOKIE_SECRET} and document defaults) or add
overlays/production patchesStrategicMerge that modify the Deployment's
volumes.secret.secretName (for volumes named frontend-proxy-tls and
oauth-cookie-secret) and any secretName fields (e.g., the secretName referenced
for GCP creds) to point to the production secret names so the mounts used by the
containers (volumeMounts for name: frontend-proxy-tls and name:
oauth-cookie-secret) exist in prod.
---
Outside diff comments:
In `@components/manifests/overlays/kind/api-server-db-security-patch.yaml`:
- Around line 12-25: The patched container "postgresql" currently has no
resource requests/limits (and the same missing on the "init-db" initContainer
pattern), so update the container specs to include a resources block with
explicit requests and limits (e.g., resources: { requests: { cpu: "100m",
memory: "128Mi" }, limits: { cpu: "500m", memory: "512Mi" } }) to satisfy
cluster policy; apply this change in the overlay patch (the manifest that
modifies the postgresql container and the init-db initContainer) so the rendered
manifests include the resources section for those containers.
---
Nitpick comments:
In `@components/manifests/overlays/app-interface/namespace-patch.yaml`:
- Around line 1-8: The file named namespace-patch.yaml contains a full Namespace
resource (kind: Namespace, metadata.name: ambient-code) but the filename implies
it's a patch; rename the file to namespace.yaml (or another clearer name) so it
reflects that it defines a full Namespace resource rather than a patch, ensuring
repo searches and tooling aren’t misled by the misleading "patch" suffix.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: e5c38245-b003-4ffd-be3b-5cb2153abbea
📒 Files selected for processing (48)
- .tekton/ambient-code-ambient-api-server-main-pull-request.yaml
- .tekton/ambient-code-ambient-api-server-main-push.yaml
- .tekton/ambient-code-ambient-runner-main-pull-request.yaml
- .tekton/ambient-code-ambient-runner-main-push.yaml
- .tekton/ambient-code-backend-main-pull-request.yaml
- .tekton/ambient-code-backend-main-push.yaml
- .tekton/ambient-code-frontend-main-pull-request.yaml
- .tekton/ambient-code-frontend-main-push.yaml
- .tekton/ambient-code-operator-main-pull-request.yaml
- .tekton/ambient-code-operator-main-push.yaml
- .tekton/ambient-code-public-api-main-pull-request.yaml
- .tekton/ambient-code-public-api-main-push.yaml
- components/ambient-api-server/templates/db-template.yml
- components/manifests/README.md
- components/manifests/base/core/ambient-api-server-service.yml
- components/manifests/base/core/operator-deployment.yaml
- components/manifests/base/platform/ambient-api-server-db.yml
- components/manifests/base/platform/ambient-api-server-secrets.yml
- components/manifests/base/rbac/frontend-rbac.yaml
- components/manifests/components/ambient-api-server-db/ambient-api-server-db-json-patch.yaml
- components/manifests/components/ambient-api-server-db/ambient-api-server-init-db-patch.yaml
- components/manifests/components/ambient-api-server-db/kustomization.yaml
- components/manifests/components/oauth-proxy/frontend-oauth-deployment-patch.yaml
- components/manifests/components/oauth-proxy/frontend-oauth-service-patch.yaml
- components/manifests/overlays/app-interface/ambient-api-server-db-secret-patch.yaml
- components/manifests/overlays/app-interface/ambient-api-server-env-patch.yaml
- components/manifests/overlays/app-interface/ambient-api-server-route.yaml
- components/manifests/overlays/app-interface/ambient-api-server-service-ca-patch.yaml
- components/manifests/overlays/app-interface/ambient-api-server-ssl-patch.yaml
- components/manifests/overlays/app-interface/backend-route.yaml
- components/manifests/overlays/app-interface/kustomization.yaml
- components/manifests/overlays/app-interface/namespace-patch.yaml
- components/manifests/overlays/app-interface/namespace.yaml
- components/manifests/overlays/app-interface/operator-config-openshift.yaml
- components/manifests/overlays/app-interface/operator-runner-image-patch.yaml
- components/manifests/overlays/app-interface/public-api-route.yaml
- components/manifests/overlays/app-interface/route.yaml
- components/manifests/overlays/kind/api-server-db-security-patch.yaml
- components/manifests/overlays/kind/api-server-no-jwt-patch.yaml
- components/manifests/overlays/local-dev/ambient-api-server-db-credentials-patch.yaml
- components/manifests/overlays/local-dev/ambient-api-server-db-json-patch.yaml
- components/manifests/overlays/local-dev/ambient-api-server-init-db-patch.yaml
- components/manifests/overlays/production/ambient-api-server-jwt-args-patch.yaml
- components/manifests/overlays/production/ambient-api-server-migration-ssl-patch.yaml
- components/manifests/overlays/production/kustomization.yaml
- components/manifests/templates/template-operator.yaml
- components/manifests/templates/template-services.yaml
- components/manifests/templates/validate.sh
```yaml
    - default: "false"
      description: Enable cache proxy configuration
      name: enable-cache-proxy
    - default: []
      description: Array of --build-arg values ("arg=value" strings) for buildah
      name: build-args
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm digest divergence across the 5 new Tekton PipelineRuns
rg -nP --type=yaml '(task-build-image-index|task-prefetch-dependencies-oci-ta|task-buildah-oci-ta):.*@sha256:[a-f0-9]+' .tekton/
# Confirm enable-cache-proxy missing type on all files
rg -nPU --type=yaml '(?s)name: enable-cache-proxy.*?(?=\n\s*-\s|\n\s{4}\w)' .tekton/
# Is there a repo-root Dockerfile the CEL trigger could match?
fd -HI -t f '^Dockerfile$' -d 2
Repository: ambient-code/platform
Length of output: 12576
🏁 Script executed:
# Extract parameter definitions and CEL trigger from the runner main push file
head -n 100 .tekton/ambient-code-ambient-runner-main-push.yaml | tail -n 90
Repository: ambient-code/platform
Length of output: 3845
🏁 Script executed:
# Also look at the full CEL trigger condition
sed -n '1,20p' .tekton/ambient-code-ambient-runner-main-push.yaml
Repository: ambient-code/platform
Length of output: 1048
🏁 Script executed:
# Check the enable-cache-proxy parameter definition more carefully
sed -n '85,100p' .tekton/ambient-code-ambient-runner-main-push.yaml
Repository: ambient-code/platform
Length of output: 680
Tekton pipelines diverge from sibling components — sync before merge.
Three issues across the five new .tekton/*.yaml files:
-
Digest/version divergence. Runner pipelines (ambient-runner-main-push/pull-request) pin different task digests than the other four (api-server, public-api, backend, operator, frontend):
- Runner:
`task-build-image-index:0.3@sha256:ae3fa44f…`, `task-prefetch-dependencies-oci-ta:0.3@sha256:9917d11f…`, `task-buildah-oci-ta:0.9@sha256:5375e5b9…`
- Others:
`task-build-image-index:0.2@sha256:3fa26d2c…`, `task-prefetch-dependencies-oci-ta:0.3@sha256:2229dbc5…`, `task-buildah-oci-ta:0.9@sha256:cad04a0f…`
Same semantic versions (0.3, 0.3, 0.9) → different digests = stale references. Align all five files on identical digests for reproducibility and consistent policy evaluation.
- Runner:
-
`enable-cache-proxy` parameter missing `type: string` across all files. Every other string parameter explicitly declares it; this one omits it (defaults apply but inconsistent).
CEL trigger includes redundant
"Dockerfile".pathChanged()clause. The glob./components/runners/ambient-runner/***already covers the component's Dockerfile, and no repo-root Dockerfile exists to trigger unrelated rebuilds. Drop or scope to component path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.tekton/ambient-code-ambient-runner-main-push.yaml around lines 89 - 94,
Update the pipeline YAMLs to (1) reconcile the pinned task digests for
task-build-image-index, task-prefetch-dependencies-oci-ta, and
task-buildah-oci-ta so the runner pipelines use the same sha256 digests as the
other component pipelines (replace the stale digests with the canonical ones
used across the repo), (2) add an explicit type: string to the
enable-cache-proxy parameter definition so it matches the other string
parameters, and (3) remove the redundant "Dockerfile".pathChanged() clause from
the CEL trigger (or narrow it to ./components/runners/ambient-runner/** to avoid
repo-root Dockerfile triggers) so the trigger only fires for actual runner
changes.
```yaml
    pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
      == "main" && ( "./components/operator/***".pathChanged() || ".tekton/ambient-code-operator-main-pull-request.yaml".pathChanged()
      || "Dockerfile".pathChanged() )
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Lists all PaC pathChanged globs that include a leading "./".
# Expected: either no leading "./" remains, or maintainers confirm the deployed
# Pipelines-as-Code version normalizes these patterns before matching.
rg -n -C2 '"\./components/[^"]+"\.pathChanged\(\)|pipelinesascode\.tekton\.dev/on-cel-expression' --iglob '*.yaml'
Repository: ambient-code/platform
Length of output: 47
🏁 Script executed:
# Check if the file exists and read its content
cat .tekton/ambient-code-operator-main-pull-request.yaml
Repository: ambient-code/platform
Length of output: 20418
🏁 Script executed:
# Search more broadly for pathChanged patterns
rg 'pathChanged' --iglob '*.yaml' -A1 -B1
Repository: ambient-code/platform
Length of output: 47
🌐 Web query:
Tekton Pipelines-as-Code pathChanged path normalization leading dot behavior
💡 Result:
In Tekton Pipelines-as-Code (PaC), the pathChanged CEL function and on-path-changed annotations use glob pattern matching via github.com/gobwas/glob.MustCompile(pattern).Match(filename) against the full relative paths of changed files returned by the Git provider API (GitHub/GitLab). No explicit path normalization (e.g., trimming leading/trailing slashes, case folding, or handling ..) is performed before matching. Paths are used as-is from the provider. Leading dot behavior follows standard glob semantics: patterns like ".github/**" match dotfiles and hidden directories (e.g., ".github/workflows/ci.yaml") because gobwas/glob treats the leading dot as literal unless "**" or other recursive patterns are used. Simple patterns like "*.md" do NOT match ".hidden.md" without an explicit dot prefix. Test patterns locally with tkn pac info globbing "[pattern]" against your directory tree, which uses the same glob library. Examples from docs:
- "docs/*.md".pathChanged matches docs/file.md but not docs/.hidden.md
- "docs/**" matches files in docs/ including subdirs, with leading-dot files if the path has "."
- GitHub PR changed files include full paths like ".github/workflows/ci.yaml", so ".github/**" triggers.
Sources confirm the glob library (pkg.go.dev lists gobwas/glob v0.2.3), source code shows direct g.Match(changedFiles.All[i]), and docs link to glob examples and a CLI tester.
Citations:
- 1: https://pipelinesascode.com/docs/guide/matchingevents/
- 2: https://pipelinesascode.com/docs/guides/event-matching/path-matching/
- 3: https://pipelinesascode.com/docs/guides/event-matching/cel-expressions/
- 4: https://github.com/openshift-pipelines/pipelines-as-code/blob/main/pkg/matcher/cel.go
- 5: https://pkg.go.dev/github.com/openshift-pipelines/pipelines-as-code
- 6: https://pipelinesascode.com/docs/concepts/
🏁 Script executed:
# Find all .tekton files and search for pathChanged patterns
find .tekton -name "*.yaml" -type f | head -20
🏁 Script executed:
# Search for pathChanged patterns in all .tekton YAML files
fd '.*\.yaml$' .tekton -x grep -l "pathChanged" {} \;
🏁 Script executed:
# Extract all pathChanged patterns to see which have leading "./"
fd '.*\.yaml$' .tekton -x grep -E 'pathChanged|on-cel-expression' -A2 -B2 {} + | grep -E '"\./|pathChanged'
Fix pathChanged patterns to match actual Git provider paths.
All 12 component PipelineRun files use "./components/<name>/***".pathChanged() patterns, but Git providers return changed files as repo-relative paths without the leading ./ prefix. The Tekton Pipelines-as-Code library (gobwas/glob) applies no path normalization, so these patterns won't match component changes. Remove the leading ./ from all pathChanged patterns in:
- .tekton/ambient-code-operator-main-{pull-request,push}.yaml
- .tekton/ambient-code-backend-main-{pull-request,push}.yaml
- .tekton/ambient-code-frontend-main-{pull-request,push}.yaml
- .tekton/ambient-code-public-api-main-{pull-request,push}.yaml
- .tekton/ambient-code-ambient-runner-main-{pull-request,push}.yaml
- .tekton/ambient-code-ambient-api-server-main-{pull-request,push}.yaml
Change patterns like "./components/operator/***" to "components/operator/***".
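The mismatch is easy to reproduce with ordinary shell globbing. Bash/POSIX `case` patterns are not gobwas/glob (in particular, `***` is PaC-specific), but both treat a leading `./` as two literal characters, which is the whole problem:

```shell
# Illustrative only: `case` globs are not gobwas/glob, but both match a
# leading "./" literally, and provider-reported paths never start with "./".
changed_file="components/operator/main.go"   # how Git providers report paths

match() {
  case "$1" in
    $2) echo "match" ;;
    *)  echo "no match" ;;
  esac
}

match "$changed_file" './components/operator/*'   # -> no match
match "$changed_file" 'components/operator/*'     # -> match
```

Since the provider-side paths can never begin with `./`, the current patterns can never fire, regardless of which files changed.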
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.tekton/ambient-code-operator-main-pull-request.yaml around lines 11 - 13,
The pathChanged patterns use a leading "./" which won't match repo-relative
changed paths; update every pathChanged pattern that looks like
"./components/<name>/***" to remove the leading "./" (i.e.
"./components/operator/***" -> "components/operator/***") across all 12
component PipelineRun YAMLs referenced in the PR so the Tekton Pipelines-as-Code
glob matching will work correctly.
```diff
 kind: Secret
 metadata:
-  name: ambient-api-server-db
+  name: ambient-code-rds
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n -C3 'secretName:\s*ambient-api-server-db|secretKeyRef:|name:\s*ambient-api-server-db' components/manifests
🏁 Script executed:
cat -n components/manifests/base/platform/ambient-api-server-secrets.yml
🏁 Script executed:
sed -n '140,150p' components/manifests/overlays/mpp-openshift/ambient-api-server.yaml
🏁 Script executed:
sed -n '50,70p' components/manifests/overlays/mpp-openshift/ambient-api-server-db.yaml
Update all Secret references in mpp-openshift overlay before renaming.
The Secret rename to ambient-code-rds affects multiple consumers in components/manifests/overlays/mpp-openshift/:
- ambient-api-server.yaml line 146: `secretName: ambient-api-server-db` (volume mount)
- ambient-api-server-db.yaml lines 57, 62, 67: `name: ambient-api-server-db` (secretKeyRef for db.user, db.password, db.name)

All must be updated to ambient-code-rds or pods will fail at startup.
Required fixes

ambient-api-server.yaml line 146:

```diff
       - name: db-secrets
         secret:
-          secretName: ambient-api-server-db
+          secretName: ambient-code-rds
```

ambient-api-server-db.yaml lines 57, 62, 67:

```diff
           env:
             - name: POSTGRESQL_USER
               valueFrom:
                 secretKeyRef:
                   key: db.user
-                  name: ambient-api-server-db
+                  name: ambient-code-rds
             - name: POSTGRESQL_PASSWORD
               valueFrom:
                 secretKeyRef:
                   key: db.password
-                  name: ambient-api-server-db
+                  name: ambient-code-rds
             - name: POSTGRESQL_DATABASE
               valueFrom:
                 secretKeyRef:
                   key: db.name
-                  name: ambient-api-server-db
+                  name: ambient-code-rds
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/base/platform/ambient-api-server-secrets.yml` at line 5,
Update all Secret references in the mpp-openshift overlay to match the new name
ambient-code-rds: in
components/manifests/overlays/mpp-openshift/ambient-api-server.yaml update the
volume's secretName (currently secretName: ambient-api-server-db) to
ambient-code-rds, and in
components/manifests/overlays/mpp-openshift/ambient-api-server-db.yaml update
each secretKeyRef/name entries (currently name: ambient-api-server-db at the
db.user, db.password, db.name refs on lines ~57, ~62, ~67) to ambient-code-rds
so all secretName and secretKeyRef name fields reference the renamed Secret.
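Before landing a Secret rename like this, a grep sweep over the overlay catches every remaining consumer. A self-contained sketch (the temp tree here stands in for the real components/manifests/overlays/mpp-openshift/ directory so the example runs anywhere):

```shell
# Fabricated mini-overlay for illustration; point the grep at the real
# overlay directory in practice.
tmp=$(mktemp -d)
cat > "$tmp/ambient-api-server.yaml" <<'EOF'
volumes:
  - name: db-secrets
    secret:
      secretName: ambient-api-server-db
EOF

# The actual check: any hit is a consumer that must move to ambient-code-rds.
if grep -rn 'ambient-api-server-db' "$tmp"; then
  echo "stale references found"
else
  echo "no stale references"
fi
rm -rf "$tmp"
```

Running this against the overlay should print nothing before the rename merges.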
```yaml
annotations:
  serviceaccounts.openshift.io/oauth-redirectreference.frontend: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"frontend"}}'
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: List frontend Route definitions and OAuth redirect references.
# Expected: The OAuthRedirectReference route name matches an actual Route metadata.name.
rg -n -C3 'kind:\s*Route|name:\s*frontend|name:\s*frontend-route|oauth-redirectreference' components/manifests
Update OAuth redirect reference to match actual Route name in app-interface and production overlays.
The frontend-rbac.yaml base file references Route frontend, but overlays define frontend-route (app-interface line 4, production line 4). This breaks OAuth redirects in those environments. Only the local-dev overlay has a Route named frontend.
Proposed fix

```diff
-  serviceaccounts.openshift.io/oauth-redirectreference.frontend: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"frontend"}}'
+  serviceaccounts.openshift.io/oauth-redirectreference.frontend: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"frontend-route"}}'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/base/rbac/frontend-rbac.yaml` around lines 6 - 7, The
OAuth redirect annotation
serviceaccounts.openshift.io/oauth-redirectreference.frontend currently
references the Route name "frontend", which mismatches overlays that define
"frontend-route"; update the annotation's reference name from "frontend" to
"frontend-route" in frontend-rbac.yaml (or alternatively add overlay patches in
app-interface and production to set the annotation to "frontend-route") so the
OAuth redirect reference matches the actual Route name used by the app-interface
and production overlays.
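A mechanical way to validate this class of bug: extract the Route name embedded in the annotation JSON and compare it with the Route names an overlay actually defines. A sketch with the values from this review (a JSON-aware tool like jq is preferable in CI; the sed here is deliberately crude):

```shell
annotation='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"frontend"}}'

# Extract the referenced Route name (the last "name" key in the JSON).
ref_name=$(printf '%s' "$annotation" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "annotation references Route: $ref_name"

# Route names actually defined by the app-interface and production overlays.
for route in frontend-route; do
  if [ "$route" = "$ref_name" ]; then
    echo "OK: $route"
  else
    echo "MISMATCH: overlay defines '$route', annotation references '$ref_name'"
  fi
done
```

With the current base file this prints a MISMATCH line, which is exactly the finding above.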
```diff
                 name: ambient-code-rds
                 key: db.user
             - name: POSTGRESQL_PASSWORD
               valueFrom:
                 secretKeyRef:
-                  name: ambient-api-server-db
+                  name: ambient-code-rds
                   key: db.password
             - name: POSTGRESQL_DATABASE
               valueFrom:
                 secretKeyRef:
-                  name: ambient-api-server-db
+                  name: ambient-code-rds
                   key: db.name
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify overlays that include the in-cluster DB component alongside external-RDS settings.
# Expected: production should not include the local DB component when external RDS is active.
rg -n -C4 'components/ambient-api-server-db|components/postgresql-rhel|ambient-code-rds|--db-sslmode=require' --iglob '*.yaml'
Gate the local Postgres Deployment or use separate bootstrap credentials in production.
Production includes both the in-cluster Postgres pod (postgresql-rhel component) and the ambient-api-server-db component, which patches the local Postgres pod to mount the ambient-code-rds Secret. However, production is configured for external AWS RDS (--db-sslmode=require). The local Postgres pod unnecessarily receives external RDS credentials. Either remove the ambient-api-server-db component from production, or provide a separate local-bootstrap Secret for Postgres initialization.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@components/manifests/components/ambient-api-server-db/ambient-api-server-db-json-patch.yaml`
around lines 14 - 24, The manifest patch mounts the ambient-code-rds Secret into
the ambient-api-server-db container (env vars
POSTGRESQL_PASSWORD/POSTGRESQL_DATABASE), which leaks external RDS credentials
into the in-cluster Postgres (postgresql-rhel) even though prod uses external
RDS (--db-sslmode=require); fix by gating or changing the secret: either remove
or disable the ambient-api-server-db component from production deployments, or
create and reference a separate local-bootstrap Secret (e.g.,
ambient-code-local-bootstrap) in the ambient-api-server-db patch so
POSTGRESQL_PASSWORD and POSTGRESQL_DATABASE use the local secret instead of
ambient-code-rds, and ensure the production overlay does not apply this patch.
```yaml
tls:
  termination: edge
  insecureEdgeTerminationPolicy: Redirect
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Verify Route TLS termination and oauth-proxy listener protocol are aligned.
# Expected: edge + --http-address, or reencrypt/passthrough + --https-address.
rg -n -C4 'termination:|targetPort: dashboard-ui|--https-address|--http-address|scheme: HTTPS|scheme: HTTP' components/manifests
Fix TLS termination mismatch: edge termination requires HTTP upstream.
The route uses termination: edge (plain HTTP to backend), but the oauth-proxy sidecar in this overlay is configured with --https-address=:8443 (HTTPS only). This breaks traffic and health checks. Either change to reencrypt/passthrough termination or switch the oauth-proxy to --http-address=:8443 with --https-address= (empty), matching the configuration in other overlays.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/overlays/app-interface/route.yaml` around lines 17 - 19,
The TLS termination on the Route is set to termination: edge while the
oauth-proxy is configured with --https-address=:8443 (HTTPS-only), causing a
mismatch; fix by making them consistent: either change the Route tls.termination
to reencrypt or passthrough to preserve TLS to the backend, or modify the
oauth-proxy flags to use --http-address=:8443 and clear --https-address= so it
accepts plain HTTP from the edge; update the Route tls block or the oauth-proxy
container args (look for --https-address=:8443 and --http-address flags in the
oauth-proxy sidecar configuration) so both use HTTP upstream.
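The invariant can be expressed as a tiny pairing check: the Route's termination mode must agree with the listener flag the sidecar actually exposes. A hedged sketch (flag strings taken from this overlay; the pairing rule is the one stated in the review):

```shell
# Minimal consistency rule: with `termination: edge` the router forwards
# plain HTTP, so the oauth-proxy must expose --http-address; reencrypt and
# passthrough keep TLS to the pod, so an HTTPS listener is expected.
check_pair() {  # $1 = route termination, $2 = proxy listener args
  case "$1:$2" in
    edge:*--http-address=*)          echo ok ;;
    reencrypt:*--https-address=*)    echo ok ;;
    passthrough:*--https-address=*)  echo ok ;;
    *)                               echo mismatch ;;
  esac
}

check_pair edge "--https-address=:8443"                   # current overlay -> mismatch
check_pair edge "--http-address=:8443 --https-address="   # proposed fix    -> ok
check_pair reencrypt "--https-address=:8443"              # alternative     -> ok
```

Either branch of the fix makes the pair consistent; the current combination is the only one that silently breaks traffic.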
```yaml
# SSL required: Using external AWS RDS database
- --db-sslmode=require
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify whether the API and manifests support certificate-verified DB TLS.
# Expected: support for verify-full/root CA configuration and a production CA mount/reference.
rg -n -C3 -- '--db-sslmode|sslmode|sslrootcert|rootcert|rds-ca|db-ssl' --iglob '*.go' --iglob '*.yaml'
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Search for sslmode, sslrootcert, and CA-related patterns
echo "=== Searching for sslmode and certificate configs ==="
rg -t go -t yaml -- 'sslmode|sslrootcert|rootcert|rds-ca|db-ssl' -C 2 || true
echo ""
echo "=== Checking the target file ==="
cat -n components/manifests/overlays/production/ambient-api-server-jwt-args-patch.yaml | head -40
echo ""
echo "=== Searching for API server code handling DB flags ==="
fd -e go -e yaml | grep -E 'api.*server|database' | head -20 || true
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Find api-server main/cli code ==="
fd -e go . | xargs grep -l 'db-sslmode\|sslmode' || echo "No direct sslmode matches"
echo ""
echo "=== Search for db flag parsing or database config ==="
fd -e go components/ambient-api-server | xargs grep -l 'db-host-file\|db-password-file\|database' | head -5
echo ""
echo "=== Look at main entry point ==="
find components/ambient-api-server -name 'main.go' -o -name 'cmd.go' | head -3
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Find main.go or entry point in api-server ==="
find components/ambient-api-server -name 'main.go' -o -name 'cmd*.go' -o -name 'serve.go'
echo ""
echo "=== List api-server package structure ==="
find components/ambient-api-server -type f -name '*.go' | head -20
echo ""
echo "=== Search for database or db connection code ==="
find components/ambient-api-server -type f -name '*.go' | xargs grep -l 'database\|postgres\|sql.Open' | head -5
echo ""
echo "=== Check for any flag definitions with 'sslmode' or 'cert' ==="
find components/ambient-api-server -type f -name '*.go' | xargs grep -i 'sslmode\|rootcert\|flag.*db' | head -10
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Read main.go to find db-sslmode flag ==="
cat -n components/ambient-api-server/cmd/ambient-api-server/main.go | head -150
echo ""
echo "=== Check production environment config ==="
cat -n components/ambient-api-server/cmd/ambient-api-server/environments/e_production.go | head -80
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Search for db_session references ==="
find . -type f -name '*.go' | xargs grep -l 'db_session\|sslmode' | grep -v vendor | head -10
echo ""
echo "=== Look for postgres connection/DSN construction ==="
find . -type f -name '*.go' | xargs grep -i 'dsn\|connstr\|connection.*string\|postgres://' | grep -v vendor | head -20
echo ""
echo "=== Check for any CA certificate handling in db code ==="
find . -type f -name '*.go' | xargs grep -i 'ca.crt\|cafile\|rootcert' | grep -v vendor || echo "No CA cert config found"
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Check if rh-trex-ai is vendored ==="
find . -type d -name 'rh-trex-ai' | head -5
echo ""
echo "=== Look for go.mod or go.sum for rh-trex-ai version ==="
grep -n 'rh-trex-ai' go.mod go.sum 2>/dev/null | head -10
echo ""
echo "=== Search for any db flag documentation or comments ==="
find . -type f -name '*.go' | xargs grep -B2 -A2 'db-host-file\|db-port-file\|database.*flag' | grep -v vendor | head -30
echo ""
echo "=== Check manifest for any CA volume mounts ==="
grep -n 'ca\|cert\|volume' components/manifests/overlays/production/ambient-api-server-jwt-args-patch.yaml
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Find go.mod files ==="
find . -name 'go.mod' -type f | head -5
echo ""
echo "=== Read the complete manifest file ==="
cat -n components/manifests/overlays/production/ambient-api-server-jwt-args-patch.yaml
echo ""
echo "=== Check for volume mounts in api-server deployment ==="
find components/manifests -name '*.yaml' | xargs grep -l 'ambient-api-server' | head -5
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Read api-server go.mod for rh-trex-ai ==="
grep 'rh-trex-ai' components/ambient-api-server/go.mod
echo ""
echo "=== Check if there are other manifest patches that add volumes/mounts ==="
find components/manifests -name '*patch*.yaml' -o -name '*overlay*.yaml' | xargs grep -l 'db.*cert\|ca.*cert\|volume' | head -10
echo ""
echo "=== Look for kustomization that may reference db CA config ==="
find components/manifests -name 'kustomization.yaml' | xargs grep -l 'db\|database\|rds' | head -5
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Read DB patch ==="
cat -n components/manifests/components/ambient-api-server-db/ambient-api-server-db-json-patch.yaml
echo ""
echo "=== Read service-ca patch (app-interface overlay) ==="
cat -n components/manifests/overlays/app-interface/ambient-api-server-service-ca-patch.yaml
echo ""
echo "=== Search rh-trex-ai package docs or code for sslmode support ==="
grep -r 'verify-full\|verify-ca\|sslmode' . --include='*.go' --include='*.md' | grep -v vendor | head -20
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Search for README or docs about DB config ==="
find . -iname 'readme*' -o -iname 'doc*' | grep -v node_modules | head -20
echo ""
echo "=== Check for any comments or docs in go code about db flags ==="
find components/ambient-api-server -name '*.go' | xargs grep -B5 'NewServeCommand\|flag.*db' | head -40
echo ""
echo "=== Look for environment config that may document db options ==="
cat -n components/ambient-api-server/cmd/ambient-api-server/environments/types.go
🌐 Web query:
rh-trex-ai database sslmode verify-full root certificate configuration
💡 Result:
rh-trex-ai is a Go application based on the openshift-online/rh-trex framework that uses GORM with PostgreSQL (database rh-trex, user trex). No specific SSL configuration is documented in the project's README or pkg.go.dev pages; local development uses a plain Docker PostgreSQL setup without SSL (psql -h localhost -U trex rh-trex).

To configure the database connection with sslmode=verify-full (the PostgreSQL libpq parameter for full server certificate and hostname verification), modify the DSN used by GORM's PostgreSQL driver (likely pgx-based), e.g.:

host=your-db-host port=5432 user=trex password=your-password dbname=rh-trex sslmode=verify-full sslrootcert=/path/to/root-ca.crt

Key parameters:
- sslmode=verify-full: requires encryption, verifies the server certificate against a trusted CA, and checks that the hostname matches the certificate CN/SAN.
- sslrootcert=/path/to/root-ca.crt: path to a PEM file containing the root CA certificate(s) that signed the server's certificate. Defaults to ~/.postgresql/root.crt if not specified.

Configuration steps:
1. Obtain the PostgreSQL server's root CA certificate (PEM format) from the DB provider (e.g., the RDS bundle or Cloud SQL CA).
2. Place it on a filesystem accessible to the rh-trex-ai application (e.g., a mounted volume in an OpenShift pod).
3. Update the database DSN in rh-trex-ai's configuration (likely in cmd/trex/main.go or env vars like DATABASE_URL). Since source code details are unavailable, check for viper/env parsing or GORM Open(postgres.Open(dsn)).
4. Ensure the hostname in the DSN matches the server certificate's CN/SAN (use the FQDN, not an IP).
5. Restart the application. For production OpenShift deployments, use Secrets/ConfigMaps for the cert and DSN params. The server must have SSL enabled (ssl=on in postgresql.conf). If using client certificates (mutual TLS), add sslcert and sslkey params (not typically required).

Common pitfalls: a missing or wrong CA cert causes "certificate verify failed"; a hostname mismatch fails verify-full; GORM/pgx may require explicitly empty sslcert/sslkey if misconfigured. Official PostgreSQL docs recommend verify-full for security-sensitive environments.
Citations:
- 1: https://pkg.go.dev/github.com/openshift-online/rh-trex-ai
- 2: https://github.com/openshift-online/rh-trex
- 3: http://postgresql.org/docs/15/libpq-ssl.html
- 4: https://rexbytes.com/2026/02/17/pgmonkey-3-of-8-ssl-tls/
- 5: https://support.hashicorp.com/hc/en-us/articles/8619961243795-Terraform-Enterprise-PostgreSQL-SSLMODE
- 6: https://oneuptime.com/blog/post/2026-03-04-configure-postgresql-with-ssl-tls-encryption-on-rhel/view
Add certificate-verified TLS for production RDS connection.
sslmode=require encrypts the connection but does not verify the RDS endpoint's certificate or hostname, leaving the connection vulnerable to MITM attacks. Upgrade to verify-full by:
- Obtaining the AWS RDS CA bundle (publicly available from AWS)
- Mounting it as a volume in the pod
- Adding a --db-sslrootcert=/path/to/rds-ca-bundle.pem flag to the api-server command
- Updating --db-sslmode=require to --db-sslmode=verify-full

This requires the app to support custom root-cert configuration (verify with upstream rh-trex-ai v0.0.25 documentation or source code if not already supported).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@components/manifests/overlays/production/ambient-api-server-jwt-args-patch.yaml`
around lines 36 - 37, The deployment currently uses the insecure flag
"--db-sslmode=require"; update the manifest to enable certificate-verified TLS
by replacing "--db-sslmode=require" with "--db-sslmode=verify-full", add the new
flag "--db-sslrootcert=/etc/rds-ca/rds-ca-bundle.pem" (or chosen mount path) to
the api-server command, and mount the AWS RDS CA bundle into the pod (add a
volume and volumeMount pointing at that path); confirm the api-server
(rh-trex-ai v0.0.25) supports custom root cert configuration and adjust flag
names if necessary before applying.
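Assuming the api-server does (or comes to) support a root-cert flag, which must be confirmed against rh-trex-ai v0.0.25 before applying, the argument change would look roughly like this; the bundle path and the --db-sslrootcert flag name are illustrative, not confirmed upstream:

```shell
# Illustrative flag set for certificate-verified TLS to RDS.
# AWS publishes its global CA bundle at
# https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
# (mount it into the pod, e.g. under /etc/rds-ca/).
set -- \
  --db-sslmode=verify-full \
  --db-sslrootcert=/etc/rds-ca/rds-ca-bundle.pem
printf '%s\n' "$@"
```

The key property: verify-full fails closed on an untrusted certificate or hostname mismatch, whereas require accepts any certificate the server presents.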
```yaml
kind: ClusterRole
metadata:
  name: backend-api
rules:
  - apiGroups:
      - vteam.ambient-code
    resources:
      - agenticsessions
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - vteam.ambient-code
    resources:
      - agenticsessions/status
    verbs:
      - get
      - update
      - patch
  - apiGroups:
      - ''
    resources:
      - serviceaccounts
    verbs:
      - get
      - list
      - create
      - update
      - patch
  - apiGroups:
      - ''
    resources:
      - serviceaccounts/token
    verbs:
      - create
  - apiGroups:
      - authentication.k8s.io
    resources:
      - tokenreviews
    verbs:
      - create
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - roles
      - rolebindings
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - rbac.authorization.k8s.io
    resourceNames:
      - ambient-project-admin
      - ambient-project-edit
      - ambient-project-view
    resources:
      - clusterroles
    verbs:
      - bind
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - get
      - create
      - update
      - patch
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - project.openshift.io
    resources:
      - projects
    verbs:
      - get
      - list
      - watch
      - update
      - patch
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - get
      - list
      - watch
      - create
      - delete
  - apiGroups:
      - batch
    resources:
      - cronjobs
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ''
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - deletecollection
  - apiGroups:
      - ''
    resources:
      - pods/log
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - create
      - delete
  - apiGroups:
      - authorization.k8s.io
    resources:
      - subjectaccessreviews
      - selfsubjectaccessreviews
    verbs:
      - create
```
backend-api ClusterRole is effectively cluster-admin-lite — tighten if possible.
Cluster-wide permissions that stand out:
- namespaces: create/update/patch/delete across the cluster (lines 1033–1042)
- project.openshift.io/projects: update/patch (lines 1043–1052)
- secrets: get/list/create/update/patch/delete cluster-wide (lines 1012–1022)
- roles, rolebindings: full CRUD (lines 991–1001)
- serviceaccounts/token: create (lines 978–983)
- pods: deletecollection (line 1085)

Combined, a compromise of the backend-api SA is equivalent to cluster-admin: it can mint tokens for any SA, read every secret, and create arbitrary RBAC. If the backend only needs these on ambient-provisioned project namespaces, prefer project-scoped Role/RoleBindings (the operator already handles per-project RBAC via ProjectSettings.groupAccess), and keep only the genuinely cluster-scoped verbs (namespace create, project list, tokenreviews/SAR) on the ClusterRole. Worth a second look before this hits a prod cluster.
As per coding guidelines: "RBAC must follow least-privilege."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/templates/template-operator.yaml` around lines 944 -
1115, The ClusterRole named backend-api grants broad cluster-wide permissions
(namespaces create/update/patch/delete, project.openshift.io projects
update/patch, secrets full CRUD, roles/rolebindings full CRUD,
serviceaccounts/token create, pods deletecollection) and should be tightened:
restrict backend-api to only genuinely cluster-scoped verbs (e.g., tokenreviews,
subjectaccessreviews, namespace create if needed) and remove or narrow
secrets/roles/rolebindings/serviceaccounts/pods permissions; move per-project
permissions (namespaces CRUD, secrets, roles/rolebindings, serviceaccount token
creation, pod deletecollection) into a namespaced Role/RoleBinding created per
ambient-provisioned project namespace by the operator (use the existing
ProjectSettings.groupAccess flow), or scope them to specific resourceNames where
applicable, ensuring least-privilege for the backend-api ClusterRole and leaving
only cluster-wide necessities.
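One possible shape for the split: the per-project verbs move into a namespaced Role that the operator stamps into each ambient-provisioned namespace alongside its existing per-project RBAC. Everything below is illustrative (the Role name and exact resource list are assumptions, not a drop-in manifest):

```shell
# Emit a sketch of the namespaced Role; in practice the operator would
# create one of these in each project namespace it provisions.
role=$(cat <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backend-api-project   # hypothetical name, one per project namespace
rules:
  - apiGroups: [""]
    resources: [secrets, configmaps, pods, services, serviceaccounts]
    verbs: [get, list, create, update, patch, delete]
  - apiGroups: [rbac.authorization.k8s.io]
    resources: [roles, rolebindings]
    verbs: [get, list, create, update, patch, delete]
EOF
)
printf '%s\n' "$role"
```

The ClusterRole would then retain only the genuinely cluster-scoped rules: namespace create, project list/watch, tokenreviews, and subjectaccessreviews.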
```yaml
initContainers:
  - command:
      - /usr/local/bin/ambient-api-server
      - migrate
      - --db-host-file=/secrets/db/db.host
      - --db-port-file=/secrets/db/db.port
      - --db-user-file=/secrets/db/db.user
      - --db-password-file=/secrets/db/db.password
      - --db-name-file=/secrets/db/db.name
      - --db-sslmode=require
      - --alsologtostderr
      - -v=4
    image: ${IMAGE_AMBIENT_API_SERVER}:${IMAGE_TAG}
    imagePullPolicy: Always
    name: migration
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: false
    volumeMounts:
      - mountPath: /secrets/db
        name: db-secrets
```
migration initContainer has no resource requests/limits; frontend Deployment has no replicas.
Two small gaps in this template:
- Lines 304–327: the migration initContainer is missing resources. Every other container in this template sets requests/limits, and the repo convention (plus the coding guideline for components/manifests/**) requires it. The LimitRange on line 135 will inject defaults (2 CPU / 4Gi), which is generous for a short migration; better to set them explicitly.
- Line 624: the frontend Deployment has no replicas:, so it defaults to 1. It's the user-facing ingress path through oauth-proxy; a rollout or node drain will cause user-visible downtime. Every other Deployment in this template explicitly sets replicas: 1; the frontend probably wants 2 (or at least an explicit value) for HA.

Side note, unrelated to correctness: the frontend --scope=user:full (line 689) matches the PR's stated intent; that part looks right.
As per coding guidelines: "Resource limits/requests required on containers."
🔧 Proposed diff

```diff
   name: migration
   securityContext:
     allowPrivilegeEscalation: false
     capabilities:
       drop:
         - ALL
     readOnlyRootFilesystem: false
+  resources:
+    limits:
+      cpu: 500m
+      memory: 512Mi
+    requests:
+      cpu: 50m
+      memory: 128Mi
   volumeMounts:
     - mountPath: /secrets/db
       name: db-secrets
```

```diff
 spec:
+  replicas: 2
   selector:
     matchLabels:
       app: frontend
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
initContainers:
  - command:
      - /usr/local/bin/ambient-api-server
      - migrate
      - --db-host-file=/secrets/db/db.host
      - --db-port-file=/secrets/db/db.port
      - --db-user-file=/secrets/db/db.user
      - --db-password-file=/secrets/db/db.password
      - --db-name-file=/secrets/db/db.name
      - --db-sslmode=require
      - --alsologtostderr
      - -v=4
    image: ${IMAGE_AMBIENT_API_SERVER}:${IMAGE_TAG}
    imagePullPolicy: Always
    name: migration
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 50m
        memory: 128Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: false
    volumeMounts:
      - mountPath: /secrets/db
        name: db-secrets
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/templates/template-services.yaml` around lines 304 -
327, Add explicit resource requests/limits to the migration initContainer and
set an explicit replicas count for the frontend Deployment: update the
initContainer named "migration" (the init container running
/usr/local/bin/ambient-api-server migrate) to include a resources block with
reasonable requests/limits (e.g., low CPU/memory instead of relying on
LimitRange defaults), and add replicas: 2 to the frontend Deployment spec (the
Deployment that serves the user-facing ingress via oauth-proxy) so it does not
implicitly default to 1.
```yaml
          secretName: stage-gcp-creds
      - configMap:
          name: ambient-models
          optional: true
        name: model-manifest
      - configMap:
          name: ambient-flags
          optional: true
        name: flags-config
      - configMap:
          name: ambient-agent-registry
          optional: true
        name: agent-registry
      - configMap:
          name: ldap-ca-cert
          optional: true
        name: ldap-ca-cert
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: frontend
    name: frontend
    namespace: ambient-code
  spec:
    selector:
      matchLabels:
        app: frontend
    template:
      metadata:
        labels:
          app: frontend
      spec:
        containers:
        - env:
          - name: BACKEND_URL
            value: http://backend-service:8080/api
          - name: NODE_ENV
            value: production
          - name: GITHUB_APP_SLUG
            value: ambient-code
          - name: UNLEASH_URL
            valueFrom:
              secretKeyRef:
                key: unleash-url
                name: unleash-credentials
                optional: true
          - name: UNLEASH_CLIENT_KEY
            valueFrom:
              secretKeyRef:
                key: client-api-token
                name: unleash-credentials
                optional: true
          image: ${IMAGE_FRONTEND}:${IMAGE_TAG}
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
          name: frontend
          ports:
          - containerPort: 3000
            name: http
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 512Mi
        - args:
          - --http-address=:8443
          - --https-address=
          - --provider=openshift
          - --openshift-service-account=frontend
          - --upstream=http://localhost:3000
          - --openshift-delegate-urls={"/api":{"resource":"projects","verb":"list"}}
          - --tls-cert=/etc/tls/private/tls.crt
          - --tls-key=/etc/tls/private/tls.key
          - --cookie-secret-file=/etc/oauth-cookie/cookie_secret
          - --pass-access-token
          - --scope=user:full
          - --upstream-timeout=5m
          - --skip-auth-regex=^/metrics
          image: ${OAUTH_PROXY_IMAGE_NAME}:${OAUTH_PROXY_IMAGE_TAG}
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /oauth/healthz
              port: dashboard-ui
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: oauth-proxy
          ports:
          - containerPort: 8443
            name: dashboard-ui
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /oauth/healthz
              port: dashboard-ui
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 200m
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 50Mi
          volumeMounts:
          - mountPath: /etc/tls/private
            name: frontend-proxy-tls
          - mountPath: /etc/oauth-cookie
            name: oauth-cookie-secret
        serviceAccountName: frontend
        volumes:
        - name: frontend-proxy-tls
          secret:
            secretName: frontend-proxy-tls
        - name: oauth-cookie-secret
          secret:
            secretName: stage-cookie-secret
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check where stage-cookie-secret / stage-gcp-creds are actually created
rg -nP --type=yaml 'name:\s*stage-(cookie-secret|gcp-creds)\b' -C2
```

Repository: ambient-code/platform

🏁 Script executed:

```bash
# Search for these secret names across all file types
rg -i 'stage-cookie-secret|stage-gcp-creds' --type=yaml --type=json --type=toml
```

🏁 Script executed:

```bash
# Check the file structure and look for overlays/parameterization patterns
find components/manifests -type f \( -name "*.yaml" -o -name "kustomization.yaml" -o -name "*.kustomize" \) | head -20
```

🏁 Script executed:

```bash
# Check if there are environment-specific variants or overlays
fd -e yaml components/manifests | grep -iE '(prod|stage|dev|overlay)' | head -20
```

🏁 Script executed:

```bash
# Look at the actual file structure around the secrets
sed -n '590,610p' components/manifests/templates/template-services.yaml
sed -n '730,740p' components/manifests/templates/template-services.yaml
```

🏁 Script executed:

```bash
# Check the kustomization structure - look at the main and base kustomization files
cat components/manifests/base/kustomization.yaml
```

🏁 Script executed:

```bash
# Check if there are overlays directory for different environments
find components/manifests -maxdepth 3 -type d | grep -iE '(overlay|prod|stage|dev)'
```

🏁 Script executed:

```bash
# Search for where these secrets might be generated or patched
rg -A5 -B5 'stage-cookie-secret|stage-gcp-creds' components/manifests --type=yaml | head -50
```

🏁 Script executed:

```bash
# Check if there's parameterization in the template file itself
grep -n 'stage-\|prod-\|dev-' components/manifests/templates/template-services.yaml | head -20
```

🏁 Script executed:

```bash
# Check the production overlay - does it patch the secret names?
cat components/manifests/overlays/production/kustomization.yaml
```

🏁 Script executed:

```bash
# Check if there are any patches for template-services.yaml
find components/manifests/overlays -type f -name "*.yaml" -exec grep -l 'stage-cookie-secret\|stage-gcp-creds' {} \;
```

🏁 Script executed:

```bash
# Check RBAC - what permissions does the frontend serviceAccount have?
rg -A10 'kind: ServiceAccount' components/manifests | grep -A10 'name: frontend'
```

🏁 Script executed:

```bash
# Check if there's a RoleBinding for frontend
rg 'serviceAccountName: frontend|name: frontend' components/manifests/base/rbac --type=yaml -B2 -A5
```

🏁 Script executed:

```bash
# Get full frontend-rbac.yaml to check complete RBAC
cat components/manifests/base/rbac/frontend-rbac.yaml
```
Parameterize the secret names or add production overlay patches for `stage-gcp-creds` and `stage-cookie-secret`.
Both secrets are hardcoded with a `stage-` prefix in the template (lines 600 and 737), while the production overlay has no patches to override them. This will cause mount failures in production, since secrets with those names won't exist there. Either parameterize the secret names in the template or add `patchesStrategicMerge` entries in `overlays/production` that rename them for the prod environment.
Note: RBAC and resource limits are properly configured.
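The overlay option could be sketched as a strategic-merge patch (the file name and the production secret name below are assumptions, not existing files):

```yaml
# overlays/production/frontend-secrets-patch.yaml (hypothetical file)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: ambient-code
spec:
  template:
    spec:
      volumes:
      - name: oauth-cookie-secret
        secret:
          secretName: prod-cookie-secret   # assumed production name
```

The patch file would then be listed under `patchesStrategicMerge` in `overlays/production/kustomization.yaml`, with a sibling patch for the deployment that mounts `stage-gcp-creds`.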
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/manifests/templates/template-services.yaml` around lines 600 -
737, The template hardcodes stage-only secret names (stage-gcp-creds and
stage-cookie-secret) causing prod mount failures; update the Deployment spec to
either parameterize these secrets (replace stage-gcp-creds and
stage-cookie-secret with template variables like ${GCP_CREDS_SECRET} and
${COOKIE_SECRET} and document defaults) or add overlays/production
patchesStrategicMerge that modify the Deployment's volumes.secret.secretName
(for volumes named frontend-proxy-tls and oauth-cookie-secret) and any
secretName fields (e.g., the secretName referenced for GCP creds) to point to
the production secret names so the mounts used by the containers (volumeMounts
for name: frontend-proxy-tls and name: oauth-cookie-secret) exist in prod.
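The parameterization option could look like this in the template's `parameters` list (a sketch; `COOKIE_SECRET` and `GCP_CREDS_SECRET` are hypothetical parameter names, with defaults that preserve current stage behavior):

```yaml
parameters:
- name: COOKIE_SECRET
  description: Name of the secret holding the oauth-proxy cookie seed
  value: stage-cookie-secret
- name: GCP_CREDS_SECRET
  description: Name of the secret holding GCP credentials
  value: stage-gcp-creds
```

The volume definitions would then reference `${COOKIE_SECRET}` and `${GCP_CREDS_SECRET}` instead of the hardcoded names, and app-interface could pass production values per environment.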
The `--scope=user:full` option tells the OpenShift OAuth proxy to request OAuth tokens with full user permissions. Without it, the proxy requests a more narrowly scoped token, so the access token forwarded via `--pass-access-token` may not carry the permissions the delegated `/api` authorization checks rely on. With `--scope=user:full` restored, the forwarded token acts with the authenticated user's full permissions.