
Helmchanges #48

Open

rapoler wants to merge 2 commits into main from helmchanges

Conversation

@rapoler (Contributor) commented Jan 13, 2026

Summary by CodeRabbit

  • New Features

    • Added Pod Disruption Budget support for enhanced high availability across deployment, stateful, web, and chat-server components.
    • Introduced Prometheus ServiceMonitor for server metrics collection (30s scrape interval).
    • Added support for extra volumes and volume mounts with multiple source types (PVC, hostPath, CSI, configMap, secret, emptyDir).
  • Improvements

    • Added liveness and readiness probes for improved health checks.
    • Introduced database migration init container with automatic PostgreSQL configuration.
    • Enhanced configuration flexibility with expanded environment variables, service ports (metrics, admin), and customization options (annotations, extra labels).

✏️ Tip: You can customize this high-level summary in your review settings.

@coderabbitai Bot (Contributor) commented Jan 13, 2026

📝 Walkthrough

Helm chart update introducing dynamic volume configuration, environment variable defaults, Pod Disruption Budgets across multiple components, Prometheus metrics monitoring via ServiceMonitor, and extensive configuration enhancements including health checks, service customization, and resource controls.

Changes

  • Chart metadata (helm/gopie/Chart.yaml): Added a commented-out zitadel dependency entry; no functional impact.
  • Pod template volume flexibility (helm/gopie/templates/_pod-chat-server.tpl, helm/gopie/templates/_pod-web.tpl): Replaced static volume definitions with dynamic loops over extraVolumes; supports multiple volume source types (PVC, hostPath, CSI, configMap, secret, emptyDir) with a fallback default.
  • Pod template environment and volume configuration (helm/gopie/templates/_pod-server.tpl): Added default GOPIE_POSTGRES_* and GOPIE_AIAGENT_URL environment variables when a custom env is not provided; introduced a dynamic extraVolumes loop matching the sibling templates; applied the pattern to both deployment and stateful pod definitions.
  • Pod disruption budget enforcement (helm/gopie/templates/chat-server/pdb.yaml, helm/gopie/templates/server/pdb.yaml, helm/gopie/templates/web/pdb.yaml): New conditional PodDisruptionBudget templates for the chatserver, server, and web components; each supports minAvailable or maxUnavailable configuration with the policy/v1 apiVersion.
  • Prometheus metrics integration (helm/gopie/templates/server/servicemonitor.yaml): New ServiceMonitor template for Prometheus scraping; configured with a 30s scrape interval, 10s timeout, /metrics endpoint, and label preservation.
  • Helm values configuration (helm/gopie/values.yaml): Major expansion: added detailed environment variables across the deployment/stateful/web/chatserver/companion components; introduced initContainers for database migrations; defined livenessProbe and readinessProbe blocks; added podDisruptionBudget, extraVolumeMounts, and extraVolumes sections; expanded service definitions with additional ports (metrics, admin), annotations, and extraLabels; added ingress/autoscaling/serviceAccount customization fields.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Hop, hop, hooray! Our Helm chart grows,
With volumes dancing where the pod goes,
PDBs guard disruptions with care,
And Prometheus whispers metrics through the air! 📊✨

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 inconclusive

❌ Failed checks (1 inconclusive)

  • Title check (❓ Inconclusive): The title 'Helmchanges' is vague and does not clearly convey the specific nature or scope of the changes made to the Helm chart. Resolution: replace it with a more descriptive title that highlights the main change, such as 'Add PodDisruptionBudgets, health checks, and dynamic volumes to Helm chart' or 'Enhance Helm chart with configurable probes, volume mounts, and disruption budgets.'

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): Check skipped - CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@claude Bot commented Jan 13, 2026

Claude finished @rapoler's task


PR Review: Helmchanges

Todo List:

  • Analyze changed Helm chart files
  • Review values.yaml configuration changes
  • Examine pod template modifications
  • Assess new PodDisruptionBudget configurations
  • Review ServiceMonitor setup
  • Check Zitadel dependency addition
  • Provide comprehensive review feedback

🚨 Critical Security Issues

1. Hardcoded Secrets in values.yaml

Severity: CRITICAL

Multiple sensitive credentials are hardcoded in plaintext in values.yaml:

  • Lines 46, 236, 242: OpenAI API keys
  • Lines 60, 250: MotherDuck JWT tokens
  • Lines 85, 275: Encryption keys
  • Lines 89-91, 279-281: S3 access credentials
  • Lines 511-513, 861-863: AWS/S3 secrets
  • Lines 682, 690, 726: External service API keys

Risk: Credentials exposed in version control, accessible to anyone with repository access.

Recommendation: Move all secrets to Kubernetes Secrets and reference them via secretKeyRef.
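A minimal sketch of that pattern (the Secret name gopie-secrets and the key openai-api-key are illustrative assumptions, not existing chart values):

```yaml
# Hypothetical values.yaml entry: the credential comes from a Kubernetes Secret
# instead of being stored in plain text in the chart.
- name: GOPIE_OPENAI_APIKEY
  valueFrom:
    secretKeyRef:
      name: gopie-secrets    # assumed Secret, created out-of-band
      key: openai-api-key    # assumed key within that Secret
```

The Secret itself would be created separately (e.g. with kubectl create secret generic, or managed by a tool such as external-secrets) so it never enters version control.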

2. Missing Security Contexts

Severity: HIGH

All pod templates have empty security contexts (podSecurityContext: {}, securityContext: {}):

  • helm/gopie/values.yaml:119-125 (deployment)
  • helm/gopie/values.yaml:309-315 (stateful)
  • helm/gopie/values.yaml:530-543 (web)
  • helm/gopie/values.yaml:739-750 (chatserver)

Risk: Containers run as root with privileged access.

Recommendation: Set non-root user, read-only filesystem, and drop capabilities:

podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1001
  fsGroup: 1001
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]

🔧 Configuration Issues

3. Resource Limits Missing

Severity: MEDIUM

All components have resources: {}, meaning no CPU/memory limits:

  • Could lead to resource exhaustion and cluster instability
  • No guaranteed resource allocation

Recommendation: Set appropriate resource requests and limits for all components.
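As a hedged starting point, a per-component resources block might look like this (the request/limit sizes are placeholder assumptions to be tuned from observed usage):

```yaml
resources:
  requests:
    cpu: 100m       # placeholder; size from observed baseline usage
    memory: 256Mi
  limits:
    cpu: 500m       # placeholder; leave headroom for bursts
    memory: 512Mi
```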

4. Environment Variable Issues

Severity: MEDIUM

  • Lines 181-183: Incorrect environment variable handling in stateful pod template - using .value instead of proper valueFrom handling
  • Line 204: Incorrect volume mounts reference (deployment.extraVolumeMounts in stateful template)

5. Commented Zitadel Dependency

Severity: LOW

Lines 53-56: The Zitadel dependency is commented out in Chart.yaml, yet the packaged chart archive (helm/gopie/charts/zitadel-8.6.2.tgz) is still included. This is inconsistent.

Recommendation: Either enable the dependency properly or remove the packaged chart archive.
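If the dependency is kept, a condition-gated declaration in Chart.yaml could look like this sketch (the version matches the bundled zitadel-8.6.2.tgz; the repository URL is an assumption to verify):

```yaml
dependencies:
  - name: zitadel
    version: "8.6.2"
    repository: "https://charts.zitadel.com"  # assumed repo URL
    condition: zitadel.enabled                # disable via zitadel.enabled: false
```

This keeps the dependency visible and follows the condition pattern the chart already uses for postgresql.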

📋 Positive Changes

Good Additions

  1. PodDisruptionBudgets: Added for all components - good for high availability
  2. ServiceMonitor: Prometheus monitoring setup is correct
  3. Health Probes: Proper liveness and readiness probes configured
  4. Pod Templates: Well-structured template organization

🔍 Minor Issues

6. Template Inconsistencies

  • Line 14 (web pod): Redundant default value chaining
  • Line 159: Inconsistent naming ({{ .Chart.Name }} vs {{ include "server.name" . }})

7. Missing Namespace References

  • Line 6 (chat-server/pdb.yaml): Uses chatserver.namespace but should be gopie.namespace

📊 Overall Assessment

  • Security: 2/10 (critical security flaws with hardcoded secrets)
  • Configuration: ⚠️ 6/10 (good structure, missing resource limits)
  • Best Practices: ⚠️ 7/10 (good template organization, some inconsistencies)
  • Maintainability: 8/10 (well-documented, clear structure)

🎯 Required Actions

Must Fix Before Merge:

  1. Move all secrets to Kubernetes Secrets
  2. Add security contexts for all components
  3. Fix environment variable handling in stateful template
  4. Add resource limits for all components

Should Fix:

  1. Resolve Zitadel dependency inconsistency
  2. Fix template naming inconsistencies
  3. Correct namespace references

Consider:

  1. Add network policies for additional security
  2. Implement pod security standards/policies
  3. Add init container security contexts
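For the network-policy suggestion, a minimal ingress-restriction sketch might look like this (the policy name, selector labels, and port are illustrative assumptions, not values from the chart):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: gopie-server-ingress           # hypothetical name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: gopie-server   # assumed selector label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}              # allow same-namespace pods only; tighten further as needed
      ports:
        - protocol: TCP
          port: 8000                   # assumed server port
```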

@coderabbitai Bot (Contributor) left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
helm/gopie/templates/_pod-server.tpl (2)

203-209: Copy-paste bug: stateful pod uses deployment.extraVolumeMounts instead of stateful.extraVolumeMounts.

The stateful pod template references .Values.deployment.extraVolumeMounts but should use .Values.stateful.extraVolumeMounts to honor the stateful component's own configuration.

Proposed fix
     volumeMounts:
-      {{- range .Values.deployment.extraVolumeMounts }}
+      {{- range .Values.stateful.extraVolumeMounts }}
       - name: {{ .name }}
         mountPath: {{ .mountPath }}
         subPath: {{ .subPath | default "" }}
         readOnly: {{ .readOnly }}
       {{- end }}

244-258: Missing nindent in stateful extraVolumes causes malformed YAML.

Unlike the deployment section (lines 104-117), the stateful extraVolumes block omits nindent when rendering nested volume types. This will produce invalid YAML indentation.

Proposed fix
     {{- else if .hostPath }}
     hostPath:
-      {{ toYaml .hostPath }}
+      {{- toYaml .hostPath | nindent 6 }}
     {{- else if .csi }}
     csi:
-      {{- toYaml .csi }}
+      {{- toYaml .csi | nindent 6 }}
     {{- else if .configMap }}
     configMap:
-      {{- toYaml .configMap }}
+      {{- toYaml .configMap | nindent 6 }}
     {{- else if .secret }}
     secret:
-      {{- toYaml .secret }}
+      {{- toYaml .secret | nindent 6 }}
     {{- else if .emptyDir }}
     emptyDir:
-      {{- toYaml .emptyDir }}
+      {{- toYaml .emptyDir | nindent 6 }}
🤖 Fix all issues with AI agents
In @helm/gopie/templates/chat-server/pdb.yaml:
- Around line 1-18: If .Values.chatserver.podDisruptionBudget.enabled is true
but neither minAvailable nor maxUnavailable is set the rendered
PodDisruptionBudget spec can be empty and invalid; update the template around
the minAvailable/maxUnavailable logic in the pdb.yaml Helm template to supply a
safe default (e.g. minAvailable: 1) when minAvailable is unset and
maxUnavailable is unset, using Helm's default function or an additional
conditional so that spec always contains either minAvailable or maxUnavailable
(refer to .Values.chatserver.podDisruptionBudget.minAvailable,
.Values.chatserver.podDisruptionBudget.maxUnavailable and
.Values.chatserver.podDisruptionBudget.enabled).

In @helm/gopie/templates/server/pdb.yaml:
- Around line 10-14: The PDB template for server may render an empty spec when
.Values.deployment.podDisruptionBudget.minAvailable and
.Values.deployment.podDisruptionBudget.maxUnavailable are both unset; update the
template (server/pdb.yaml) to provide a safe default fallback (e.g., default
minAvailable: 1 or default maxUnavailable: 1 depending on your desired behavior)
so the spec block is never empty when the PDB is enabled, mirroring the same
fallback logic used in web/pdb.yaml and using the same
.Values.deployment.podDisruptionBudget.* symbols.

In @helm/gopie/templates/server/servicemonitor.yaml:
- Around line 1-17: The ServiceMonitor template is rendered unconditionally and
will break installs without the Prometheus Operator; wrap the ServiceMonitor
manifest in a Helm conditional checking .Values.server.serviceMonitor.enabled
(e.g., if .Values.server.serviceMonitor.enabled) so it only renders when
enabled, and replace the hard-coded scrape settings (interval, scrapeTimeout,
honorLabels and path/port if desired) with values from
.Values.server.serviceMonitor (e.g., .Values.server.serviceMonitor.interval,
.scrapeTimeout, .honorLabels) so they are configurable; also add the
corresponding defaults to values.yaml (server.serviceMonitor.enabled: false and
the interval/scrapeTimeout/honorLabels keys).

In @helm/gopie/templates/web/pdb.yaml:
- Around line 10-14: The PDB template can render an empty spec when
.Values.web.podDisruptionBudget.minAvailable and
.Values.web.podDisruptionBudget.maxUnavailable are both unset; update the logic
in the template that generates the minAvailable/maxUnavailable block
(referencing .Values.web.podDisruptionBudget.minAvailable,
.Values.web.podDisruptionBudget.maxUnavailable and podDisruptionBudget.enabled)
so that either (a) a safe default is provided (for example set minAvailable to 1
when neither value is provided) or (b) the whole PDB resource is not rendered
unless at least one of minAvailable or maxUnavailable is explicitly set;
implement the chosen approach by adding the additional conditional or default
fallback to ensure the PDB spec is never empty.

In @helm/gopie/values.yaml:
- Around line 45-99: The values.yaml currently hardcodes sensitive secrets
(GOPIE_OPENAI_APIKEY, GOPIE_OPENAI_OPTIONS containing Portkey API key,
GOPIE_MOTHERDUCK_TOKEN, GOPIE_ENCRYPTION_KEY, GOPIE_S3_ACCESS_KEY,
GOPIE_S3_SECRET_KEY); replace each plain-text env entry with a valueFrom:
secretKeyRef that points to a Kubernetes Secret (e.g., name: gopie-secrets, key:
openai-api-key / portkey-api-key / motherduck-token / encryption-key /
s3-access-key / s3-secret-key) and update any combined fields like
GOPIE_OPENAI_OPTIONS to build from separate secret keys or a single secret
string so no secrets remain inline in the manifest.
- Around line 504-521: The values.yaml currently hardcodes AWS credentials via
COMPANION_AWS_KEY and COMPANION_AWS_SECRET; remove those two entries and instead
reference a Kubernetes Secret: create a Secret (e.g., companion-aws-credentials)
containing keys aws_access_key_id and aws_secret_access_key and update the
companion deployment/helm chart to set COMPANION_AWS_KEY and
COMPANION_AWS_SECRET using env valueFrom secretKeyRef (or envFrom for the whole
secret) so the chart no longer stores secrets in plain text; keep other
companion variables (COMPANION_AWS_ENDPOINT, COMPANION_AWS_REGION,
COMPANION_AWS_BUCKET, COMPANION_DOMAIN, COMPANION_PROTOCOL, COMPANION_DATADIR,
COMPANION_SELF_ENDPOINT) as non-secret config.
- Around line 853-871: The env block in values.yaml exposes COMPANION_AWS_KEY
and COMPANION_AWS_SECRET in plaintext and duplicates the same credentials used
in the web section; replace these literal values by referencing a shared
Kubernetes Secret (e.g., via env.valueFrom.secretKeyRef or envFrom) and update
both the companion and web sections to use the same secret name and keys
(COMPANION_AWS_KEY, COMPANION_AWS_SECRET) so credentials are not duplicated in
the chart and are pulled from one secure Secret.
- Around line 215-289: Summary: The stateful env block duplicates hardcoded
secret values (e.g., GOPIE_OPENAI_APIKEY, GOPIE_MOTHERDUCK_TOKEN,
GOPIE_POSTGRES_PASSWORD, GOPIE_S3_SECRET_KEY) which is a security risk and
maintenance burden; replace duplicates with secret references and/or YAML
anchors. Fix: remove hardcoded secret values from the stateful "env" array and
replace them with valueFrom: secretKeyRef entries pointing to the existing
Kubernetes Secret(s) used by the deployment (use the same secret names/keys as
the deployment uses for POSTGRES_PASSWORD etc.), or consolidate common env
entries into a YAML anchor (e.g., &commonEnv) and reference it with aliases in
both deployment and stateful sections so non-secret vars are shared while
secrets remain referenced via secretKeyRef; update env entries for
GOPIE_OPENAI_APIKEY, GOPIE_MOTHERDUCK_TOKEN, GOPIE_S3_SECRET_KEY,
GOPIE_POSTGRES_PASSWORD, and any other GOPIE_* secret vars to use secretKeyRef
instead of hardcoded value.
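The anchor/alias consolidation suggested above can be sketched like this (variable set shortened for illustration; anchors resolve when the single values.yaml file is parsed, and secrets should still use secretKeyRef rather than anchored literals):

```yaml
# Hypothetical: shared non-secret env entries defined once and aliased.
commonEnv: &commonEnv
  - name: GOPIE_OLAPDB_DBTYPE
    value: "motherduck"
  - name: GOPIE_POSTGRES_HOST
    value: "gopie-postgresql"

deployment:
  env: *commonEnv
stateful:
  env: *commonEnv
```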
🧹 Nitpick comments (3)
helm/gopie/Chart.yaml (1)

53-57: Consider using the condition field instead of commenting out.

Since other dependencies already use the condition field pattern (e.g., condition: postgresql.enabled), the zitadel dependency could remain uncommented and simply be disabled via zitadel.enabled: false in values.yaml. This keeps the dependency declaration visible and follows the existing pattern in this chart.

That said, commenting out is acceptable if zitadel support is being temporarily removed or is not yet ready.

helm/gopie/templates/_pod-web.tpl (1)

14-15: Redundant default expressions (pre-existing issue).

These lines use self-referential defaults that have no effect:

image: "{{ .Values.web.image.repository | default .Values.web.image.repository }}:..."
imagePullPolicy: {{ .Values.web.image.pullPolicy | default .Values.web.image.pullPolicy }}

The | default X where X is the same value being defaulted is a no-op. While not introduced by this PR, consider cleaning this up.

Simplified version
-    image: "{{ .Values.web.image.repository | default .Values.web.image.repository }}:{{ .Values.web.image.tag | default .Values.web.image.tag | default .Chart.AppVersion }}"
-    imagePullPolicy: {{ .Values.web.image.pullPolicy | default .Values.web.image.pullPolicy }}
+    image: "{{ .Values.web.image.repository }}:{{ .Values.web.image.tag | default .Chart.AppVersion }}"
+    imagePullPolicy: {{ .Values.web.image.pullPolicy }}
helm/gopie/values.yaml (1)

768-786: Chatserver health probes are commented out.

Unlike deployment, stateful, and web components, chatserver lacks active health probes. Consider enabling them for consistent reliability and proper traffic management during rolling updates.
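A hedged starting point, assuming the chat server exposes an HTTP health endpoint (the /health path and the http port name are assumptions to verify against the image):

```yaml
livenessProbe:
  httpGet:
    path: /health          # assumed endpoint
    port: http             # assumed named port
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /health          # assumed endpoint
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10
```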

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e5355d5 and f3b0cda.

⛔ Files ignored due to path filters (1)
  • helm/gopie/Chart.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • helm/gopie/Chart.yaml
  • helm/gopie/charts/zitadel-8.6.2.tgz
  • helm/gopie/templates/_pod-chat-server.tpl
  • helm/gopie/templates/_pod-server.tpl
  • helm/gopie/templates/_pod-web.tpl
  • helm/gopie/templates/chat-server/pdb.yaml
  • helm/gopie/templates/server/pdb.yaml
  • helm/gopie/templates/server/servicemonitor.yaml
  • helm/gopie/templates/web/pdb.yaml
  • helm/gopie/values.yaml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-02T11:48:50.411Z
Learnt from: CR
Repo: factly/gopie PR: 0
File: web/CLAUDE.md:0-0
Timestamp: 2025-12-02T11:48:50.411Z
Learning: Configure environment variables including NEXT_PUBLIC_COMPANION_URL, NEXT_PUBLIC_GOPIE_API_URL, GOPIE_API_URL, and NEXT_PUBLIC_ENABLE_AUTH

Applied to files:

  • helm/gopie/values.yaml
🪛 Gitleaks (8.30.0)
helm/gopie/values.yaml

[high] 52-52: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 60-60: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.

(jwt)


[high] 242-242: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 250-250: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.

(jwt)

🪛 YAMLlint (1.37.1)
helm/gopie/templates/chat-server/pdb.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

helm/gopie/templates/server/pdb.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

helm/gopie/templates/web/pdb.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

helm/gopie/templates/server/servicemonitor.yaml

[error] 7-7: syntax error: expected the node content, but found '-'

(syntax)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: claude-review
🔇 Additional comments (4)
helm/gopie/templates/_pod-chat-server.tpl (1)

58-81: Dynamic volume handling looks good.

The conditional branching for various volume types (PVC, hostPath, CSI, configMap, secret, emptyDir) with a sensible emptyDir fallback is well-structured.

One consideration: ensure that volume names in extraVolumes match the corresponding names in extraVolumeMounts (lines 38-44). If a mount references a volume name that doesn't exist, Kubernetes will reject the pod spec at runtime.
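For example, a matching pair in values.yaml might look like this (the volume name, claim name, and mount path are illustrative):

```yaml
extraVolumes:
  - name: data                    # must match the mount's name below
    persistentVolumeClaim:
      claimName: gopie-data       # hypothetical PVC
extraVolumeMounts:
  - name: data                    # references the volume defined above
    mountPath: /data
    readOnly: false
```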

helm/gopie/templates/_pod-web.tpl (1)

56-79: Consistent volume handling with chat-server template.

The dynamic volume configuration follows the same pattern as _pod-chat-server.tpl, which is good for maintainability. The conditional branching and emptyDir fallback are correctly implemented.

helm/gopie/templates/_pod-server.tpl (1)

55-75: Good fallback pattern for environment variables.

The deterministic default environment block provides sensible PostgreSQL and AI agent configuration when custom env is not specified. This ensures the chart works out-of-the-box while allowing full customization.

helm/gopie/values.yaml (1)

143-165: Well-structured PDB and health probe configuration.

The PodDisruptionBudget defaults with maxUnavailable: 1 and comprehensive health probes with reasonable timeouts provide good baseline reliability settings.

Comment on lines +1 to +18
{{- if .Values.chatserver.podDisruptionBudget.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "chatserver.name" . }}-pdb
  namespace: {{ include "chatserver.namespace" . }}
  labels:
    {{ include "chatserver.labels" . | nindent 4 }}
spec:
  {{- if .Values.chatserver.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.chatserver.podDisruptionBudget.minAvailable }}
  {{- else if .Values.chatserver.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.chatserver.podDisruptionBudget.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels:
      {{ include "chatserver.selectorLabels" . | nindent 6 }}
{{- end }}

⚠️ Potential issue | 🟡 Minor

PDB template structure is correct; static analysis hint is a false positive.

The YAMLlint error is expected since this is a Helm template file, not raw YAML—the {{- if ...}} directive is valid Go templating.

However, if podDisruptionBudget.enabled is true but neither minAvailable nor maxUnavailable is configured, the rendered PDB will have an empty spec (missing both fields), which Kubernetes will reject. Consider adding a default or validation.

Suggested fix: Add a default minAvailable
 spec:
   {{- if .Values.chatserver.podDisruptionBudget.minAvailable }}
   minAvailable: {{ .Values.chatserver.podDisruptionBudget.minAvailable }}
   {{- else if .Values.chatserver.podDisruptionBudget.maxUnavailable }}
   maxUnavailable: {{ .Values.chatserver.podDisruptionBudget.maxUnavailable }}
+  {{- else }}
+  minAvailable: 1
   {{- end }}
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


Comment on lines +10 to +14
  {{- if .Values.deployment.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.deployment.podDisruptionBudget.minAvailable }}
  {{- else if .Values.deployment.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.deployment.podDisruptionBudget.maxUnavailable }}
  {{- end }}

⚠️ Potential issue | 🟡 Minor

Same issue as web PDB: spec may be empty if neither value is configured.

Consider adding a default fallback to ensure a valid PDB spec when enabled. This mirrors the pattern needed in web/pdb.yaml.

Suggested fix
   {{- if .Values.deployment.podDisruptionBudget.minAvailable }}
   minAvailable: {{ .Values.deployment.podDisruptionBudget.minAvailable }}
   {{- else if .Values.deployment.podDisruptionBudget.maxUnavailable }}
   maxUnavailable: {{ .Values.deployment.podDisruptionBudget.maxUnavailable }}
+  {{- else }}
+  maxUnavailable: 1
   {{- end }}

Comment on lines +1 to +17
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "server.name" . }}
  namespace: {{ include "gopie.namespace" . }}
  labels:
    {{- include "server.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "server.selectorLabels" . | nindent 6 }}
  endpoints:
  - port: metrics
    path: /metrics
    interval: 30s
    scrapeTimeout: 10s
    honorLabels: true

⚠️ Potential issue | 🟠 Major

ServiceMonitor should be conditional to avoid failures when Prometheus Operator is not installed.

Unlike the PDB templates which check .Values.*.podDisruptionBudget.enabled, this ServiceMonitor is unconditionally rendered. If the Prometheus Operator CRDs (monitoring.coreos.com/v1) are not installed in the cluster, the Helm deployment will fail.

Consider wrapping this in a conditional and making the scrape settings configurable.

Suggested fix
+{{- if .Values.server.serviceMonitor.enabled }}
 apiVersion: monitoring.coreos.com/v1
 kind: ServiceMonitor
 metadata:
   name: {{ include "server.name" . }}
   namespace: {{ include "gopie.namespace" . }}
   labels:
     {{- include "server.labels" . | nindent 4 }}
 spec:
   selector:
     matchLabels:
       {{- include "server.selectorLabels" . | nindent 6 }}
   endpoints:
   - port: metrics
     path: /metrics
-    interval: 30s
-    scrapeTimeout: 10s
+    interval: {{ .Values.server.serviceMonitor.interval | default "30s" }}
+    scrapeTimeout: {{ .Values.server.serviceMonitor.scrapeTimeout | default "10s" }}
     honorLabels: true
+{{- end }}

Also add the corresponding values in values.yaml:

server:
  serviceMonitor:
    enabled: false
    interval: 30s
    scrapeTimeout: 10s
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 7-7: syntax error: expected the node content, but found '-'

(syntax)


Comment on lines +10 to +14
  {{- if .Values.web.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.web.podDisruptionBudget.minAvailable }}
  {{- else if .Values.web.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.web.podDisruptionBudget.maxUnavailable }}
  {{- end }}

⚠️ Potential issue | 🟡 Minor

PDB spec may be empty if neither minAvailable nor maxUnavailable is configured.

When podDisruptionBudget.enabled is true but both minAvailable and maxUnavailable are unset, the rendered PDB will have no disruption constraint, which is invalid. Add a default or require at least one value.

Suggested fix with default
   {{- if .Values.web.podDisruptionBudget.minAvailable }}
   minAvailable: {{ .Values.web.podDisruptionBudget.minAvailable }}
   {{- else if .Values.web.podDisruptionBudget.maxUnavailable }}
   maxUnavailable: {{ .Values.web.podDisruptionBudget.maxUnavailable }}
+  {{- else }}
+  maxUnavailable: 1
   {{- end }}

Comment thread helm/gopie/values.yaml
Comment on lines +45 to +99
- name: GOPIE_OPENAI_APIKEY
  value: "bRUBnBaNaOg8Qou7B7jVHsuIVsc="
- name: GOPIE_OPENAI_BASEURL
  value: "https://api.portkey.ai/v1"
- name: GOPIE_OPENAI_MODEL
  value: "o3-mini"
- name: GOPIE_OPENAI_OPTIONS
  value: "x-portkey-virtual-key=openai-test-11e8b7,x-portkey-api-key=bRUBnBaNaOg8Qou7B7jVHsuIVsc="
- name: GOPIE_DUCKDB_PATH
  value: "./dataful/gopie.duckdb"
- name: GOPIE_OLAPDB_ACCESS_MODE
  value: "read_write"
- name: GOPIE_OLAPDB_DBTYPE
  value: "motherduck"
- name: GOPIE_MOTHERDUCK_TOKEN
  value: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFrc2hpdGgua2F0a3VyaUBmYWN0bHkuaW4iLCJzZXNzaW9uIjoiYWtzaGl0aC5rYXRrdXJpLmZhY3RseS5pbiIsInBhdCI6InVXY3hLakNuVVhvQ2RYd3lUSFdjZ1ZnUHpmaTFId2NjdWxSWkFhNmUyMVEiLCJ1c2VySWQiOiJhYTgwZmU0My1kNTdmLTRlNTQtOGRkNS1iNzA5NDY2NTExMmEiLCJpc3MiOiJtZF9wYXQiLCJyZWFkT25seSI6ZmFsc2UsInRva2VuVHlwZSI6InJlYWRfd3JpdGUiLCJpYXQiOjE3NTQ1NTE5ODh9.3cAa_0z5CrGQgU6r4SYlj_oc6yrQOH2UkpdU8fSLgT4"
- name: GOPIE_MOTHERDUCK_DB_NAME
  value: "test"
- name: GOPIE_MOTHERDUCK_HELPER_DB_DIR_PATH
  value: "./motherduck"
- name: GOPIE_CORS_HANDLED_BY_INGRESS
  value: "false"
- name: GOPIE_POSTGRES_HOST
  value: "gopie-postgresql"
- name: GOPIE_POSTGRES_PORT
  value: "5432"
- name: GOPIE_POSTGRES_DB
  value: "gopie"
- name: GOPIE_POSTGRES_USER
  value: "postgres"
- name: GOPIE_POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: gopie-postgresql
      key: postgres-password
- name: GOPIE_POSTGRES_SSLMODE
  value: "disable"
- name: GOPIE_AIAGENT_URL
  value: "http://chat-server:8003"
- name: GOPIE_ENCRYPTION_KEY
  value: "E5B8A0F3C1D9E7B2A5F0C3D8E6B1A4F2"
- name: GOPIE_ENABLED_SERVERS
  value: "api"
- name: GOPIE_S3_ACCESS_KEY
  value: "IHJWCLQNI113NEQODJXT"
- name: GOPIE_S3_SECRET_KEY
  value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
- name: GOPIE_S3_ENDPOINT
  value: "https://s3.eu-central-1.wasabisys.com"
- name: GOPIE_S3_SSL
  value: "true"
- name: GOPIE_S3_REGION
  value: "eu-central-1"
- name: GOPIE_DOWNLOADS_S3_BUCKET
  value: "downloads"

⚠️ Potential issue | 🔴 Critical

CRITICAL: Secrets and API keys hardcoded in plain text.

Multiple sensitive credentials are exposed in this values file:

  • Line 46: GOPIE_OPENAI_APIKEY
  • Line 52: Portkey API key in GOPIE_OPENAI_OPTIONS
  • Lines 59-60: JWT token (GOPIE_MOTHERDUCK_TOKEN)
  • Line 84: Encryption key (GOPIE_ENCRYPTION_KEY)
  • Lines 88-91: S3 access/secret keys

These secrets will be committed to version control and visible in rendered manifests. Use secretKeyRef to reference Kubernetes Secrets instead, and rotate the exposed credentials, since they already exist in Git history.

Example: Reference secrets properly
env:
  - name: GOPIE_OPENAI_APIKEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: openai-api-key
  - name: GOPIE_MOTHERDUCK_TOKEN
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: motherduck-token
  - name: GOPIE_ENCRYPTION_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: encryption-key
  - name: GOPIE_S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: s3-access-key
  - name: GOPIE_S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: s3-secret-key
🧰 Tools
🪛 Gitleaks (8.30.0)

[high] 52-52: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 60-60: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.

(jwt)

🤖 Prompt for AI Agents
In @helm/gopie/values.yaml around lines 45 - 99, The values.yaml currently
hardcodes sensitive secrets (GOPIE_OPENAI_APIKEY, GOPIE_OPENAI_OPTIONS
containing Portkey API key, GOPIE_MOTHERDUCK_TOKEN, GOPIE_ENCRYPTION_KEY,
GOPIE_S3_ACCESS_KEY, GOPIE_S3_SECRET_KEY); replace each plain-text env entry
with a valueFrom: secretKeyRef that points to a Kubernetes Secret (e.g., name:
gopie-secrets, key: openai-api-key / portkey-api-key / motherduck-token /
encryption-key / s3-access-key / s3-secret-key) and update any combined fields
like GOPIE_OPENAI_OPTIONS to build from separate secret keys or a single secret
string so no secrets remain inline in the manifest.
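
The referenced Secret itself would be created out-of-band. A minimal sketch of the manifest, with placeholder values in place of the real credentials (`stringData` lets Kubernetes handle the base64 encoding):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gopie-secrets
type: Opaque
stringData:
  openai-api-key: "<openai-key>"
  portkey-api-key: "<portkey-key>"
  motherduck-token: "<motherduck-token>"
  encryption-key: "<encryption-key>"
  s3-access-key: "<s3-access-key>"
  s3-secret-key: "<s3-secret-key>"
```

This manifest should live outside the chart (or be managed by a tool such as a secrets operator), so that `helm template` output never contains the credential values.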

Comment thread helm/gopie/values.yaml
Comment on lines +215 to +289
env:
- name: POSTGRES_USER
value: "postgres"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: gopie-postgresql
key: postgres-password
- name: POSTGRES_DB
value: "gopie"
- name: POSTGRES_HOST
value: "gopie-postgresql"
- name: POSTGRES_PORT
value: "5432"
- name: GOPIE_LOGGER_LEVEL
value: "info"
- name: GOPIE_LOGGER_FILE
value: "gopie.log"
- name: GOPIE_LOGGER_MODE
value: "dev"
- name: GOPIE_OPENAI_APIKEY
value: "bRUBnBaNaOg8Qou7B7jVHsuIVsc="
- name: GOPIE_OPENAI_BASEURL
value: "https://api.portkey.ai/v1"
- name: GOPIE_OPENAI_MODEL
value: "o3-mini"
- name: GOPIE_OPENAI_OPTIONS
value: "x-portkey-virtual-key=openai-test-11e8b7,x-portkey-api-key=bRUBnBaNaOg8Qou7B7jVHsuIVsc="
- name: GOPIE_DUCKDB_PATH
value: "./dataful/gopie.duckdb"
- name: GOPIE_OLAPDB_ACCESS_MODE
value: "read_write"
- name: GOPIE_OLAPDB_DBTYPE
value: "motherduck"
- name: GOPIE_MOTHERDUCK_TOKEN
value: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFrc2hpdGgua2F0a3VyaUBmYWN0bHkuaW4iLCJzZXNzaW9uIjoiYWtzaGl0aC5rYXRrdXJpLmZhY3RseS5pbiIsInBhdCI6InVXY3hLakNuVVhvQ2RYd3lUSFdjZ1ZnUHpmaTFId2NjdWxSWkFhNmUyMVEiLCJ1c2VySWQiOiJhYTgwZmU0My1kNTdmLTRlNTQtOGRkNS1iNzA5NDY2NTExMmEiLCJpc3MiOiJtZF9wYXQiLCJyZWFkT25seSI6ZmFsc2UsInRva2VuVHlwZSI6InJlYWRfd3JpdGUiLCJpYXQiOjE3NTQ1NTE5ODh9.3cAa_0z5CrGQgU6r4SYlj_oc6yrQOH2UkpdU8fSLgT4"
- name: GOPIE_MOTHERDUCK_DB_NAME
value: "test"
- name: GOPIE_MOTHERDUCK_HELPER_DB_DIR_PATH
value: "./motherduck"
- name: GOPIE_CORS_HANDLED_BY_INGRESS
value: "false"
- name: GOPIE_POSTGRES_HOST
value: "gopie-postgresql"
- name: GOPIE_POSTGRES_PORT
value: "5432"
- name: GOPIE_POSTGRES_DB
value: "gopie"
- name: GOPIE_POSTGRES_USER
value: "postgres"
- name: GOPIE_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: gopie-postgresql
key: postgres-password
- name: GOPIE_POSTGRES_SSLMODE
value: "disable"
- name: GOPIE_AIAGENT_URL
value: "http://chat-server:8003"
- name: GOPIE_ENCRYPTION_KEY
value: "E5B8A0F3C1D9E7B2A5F0C3D8E6B1A4F2"
- name: GOPIE_ENABLED_SERVERS
value: "api"
- name: GOPIE_S3_ACCESS_KEY
value: "IHJWCLQNI113NEQODJXT"
- name: GOPIE_S3_SECRET_KEY
value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
- name: GOPIE_S3_ENDPOINT
value: "https://s3.eu-central-1.wasabisys.com"
- name: GOPIE_S3_SSL
value: "true"
- name: GOPIE_S3_REGION
value: "eu-central-1"
- name: GOPIE_DOWNLOADS_S3_BUCKET
value: "downloads"

⚠️ Potential issue | 🔴 Critical

Duplicated secrets in stateful section - same security concerns apply.

This section duplicates the hardcoded secrets from the deployment section. Beyond the security issue, consider using YAML anchors or a shared configuration pattern to avoid maintaining duplicate values.

🧰 Tools
🪛 Gitleaks (8.30.0)

[high] 242-242: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 250-250: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.

(jwt)

🤖 Prompt for AI Agents
In @helm/gopie/values.yaml around lines 215 - 289, Summary: The stateful env
block duplicates hardcoded secret values (e.g., GOPIE_OPENAI_APIKEY,
GOPIE_MOTHERDUCK_TOKEN, GOPIE_POSTGRES_PASSWORD, GOPIE_S3_SECRET_KEY) which is a
security risk and maintenance burden; replace duplicates with secret references
and/or YAML anchors. Fix: remove hardcoded secret values from the stateful "env"
array and replace them with valueFrom: secretKeyRef entries pointing to the
existing Kubernetes Secret(s) used by the deployment (use the same secret
names/keys as the deployment uses for POSTGRES_PASSWORD etc.), or consolidate
common env entries into a YAML anchor (e.g., &commonEnv) and reference it with
aliases in both deployment and stateful sections so non-secret vars are shared
while secrets remain referenced via secretKeyRef; update env entries for
GOPIE_OPENAI_APIKEY, GOPIE_MOTHERDUCK_TOKEN, GOPIE_S3_SECRET_KEY,
GOPIE_POSTGRES_PASSWORD, and any other GOPIE_* secret vars to use secretKeyRef
instead of hardcoded value.
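
The anchor pattern could be sketched as follows. The top-level section names (`commonEnv`, `deployment`, `stateful`) are illustrative; anchors resolve at YAML parse time, so this works within a single values.yaml document:

```yaml
# Shared, non-secret env entries defined once...
commonEnv: &commonEnv
  - name: GOPIE_POSTGRES_HOST
    value: "gopie-postgresql"
  - name: GOPIE_POSTGRES_PORT
    value: "5432"
  # Secrets stay as references, never inline values.
  - name: GOPIE_POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: gopie-postgresql
        key: postgres-password

# ...and aliased into both components.
deployment:
  env: *commonEnv

stateful:
  env: *commonEnv
```

One caveat: YAML merge keys do not apply to sequences, so an alias reuses the list as-is; component-specific entries would need a separate list appended by the template rather than an extended anchor.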

Comment thread helm/gopie/values.yaml
Comment on lines +504 to +521
- name: COMPANION_AWS_ENDPOINT
value: "https://s3.eu-central-1.wasabisys.com"
- name: COMPANION_AWS_REGION
value: "eu-central-1"
- name: COMPANION_AWS_BUCKET
value: "prod-factly-gopie-sas-new"
- name: COMPANION_AWS_KEY
value: "IHJWCLQNI113NEQODJXT"
- name: COMPANION_AWS_SECRET
value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
- name: COMPANION_DOMAIN
value: "localhost:3020"
- name: COMPANION_PROTOCOL
value: "http"
- name: COMPANION_DATADIR
value: "/"
- name: COMPANION_SELF_ENDPOINT
value: "localhost:3020"

⚠️ Potential issue | 🔴 Critical

AWS credentials exposed in web environment configuration.

Lines 510-513 contain hardcoded AWS access and secret keys for the companion service. Move these to a Kubernetes Secret.

🤖 Prompt for AI Agents
In @helm/gopie/values.yaml around lines 504 - 521, The values.yaml currently
hardcodes AWS credentials via COMPANION_AWS_KEY and COMPANION_AWS_SECRET; remove
those two entries and instead reference a Kubernetes Secret: create a Secret
(e.g., companion-aws-credentials) containing keys aws_access_key_id and
aws_secret_access_key and update the companion deployment/helm chart to set
COMPANION_AWS_KEY and COMPANION_AWS_SECRET using env valueFrom secretKeyRef (or
envFrom for the whole secret) so the chart no longer stores secrets in plain
text; keep other companion variables (COMPANION_AWS_ENDPOINT,
COMPANION_AWS_REGION, COMPANION_AWS_BUCKET, COMPANION_DOMAIN,
COMPANION_PROTOCOL, COMPANION_DATADIR, COMPANION_SELF_ENDPOINT) as non-secret
config.
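
A sketch of the resulting env block, assuming a Secret named `companion-aws-credentials` with the key names suggested above:

```yaml
env:
  - name: COMPANION_AWS_KEY
    valueFrom:
      secretKeyRef:
        name: companion-aws-credentials
        key: aws_access_key_id
  - name: COMPANION_AWS_SECRET
    valueFrom:
      secretKeyRef:
        name: companion-aws-credentials
        key: aws_secret_access_key
  # Non-secret companion settings remain plain values.
  - name: COMPANION_AWS_ENDPOINT
    value: "https://s3.eu-central-1.wasabisys.com"
  - name: COMPANION_AWS_REGION
    value: "eu-central-1"
```

Because the same credentials appear in both the web and companion sections, pointing both at this one Secret also removes the duplication.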

Comment thread helm/gopie/values.yaml
Comment on lines +678 to +730
env:
- name: CHAT_GOPIE_API_ENDPOINT
value: "http://gopie-server:8001"
- name: CHAT_E2B_API_KEY
value: "e2b_d07b6fb3ffab800b95c68b7646ca1d88cce8dd74"
- name: CHAT_LLM_GATEWAY_PROVIDER
value: "openrouter"
- name: CHAT_EMBEDDING_GATEWAY_PROVIDER
value: "openai"
- name: OPENAI_API_KEY
value: ""
- name: CHAT_OPENROUTER_API_KEY
value: "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"
- name: CHAT_OPENROUTER_BASE_URL
value: "https://openrouter.ai/api/v1"
- name: CHAT_FAST_MODEL
value: "google/gemini-2.5-flash"
- name: CHAT_BALANCED_MODEL
value: "google/gemini-2.5-flash"
- name: CHAT_ADVANCED_MODEL
value: "google/gemini-2.5-pro"
- name: CHAT_S3_HOST
value: "http://minio:9000"
- name: CHAT_S3_ACCESS_KEY
value: "minioadmin"
- name: CHAT_S3_SECRET_KEY
value: "minioadmin"
- name: CHAT_S3_BUCKET
value: "gopie"
- name: CHAT_S3_REGION
value: "us-central-1"
- name: CHAT_DEFAULT_LLM_MODEL
value: "google/gemini-2.5-flash"
- name: CHAT_DEFAULT_EMBEDDING_MODEL
value: "text-embedding-3-large"
- name: CHAT_QDRANT_HOST
value: "gopie-qdrant"
- name: CHAT_QDRANT_COLLECTION
value: "dataset_collection_prod"
- name: CHAT_QDRANT_PORT
value: "6333"
- name: CHAT_QDRANT_TOP_K
value: "5"
- name: CHAT_LANGSMITH_TRACING
value: "true"
- name: CHAT_LANGSMITH_ENDPOINT
value: "https://api.smith.langchain.com"
- name: CHAT_LANGSMITH_API_KEY
value: "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"
- name: CHAT_LANGSMITH_PROJECT
value: "gopie-chat-server-local"
- name: CHAT_LANGSMITH_PROMPT
value: "false"

⚠️ Potential issue | 🔴 Critical

Chatserver API keys exposed in plain text.

Sensitive API keys are hardcoded:

  • Line 682: CHAT_E2B_API_KEY
  • Line 690: CHAT_OPENROUTER_API_KEY
  • Line 726: CHAT_LANGSMITH_API_KEY

These should also use secretKeyRef to reference Kubernetes Secrets.
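
For instance, a sketch using the same pattern as above (the Secret name `gopie-chat-secrets` and its key names are assumptions, not from the chart):

```yaml
env:
  - name: CHAT_E2B_API_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-chat-secrets
        key: e2b-api-key
  - name: CHAT_OPENROUTER_API_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-chat-secrets
        key: openrouter-api-key
  - name: CHAT_LANGSMITH_API_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-chat-secrets
        key: langsmith-api-key
```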

Comment thread helm/gopie/values.yaml
Comment on lines +853 to +871
env:
- name: COMPANION_AWS_ENDPOINT
value: "https://s3.eu-central-1.wasabisys.com"
- name: COMPANION_AWS_REGION
value: "eu-central-1"
- name: COMPANION_AWS_BUCKET
value: "prod-factly-gopie-sas-new"
- name: COMPANION_AWS_KEY
value: "IHJWCLQNI113NEQODJXT"
- name: COMPANION_AWS_SECRET
value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
- name: COMPANION_DOMAIN
value: "localhost:3020"
- name: COMPANION_PROTOCOL
value: "http"
- name: COMPANION_DATADIR
value: "/"
- name: COMPANION_SELF_ENDPOINT
value: "localhost:3020"

⚠️ Potential issue | 🔴 Critical

Companion AWS credentials duplicated and exposed.

The same AWS credentials appear here as in the web section (lines 510-513). Use a shared Kubernetes Secret and reference it consistently across components.

🤖 Prompt for AI Agents
In @helm/gopie/values.yaml around lines 853 - 871, The env block in values.yaml
exposes COMPANION_AWS_KEY and COMPANION_AWS_SECRET in plaintext and duplicates
the same credentials used in the web section; replace these literal values by
referencing a shared Kubernetes Secret (e.g., via env.valueFrom.secretKeyRef or
envFrom) and update both the companion and web sections to use the same secret
name and keys (COMPANION_AWS_KEY, COMPANION_AWS_SECRET) so credentials are not
duplicated in the chart and are pulled from one secure Secret.
