📝 Walkthrough

Helm chart update introducing dynamic volume configuration, environment variable defaults, Pod Disruption Budgets across multiple components, Prometheus metrics monitoring via ServiceMonitor, and extensive configuration enhancements including health checks, service customization, and resource controls.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (inconclusive)
Claude finished @rapoler's task (View job)

PR Review: Helm changes
🚨 Critical Security Issues

1. Hardcoded Secrets in values.yaml

Severity: CRITICAL

Multiple sensitive credentials are hardcoded in plaintext in `values.yaml`.

Risk: Credentials exposed in version control, accessible to anyone with repository access.
Recommendation: Move all secrets to Kubernetes Secrets and reference them via `secretKeyRef`.

2. Missing Security Contexts

Severity: HIGH

All pod templates have empty security contexts (`{}`).
Risk: Containers run as root with privileged access.
Recommendation: Set a non-root user, read-only filesystem, and drop capabilities:

```yaml
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1001
  fsGroup: 1001
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
```

🔧 Configuration Issues

3. Resource Limits Missing

Severity: MEDIUM

All components have no resource requests or limits configured.
Recommendation: Set appropriate resource requests and limits for all components.

4. Environment Variable Issues

Severity: MEDIUM

5. Commented Zitadel Dependency

Severity: LOW

Lines 53-56: The Zitadel dependency is commented out but the chart archive is still added to charts/. This creates an inconsistency.
Recommendation: Either enable the dependency properly or remove the binary chart.

📋 Positive Changes

✅ Good Additions
🔍 Minor Issues

6. Template Inconsistencies

7. Missing Namespace References

📊 Overall Assessment

🎯 Required Actions

Must Fix Before Merge:

Should Fix:

Consider:
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
helm/gopie/templates/_pod-server.tpl (2)
203-209: Copy-paste bug: stateful pod uses `deployment.extraVolumeMounts` instead of `stateful.extraVolumeMounts`.

The stateful pod template references `.Values.deployment.extraVolumeMounts` but should use `.Values.stateful.extraVolumeMounts` to honor the stateful component's own configuration.

Proposed fix:

```diff
 volumeMounts:
-  {{- range .Values.deployment.extraVolumeMounts }}
+  {{- range .Values.stateful.extraVolumeMounts }}
   - name: {{ .name }}
     mountPath: {{ .mountPath }}
     subPath: {{ .subPath | default "" }}
     readOnly: {{ .readOnly }}
   {{- end }}
```
244-258: Missing `nindent` in stateful extraVolumes causes malformed YAML.

Unlike the deployment section (lines 104-117), the stateful `extraVolumes` block omits `nindent` when rendering nested volume types. This will produce invalid YAML indentation.

Proposed fix:

```diff
 {{- else if .hostPath }}
 hostPath:
-  {{ toYaml .hostPath }}
+  {{- toYaml .hostPath | nindent 6 }}
 {{- else if .csi }}
 csi:
-  {{- toYaml .csi }}
+  {{- toYaml .csi | nindent 6 }}
 {{- else if .configMap }}
 configMap:
-  {{- toYaml .configMap }}
+  {{- toYaml .configMap | nindent 6 }}
 {{- else if .secret }}
 secret:
-  {{- toYaml .secret }}
+  {{- toYaml .secret | nindent 6 }}
 {{- else if .emptyDir }}
 emptyDir:
-  {{- toYaml .emptyDir }}
+  {{- toYaml .emptyDir | nindent 6 }}
```
🤖 Fix all issues with AI agents
In @helm/gopie/templates/chat-server/pdb.yaml:
- Around line 1-18: If .Values.chatserver.podDisruptionBudget.enabled is true
but neither minAvailable nor maxUnavailable is set the rendered
PodDisruptionBudget spec can be empty and invalid; update the template around
the minAvailable/maxUnavailable logic in the pdb.yaml Helm template to supply a
safe default (e.g. minAvailable: 1) when minAvailable is unset and
maxUnavailable is unset, using Helm's default function or an additional
conditional so that spec always contains either minAvailable or maxUnavailable
(refer to .Values.chatserver.podDisruptionBudget.minAvailable,
.Values.chatserver.podDisruptionBudget.maxUnavailable and
.Values.chatserver.podDisruptionBudget.enabled).
In @helm/gopie/templates/server/pdb.yaml:
- Around line 10-14: The PDB template for server may render an empty spec when
.Values.deployment.podDisruptionBudget.minAvailable and
.Values.deployment.podDisruptionBudget.maxUnavailable are both unset; update the
template (server/pdb.yaml) to provide a safe default fallback (e.g., default
minAvailable: 1 or default maxUnavailable: 1 depending on your desired behavior)
so the spec block is never empty when the PDB is enabled, mirroring the same
fallback logic used in web/pdb.yaml and using the same
.Values.deployment.podDisruptionBudget.* symbols.
In @helm/gopie/templates/server/servicemonitor.yaml:
- Around line 1-17: The ServiceMonitor template is rendered unconditionally and
will break installs without the Prometheus Operator; wrap the ServiceMonitor
manifest in a Helm conditional checking .Values.server.serviceMonitor.enabled
(e.g., if .Values.server.serviceMonitor.enabled) so it only renders when
enabled, and replace the hard-coded scrape settings (interval, scrapeTimeout,
honorLabels and path/port if desired) with values from
.Values.server.serviceMonitor (e.g., .Values.server.serviceMonitor.interval,
.scrapeTimeout, .honorLabels) so they are configurable; also add the
corresponding defaults to values.yaml (server.serviceMonitor.enabled: false and
the interval/scrapeTimeout/honorLabels keys).
In @helm/gopie/templates/web/pdb.yaml:
- Around line 10-14: The PDB template can render an empty spec when
.Values.web.podDisruptionBudget.minAvailable and
.Values.web.podDisruptionBudget.maxUnavailable are both unset; update the logic
in the template that generates the minAvailable/maxUnavailable block
(referencing .Values.web.podDisruptionBudget.minAvailable,
.Values.web.podDisruptionBudget.maxUnavailable and podDisruptionBudget.enabled)
so that either (a) a safe default is provided (for example set minAvailable to 1
when neither value is provided) or (b) the whole PDB resource is not rendered
unless at least one of minAvailable or maxUnavailable is explicitly set;
implement the chosen approach by adding the additional conditional or default
fallback to ensure the PDB spec is never empty.
In @helm/gopie/values.yaml:
- Around line 45-99: The values.yaml currently hardcodes sensitive secrets
(GOPIE_OPENAI_APIKEY, GOPIE_OPENAI_OPTIONS containing Portkey API key,
GOPIE_MOTHERDUCK_TOKEN, GOPIE_ENCRYPTION_KEY, GOPIE_S3_ACCESS_KEY,
GOPIE_S3_SECRET_KEY); replace each plain-text env entry with a valueFrom:
secretKeyRef that points to a Kubernetes Secret (e.g., name: gopie-secrets, key:
openai-api-key / portkey-api-key / motherduck-token / encryption-key /
s3-access-key / s3-secret-key) and update any combined fields like
GOPIE_OPENAI_OPTIONS to build from separate secret keys or a single secret
string so no secrets remain inline in the manifest.
- Around line 504-521: The values.yaml currently hardcodes AWS credentials via
COMPANION_AWS_KEY and COMPANION_AWS_SECRET; remove those two entries and instead
reference a Kubernetes Secret: create a Secret (e.g., companion-aws-credentials)
containing keys aws_access_key_id and aws_secret_access_key and update the
companion deployment/helm chart to set COMPANION_AWS_KEY and
COMPANION_AWS_SECRET using env valueFrom secretKeyRef (or envFrom for the whole
secret) so the chart no longer stores secrets in plain text; keep other
companion variables (COMPANION_AWS_ENDPOINT, COMPANION_AWS_REGION,
COMPANION_AWS_BUCKET, COMPANION_DOMAIN, COMPANION_PROTOCOL, COMPANION_DATADIR,
COMPANION_SELF_ENDPOINT) as non-secret config.
- Around line 853-871: The env block in values.yaml exposes COMPANION_AWS_KEY
and COMPANION_AWS_SECRET in plaintext and duplicates the same credentials used
in the web section; replace these literal values by referencing a shared
Kubernetes Secret (e.g., via env.valueFrom.secretKeyRef or envFrom) and update
both the companion and web sections to use the same secret name and keys
(COMPANION_AWS_KEY, COMPANION_AWS_SECRET) so credentials are not duplicated in
the chart and are pulled from one secure Secret.
- Around line 215-289: Summary: The stateful env block duplicates hardcoded
secret values (e.g., GOPIE_OPENAI_APIKEY, GOPIE_MOTHERDUCK_TOKEN,
GOPIE_POSTGRES_PASSWORD, GOPIE_S3_SECRET_KEY) which is a security risk and
maintenance burden; replace duplicates with secret references and/or YAML
anchors. Fix: remove hardcoded secret values from the stateful "env" array and
replace them with valueFrom: secretKeyRef entries pointing to the existing
Kubernetes Secret(s) used by the deployment (use the same secret names/keys as
the deployment uses for POSTGRES_PASSWORD etc.), or consolidate common env
entries into a YAML anchor (e.g., &commonEnv) and reference it with aliases in
both deployment and stateful sections so non-secret vars are shared while
secrets remain referenced via secretKeyRef; update env entries for
GOPIE_OPENAI_APIKEY, GOPIE_MOTHERDUCK_TOKEN, GOPIE_S3_SECRET_KEY,
GOPIE_POSTGRES_PASSWORD, and any other GOPIE_* secret vars to use secretKeyRef
instead of hardcoded value.
🧹 Nitpick comments (3)
helm/gopie/Chart.yaml (1)
53-57: Consider using the `condition` field instead of commenting out.

Since other dependencies already use the `condition` field pattern (e.g., `condition: postgresql.enabled`), the zitadel dependency could remain uncommented and simply be disabled via `zitadel.enabled: false` in values.yaml. This keeps the dependency declaration visible and follows the existing pattern in this chart.

That said, commenting out is acceptable if zitadel support is being temporarily removed or is not yet ready.
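A minimal sketch of that pattern in Chart.yaml (the version matches the bundled `zitadel-8.6.2.tgz`; the repository URL is an assumption and should be verified):

```yaml
dependencies:
  - name: zitadel
    version: "8.6.2"
    repository: "https://charts.zitadel.com"  # assumed repository URL
    condition: zitadel.enabled
```

With a corresponding `zitadel.enabled: false` default in values.yaml, the dependency stays declared but is not rendered until explicitly enabled.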
helm/gopie/templates/_pod-web.tpl (1)
14-15: Redundant default expressions (pre-existing issue).

These lines use self-referential defaults that have no effect:

```yaml
image: "{{ .Values.web.image.repository | default .Values.web.image.repository }}:..."
imagePullPolicy: {{ .Values.web.image.pullPolicy | default .Values.web.image.pullPolicy }}
```

`| default X` where `X` is the same value being defaulted is a no-op. While not introduced by this PR, consider cleaning this up.

Simplified version:

```diff
-image: "{{ .Values.web.image.repository | default .Values.web.image.repository }}:{{ .Values.web.image.tag | default .Values.web.image.tag | default .Chart.AppVersion }}"
-imagePullPolicy: {{ .Values.web.image.pullPolicy | default .Values.web.image.pullPolicy }}
+image: "{{ .Values.web.image.repository }}:{{ .Values.web.image.tag | default .Chart.AppVersion }}"
+imagePullPolicy: {{ .Values.web.image.pullPolicy }}
```

helm/gopie/values.yaml (1)
768-786: Chatserver health probes are commented out.

Unlike the deployment, stateful, and web components, chatserver lacks active health probes. Consider enabling them for consistent reliability and proper traffic management during rolling updates.
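If enabled, the chatserver block could mirror the probe settings the other components use. A hedged sketch (the `/health` path is an assumption; port 8003 is taken from `GOPIE_AIAGENT_URL` elsewhere in this chart — verify both against the chat server's actual health endpoint):

```yaml
chatserver:
  livenessProbe:
    httpGet:
      path: /health   # assumed endpoint
      port: 8003      # assumed port, matches GOPIE_AIAGENT_URL
    initialDelaySeconds: 15
    periodSeconds: 20
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: /health   # assumed endpoint
      port: 8003
    initialDelaySeconds: 5
    periodSeconds: 10
```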
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`helm/gopie/Chart.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (10)
- helm/gopie/Chart.yaml
- helm/gopie/charts/zitadel-8.6.2.tgz
- helm/gopie/templates/_pod-chat-server.tpl
- helm/gopie/templates/_pod-server.tpl
- helm/gopie/templates/_pod-web.tpl
- helm/gopie/templates/chat-server/pdb.yaml
- helm/gopie/templates/server/pdb.yaml
- helm/gopie/templates/server/servicemonitor.yaml
- helm/gopie/templates/web/pdb.yaml
- helm/gopie/values.yaml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-02T11:48:50.411Z
Learnt from: CR
Repo: factly/gopie PR: 0
File: web/CLAUDE.md:0-0
Timestamp: 2025-12-02T11:48:50.411Z
Learning: Configure environment variables including NEXT_PUBLIC_COMPANION_URL, NEXT_PUBLIC_GOPIE_API_URL, GOPIE_API_URL, and NEXT_PUBLIC_ENABLE_AUTH
Applied to files:
helm/gopie/values.yaml
🪛 Gitleaks (8.30.0)
helm/gopie/values.yaml
[high] 52-52: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
[high] 60-60: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.
(jwt)
[high] 242-242: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
[high] 250-250: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.
(jwt)
🪛 YAMLlint (1.37.1)
helm/gopie/templates/chat-server/pdb.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/gopie/templates/server/pdb.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/gopie/templates/web/pdb.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/gopie/templates/server/servicemonitor.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: claude-review
🔇 Additional comments (4)
helm/gopie/templates/_pod-chat-server.tpl (1)
58-81: Dynamic volume handling looks good.

The conditional branching for various volume types (PVC, hostPath, CSI, configMap, secret, emptyDir) with a sensible emptyDir fallback is well-structured.

One consideration: ensure that volume names in `extraVolumes` match the corresponding names in `extraVolumeMounts` (lines 38-44). If a mount references a volume name that doesn't exist, Kubernetes will reject the pod spec at runtime.

helm/gopie/templates/_pod-web.tpl (1)
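To illustrate the name-matching constraint above, a hedged values fragment where mount and volume share the same `name` (the names here are examples only, not values from this chart):

```yaml
chatserver:
  extraVolumeMounts:
    - name: chat-data        # must match a volume name below
      mountPath: /data
      readOnly: false
  extraVolumes:
    - name: chat-data        # same name as the mount above
      emptyDir: {}
```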
56-79: Consistent volume handling with chat-server template.

The dynamic volume configuration follows the same pattern as `_pod-chat-server.tpl`, which is good for maintainability. The conditional branching and emptyDir fallback are correctly implemented.

helm/gopie/templates/_pod-server.tpl (1)
55-75: Good fallback pattern for environment variables.

The deterministic default environment block provides sensible PostgreSQL and AI agent configuration when custom env is not specified. This ensures the chart works out-of-the-box while allowing full customization.
helm/gopie/values.yaml (1)
143-165: Well-structured PDB and health probe configuration.

The PodDisruptionBudget defaults with `maxUnavailable: 1` and comprehensive health probes with reasonable timeouts provide good baseline reliability settings.
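As a concrete example, the defaults described here correspond to a values fragment like this (a sketch, assuming the `podDisruptionBudget` keys used by the templates in this chart):

```yaml
deployment:
  podDisruptionBudget:
    enabled: true
    maxUnavailable: 1   # at most one pod disrupted voluntarily at a time
```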
helm/gopie/templates/chat-server/pdb.yaml:

```yaml
{{- if .Values.chatserver.podDisruptionBudget.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "chatserver.name" . }}-pdb
  namespace: {{ include "chatserver.namespace" . }}
  labels:
    {{ include "chatserver.labels" . | nindent 4 }}
spec:
  {{- if .Values.chatserver.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.chatserver.podDisruptionBudget.minAvailable }}
  {{- else if .Values.chatserver.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.chatserver.podDisruptionBudget.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels:
      {{ include "chatserver.selectorLabels" . | nindent 6 }}
{{- end }}
```
PDB template structure is correct; static analysis hint is a false positive.
The YAMLlint error is expected since this is a Helm template file, not raw YAML; the `{{- if ... }}` directive is valid Go templating.
However, if podDisruptionBudget.enabled is true but neither minAvailable nor maxUnavailable is configured, the rendered PDB will have an empty spec (missing both fields), which Kubernetes will reject. Consider adding a default or validation.
Suggested fix: add a default minAvailable

```diff
 spec:
   {{- if .Values.chatserver.podDisruptionBudget.minAvailable }}
   minAvailable: {{ .Values.chatserver.podDisruptionBudget.minAvailable }}
   {{- else if .Values.chatserver.podDisruptionBudget.maxUnavailable }}
   maxUnavailable: {{ .Values.chatserver.podDisruptionBudget.maxUnavailable }}
+  {{- else }}
+  minAvailable: 1
   {{- end }}
```

🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
helm/gopie/templates/server/pdb.yaml:

```yaml
{{- if .Values.deployment.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.deployment.podDisruptionBudget.minAvailable }}
{{- else if .Values.deployment.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.deployment.podDisruptionBudget.maxUnavailable }}
{{- end }}
```
Same issue as web PDB: spec may be empty if neither value is configured.
Consider adding a default fallback to ensure a valid PDB spec when enabled. This mirrors the pattern needed in web/pdb.yaml.
Suggested fix:

```diff
 {{- if .Values.deployment.podDisruptionBudget.minAvailable }}
 minAvailable: {{ .Values.deployment.podDisruptionBudget.minAvailable }}
 {{- else if .Values.deployment.podDisruptionBudget.maxUnavailable }}
 maxUnavailable: {{ .Values.deployment.podDisruptionBudget.maxUnavailable }}
+{{- else }}
+maxUnavailable: 1
 {{- end }}
```
helm/gopie/templates/server/servicemonitor.yaml:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "server.name" . }}
  namespace: {{ include "gopie.namespace" . }}
  labels:
    {{- include "server.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "server.selectorLabels" . | nindent 6 }}
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s
      scrapeTimeout: 10s
      honorLabels: true
```
ServiceMonitor should be conditional to avoid failures when Prometheus Operator is not installed.
Unlike the PDB templates which check .Values.*.podDisruptionBudget.enabled, this ServiceMonitor is unconditionally rendered. If the Prometheus Operator CRDs (monitoring.coreos.com/v1) are not installed in the cluster, the Helm deployment will fail.
Consider wrapping this in a conditional and making the scrape settings configurable.
Suggested fix:

```diff
+{{- if .Values.server.serviceMonitor.enabled }}
 apiVersion: monitoring.coreos.com/v1
 kind: ServiceMonitor
 metadata:
   name: {{ include "server.name" . }}
   namespace: {{ include "gopie.namespace" . }}
   labels:
     {{- include "server.labels" . | nindent 4 }}
 spec:
   selector:
     matchLabels:
       {{- include "server.selectorLabels" . | nindent 6 }}
   endpoints:
     - port: metrics
       path: /metrics
-      interval: 30s
-      scrapeTimeout: 10s
+      interval: {{ .Values.server.serviceMonitor.interval | default "30s" }}
+      scrapeTimeout: {{ .Values.server.serviceMonitor.scrapeTimeout | default "10s" }}
       honorLabels: true
+{{- end }}
```

Also add the corresponding values in values.yaml:

```yaml
server:
  serviceMonitor:
    enabled: false
    interval: 30s
    scrapeTimeout: 10s
```

🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
helm/gopie/templates/web/pdb.yaml:

```yaml
{{- if .Values.web.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.web.podDisruptionBudget.minAvailable }}
{{- else if .Values.web.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.web.podDisruptionBudget.maxUnavailable }}
{{- end }}
```
PDB spec may be empty if neither minAvailable nor maxUnavailable is configured.
When podDisruptionBudget.enabled is true but both minAvailable and maxUnavailable are unset, the rendered PDB will have no disruption constraint, which is invalid. Add a default or require at least one value.
Suggested fix with default:

```diff
 {{- if .Values.web.podDisruptionBudget.minAvailable }}
 minAvailable: {{ .Values.web.podDisruptionBudget.minAvailable }}
 {{- else if .Values.web.podDisruptionBudget.maxUnavailable }}
 maxUnavailable: {{ .Values.web.podDisruptionBudget.maxUnavailable }}
+{{- else }}
+maxUnavailable: 1
 {{- end }}
```
helm/gopie/values.yaml (deployment env):

```yaml
- name: GOPIE_OPENAI_APIKEY
  value: "bRUBnBaNaOg8Qou7B7jVHsuIVsc="
- name: GOPIE_OPENAI_BASEURL
  value: "https://api.portkey.ai/v1"
- name: GOPIE_OPENAI_MODEL
  value: "o3-mini"
- name: GOPIE_OPENAI_OPTIONS
  value: "x-portkey-virtual-key=openai-test-11e8b7,x-portkey-api-key=bRUBnBaNaOg8Qou7B7jVHsuIVsc="
- name: GOPIE_DUCKDB_PATH
  value: "./dataful/gopie.duckdb"
- name: GOPIE_OLAPDB_ACCESS_MODE
  value: "read_write"
- name: GOPIE_OLAPDB_DBTYPE
  value: "motherduck"
- name: GOPIE_MOTHERDUCK_TOKEN
  value: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFrc2hpdGgua2F0a3VyaUBmYWN0bHkuaW4iLCJzZXNzaW9uIjoiYWtzaGl0aC5rYXRrdXJpLmZhY3RseS5pbiIsInBhdCI6InVXY3hLakNuVVhvQ2RYd3lUSFdjZ1ZnUHpmaTFId2NjdWxSWkFhNmUyMVEiLCJ1c2VySWQiOiJhYTgwZmU0My1kNTdmLTRlNTQtOGRkNS1iNzA5NDY2NTExMmEiLCJpc3MiOiJtZF9wYXQiLCJyZWFkT25seSI6ZmFsc2UsInRva2VuVHlwZSI6InJlYWRfd3JpdGUiLCJpYXQiOjE3NTQ1NTE5ODh9.3cAa_0z5CrGQgU6r4SYlj_oc6yrQOH2UkpdU8fSLgT4"
- name: GOPIE_MOTHERDUCK_DB_NAME
  value: "test"
- name: GOPIE_MOTHERDUCK_HELPER_DB_DIR_PATH
  value: "./motherduck"
- name: GOPIE_CORS_HANDLED_BY_INGRESS
  value: "false"
- name: GOPIE_POSTGRES_HOST
  value: "gopie-postgresql"
- name: GOPIE_POSTGRES_PORT
  value: "5432"
- name: GOPIE_POSTGRES_DB
  value: "gopie"
- name: GOPIE_POSTGRES_USER
  value: "postgres"
- name: GOPIE_POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: gopie-postgresql
      key: postgres-password
- name: GOPIE_POSTGRES_SSLMODE
  value: "disable"
- name: GOPIE_AIAGENT_URL
  value: "http://chat-server:8003"
- name: GOPIE_ENCRYPTION_KEY
  value: "E5B8A0F3C1D9E7B2A5F0C3D8E6B1A4F2"
- name: GOPIE_ENABLED_SERVERS
  value: "api"
- name: GOPIE_S3_ACCESS_KEY
  value: "IHJWCLQNI113NEQODJXT"
- name: GOPIE_S3_SECRET_KEY
  value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
- name: GOPIE_S3_ENDPOINT
  value: "https://s3.eu-central-1.wasabisys.com"
- name: GOPIE_S3_SSL
  value: "true"
- name: GOPIE_S3_REGION
  value: "eu-central-1"
- name: GOPIE_DOWNLOADS_S3_BUCKET
  value: "downloads"
```
CRITICAL: Secrets and API keys hardcoded in plain text.

Multiple sensitive credentials are exposed in this values file:

- Line 46: `GOPIE_OPENAI_APIKEY`
- Line 52: Portkey API key in `GOPIE_OPENAI_OPTIONS`
- Lines 59-60: JWT token (`GOPIE_MOTHERDUCK_TOKEN`)
- Line 84: Encryption key (`GOPIE_ENCRYPTION_KEY`)
- Lines 88-91: S3 access/secret keys

These secrets will be committed to version control and visible in rendered manifests. Use `secretKeyRef` to reference Kubernetes Secrets instead.
Example: reference secrets properly

```yaml
env:
  - name: GOPIE_OPENAI_APIKEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: openai-api-key
  - name: GOPIE_MOTHERDUCK_TOKEN
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: motherduck-token
  - name: GOPIE_ENCRYPTION_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: encryption-key
  - name: GOPIE_S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: s3-access-key
  - name: GOPIE_S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets
        key: s3-secret-key
```

🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 52-52: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
[high] 60-60: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.
(jwt)
helm/gopie/values.yaml (stateful env):

```yaml
env:
  - name: POSTGRES_USER
    value: "postgres"
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: gopie-postgresql
        key: postgres-password
  - name: POSTGRES_DB
    value: "gopie"
  - name: POSTGRES_HOST
    value: "gopie-postgresql"
  - name: POSTGRES_PORT
    value: "5432"
  - name: GOPIE_LOGGER_LEVEL
    value: "info"
  - name: GOPIE_LOGGER_FILE
    value: "gopie.log"
  - name: GOPIE_LOGGER_MODE
    value: "dev"
  - name: GOPIE_OPENAI_APIKEY
    value: "bRUBnBaNaOg8Qou7B7jVHsuIVsc="
  - name: GOPIE_OPENAI_BASEURL
    value: "https://api.portkey.ai/v1"
  - name: GOPIE_OPENAI_MODEL
    value: "o3-mini"
  - name: GOPIE_OPENAI_OPTIONS
    value: "x-portkey-virtual-key=openai-test-11e8b7,x-portkey-api-key=bRUBnBaNaOg8Qou7B7jVHsuIVsc="
  - name: GOPIE_DUCKDB_PATH
    value: "./dataful/gopie.duckdb"
  - name: GOPIE_OLAPDB_ACCESS_MODE
    value: "read_write"
  - name: GOPIE_OLAPDB_DBTYPE
    value: "motherduck"
  - name: GOPIE_MOTHERDUCK_TOKEN
    value: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFrc2hpdGgua2F0a3VyaUBmYWN0bHkuaW4iLCJzZXNzaW9uIjoiYWtzaGl0aC5rYXRrdXJpLmZhY3RseS5pbiIsInBhdCI6InVXY3hLakNuVVhvQ2RYd3lUSFdjZ1ZnUHpmaTFId2NjdWxSWkFhNmUyMVEiLCJ1c2VySWQiOiJhYTgwZmU0My1kNTdmLTRlNTQtOGRkNS1iNzA5NDY2NTExMmEiLCJpc3MiOiJtZF9wYXQiLCJyZWFkT25seSI6ZmFsc2UsInRva2VuVHlwZSI6InJlYWRfd3JpdGUiLCJpYXQiOjE3NTQ1NTE5ODh9.3cAa_0z5CrGQgU6r4SYlj_oc6yrQOH2UkpdU8fSLgT4"
  - name: GOPIE_MOTHERDUCK_DB_NAME
    value: "test"
  - name: GOPIE_MOTHERDUCK_HELPER_DB_DIR_PATH
    value: "./motherduck"
  - name: GOPIE_CORS_HANDLED_BY_INGRESS
    value: "false"
  - name: GOPIE_POSTGRES_HOST
    value: "gopie-postgresql"
  - name: GOPIE_POSTGRES_PORT
    value: "5432"
  - name: GOPIE_POSTGRES_DB
    value: "gopie"
  - name: GOPIE_POSTGRES_USER
    value: "postgres"
  - name: GOPIE_POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: gopie-postgresql
        key: postgres-password
  - name: GOPIE_POSTGRES_SSLMODE
    value: "disable"
  - name: GOPIE_AIAGENT_URL
    value: "http://chat-server:8003"
  - name: GOPIE_ENCRYPTION_KEY
    value: "E5B8A0F3C1D9E7B2A5F0C3D8E6B1A4F2"
  - name: GOPIE_ENABLED_SERVERS
    value: "api"
  - name: GOPIE_S3_ACCESS_KEY
    value: "IHJWCLQNI113NEQODJXT"
  - name: GOPIE_S3_SECRET_KEY
    value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
  - name: GOPIE_S3_ENDPOINT
    value: "https://s3.eu-central-1.wasabisys.com"
  - name: GOPIE_S3_SSL
    value: "true"
  - name: GOPIE_S3_REGION
    value: "eu-central-1"
  - name: GOPIE_DOWNLOADS_S3_BUCKET
    value: "downloads"
```
Duplicated secrets in stateful section - same security concerns apply.
This section duplicates the hardcoded secrets from the deployment section. Beyond the security issue, consider using YAML anchors or a shared configuration pattern to avoid maintaining duplicate values.
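A minimal sketch of the anchor approach (the anchor name `commonEnv` is an arbitrary choice; secret names/keys follow the chart's existing `gopie-postgresql` secret):

```yaml
commonEnv: &commonEnv
  - name: GOPIE_POSTGRES_HOST
    value: "gopie-postgresql"
  - name: GOPIE_POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: gopie-postgresql
        key: postgres-password

deployment:
  env: *commonEnv

stateful:
  env: *commonEnv
```

Note that a YAML alias reuses the list as-is; YAML has no sequence merge, so any entries that differ between the deployment and stateful sections would still need to be listed separately.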
🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 242-242: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
[high] 250-250: Uncovered a JSON Web Token, which may lead to unauthorized access to web applications and sensitive user data.
(jwt)
🤖 Prompt for AI Agents
In @helm/gopie/values.yaml around lines 215 - 289, Summary: The stateful env
block duplicates hardcoded secret values (e.g., GOPIE_OPENAI_APIKEY,
GOPIE_MOTHERDUCK_TOKEN, GOPIE_POSTGRES_PASSWORD, GOPIE_S3_SECRET_KEY) which is a
security risk and maintenance burden; replace duplicates with secret references
and/or YAML anchors. Fix: remove hardcoded secret values from the stateful "env"
array and replace them with valueFrom: secretKeyRef entries pointing to the
existing Kubernetes Secret(s) used by the deployment (use the same secret
names/keys as the deployment uses for POSTGRES_PASSWORD etc.), or consolidate
common env entries into a YAML anchor (e.g., &commonEnv) and reference it with
aliases in both deployment and stateful sections so non-secret vars are shared
while secrets remain referenced via secretKeyRef; update env entries for
GOPIE_OPENAI_APIKEY, GOPIE_MOTHERDUCK_TOKEN, GOPIE_S3_SECRET_KEY,
GOPIE_POSTGRES_PASSWORD, and any other GOPIE_* secret vars to use secretKeyRef
instead of hardcoded value.
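As a sketch of the anchor-plus-`secretKeyRef` pattern the prompt describes — assuming a Secret named `gopie-secrets` exists (a hypothetical name; any existing Secret with the right keys would do), and noting that Helm resolves YAML anchors within a single values file:

```yaml
# Shared env defined once via a YAML anchor; secrets pulled by reference.
commonEnv: &commonEnv
  - name: GOPIE_POSTGRES_HOST
    value: "gopie-postgresql"
  - name: GOPIE_POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: gopie-postgresql   # same Secret the deployment already uses
        key: postgres-password
  - name: GOPIE_S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: gopie-secrets      # hypothetical Secret holding S3/API keys
        key: s3-secret-key

deployment:
  env: *commonEnv                # aliased in both sections

stateful:
  env: *commonEnv
```

One caveat: anchors are expanded when the file is parsed, so values supplied via `--set` or a second `-f` file merge against the expanded result, not against the anchor.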
```yaml
- name: COMPANION_AWS_ENDPOINT
  value: "https://s3.eu-central-1.wasabisys.com"
- name: COMPANION_AWS_REGION
  value: "eu-central-1"
- name: COMPANION_AWS_BUCKET
  value: "prod-factly-gopie-sas-new"
- name: COMPANION_AWS_KEY
  value: "IHJWCLQNI113NEQODJXT"
- name: COMPANION_AWS_SECRET
  value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
- name: COMPANION_DOMAIN
  value: "localhost:3020"
- name: COMPANION_PROTOCOL
  value: "http"
- name: COMPANION_DATADIR
  value: "/"
- name: COMPANION_SELF_ENDPOINT
  value: "localhost:3020"
```
AWS credentials exposed in web environment configuration.
Lines 510-513 contain hardcoded AWS access and secret keys for the companion service. Move these to a Kubernetes Secret.
🤖 Prompt for AI Agents
In @helm/gopie/values.yaml around lines 504 - 521, The values.yaml currently
hardcodes AWS credentials via COMPANION_AWS_KEY and COMPANION_AWS_SECRET; remove
those two entries and instead reference a Kubernetes Secret: create a Secret
(e.g., companion-aws-credentials) containing keys aws_access_key_id and
aws_secret_access_key and update the companion deployment/helm chart to set
COMPANION_AWS_KEY and COMPANION_AWS_SECRET using env valueFrom secretKeyRef (or
envFrom for the whole secret) so the chart no longer stores secrets in plain
text; keep other companion variables (COMPANION_AWS_ENDPOINT,
COMPANION_AWS_REGION, COMPANION_AWS_BUCKET, COMPANION_DOMAIN,
COMPANION_PROTOCOL, COMPANION_DATADIR, COMPANION_SELF_ENDPOINT) as non-secret
config.
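One way to realize that fix, sketched under the assumption that the Secret is created out-of-band (the names below are the ones suggested in the prompt):

```yaml
# Secret created once, outside the chart (or via an external-secrets operator):
apiVersion: v1
kind: Secret
metadata:
  name: companion-aws-credentials
type: Opaque
stringData:
  aws_access_key_id: "REPLACE_ME"
  aws_secret_access_key: "REPLACE_ME"
---
# Matching env entries in values.yaml, replacing the hardcoded pair:
env:
  - name: COMPANION_AWS_KEY
    valueFrom:
      secretKeyRef:
        name: companion-aws-credentials
        key: aws_access_key_id
  - name: COMPANION_AWS_SECRET
    valueFrom:
      secretKeyRef:
        name: companion-aws-credentials
        key: aws_secret_access_key
```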
```yaml
env:
  - name: CHAT_GOPIE_API_ENDPOINT
    value: "http://gopie-server:8001"
  - name: CHAT_E2B_API_KEY
    value: "e2b_d07b6fb3ffab800b95c68b7646ca1d88cce8dd74"
  - name: CHAT_LLM_GATEWAY_PROVIDER
    value: "openrouter"
  - name: CHAT_EMBEDDING_GATEWAY_PROVIDER
    value: "openai"
  - name: OPENAI_API_KEY
    value: ""
  - name: CHAT_OPENROUTER_API_KEY
    value: "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"
  - name: CHAT_OPENROUTER_BASE_URL
    value: "https://openrouter.ai/api/v1"
  - name: CHAT_FAST_MODEL
    value: "google/gemini-2.5-flash"
  - name: CHAT_BALANCED_MODEL
    value: "google/gemini-2.5-flash"
  - name: CHAT_ADVANCED_MODEL
    value: "google/gemini-2.5-pro"
  - name: CHAT_S3_HOST
    value: "http://minio:9000"
  - name: CHAT_S3_ACCESS_KEY
    value: "minioadmin"
  - name: CHAT_S3_SECRET_KEY
    value: "minioadmin"
  - name: CHAT_S3_BUCKET
    value: "gopie"
  - name: CHAT_S3_REGION
    value: "us-central-1"
  - name: CHAT_DEFAULT_LLM_MODEL
    value: "google/gemini-2.5-flash"
  - name: CHAT_DEFAULT_EMBEDDING_MODEL
    value: "text-embedding-3-large"
  - name: CHAT_QDRANT_HOST
    value: "gopie-qdrant"
  - name: CHAT_QDRANT_COLLECTION
    value: "dataset_collection_prod"
  - name: CHAT_QDRANT_PORT
    value: "6333"
  - name: CHAT_QDRANT_TOP_K
    value: "5"
  - name: CHAT_LANGSMITH_TRACING
    value: "true"
  - name: CHAT_LANGSMITH_ENDPOINT
    value: "https://api.smith.langchain.com"
  - name: CHAT_LANGSMITH_API_KEY
    value: "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"
  - name: CHAT_LANGSMITH_PROJECT
    value: "gopie-chat-server-local"
  - name: CHAT_LANGSMITH_PROMPT
    value: "false"
```
Chatserver API keys exposed in plain text.

Sensitive API keys are hardcoded:

- Line 682: `CHAT_E2B_API_KEY`
- Line 690: `CHAT_OPENROUTER_API_KEY`
- Line 726: `CHAT_LANGSMITH_API_KEY`

These should also use `valueFrom: secretKeyRef` to reference Kubernetes Secrets.
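A minimal sketch of what those three entries could look like, assuming a Secret named `chatserver-secrets` (hypothetical) that holds the keys:

```yaml
env:
  - name: CHAT_E2B_API_KEY
    valueFrom:
      secretKeyRef:
        name: chatserver-secrets   # hypothetical Secret name
        key: e2b-api-key
  - name: CHAT_OPENROUTER_API_KEY
    valueFrom:
      secretKeyRef:
        name: chatserver-secrets
        key: openrouter-api-key
  - name: CHAT_LANGSMITH_API_KEY
    valueFrom:
      secretKeyRef:
        name: chatserver-secrets
        key: langsmith-api-key
```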
```yaml
env:
  - name: COMPANION_AWS_ENDPOINT
    value: "https://s3.eu-central-1.wasabisys.com"
  - name: COMPANION_AWS_REGION
    value: "eu-central-1"
  - name: COMPANION_AWS_BUCKET
    value: "prod-factly-gopie-sas-new"
  - name: COMPANION_AWS_KEY
    value: "IHJWCLQNI113NEQODJXT"
  - name: COMPANION_AWS_SECRET
    value: "HeH6eHrfdmS0i55C6UZz0KgJDvp9ODjEfMB4YVVh"
  - name: COMPANION_DOMAIN
    value: "localhost:3020"
  - name: COMPANION_PROTOCOL
    value: "http"
  - name: COMPANION_DATADIR
    value: "/"
  - name: COMPANION_SELF_ENDPOINT
    value: "localhost:3020"
```
Companion AWS credentials duplicated and exposed.
The same AWS credentials appear here as in the web section (lines 510-513). Use a shared Kubernetes Secret and reference it consistently across components.
🤖 Prompt for AI Agents
In @helm/gopie/values.yaml around lines 853 - 871, The env block in values.yaml
exposes COMPANION_AWS_KEY and COMPANION_AWS_SECRET in plaintext and duplicates
the same credentials used in the web section; replace these literal values by
referencing a shared Kubernetes Secret (e.g., via env.valueFrom.secretKeyRef or
envFrom) and update both the companion and web sections to use the same secret
name and keys (COMPANION_AWS_KEY, COMPANION_AWS_SECRET) so credentials are not
duplicated in the chart and are pulled from one secure Secret.
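If the chart templates render an `envFrom` value (an assumption about this chart), sharing the Secret becomes a one-liner per component. Note that `envFrom` uses the Secret's keys verbatim as variable names, so the keys would have to be named exactly `COMPANION_AWS_KEY` and `COMPANION_AWS_SECRET`:

```yaml
web:
  envFrom:
    - secretRef:
        name: companion-aws-credentials   # hypothetical shared Secret

companion:
  envFrom:
    - secretRef:
        name: companion-aws-credentials
```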