
Add cluster-proxy to charts-config.yaml #3220

Open
dislbenn wants to merge 5 commits into stolostron:main from dislbenn:add-cluster-proxy-charts-config

Conversation

Collaborator

@dislbenn dislbenn commented Apr 28, 2026

Summary

  • Add cluster-proxy component to charts-config.yaml for bundle automation
  • Configure chart automation to pull from stolostron/cluster-proxy repo
  • Set branch to backplane-5.0
  • Configure image mapping for cluster-proxy

Test plan

  • Verify YAML syntax is valid
  • Verify bundle automation workflow processes cluster-proxy chart
  • Verify image mappings resolve correctly
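The first test-plan item can be scripted; below is a minimal sketch using PyYAML (assumed to be available in the dev environment), with an inline snippet standing in for the real charts-config.yaml entry, since the file's actual schema isn't shown in this conversation:

```python
import yaml  # PyYAML; assumed to be installed in the dev environment

# Stand-in for the new charts-config.yaml entry; field names are illustrative,
# not the file's actual schema.
snippet = """
- name: cluster-proxy
  repo: stolostron/cluster-proxy
  branch: backplane-5.0
"""

entries = yaml.safe_load(snippet)  # raises yaml.YAMLError on bad syntax
assert entries[0]["branch"] == "backplane-5.0"
print("YAML syntax OK")
```

In CI, the same check can run against the real file by passing an open file handle to yaml.safe_load.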

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Introduced cluster-proxy component as the new standard proxy solution in MCE 2.11, replacing cluster-proxy-addon with improved configuration and deployment capabilities
    • Added comprehensive Helm chart and Kubernetes resource templates for cluster-proxy installation and lifecycle management
  • Refactor

    • Deprecated cluster-proxy-addon component; users should migrate to the new cluster-proxy component for continued support

- Add cluster-proxy component from stolostron/cluster-proxy repo
- Configure chart automation for backplane-5.0 branch
- Set image mapping for cluster-proxy to cluster_proxy

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
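Based on the bullets above, the new charts-config.yaml entry presumably looks something like the following; the exact field names are an assumption drawn from the PR description, since the file itself isn't shown here:

```yaml
# Hypothetical shape of the new entry in hack/bundle-automation/charts-config.yaml
- name: cluster-proxy
  repo: stolostron/cluster-proxy
  branch: backplane-5.0
  imageMappings:
    cluster-proxy: cluster_proxy
```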

coderabbitai Bot commented Apr 28, 2026

📝 Walkthrough

Walkthrough

This pull request introduces a new cluster-proxy MCE component identifier and corresponding CRD directory, replacing the deprecated cluster-proxy-addon naming throughout the codebase. Changes span API constants, controller reconciliation logic, Helm chart templates, bundle automation configuration, and CRD definitions.

Changes

  • API and Constants (api/v1/multiclusterengine_methods.go, pkg/toggle/toggle.go, pkg/utils/utils.go): Added the ClusterProxy component identifier and ClusterProxyCRDDir constant; deprecated ClusterProxyAddonCRDDir; extended the ComponentCRDDirectories utility to map the new component to its CRD directory.
  • Controller Reconciliation Logic (controllers/backplaneconfig_controller.go, controllers/toggle_components.go): Updated the controller to conditionally invoke ensureClusterProxy/ensureNoClusterProxy (renamed from the ensureClusterProxyAddon methods); shifted error reporting and chart resolution to the new component; modified internal engine component CR creation/deletion to use backplanev1.ClusterProxy.
  • Bundle Automation Configuration (hack/bundle-automation/charts-config.yaml, hack/bundle-automation/chart-values/cluster-proxy/overwriteValues.yaml): Added a new bundle-automation component entry to fetch and package the cluster-proxy Helm chart from the stolostron/cluster-proxy repository; defined Helm override values (replicas, image location, feature flags for service/kube-api proxies and impersonation).
  • Helm Chart Metadata and Templates (pkg/templates/charts/toggle/cluster-proxy/Chart.yaml, pkg/templates/charts/toggle/cluster-proxy/values.yaml, pkg/templates/charts/toggle/cluster-proxy/templates/*.yaml): Introduced a complete new Helm chart for the cluster-proxy addon with chart metadata, default values, and 11 resource templates, including deployments (manager and user-server), a service account, RBAC rules/bindings, ClusterManagementAddOn, Placement, ManagedClusterSetBinding, and corresponding Service definitions.
  • CRD Definition (pkg/templates/crds/cluster-proxy/managedproxyconfigurations.yaml): Added a new ManagedProxyConfiguration CRD (cluster-scoped, proxy.open-cluster-management.io/v1alpha1) with a structured spec for proxy authentication, deployment configuration, server/agent behavior, and status tracking via conditions and observed generation.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Poem

🐰 The proxy hops to a new name so bright,
From addon to proxy, a cleaner sight!
New charts and templates now fill the way,
MCE 2.11 brings fresh new day. 🎩✨

🚥 Pre-merge checks: ✅ 5 passed

  • Title check: ✅ Passed. The title 'Add cluster-proxy to charts-config.yaml' is concise and accurately describes the primary change in the PR, though it understates the broader scope of the changeset, which also includes new charts, CRDs, and controller logic updates.
  • Description check: ✅ Passed. The PR description provides a brief summary of changes and a test plan, but lacks a detailed explanation of the changes and is missing several sections from the template, such as related issues, screenshots, and checklist items.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
  • Linked Issues check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Check skipped because no linked issues were found for this pull request.


@openshift-ci openshift-ci Bot requested review from gparvin and ngraham20 April 28, 2026 17:32
Contributor

openshift-ci Bot commented Apr 28, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dislbenn

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

dislbenn and others added 2 commits April 28, 2026 13:33
- Run make regenerate-charts COMPONENT=cluster-proxy
- Add cluster-proxy chart templates from stolostron/cluster-proxy backplane-5.0 branch
- Include manager deployment with POD_NAMESPACE env var
- Add ClusterManagementAddOn, RBAC, and Placement resources
- Add ManagedProxyConfiguration CRD

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
- Add ClusterProxy constant and chart/CRD directory paths
- Mark ClusterProxyAddon as deprecated (moved to ClusterProxy in MCE 2.11)
- Generate cluster-proxy chart with user-deployment and user-service enabled
- Create custom helm values in hack/bundle-automation/chart-values/cluster-proxy
- Add ClusterProxy mappings to controller chart path resolution

This maintains backward compatibility by keeping ClusterProxyAddon constants
while introducing ClusterProxy for future migration.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
@dislbenn dislbenn force-pushed the add-cluster-proxy-charts-config branch from 6568faf to bac1091 Compare April 28, 2026 18:05
- Rename ensureClusterProxyAddon → ensureClusterProxy
- Rename ensureNoClusterProxyAddon → ensureNoClusterProxy
- Update reconciler functions to use backplanev1.ClusterProxy constant
- Add reconciler logic to handle both ClusterProxyAddon (deprecated) and ClusterProxy
- Both components call same functions - ensures backward compatibility during migration

Migration path:
- Existing MCE CRs with ClusterProxyAddon enabled will continue to work
- New ClusterProxy chart (pkg/templates/charts/toggle/cluster-proxy) deployed
- ClusterProxyAddon marked deprecated, will be removed in future release

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
@dislbenn dislbenn force-pushed the add-cluster-proxy-charts-config branch from bac1091 to 7529135 Compare April 28, 2026 18:06
Contributor

openshift-ci Bot commented Apr 28, 2026

@dislbenn: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

  • Test: ci/prow/test-unit (commit a1786d3, required: true). Rerun command: /test test-unit

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
controllers/toggle_components.go (1)

1692-1731: ⚠️ Potential issue | 🟠 Major

Key the missing-CRD status under cluster-proxy.

This still records the failure under cluster-proxy-addon, which leaves the migration half-renamed and reports the wrong component in MCE status. Update the key to the new component name.

🛠️ Suggested fix
-                r.StatusManager.AddComponent(clusterManagementAddOnNotFoundStatus("cluster-proxy-addon", mce.Spec.TargetNamespace))
+                r.StatusManager.AddComponent(clusterManagementAddOnNotFoundStatus("cluster-proxy", mce.Spec.TargetNamespace))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controllers/toggle_components.go` around lines 1692 - 1731, The missing-CRD
error is being recorded under the old key "cluster-proxy-addon"; update the call
in the applyTemplate loop so the status is keyed to the new component name
"cluster-proxy" instead. In the block inside the for _, template := range
templates loop (where missingCRDErrorOccured is set and
r.StatusManager.AddComponent(clusterManagementAddOnNotFoundStatus(...)) is
invoked), change the first argument to clusterManagementAddOnNotFoundStatus to
use "cluster-proxy" (keeping mce.Spec.TargetNamespace unchanged) so the error is
reported on the cluster-proxy component in the MCE status.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/v1/multiclusterengine_methods.go`:
- Around line 42-43: The new ClusterProxy constant is declared but not added to
the component lists used for validation/feature gating; update the collections
that enumerate supported components by adding "ClusterProxy" (and if needed keep
the legacy "ClusterProxyAddon") to AllComponents and to MCEComponents so the
component is recognized by validation and component-gated logic; locate the
component registry variables/arrays (e.g., AllComponents, MCEComponents) and
append ClusterProxy where other components like ConsoleMCE are listed.

In `@controllers/backplaneconfig_controller.go`:
- Around line 1288-1311: The external-management check currently uses || with
negations which allows reconciliation when only one alias is externally managed;
change the condition in the ClusterProxy reconciliation block to require both
aliases be not externally managed (use && between the two
!r.isComponentExternallyManaged(...) calls) so reconciliation runs only when
neither backplanev1.ClusterProxyAddon nor backplanev1.ClusterProxy is externally
managed; apply the same logic to the finalizer/cleanup branch so the finalizer
only acts when neither alias is externally managed, and keep calls to
ensureClusterProxy and ensureNoClusterProxy unchanged except for their guarded
condition.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-manager-deployment.yaml`:
- Around line 68-69: The chart templates reference .Values.global.pullPolicy in
the cluster-proxy Deployment (imagePullPolicy in
cluster-proxy-addon-manager-deployment.yaml) but that value is not defined,
which can render an empty imagePullPolicy; fix this by either adding a default
pullPolicy under global in the chart values.yaml (e.g., global.pullPolicy:
IfNotPresent) or by defaulting inline in the template (use a fallback when
referencing .Values.global.pullPolicy) so imagePullPolicy always emits a valid
Kubernetes value; update the values.yaml or the template reference accordingly
(look for .Values.global.pullPolicy in
cluster-proxy-addon-manager-deployment.yaml).

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-deployment.yaml`:
- Line 46: The deployment template currently hard-codes the agent image flag as
"--agent-image=quay.io/open-cluster-management/cluster-proxy:v5.0"; replace that
literal with a Helm value (e.g. use .Values.agentImage or .Values.image.agent)
so the flag becomes "--agent-image={{ .Values.agentImage }}" (and add a sensible
default in values.yaml), update any README/values docs to mention the new key,
and ensure the template escapes/quotes the value if necessary so the user-server
will advertise the configurable image to agents.
- Line 63: The imagePullPolicy field is templated from .Values.global.pullPolicy
but that value isn't defined in this chart so Helm renders an empty policy;
update the two occurrences where imagePullPolicy uses .Values.global.pullPolicy
to provide a Helm fallback (use the default function) so a sane default like
"IfNotPresent" is applied when global.pullPolicy is missing, ensuring
imagePullPolicy is never empty for the cluster-proxy deployment template.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-managedproxyconfiguration.yaml`:
- Around line 13-19: The proxyServer.namespace field is hard-coded to
PLACEHOLDER_NAMESPACE; update the template so the CR uses the actual release
namespace (e.g., replace PLACEHOLDER_NAMESPACE with the chart template value for
the namespace such as {{ .Release.Namespace }} or the appropriate Helm/chart
variable) or remove the namespace field entirely if it is populated elsewhere,
ensuring you edit the proxyServer.namespace entry in the
cluster-proxy-managedproxyconfiguration.yaml template.

In `@pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-clusterrole.yaml`:
- Around line 6-194: The ClusterRole addon-manager-clusterrole.yaml is overly
permissive—wildcard verbs are granted for managedproxyconfigurations, secrets,
configmaps, and signers (and configmap/secret rules are duplicated); inspect the
addon manager controller code (controllers handling managedproxyconfigurations,
managedclusteraddons, signer usage, and manifestworks) to determine exact verbs
needed (e.g., get/list/watch, create/update/patch, or status/finalizer-specific
actions) and replace '*' with the minimal verb sets for resources like
managedproxyconfigurations, secrets, configmaps, and signers, remove duplicated
rules, and restrict resourceNames where possible (e.g., signers entries) to
narrow the blast radius.

In `@pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-role.yaml`:
- Around line 7-39: The Role currently uses wildcard verbs ('*') for services,
serviceaccounts, deployments/deployments/scale and leases; replace those '*'
entries with minimal verbs needed by addon-manager: for services and
serviceaccounts use ["get","list","watch","create","update","patch"] (add
"delete" only if deletion is required), for deployments use
["get","list","watch","create","update","patch"] and for deployments/scale
include ["get","update"], and keep leases as ["get","create","update","patch"];
update the Role resource blocks in the template (the entries around apiGroups:''
resources: services, serviceaccounts, and apiGroups: apps resources:
deployments, deployments/scale, and apiGroups: coordination.k8s.io resources:
leases) accordingly.

---

Outside diff comments:
In `@controllers/toggle_components.go`:
- Around line 1692-1731: The missing-CRD error is being recorded under the old
key "cluster-proxy-addon"; update the call in the applyTemplate loop so the
status is keyed to the new component name "cluster-proxy" instead. In the block
inside the for _, template := range templates loop (where missingCRDErrorOccured
is set and
r.StatusManager.AddComponent(clusterManagementAddOnNotFoundStatus(...)) is
invoked), change the first argument to clusterManagementAddOnNotFoundStatus to
use "cluster-proxy" (keeping mce.Spec.TargetNamespace unchanged) so the error is
reported on the cluster-proxy component in the MCE status.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: 82448151-332a-4aa1-ad00-c4acb597f972

📥 Commits

Reviewing files that changed from the base of the PR and between db02d21 and a1786d3.

📒 Files selected for processing (22)
  • api/v1/multiclusterengine_methods.go
  • controllers/backplaneconfig_controller.go
  • controllers/toggle_components.go
  • hack/bundle-automation/chart-values/cluster-proxy/overwriteValues.yaml
  • hack/bundle-automation/charts-config.yaml
  • pkg/templates/charts/toggle/cluster-proxy/Chart.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-manager-deployment.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-deployment.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-service.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-clustermanagementaddon.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-managedproxyconfiguration.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-placement-placement.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-serviceaccount.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/global-managedclustersetbinding.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-clusterrole.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-clusterrolebinding.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-role.yaml
  • pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-rolebinding.yaml
  • pkg/templates/charts/toggle/cluster-proxy/values.yaml
  • pkg/templates/crds/cluster-proxy/managedproxyconfigurations.yaml
  • pkg/toggle/toggle.go
  • pkg/utils/utils.go

Comment on lines +42 to 43
ClusterProxy = "cluster-proxy"
ConsoleMCE = "console-mce"

⚠️ Potential issue | 🟠 Major

Register cluster-proxy as a valid component.

Adding the constant/CRD dir isn't enough on its own: ClusterProxy still isn't present in AllComponents or MCEComponents, so validation and any component-gated logic will continue to reject it. If ClusterProxyAddon must remain for compatibility, keep both entries.

🛠️ Suggested fix
 var AllComponents = []string{
   AssistedService,
   ClusterAPI,
   ClusterAPIPreview,
   ClusterAPIProviderAWS,
   ClusterAPIProviderAWSPreview,
   // ClusterAPIProviderAzure, Uncomment until stable release is available
   ClusterAPIProviderAzurePreview,
   ClusterAPIProviderMetal,
   ClusterAPIProviderMetalPreview,
   ClusterAPIProviderOAPreview,
   ClusterAPIProviderOA,
   ClusterLifecycle,
   ClusterManager,
   ClusterPermission,
   ClusterProxyAddon,
+  ClusterProxy,
   ConsoleMCE,
   Discovery,
   Hive,
   HyperShift,
@@
 var MCEComponents = []string{
   AssistedService,
   ClusterAPI,
   ClusterAPIProviderAWS,
   // ClusterAPIProviderAzure, Uncomment until stable release is available
   ClusterAPIProviderAzurePreview,
   ClusterAPIProviderMetal,
   ClusterAPIProviderOA,
   ClusterLifecycle,
   ClusterManager,
   ClusterPermission,
   ClusterProxyAddon,
+  ClusterProxy,
   ConsoleMCE,
   Discovery,
   Hive,
   HyperShift,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/v1/multiclusterengine_methods.go` around lines 42 - 43, The new
ClusterProxy constant is declared but not added to the component lists used for
validation/feature gating; update the collections that enumerate supported
components by adding "ClusterProxy" (and if needed keep the legacy
"ClusterProxyAddon") to AllComponents and to MCEComponents so the component is
recognized by validation and component-gated logic; locate the component
registry variables/arrays (e.g., AllComponents, MCEComponents) and append
ClusterProxy where other components like ConsoleMCE are listed.

Comment on lines +1288 to 1311
// Handle ClusterProxyAddon (deprecated) and ClusterProxy - both use same chart
if !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxyAddon) ||
!r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxy) {
if (backplaneConfig.Enabled(backplanev1.ClusterProxyAddon) || backplaneConfig.Enabled(backplanev1.ClusterProxy)) &&
foundation.CanInstallAddons(ctx, r.Client) {
result, err = r.ensureClusterProxy(ctx, backplaneConfig)
if result != (ctrl.Result{}) {
requeue = true
}
if err != nil {
errs[backplanev1.ClusterProxyAddon] = err
errs[backplanev1.ClusterProxy] = err
}
} else {
result, err = r.ensureNoClusterProxyAddon(ctx, backplaneConfig)
result, err = r.ensureNoClusterProxy(ctx, backplaneConfig)
if result != (ctrl.Result{}) {
requeue = true
}
if err != nil {
errs[backplanev1.ClusterProxyAddon] = err
errs[backplanev1.ClusterProxy] = err
}
}
} else {
log.Info(messages.SkippingExternallyManaged, "component", backplanev1.ClusterProxyAddon)
log.Info(messages.SkippingExternallyManaged, "component", backplanev1.ClusterProxy)
}

⚠️ Potential issue | 🟠 Major

Treat the proxy aliases as a single external-management decision.

Using || means this block still reconciles when only one of the two names is marked externally managed, which can recreate/delete the same shared resources anyway. Please skip reconciliation unless neither alias is external, and mirror the same alias handling in finalizer cleanup.

🔧 Suggested fix
-	if !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxyAddon) ||
-		!r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxy) {
+	if !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxyAddon) &&
+		!r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxy) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controllers/backplaneconfig_controller.go` around lines 1288 - 1311, The
external-management check currently uses || with negations which allows
reconciliation when only one alias is externally managed; change the condition
in the ClusterProxy reconciliation block to require both aliases be not
externally managed (use && between the two !r.isComponentExternallyManaged(...)
calls) so reconciliation runs only when neither backplanev1.ClusterProxyAddon
nor backplanev1.ClusterProxy is externally managed; apply the same logic to the
finalizer/cleanup branch so the finalizer only acts when neither alias is
externally managed, and keep calls to ensureClusterProxy and
ensureNoClusterProxy unchanged except for their guarded condition.
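The reviewer's point about the guard is easy to see in a truth table; here is a small self-contained Go sketch of the two conditions (function names are illustrative, not the controller's actual code):

```go
package main

import "fmt"

// reconcileOr mirrors the original guard: proceed if either alias is NOT
// externally managed, so it still reconciles when one alias IS external.
func reconcileOr(addonExternal, proxyExternal bool) bool {
	return !addonExternal || !proxyExternal
}

// reconcileAnd mirrors the suggested guard: proceed only when neither alias
// is externally managed.
func reconcileAnd(addonExternal, proxyExternal bool) bool {
	return !addonExternal && !proxyExternal
}

func main() {
	// With exactly one alias marked externally managed, the original guard
	// still reconciles and can recreate/delete the shared resources anyway.
	fmt.Println(reconcileOr(true, false))  // true: reconciles anyway
	fmt.Println(reconcileAnd(true, false)) // false: correctly skips
}
```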

Comment on lines +68 to +69
image: '{{ .Values.global.imageOverrides.cluster_proxy }}'
imagePullPolicy: '{{ .Values.global.pullPolicy }}'

⚠️ Potential issue | 🟠 Major

Don't template imagePullPolicy from a missing value.

global.pullPolicy is not defined in this chart's values, so Helm will render an empty value here and Kubernetes can reject the PodSpec. Add the missing value to values.yaml or default it inline.

🛠️ Suggested fix
-        imagePullPolicy: '{{ .Values.global.pullPolicy }}'
+        imagePullPolicy: '{{ default "IfNotPresent" .Values.global.pullPolicy }}'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-manager-deployment.yaml`
around lines 68 - 69, The chart templates reference .Values.global.pullPolicy in
the cluster-proxy Deployment (imagePullPolicy in
cluster-proxy-addon-manager-deployment.yaml) but that value is not defined,
which can render an empty imagePullPolicy; fix this by either adding a default
pullPolicy under global in the chart values.yaml (e.g., global.pullPolicy:
IfNotPresent) or by defaulting inline in the template (use a fallback when
referencing .Values.global.pullPolicy) so imagePullPolicy always emits a valid
Kubernetes value; update the values.yaml or the template reference accordingly
(look for .Values.global.pullPolicy in
cluster-proxy-addon-manager-deployment.yaml).
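To illustrate why the inline fallback works, here is a small Go text/template sketch that mimics how Helm evaluates the default function; the func below is a simplified stand-in for Sprig's implementation, not Helm's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render evaluates the imagePullPolicy line the way Helm would, with a
// simplified stand-in for Sprig's "default" (use fallback when value is empty).
func render(pullPolicy string) string {
	funcs := template.FuncMap{
		"default": func(fallback, val string) string {
			if val == "" {
				return fallback
			}
			return val
		},
	}
	tmpl := template.Must(template.New("pullPolicy").Funcs(funcs).
		Parse(`imagePullPolicy: '{{ default "IfNotPresent" .pullPolicy }}'`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, map[string]string{"pullPolicy": pullPolicy}); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	fmt.Println(render(""))       // missing value: falls back to IfNotPresent
	fmt.Println(render("Always")) // explicit value wins over the fallback
}
```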

- controllers
- --certificates-namespace={{ .Values.global.namespace }}
- --signer-secret-namespace={{ .Values.global.namespace }}
- --agent-image=quay.io/open-cluster-management/cluster-proxy:v5.0

⚠️ Potential issue | 🟠 Major

Parameterize the agent image.

--agent-image is hard-coded to the upstream v5.0 image, so the new image mapping/mirroring path won't affect what the user-server tells agents to pull. Wire this through a Helm value instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-deployment.yaml`
at line 46, The deployment template currently hard-codes the agent image flag as
"--agent-image=quay.io/open-cluster-management/cluster-proxy:v5.0"; replace that
literal with a Helm value (e.g. use .Values.agentImage or .Values.image.agent)
so the flag becomes "--agent-image={{ .Values.agentImage }}" (and add a sensible
default in values.yaml), update any README/values docs to mention the new key,
and ensure the template escapes/quotes the value if necessary so the user-server
will advertise the configurable image to agents.
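One way to wire this through values, per the suggestion above; the agentImage key and its default are illustrative, not the chart's actual schema:

```yaml
# values.yaml: hypothetical key, defaulting to the currently hard-coded image
agentImage: quay.io/open-cluster-management/cluster-proxy:v5.0

# In cluster-proxy-addon-user-deployment.yaml, the literal flag would become:
#   - '--agent-image={{ .Values.agentImage }}'
```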

value: {{ .Values.hubconfig.proxyConfigs.NO_PROXY }}
{{- end }}
image: '{{ .Values.global.imageOverrides.cluster_proxy }}'
imagePullPolicy: '{{ .Values.global.pullPolicy }}'

⚠️ Potential issue | 🟠 Major

Fix the missing imagePullPolicy value here too.

Both containers template imagePullPolicy from global.pullPolicy, but this chart's values don't define that key. Helm will render an empty policy unless you add a default or populate the value.

🛠️ Suggested fix
-        imagePullPolicy: '{{ .Values.global.pullPolicy }}'
+        imagePullPolicy: '{{ default "IfNotPresent" .Values.global.pullPolicy }}'

Also applies to: 105-105

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-deployment.yaml`
at line 63, The imagePullPolicy field is templated from
.Values.global.pullPolicy but that value isn't defined in this chart so Helm
renders an empty policy; update the two occurrences where imagePullPolicy uses
.Values.global.pullPolicy to provide a Helm fallback (use the default function)
so a sane default like "IfNotPresent" is applied when global.pullPolicy is
missing, ensuring imagePullPolicy is never empty for the cluster-proxy
deployment template.

Comment on lines +13 to +19
proxyServer:
image: quay.io/open-cluster-management/cluster-proxy:v5.0
replicas: 1
namespace: PLACEHOLDER_NAMESPACE
entrypoint:
type: PortForward
port: 8091

⚠️ Potential issue | 🔴 Critical

Fix the hard-coded namespace placeholder.

proxyServer.namespace is still rendered as PLACEHOLDER_NAMESPACE, so the generated CR won't point at the release namespace. Please template this value or drop the field if it is injected elsewhere.

🛠️ Suggested fix
   proxyServer:
     image: quay.io/open-cluster-management/cluster-proxy:v5.0
     replicas: 1
-    namespace: PLACEHOLDER_NAMESPACE
+    namespace: '{{ .Values.global.namespace }}'
     entrypoint:
       type: PortForward
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-managedproxyconfiguration.yaml`
around lines 13 - 19, The proxyServer.namespace field is hard-coded to
PLACEHOLDER_NAMESPACE; update the template so the CR uses the actual release
namespace (e.g., replace PLACEHOLDER_NAMESPACE with the chart template value for
the namespace such as {{ .Release.Namespace }} or the appropriate Helm/chart
variable) or remove the namespace field entirely if it is populated elsewhere,
ensuring you edit the proxyServer.namespace entry in the
cluster-proxy-managedproxyconfiguration.yaml template.

Comment on lines +6 to +194
```yaml
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  - managedclustersets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - addon.open-cluster-management.io
  resources:
  - clustermanagementaddons
  - managedclusteraddons
  - clustermanagementaddons/status
  - clustermanagementaddons/finalizers
  - managedclusteraddons/status
  verbs:
  - '*'
- apiGroups:
  - addon.open-cluster-management.io
  resources:
  - addondeploymentconfigs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - addon.open-cluster-management.io
  resources:
  - managedclusteraddons/finalizers
  verbs:
  - '*'
- apiGroups:
  - proxy.open-cluster-management.io
  resources:
  - managedproxyconfigurations
  - managedproxyconfigurations/status
  - managedproxyconfigurations/finalizers
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  - certificatesigningrequests/approval
  - certificatesigningrequests/status
  verbs:
  - get
  - list
  - watch
  - update
  - patch
- apiGroups:
  - certificates.k8s.io
  resourceNames:
  - open-cluster-management.io/proxy-agent-signer
  - kubernetes.io/kube-apiserver-client
  resources:
  - signers
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - namespaces
  - secrets
  - pods
  - pods/portforward
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - serviceaccounts
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
- apiGroups:
  - work.open-cluster-management.io
  resources:
  - manifestworks
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - replicasets
  verbs:
  - get
- apiGroups:
  - ''
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ''
  resources:
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - imageregistry.open-cluster-management.io
  resources:
  - managedclusterimageregistries
  - managedclusterimageregistries
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  - clusterrolebindings
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - clusterprofiles
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - clusterprofiles/status
  verbs:
  - update
  - patch
```
⚠️ Potential issue | 🟠 Major

Narrow this ClusterRole before merge.

This role grants wildcard verbs on managedproxyconfigurations, secrets, configmaps, and signer resources, and it repeats some secret/configmap permissions. That is a much wider blast radius than the addon manager likely needs. Please cross-check the controller's actual CRUD needs and trim the rules to the minimum set.

🧰 Tools
🪛 Checkov (3.2.525)

[high] 1-194: Minimize ClusterRoles that grant permissions to approve CertificateSigningRequests (CKV_K8S_156)

🪛 Trivy (0.69.3)

[error] 68-76: Manage secrets. ClusterRole 'open-cluster-management:cluster-proxy:addon-manager' shouldn't have access to manage resource 'secrets' (Rule: KSV-0041, IaC/Kubernetes)

[error] 124-130: Manage secrets. ClusterRole 'open-cluster-management:cluster-proxy:addon-manager' shouldn't have access to manage resource 'secrets' (Rule: KSV-0041, IaC/Kubernetes)

[error] 146-152: Manage secrets. ClusterRole 'open-cluster-management:cluster-proxy:addon-manager' shouldn't have access to manage resource 'secrets' (Rule: KSV-0041, IaC/Kubernetes)

[error] 68-76: No wildcard verb roles. Role permits wildcard verb on specific resources (Rule: KSV-0045, IaC/Kubernetes)

[error] 124-130: No wildcard verb roles. Role permits wildcard verb on specific resources (Rule: KSV-0045, IaC/Kubernetes)

[error] 146-152: No wildcard verb roles. Role permits wildcard verb on specific resources (Rule: KSV-0045, IaC/Kubernetes)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-clusterrole.yaml`
around lines 6 - 194, The ClusterRole addon-manager-clusterrole.yaml is overly
permissive—wildcard verbs are granted for managedproxyconfigurations, secrets,
configmaps, and signers (and configmap/secret rules are duplicated); inspect the
addon manager controller code (controllers handling managedproxyconfigurations,
managedclusteraddons, signer usage, and manifestworks) to determine exact verbs
needed (e.g., get/list/watch, create/update/patch, or status/finalizer-specific
actions) and replace '*' with the minimal verb sets for resources like
managedproxyconfigurations, secrets, configmaps, and signers, remove duplicated
rules, and restrict resourceNames where possible (e.g., signers entries) to
narrow the blast radius.
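
As a hedged sketch of what that narrowing could look like for one of the flagged rules (the exact verb set must be confirmed against the controller code before adopting it), the wildcard configmap/secret rule could be replaced with explicit CRUD verbs and the duplicate rule dropped:

```yaml
# Sketch only: verbs assumed from typical addon-manager needs, not verified
# against the controller. Replaces both duplicated wildcard rules.
- apiGroups:
  - ''
  resources:
  - configmaps
  - secrets
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
```

The same pattern applies to the `managedproxyconfigurations` and `signers` rules: enumerate the verbs the reconciler actually issues and keep `resourceNames` restrictions wherever the resource names are known in advance.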

Comment on lines +7 to +39
```yaml
- apiGroups:
  - ''
  resources:
  - services
  - events
  - serviceaccounts
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  - deployments/scale
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - get
  - create
  - update
  - patch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - create
  - update
  - patch
```
⚠️ Potential issue | 🟠 Major

Trim the RBAC verbs here.

Granting `*` on services, events, serviceaccounts, deployments, and deployments/scale is broader than the addon-manager needs and will keep triggering RBAC scanner warnings. Please scope this Role to the minimal verbs required by the chart.

🧰 Tools
🪛 Trivy (0.69.3)

[error] 15-21: No wildcard verb roles. Role permits wildcard verb on specific resources (Rule: KSV-0045, IaC/Kubernetes)

[error] 7-14: Manage Kubernetes networking. Role 'open-cluster-management:cluster-proxy:addon-manager' should not have access to resources ["services", "endpoints", "endpointslices", "networkpolicies", "ingresses"] for verbs ["create", "update", "patch", "delete", "deletecollection", "impersonate", "*"] (Rule: KSV-0056, IaC/Kubernetes)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-role.yaml`
around lines 7 - 39, The Role currently uses wildcard verbs ('*') for services,
serviceaccounts, deployments/deployments/scale and leases; replace those '*'
entries with minimal verbs needed by addon-manager: for services and
serviceaccounts use ["get","list","watch","create","update","patch"] (add
"delete" only if deletion is required), for deployments use
["get","list","watch","create","update","patch"] and for deployments/scale
include ["get","update"], and keep leases as ["get","create","update","patch"];
update the Role resource blocks in the template (the entries around apiGroups:''
resources: services, serviceaccounts, and apiGroups: apps resources:
deployments, deployments/scale, and apiGroups: coordination.k8s.io resources:
leases) accordingly.
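
Applying the verbs suggested in the prompt, the trimmed Role might look like the sketch below. This is an assumption about what the addon-manager needs, not a verified minimal set; confirm against the controller before adopting it, and add `delete` only where deletion is actually performed.

```yaml
# Sketch only: verb sets assumed, not verified against the addon-manager code.
- apiGroups:
  - ''
  resources:
  - services
  - serviceaccounts
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
- apiGroups:
  - ''
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
- apiGroups:
  - apps
  resources:
  - deployments/scale
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - create
  - update
  - patch
```

Splitting events out of the wildcard rule (with only create/patch/update) mirrors the event rule already present in the companion ClusterRole, which keeps the two manifests consistent.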
