Add cluster-proxy to charts-config.yaml #3220
dislbenn wants to merge 5 commits into stolostron:main
Conversation
- Add cluster-proxy component from stolostron/cluster-proxy repo
- Configure chart automation for backplane-5.0 branch
- Set image mapping for cluster-proxy to cluster_proxy

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
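For context, an entry of roughly this shape in charts-config.yaml would express the three changes above; the key names are assumptions, since the file's actual schema isn't shown in this excerpt:

# Hypothetical charts-config.yaml entry; key names are illustrative only.
- name: cluster-proxy
  repo: stolostron/cluster-proxy     # source repo for chart automation
  branch: backplane-5.0              # branch the automation tracks
  imageMappings:
    cluster-proxy: cluster_proxy     # chart image name -> manifest image key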
📝 Walkthrough

This pull request introduces a new cluster-proxy component: a Helm chart under pkg/templates/charts/toggle/cluster-proxy, a ManagedProxyConfiguration CRD, chart-automation configuration, and reconciler wiring in the MCE controllers.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dislbenn

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
- Run make regenerate-charts COMPONENT=cluster-proxy
- Add cluster-proxy chart templates from stolostron/cluster-proxy backplane-5.0 branch
- Include manager deployment with POD_NAMESPACE env var
- Add ClusterManagementAddOn, RBAC, and Placement resources
- Add ManagedProxyConfiguration CRD

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
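The POD_NAMESPACE env var mentioned above is typically injected with the Kubernetes Downward API; a minimal sketch of what the manager container spec likely carries (the surrounding fields are illustrative, not copied from the chart):

# Downward API sketch: POD_NAMESPACE resolves to the pod's own namespace at runtime.
containers:
- name: manager
  env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace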
- Add ClusterProxy constant and chart/CRD directory paths
- Mark ClusterProxyAddon as deprecated (moved to ClusterProxy in MCE 2.11)
- Generate cluster-proxy chart with user-deployment and user-service enabled
- Create custom helm values in hack/bundle-automation/chart-values/cluster-proxy
- Add ClusterProxy mappings to controller chart path resolution

This maintains backward compatibility by keeping ClusterProxyAddon constants while introducing ClusterProxy for future migration.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
Force-pushed from 6568faf to bac1091 (Compare)
- Rename ensureClusterProxyAddon → ensureClusterProxy
- Rename ensureNoClusterProxyAddon → ensureNoClusterProxy
- Update reconciler functions to use backplanev1.ClusterProxy constant
- Add reconciler logic to handle both ClusterProxyAddon (deprecated) and ClusterProxy
- Both components call the same functions, ensuring backward compatibility during migration

Migration path:
- Existing MCE CRs with ClusterProxyAddon enabled will continue to work
- New ClusterProxy chart (pkg/templates/charts/toggle/cluster-proxy) deployed
- ClusterProxyAddon marked deprecated, will be removed in a future release

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: dislbenn <dbennett@redhat.com>
Force-pushed from bac1091 to 7529135 (Compare)
@dislbenn: The following test failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
controllers/toggle_components.go (1)
1692-1731: ⚠️ Potential issue | 🟠 Major

Key the missing-CRD status under cluster-proxy.

This still records the failure under cluster-proxy-addon, which leaves the migration half-renamed and reports the wrong component in MCE status. Update the key to the new component name.

🛠️ Suggested fix

- r.StatusManager.AddComponent(clusterManagementAddOnNotFoundStatus("cluster-proxy-addon", mce.Spec.TargetNamespace))
+ r.StatusManager.AddComponent(clusterManagementAddOnNotFoundStatus("cluster-proxy", mce.Spec.TargetNamespace))

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/toggle_components.go` around lines 1692 - 1731, The missing-CRD error is being recorded under the old key "cluster-proxy-addon"; update the call in the applyTemplate loop so the status is keyed to the new component name "cluster-proxy" instead. In the block inside the for _, template := range templates loop (where missingCRDErrorOccured is set and r.StatusManager.AddComponent(clusterManagementAddOnNotFoundStatus(...)) is invoked), change the first argument to clusterManagementAddOnNotFoundStatus to use "cluster-proxy" (keeping mce.Spec.TargetNamespace unchanged) so the error is reported on the cluster-proxy component in the MCE status.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 82448151-332a-4aa1-ad00-c4acb597f972
📒 Files selected for processing (22)
api/v1/multiclusterengine_methods.go
controllers/backplaneconfig_controller.go
controllers/toggle_components.go
hack/bundle-automation/chart-values/cluster-proxy/overwriteValues.yaml
hack/bundle-automation/charts-config.yaml
pkg/templates/charts/toggle/cluster-proxy/Chart.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-manager-deployment.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-deployment.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-service.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-clustermanagementaddon.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-managedproxyconfiguration.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-placement-placement.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-serviceaccount.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/global-managedclustersetbinding.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-clusterrole.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-clusterrolebinding.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-role.yaml
pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-rolebinding.yaml
pkg/templates/charts/toggle/cluster-proxy/values.yaml
pkg/templates/crds/cluster-proxy/managedproxyconfigurations.yaml
pkg/toggle/toggle.go
pkg/utils/utils.go
| ClusterProxy = "cluster-proxy" | ||
| ConsoleMCE = "console-mce" |
Register cluster-proxy as a valid component.
Adding the constant/CRD dir isn't enough on its own: ClusterProxy still isn't present in AllComponents or MCEComponents, so validation and any component-gated logic will continue to reject it. If ClusterProxyAddon must remain for compatibility, keep both entries.
🛠️ Suggested fix
var AllComponents = []string{
AssistedService,
ClusterAPI,
ClusterAPIPreview,
ClusterAPIProviderAWS,
ClusterAPIProviderAWSPreview,
// ClusterAPIProviderAzure, Uncomment until stable release is available
ClusterAPIProviderAzurePreview,
ClusterAPIProviderMetal,
ClusterAPIProviderMetalPreview,
ClusterAPIProviderOAPreview,
ClusterAPIProviderOA,
ClusterLifecycle,
ClusterManager,
ClusterPermission,
ClusterProxyAddon,
+ ClusterProxy,
ConsoleMCE,
Discovery,
Hive,
HyperShift,
@@
var MCEComponents = []string{
AssistedService,
ClusterAPI,
ClusterAPIProviderAWS,
// ClusterAPIProviderAzure, Uncomment until stable release is available
ClusterAPIProviderAzurePreview,
ClusterAPIProviderMetal,
ClusterAPIProviderOA,
ClusterLifecycle,
ClusterManager,
ClusterPermission,
ClusterProxyAddon,
+ ClusterProxy,
ConsoleMCE,
Discovery,
Hive,
HyperShift,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/v1/multiclusterengine_methods.go` around lines 42 - 43, The new
ClusterProxy constant is declared but not added to the component lists used for
validation/feature gating; update the collections that enumerate supported
components by adding "ClusterProxy" (and if needed keep the legacy
"ClusterProxyAddon") to AllComponents and to MCEComponents so the component is
recognized by validation and component-gated logic; locate the component
registry variables/arrays (e.g., AllComponents, MCEComponents) and append
ClusterProxy where other components like ConsoleMCE are listed.
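To illustrate why registration matters, here is a minimal, hypothetical Go sketch of the kind of membership check that validation logic performs against AllComponents; the helper name and the trimmed slice are illustrative, not the project's actual code:

package main

import "fmt"

// Trimmed stand-in for the real AllComponents slice in
// api/v1/multiclusterengine_methods.go (hypothetical subset).
var allComponents = []string{"cluster-proxy-addon", "console-mce"}

// validComponent reports whether name is a registered component.
func validComponent(name string) bool {
	for _, c := range allComponents {
		if c == name {
			return true
		}
	}
	return false
}

func main() {
	// Prints false until "cluster-proxy" is appended to the registry,
	// which is exactly the gap this review comment flags.
	fmt.Println(validComponent("cluster-proxy"))
}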
// Handle ClusterProxyAddon (deprecated) and ClusterProxy - both use same chart
if !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxyAddon) ||
    !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxy) {
    if (backplaneConfig.Enabled(backplanev1.ClusterProxyAddon) || backplaneConfig.Enabled(backplanev1.ClusterProxy)) &&
        foundation.CanInstallAddons(ctx, r.Client) {
        result, err = r.ensureClusterProxy(ctx, backplaneConfig)
        if result != (ctrl.Result{}) {
            requeue = true
        }
        if err != nil {
            errs[backplanev1.ClusterProxyAddon] = err
            errs[backplanev1.ClusterProxy] = err
        }
    } else {
        result, err = r.ensureNoClusterProxyAddon(ctx, backplaneConfig)
        result, err = r.ensureNoClusterProxy(ctx, backplaneConfig)
        if result != (ctrl.Result{}) {
            requeue = true
        }
        if err != nil {
            errs[backplanev1.ClusterProxyAddon] = err
            errs[backplanev1.ClusterProxy] = err
        }
    }
} else {
    log.Info(messages.SkippingExternallyManaged, "component", backplanev1.ClusterProxyAddon)
    log.Info(messages.SkippingExternallyManaged, "component", backplanev1.ClusterProxy)
}
Treat the proxy aliases as a single external-management decision.
Using || means this block still reconciles when only one of the two names is marked externally managed, which can recreate/delete the same shared resources anyway. Please skip reconciliation unless neither alias is external, and mirror the same alias handling in finalizer cleanup.
🔧 Suggested fix
- if !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxyAddon) ||
- !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxy) {
+ if !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxyAddon) &&
+ !r.isComponentExternallyManaged(backplaneConfig, backplanev1.ClusterProxy) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/backplaneconfig_controller.go` around lines 1288 - 1311, The
external-management check currently uses || with negations which allows
reconciliation when only one alias is externally managed; change the condition
in the ClusterProxy reconciliation block to require both aliases be not
externally managed (use && between the two !r.isComponentExternallyManaged(...)
calls) so reconciliation runs only when neither backplanev1.ClusterProxyAddon
nor backplanev1.ClusterProxy is externally managed; apply the same logic to the
finalizer/cleanup branch so the finalizer only acts when neither alias is
externally managed, and keep calls to ensureClusterProxy and
ensureNoClusterProxy unchanged except for their guarded condition.
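The intent of the fix is easier to see via De Morgan's law: skipping unless neither alias is external is the negation of "either alias is external". A small standalone Go sketch (the helper name is hypothetical):

package main

import "fmt"

// proxyExternallyManaged treats the two proxy aliases as one decision:
// the shared chart counts as external if either name is marked external.
func proxyExternallyManaged(addonExternal, proxyExternal bool) bool {
	return addonExternal || proxyExternal
}

func main() {
	// Reconcile only when neither alias is external; by De Morgan,
	// !(a || b) == !a && !b, which is the && form in the suggested fix.
	addonExternal, proxyExternal := true, false
	if !proxyExternallyManaged(addonExternal, proxyExternal) {
		fmt.Println("reconcile cluster-proxy")
	} else {
		fmt.Println("skip: externally managed")
	}
}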
image: '{{ .Values.global.imageOverrides.cluster_proxy }}'
imagePullPolicy: '{{ .Values.global.pullPolicy }}'
Don't template imagePullPolicy from a missing value.
global.pullPolicy is not defined in this chart's values, so Helm will render an empty value here and Kubernetes can reject the PodSpec. Add the missing value to values.yaml or default it inline.
🛠️ Suggested fix
- imagePullPolicy: '{{ .Values.global.pullPolicy }}'
+ imagePullPolicy: '{{ default "IfNotPresent" .Values.global.pullPolicy }}'📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
image: '{{ .Values.global.imageOverrides.cluster_proxy }}'
imagePullPolicy: '{{ default "IfNotPresent" .Values.global.pullPolicy }}'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-manager-deployment.yaml`
around lines 68 - 69, The chart templates reference .Values.global.pullPolicy in
the cluster-proxy Deployment (imagePullPolicy in
cluster-proxy-addon-manager-deployment.yaml) but that value is not defined,
which can render an empty imagePullPolicy; fix this by either adding a default
pullPolicy under global in the chart values.yaml (e.g., global.pullPolicy:
IfNotPresent) or by defaulting inline in the template (use a fallback when
referencing .Values.global.pullPolicy) so imagePullPolicy always emits a valid
Kubernetes value; update the values.yaml or the template reference accordingly
(look for .Values.global.pullPolicy in
cluster-proxy-addon-manager-deployment.yaml).
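Either remedy from the prompt could look like this sketch; the global block shape is assumed to match the chart's existing values layout:

# Option 1 (values.yaml): define the value once so every reference renders.
global:
  pullPolicy: IfNotPresent

# Option 2 (inline in the template): fall back when the value is unset.
# imagePullPolicy: '{{ default "IfNotPresent" .Values.global.pullPolicy }}'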
- controllers
- --certificates-namespace={{ .Values.global.namespace }}
- --signer-secret-namespace={{ .Values.global.namespace }}
- --agent-image=quay.io/open-cluster-management/cluster-proxy:v5.0
Parameterize the agent image.
--agent-image is hard-coded to the upstream v5.0 image, so the new image mapping/mirroring path won't affect what the user-server tells agents to pull. Wire this through a Helm value instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-deployment.yaml`
at line 46, The deployment template currently hard-codes the agent image flag as
"--agent-image=quay.io/open-cluster-management/cluster-proxy:v5.0"; replace that
literal with a Helm value (e.g. use .Values.agentImage or .Values.image.agent)
so the flag becomes "--agent-image={{ .Values.agentImage }}" (and add a sensible
default in values.yaml), update any README/values docs to mention the new key,
and ensure the template escapes/quotes the value if necessary so the user-server
will advertise the configurable image to agents.
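A sketch of the suggested wiring; the agentImage key is hypothetical, with a default that mirrors the current hard-coded literal:

# values.yaml (sketch): hypothetical key, defaulting to the current image.
agentImage: quay.io/open-cluster-management/cluster-proxy:v5.0

# cluster-proxy-addon-user-deployment.yaml (sketch): reference the value
# instead of the literal so image mapping/mirroring can override it.
# - '--agent-image={{ .Values.agentImage }}'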
value: {{ .Values.hubconfig.proxyConfigs.NO_PROXY }}
{{- end }}
image: '{{ .Values.global.imageOverrides.cluster_proxy }}'
imagePullPolicy: '{{ .Values.global.pullPolicy }}'
Fix the missing imagePullPolicy value here too.
Both containers template imagePullPolicy from global.pullPolicy, but this chart's values don't define that key. Helm will render an empty policy unless you add a default or populate the value.
🛠️ Suggested fix
- imagePullPolicy: '{{ .Values.global.pullPolicy }}'
+ imagePullPolicy: '{{ default "IfNotPresent" .Values.global.pullPolicy }}'Also applies to: 105-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-addon-user-deployment.yaml`
at line 63, The imagePullPolicy field is templated from
.Values.global.pullPolicy but that value isn't defined in this chart so Helm
renders an empty policy; update the two occurrences where imagePullPolicy uses
.Values.global.pullPolicy to provide a Helm fallback (use the default function)
so a sane default like "IfNotPresent" is applied when global.pullPolicy is
missing, ensuring imagePullPolicy is never empty for the cluster-proxy
deployment template.
proxyServer:
  image: quay.io/open-cluster-management/cluster-proxy:v5.0
  replicas: 1
  namespace: PLACEHOLDER_NAMESPACE
  entrypoint:
    type: PortForward
    port: 8091
Fix the hard-coded namespace placeholder.
proxyServer.namespace is still rendered as PLACEHOLDER_NAMESPACE, so the generated CR won't point at the release namespace. Please template this value or drop the field if it is injected elsewhere.
🛠️ Suggested fix
  proxyServer:
    image: quay.io/open-cluster-management/cluster-proxy:v5.0
    replicas: 1
-   namespace: PLACEHOLDER_NAMESPACE
+   namespace: '{{ .Values.global.namespace }}'
    entrypoint:
      type: PortForward

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
proxyServer:
  image: quay.io/open-cluster-management/cluster-proxy:v5.0
  replicas: 1
  namespace: '{{ .Values.global.namespace }}'
  entrypoint:
    type: PortForward
    port: 8091
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/templates/charts/toggle/cluster-proxy/templates/cluster-proxy-managedproxyconfiguration.yaml`
around lines 13 - 19, The proxyServer.namespace field is hard-coded to
PLACEHOLDER_NAMESPACE; update the template so the CR uses the actual release
namespace (e.g., replace PLACEHOLDER_NAMESPACE with the chart template value for
the namespace such as {{ .Release.Namespace }} or the appropriate Helm/chart
variable) or remove the namespace field entirely if it is populated elsewhere,
ensuring you edit the proxyServer.namespace entry in the
cluster-proxy-managedproxyconfiguration.yaml template.
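If the CR should simply follow the Helm release rather than a chart value, the built-in Release object the prompt mentions is a one-line alternative:

# Alternative sketch: track the release namespace directly.
proxyServer:
  namespace: '{{ .Release.Namespace }}'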
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  - managedclustersets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - addon.open-cluster-management.io
  resources:
  - clustermanagementaddons
  - managedclusteraddons
  - clustermanagementaddons/status
  - clustermanagementaddons/finalizers
  - managedclusteraddons/status
  verbs:
  - '*'
- apiGroups:
  - addon.open-cluster-management.io
  resources:
  - addondeploymentconfigs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - addon.open-cluster-management.io
  resources:
  - managedclusteraddons/finalizers
  verbs:
  - '*'
- apiGroups:
  - proxy.open-cluster-management.io
  resources:
  - managedproxyconfigurations
  - managedproxyconfigurations/status
  - managedproxyconfigurations/finalizers
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  - certificatesigningrequests/approval
  - certificatesigningrequests/status
  verbs:
  - get
  - list
  - watch
  - update
  - patch
- apiGroups:
  - certificates.k8s.io
  resourceNames:
  - open-cluster-management.io/proxy-agent-signer
  - kubernetes.io/kube-apiserver-client
  resources:
  - signers
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - namespaces
  - secrets
  - pods
  - pods/portforward
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - serviceaccounts
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
- apiGroups:
  - work.open-cluster-management.io
  resources:
  - manifestworks
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - replicasets
  verbs:
  - get
- apiGroups:
  - ''
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ''
  resources:
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - imageregistry.open-cluster-management.io
  resources:
  - managedclusterimageregistries
  - managedclusterimageregistries
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  - clusterrolebindings
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - clusterprofiles
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - clusterprofiles/status
  verbs:
  - update
  - patch
Narrow this ClusterRole before merge.
This role grants wildcard verbs on managedproxyconfigurations, secrets, configmaps, and signer resources, and it repeats some secret/configmap permissions. That is a much wider blast radius than the addon manager likely needs. Please cross-check the controller's actual CRUD needs and trim the rules to the minimum set.
🧰 Tools
🪛 Checkov (3.2.525)
[high] 1-194: Minimize ClusterRoles that grant permissions to approve CertificateSigningRequests
(CKV_K8S_156)
🪛 Trivy (0.69.3)
[error] 68-76: Manage secrets
ClusterRole 'open-cluster-management:cluster-proxy:addon-manager' shouldn't have access to manage resource 'secrets'
Rule: KSV-0041
(IaC/Kubernetes)
[error] 124-130: Manage secrets
ClusterRole 'open-cluster-management:cluster-proxy:addon-manager' shouldn't have access to manage resource 'secrets'
Rule: KSV-0041
(IaC/Kubernetes)
[error] 146-152: Manage secrets
ClusterRole 'open-cluster-management:cluster-proxy:addon-manager' shouldn't have access to manage resource 'secrets'
Rule: KSV-0041
(IaC/Kubernetes)
[error] 68-76: No wildcard verb roles
Role permits wildcard verb on specific resources
Rule: KSV-0045
(IaC/Kubernetes)
[error] 124-130: No wildcard verb roles
Role permits wildcard verb on specific resources
Rule: KSV-0045
(IaC/Kubernetes)
[error] 146-152: No wildcard verb roles
Role permits wildcard verb on specific resources
Rule: KSV-0045
(IaC/Kubernetes)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-clusterrole.yaml`
around lines 6 - 194, The ClusterRole addon-manager-clusterrole.yaml is overly
permissive—wildcard verbs are granted for managedproxyconfigurations, secrets,
configmaps, and signers (and configmap/secret rules are duplicated); inspect the
addon manager controller code (controllers handling managedproxyconfigurations,
managedclusteraddons, signer usage, and manifestworks) to determine exact verbs
needed (e.g., get/list/watch, create/update/patch, or status/finalizer-specific
actions) and replace '*' with the minimal verb sets for resources like
managedproxyconfigurations, secrets, configmaps, and signers, remove duplicated
rules, and restrict resourceNames where possible (e.g., signers entries) to
narrow the blast radius.
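As one illustration of the requested narrowing, the wildcard managedproxyconfigurations rule could be replaced with explicit verbs; this is a sketch, and the exact verb set still needs to be confirmed against the addon-manager controller code:

# Sketch only: plausible minimal verbs, not verified against the controller.
- apiGroups:
  - proxy.open-cluster-management.io
  resources:
  - managedproxyconfigurations
  - managedproxyconfigurations/status
  - managedproxyconfigurations/finalizers
  verbs:
  - get
  - list
  - watch
  - update
  - patch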
- apiGroups:
  - ''
  resources:
  - services
  - events
  - serviceaccounts
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  - deployments/scale
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - get
  - create
  - update
  - patch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - create
  - update
  - patch
Trim the RBAC verbs here.
Granting * on services, serviceaccounts, deployments/deployments/scale, and leases is broader than the addon-manager needs and will keep triggering the RBAC scanner warnings. Please scope this Role to the minimal verbs required for the chart.
🧰 Tools
🪛 Trivy (0.69.3)
[error] 15-21: No wildcard verb roles
Role permits wildcard verb on specific resources
Rule: KSV-0045
(IaC/Kubernetes)
[error] 7-14: Manage Kubernetes networking
Role 'open-cluster-management:cluster-proxy:addon-manager' should not have access to resources ["services", "endpoints", "endpointslices", "networkpolicies", "ingresses"] for verbs ["create", "update", "patch", "delete", "deletecollection", "impersonate", "*"]
Rule: KSV-0056
(IaC/Kubernetes)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/templates/charts/toggle/cluster-proxy/templates/open-cluster-management:cluster-proxy:addon-manager-role.yaml`
around lines 7 - 39, The Role currently uses wildcard verbs ('*') for services,
serviceaccounts, deployments/deployments/scale and leases; replace those '*'
entries with minimal verbs needed by addon-manager: for services and
serviceaccounts use ["get","list","watch","create","update","patch"] (add
"delete" only if deletion is required), for deployments use
["get","list","watch","create","update","patch"] and for deployments/scale
include ["get","update"], and keep leases as ["get","create","update","patch"];
update the Role resource blocks in the template (the entries around apiGroups:''
resources: services, serviceaccounts, and apiGroups: apps resources:
deployments, deployments/scale, and apiGroups: coordination.k8s.io resources:
leases) accordingly.
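Applying the verb sets from the prompt, the narrowed Role rules would look roughly like this sketch (include delete only if the controller actually deletes these objects):

# Sketch of the narrowed Role using the verbs suggested above.
- apiGroups: ['']
  resources: [services, serviceaccounts]
  verbs: [get, list, watch, create, update, patch]
- apiGroups: [apps]
  resources: [deployments]
  verbs: [get, list, watch, create, update, patch]
- apiGroups: [apps]
  resources: [deployments/scale]
  verbs: [get, update]
- apiGroups: [coordination.k8s.io]
  resources: [leases]
  verbs: [get, create, update, patch]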