✨ support v1beta1 addon api #368
haoqing0110 wants to merge 4 commits into open-cluster-management-io:main from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough

Repository-wide migration of the Open Cluster Management addon API from v1alpha1 to v1beta1: imports, clientsets, informers/listers, types, constants, GVRs, controller wiring, tests, and tooling; InstallNamespace moved to annotations, and registration/CSR/permission shapes updated to the v1beta1 structures.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (12)
pkg/addonmanager/controllers/cmaconfig/controller_test.go (1)
349-364: ⚠️ Potential issue | 🔴 Critical

Fix: use the `ClusterManagementAddOns()` informer instead of `ManagedClusterAddOns()`.

Line 349 uses the wrong informer. The `IndexClusterManagementAddonByConfig` index function expects `ClusterManagementAddOn` objects, and the test stores `ClusterManagementAddOn` objects (from `c.cmas`). Change `.ManagedClusterAddOns()` to `.ClusterManagementAddOns()` to match the correct informer, consistent with line 239 and the indexer setup in `pkg/addonmanager/manager.go:117`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller_test.go` around lines 349-364: change the informer construction from `addonInformers.Addon().V1beta1().ManagedClusterAddOns()` to `addonInformers.Addon().V1beta1().ClusterManagementAddOns()` so the addonInformer (used to populate addonStore and assigned to `cmaConfigController.clusterManagementAddonIndexer`) matches the index function `index.IndexClusterManagementAddonByConfig` and the `ClusterManagementAddOn` objects in `c.cmas`.
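For illustration, a minimal sketch of the corrected test wiring. This is an editorial sketch, not the exact code from the test; the factory, fixture (`c.cmas`), and controller field names are assumed from the prompt above:

```go
// Use the ClusterManagementAddOns informer so the store and indexer hold
// ClusterManagementAddOn objects, matching index.IndexClusterManagementAddonByConfig.
cmaInformer := addonInformers.Addon().V1beta1().ClusterManagementAddOns()
cmaStore := cmaInformer.Informer().GetStore()
for _, cma := range c.cmas {
	if err := cmaStore.Add(cma); err != nil {
		t.Fatal(err)
	}
}
cmaConfigController.clusterManagementAddonIndexer = cmaInformer.Informer().GetIndexer()
```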
pkg/utils/addon_test.go (1)

127-188: ⚠️ Potential issue | 🟠 Major

Add a negative case for incomplete template refs.

The table jumps from non-template refs to fully populated template refs, but the beta1 edge case that matters here is a template ref with `ConfigReferent.Name` set and an empty `SpecHash`. If that regresses, `FilterTemplateBasedAddOns` would start accepting partially configured template addons too early.

➕ Suggested test case

```diff
 	{
+		name: "addon with template config reference missing spec hash",
+		mca: &addonapiv1beta1.ManagedClusterAddOn{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-addon",
+				Namespace: "test-cluster",
+			},
+			Status: addonapiv1beta1.ManagedClusterAddOnStatus{
+				ConfigReferences: []addonapiv1beta1.ConfigReference{
+					{
+						ConfigGroupResource: addonapiv1beta1.ConfigGroupResource{
+							Group:    "addon.open-cluster-management.io",
+							Resource: "addontemplates",
+						},
+						DesiredConfig: &addonapiv1beta1.ConfigSpecHash{
+							ConfigReferent: addonapiv1beta1.ConfigReferent{
+								Name: "test-template",
+							},
+						},
+					},
+				},
+			},
+		},
+		expected: false,
+	},
+	{
 		name: "addon with template config reference",
```

Based on learnings, in template-based addon mode (templateBasedAddOn=true), the FilterTemplateBasedAddOns function intentionally returns false for template-based addons whose Status.ConfigReferences are not yet populated with the proper AddOnTemplate reference and SpecHash.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_test.go` around lines 127-188: add a negative unit test entry that covers a template-based addon with an incomplete template ref: create a ManagedClusterAddOn (same pattern as the other cases) whose Status.ConfigReferences contains a ConfigGroupResource with Group "addon.open-cluster-management.io" and Resource "addontemplates", and a DesiredConfig whose ConfigReferent.Name is set (e.g., "test-template") but whose SpecHash is empty/zero-value; call FilterTemplateBasedAddOns with templateBasedAddOn=true and assert the result is false. This ensures FilterTemplateBasedAddOns rejects template refs missing SpecHash (refer to the existing test table entries and the FilterTemplateBasedAddOns function name to locate where to add the case).

pkg/utils/helpers.go (1)
276-286: ⚠️ Potential issue | 🟡 Minor

Match the owner ref API group before returning true.

`IsOwnedByCMA` only checks `Kind` and `Name`, so an owner ref from another API group with the same kind/name will be treated as a CMA. Reusing the existing group-aware owner-ref matching logic here would avoid that false positive while still tolerating alpha/beta version skew.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/helpers.go` around lines 276-286: IsOwnedByCMA currently returns true based only on owner.Kind and owner.Name, which can false-positive across API groups; update IsOwnedByCMA to also match the owner ref's API group/version (owner.APIVersion) in a group-aware way instead of a plain string compare — ideally reuse the existing group-aware owner-ref matching helper used elsewhere (call that helper from IsOwnedByCMA), or parse owner.APIVersion to confirm it belongs to the ClusterManagementAddOn group while still tolerating alpha/beta version skew, keeping the checks for owner.Kind == "ClusterManagementAddOn" and owner.Name == addon.Name.
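One possible group-aware shape, shown only as a sketch (whether to reuse an existing matcher helper is an implementation choice; `schema` here is `k8s.io/apimachinery/pkg/runtime/schema`):

```go
// Compare only the API group so alpha/beta version skew is tolerated, while
// rejecting ClusterManagementAddOn owners from unrelated groups.
func isOwnedByCMA(addon *addonapiv1beta1.ManagedClusterAddOn) bool {
	for _, owner := range addon.OwnerReferences {
		if owner.Kind != "ClusterManagementAddOn" || owner.Name != addon.Name {
			continue
		}
		gv, err := schema.ParseGroupVersion(owner.APIVersion)
		if err != nil {
			continue
		}
		if gv.Group == "addon.open-cluster-management.io" {
			return true
		}
	}
	return false
}
```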
test/integration/kube/assertion_test.go (1)

233-239: ⚠️ Potential issue | 🟠 Major

Guard `InstallProgressions[i]` before indexing.

Line 234 indexes `actual.Status.InstallProgressions[i]` without checking the slice length first. During normal `Eventually` polling, that can panic before the controller has populated the progression list.

🛠️ Return a retryable error until the status is ready

```diff
+	if len(actual.Status.InstallProgressions) < len(expect) {
+		return fmt.Errorf("expected at least %d install progressions, actual: %d", len(expect), len(actual.Status.InstallProgressions))
+	}
+
 	for i, ec := range expect {
 		cond := meta.FindStatusCondition(actual.Status.InstallProgressions[i].Conditions, ec.Type)
 		if cond == nil ||
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 233-239: the assertion indexes actual.Status.InstallProgressions[i] without checking the slice length, which can panic; update the loop (where the variables are actual, expect, i, ec and meta.FindStatusCondition is used) to first verify len(actual.Status.InstallProgressions) > i and, if not, return a retryable error (e.g., fmt.Errorf with a message indicating the progression list isn't ready) so Eventually will retry until the InstallProgressions entry exists before comparing cond.Status/Reason/Message.

pkg/addonmanager/controllers/agentdeploy/utils.go (1)
60-79: ⚠️ Potential issue | 🟡 Minor

`addonAddFinalizer` skips deprecated-finalizer cleanup when the current finalizer is already present.

Line 72 checks `addon.Finalizers`, not the filtered `rst`, so `[current, deprecated]` returns before `SetFinalizers(rst)` runs. Existing addons that already carry the current finalizer never get rewritten to drop the deprecated alpha entries.

♻️ Proposed fix

```diff
 func addonAddFinalizer(addon *addonapiv1beta1.ManagedClusterAddOn, finalizer string) bool {
 	if addon.Finalizers == nil {
 		addon.SetFinalizers([]string{finalizer})
 		return true
 	}
-	var rst []string
+	var (
+		rst          []string
+		hasFinalizer bool
+	)
 	for _, f := range addon.Finalizers {
 		// remove deperecated finalizers also
 		if f == addonapiv1alpha1.AddonDeprecatedHostingManifestFinalizer ||
 			f == addonapiv1alpha1.AddonDeprecatedPreDeleteHookFinalizer ||
 			f == addonapiv1alpha1.AddonDeprecatedHostingPreDeleteHookFinalizer {
 			continue
 		}
+		if f == finalizer {
+			hasFinalizer = true
+		}
 		rst = append(rst, f)
 	}
-	for _, f := range addon.Finalizers {
-		if f == finalizer {
-			return false
-		}
-	}
+	if hasFinalizer {
+		if len(rst) != len(addon.Finalizers) {
+			addon.SetFinalizers(rst)
+			return true
+		}
+		return false
+	}
 	rst = append(rst, finalizer)
 	addon.SetFinalizers(rst)
 	return true
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/utils.go` around lines 60-79: addonAddFinalizer currently skips cleaning deprecated finalizers when the requested finalizer is already present because it checks addon.Finalizers instead of the filtered rst; to fix, after building rst (filtering out deprecated entries) check for the presence of finalizer in rst (or in addon.Finalizers) and if present call addon.SetFinalizers(rst) and return false so deprecated entries are removed, otherwise append finalizer to rst, call addon.SetFinalizers(rst) and return true; reference addonAddFinalizer, addon.Finalizers, rst, finalizer, and addon.SetFinalizers to locate and update the logic.

pkg/addonmanager/controllers/cmaconfig/controller.go (1)
161-199: ⚠️ Potential issue | 🟠 Major

Propagate spec-hash lookup failures instead of swallowing them.

If `getConfigSpecHash` fails here, the method returns `nil` and the controller patches partial status. That hides transient lister/hash errors and prevents a retry from correcting stale config hashes.

🩹 Suggested fix

```diff
 	specHash, err := c.getConfigSpecHash(defaultConfigReference.ConfigGroupResource, defaultConfigReference.DesiredConfig.ConfigReferent)
 	if err != nil {
-		return nil
+		return err
 	}
```

```diff
 	specHash, err := c.getConfigSpecHash(configReference.ConfigGroupResource, configReference.DesiredConfig.ConfigReferent)
 	if err != nil {
-		return nil
+		return err
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller.go` around lines 161-199: the updateConfigSpecHash function is swallowing errors from getConfigSpecHash (returning nil), which causes partial status patching; change both error handling sites in updateConfigSpecHash (the calls that assign specHash from c.getConfigSpecHash for DefaultConfigReferences and InstallProgressions) to return the actual err instead of nil so failures propagate to the controller reconciliation loop; ensure you update both occurrences inside updateConfigSpecHash so the controller can retry and avoid leaving stale/partial cma.Status updates.

pkg/addonmanager/controllers/addonconfig/controller.go (1)
190-205: ⚠️ Potential issue | 🟠 Major

Clear stale config state when the referenced object is deleted.

Lines 204-205 skip the missing config, but they also leave the previous `LastObservedGeneration` and `DesiredConfig.SpecHash` in `status.configReferences`. After a referenced config is removed, downstream controllers can still treat the addon as fully configured from stale status.

🩹 Suggested fix

```diff
 		if errors.IsNotFound(err) {
+			addon.Status.ConfigReferences[index].LastObservedGeneration = 0
+			if addon.Status.ConfigReferences[index].DesiredConfig != nil {
+				addon.Status.ConfigReferences[index].DesiredConfig.SpecHash = ""
+			}
 			continue
 		}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 190-205: when errors.IsNotFound(err) is hit for a configReference, clear the stale status for that reference instead of silently skipping: locate the loop handling configReference in controller.go (the block using variables configReference, lister and status.configReferences) and set the corresponding status.configReferences entry's LastObservedGeneration to 0 and DesiredConfig.SpecHash (or the DesiredConfig field on that status entry) to the empty string (or nil) before continuing; persist that updated status so downstream controllers don't rely on stale generation/hash values.

pkg/utils/csr_helpers.go (1)
191-199: ⚠️ Potential issue | 🟠 Major

`UnionCSRApprover` currently behaves like an intersection.

This returns `false` on the first rejecting approver, so approval requires every approver to pass. That contradicts the function name/comment and can reject valid CSRs when approvers are meant to be alternatives. If the all-of behavior is intentional, the helper should be renamed instead.

🔁 Match the advertised union behavior

```diff
 func UnionCSRApprover(approvers ...agent.CSRApproveFunc) agent.CSRApproveFunc {
 	return func(cluster *clusterv1.ManagedCluster, addon *addonapiv1beta1.ManagedClusterAddOn, csr *certificatesv1.CertificateSigningRequest) bool {
 		for _, approver := range approvers {
-			if !approver(cluster, addon, csr) {
-				return false
+			if approver(cluster, addon, csr) {
+				return true
 			}
 		}
-		return true
+		return false
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/csr_helpers.go` around lines 191-199: the implementation of UnionCSRApprover currently requires all approvers to return true (intersection) but should return true if any approver accepts (union); update the function UnionCSRApprover (return type agent.CSRApproveFunc) so it iterates approvers and returns true as soon as any approver(cluster, addon, csr) returns true, and only returns false after checking all approvers; alternatively, if the all-of behavior was intended, rename the function to IntersectionCSRApprover to match the behavior.

test/integration/kube/registration_test.go (1)
66-182: ⚠️ Potential issue | 🟠 Major

The beta1 registration integration coverage is effectively disabled here.

Both migrated specs are block-commented, so CI no longer verifies the `KubeClient`/`CustomSigner` registration wiring or the `UpdateStatus` reconciliation path in this file. Please re-enable them, or mark them pending/skip with an explicit reason so the coverage gap stays visible.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/registration_test.go` around lines 66-182: the two integration specs "Should setup registration successfully" and "Should update registration successfully" are currently disabled via block comments, removing CI coverage for KubeClient/CustomSigner registration and UpdateStatus reconciliation; restore test validation by either uncommenting those ginkgo.It blocks (re-enabling the tests as-is) or explicitly marking them pending/skipped with a clear reason, and ensure the tests still call createManagedClusterAddOnwithOwnerRefs(managedClusterName, addon, cma), setKubeClientDriver(managedClusterName, testAddonImpl.name, "csr"), and the gomega.Eventually blocks that fetch ManagedClusterAddOns and compare testAddonImpl.registrations[managedClusterName] to actual.Status.Registrations (and the UpdateStatus path that sets Driver:"csr") so CI will continue to verify KubeClient/CustomSigner registration and UpdateStatus reconciliation.
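If the specs do have to stay disabled temporarily, either option below keeps the gap visible in the ginkgo report rather than hiding it in a block comment (spec names are taken from the prompt above; the bodies are placeholders):

```go
// Option 1: a pending spec is reported by ginkgo instead of silently vanishing.
ginkgo.PIt("Should setup registration successfully", func() {
	// original spec body kept in place
})

// Option 2: skip with an explicit reason at the top of the body.
ginkgo.It("Should update registration successfully", func() {
	ginkgo.Skip("temporarily disabled during the v1beta1 registration migration")
	// original spec body kept in place
})
```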
test/integration/kube/cluster_management_addon_test.go (1)

56-102: ⚠️ Potential issue | 🟠 Major

Re-enable this beta1 integration spec before merging.

Block-commenting the only scenario in this file removes all coverage for the migrated `ClusterManagementAddOn` flow, including registration application and namespace propagation. If this has to stay disabled temporarily, please mark it pending/skip with a reason instead of silently dropping it from CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/cluster_management_addon_test.go` around lines 56-102: unblock the commented ginkgo.It("Should update config related object successfully") spec so the ClusterManagementAddOn integration test runs (restore the test block that creates the ClusterManagementAddOn, ManagedClusterAddOn via createManagedClusterAddOnwithOwnerRefs, calls setKubeClientDriver, and asserts status/namespace via hubAddonClient.AddonV1beta1().ManagedClusterAddOns().Get). If you must keep it disabled temporarily, do not leave it in a block comment — either convert it to a pending test with ginkgo.PIt(...) or call ginkgo.Skip("reason") at the top of the test body and include a brief reason for skipping. Ensure the test name, helper calls (createManagedClusterAddOnwithOwnerRefs, setKubeClientDriver) and cleanup (Delete on ClusterManagementAddOns) remain intact.

pkg/agent/interface.go (1)
121-146: ⚠️ Potential issue | 🟡 Minor

Update the namespace contract comments for v1beta1.

These docs still point addon authors at `ManagedClusterAddOn.spec.installNamespace`, but this migration reads the install namespace from the addon annotation. Leaving the old wording here makes the public API contract inaccurate.

Suggested wording

```diff
-	// Namespace is the namespace where registraiton credential will be put on the managed cluster. It
-	// will be overridden by installNamespace on ManagedClusterAddon spec if set
+	// Namespace is the namespace where registration credential will be put on the managed cluster. It
+	// will be overridden by the ManagedClusterAddOn install-namespace annotation if set.
 	// Deprecated: use AgentInstallNamespace instead
 	Namespace string

 	// AgentInstallNamespace returns the namespace where registration credential will be put on the managed cluster.
-	// It will override the installNamespace on ManagedClusterAddon spec if set and the returned value is not empty.
+	// It will override the ManagedClusterAddOn install-namespace annotation if the returned value is not empty.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agent/interface.go` around lines 121-146: update the doc comments on RegistrationOption.Namespace and RegistrationOption.AgentInstallNamespace to reflect the v1beta1 contract: remove references to ManagedClusterAddOn.spec.installNamespace and instead state that the install namespace is read from the ManagedClusterAddOn annotation (and that AgentInstallNamespace overrides the annotation when non-empty); keep the warning that AgentInstallNamespace must match the namespace used by the addon manifests, and reference the AgentInstallNamespaceFunc and Namespace fields so readers can find them easily.

pkg/addonfactory/addondeploymentconfig.go (1)
89-107: ⚠️ Potential issue | 🟠 Major

Don't drop proxy configs that only set `NoProxy` or `CABundle`.

The early return on line 97 only checks whether `HTTPProxy` or `HTTPSProxy` populated the map, so a valid config with just `NoProxy` or `CABundle` returns `nil` and never reaches the chart.

Suggested fix

```diff
 func ToAddOnProxyConfigValues(config addonapiv1beta1.AddOnDeploymentConfig) (Values, error) {
 	proxyConfig := map[string]string{}
 	if len(config.Spec.ProxyConfig.HTTPProxy) > 0 {
 		proxyConfig["HTTP_PROXY"] = config.Spec.ProxyConfig.HTTPProxy
 	}
 	if len(config.Spec.ProxyConfig.HTTPSProxy) > 0 {
 		proxyConfig["HTTPS_PROXY"] = config.Spec.ProxyConfig.HTTPSProxy
 	}
-	if len(proxyConfig) == 0 {
-		return nil, nil
-	}
-
 	if len(config.Spec.ProxyConfig.NoProxy) > 0 {
 		proxyConfig["NO_PROXY"] = config.Spec.ProxyConfig.NoProxy
 	}
 	if len(config.Spec.ProxyConfig.CABundle) > 0 {
 		proxyConfig["PROXY_CA_BUNDLE"] = string(config.Spec.ProxyConfig.CABundle)
 	}
+	if len(proxyConfig) == 0 {
+		return nil, nil
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonfactory/addondeploymentconfig.go` around lines 89-107: the function ToAddOnProxyConfigValues is returning early based only on HTTPProxy/HTTPSProxy, so configs that only set NoProxy or CABundle are dropped; fix by populating proxyConfig with NoProxy and CABundle before checking emptiness (i.e., move the len(proxyConfig) == 0 check to after you add config.Spec.ProxyConfig.NoProxy and CABundle, or change the check to include those fields) so that proxyConfig (used by the chart) is returned whenever any of HTTPProxy, HTTPSProxy, NoProxy, or CABundle are present.
🧹 Nitpick comments (10)
pkg/addonmanager/controllers/cmaconfig/controller_test.go (1)
375-375: Pre-existing typo: `newTestConfing` should be `newTestConfig`.

The function name contains a typo ("Confing" instead of "Config"). This is not introduced by this PR but could be addressed while making these changes.

✏️ Optional fix for the typo

```diff
-func newTestConfing(name, namespace string, generation int64) *unstructured.Unstructured {
+func newTestConfig(name, namespace string, generation int64) *unstructured.Unstructured {
```

Note: This would also require updating all call sites (lines 43, 110, 152, 219, 291, 340).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller_test.go` at line 375: rename the misspelled helper function newTestConfing to newTestConfig and update every call site that references newTestConfing to use newTestConfig (search for the symbol newTestConfing in this file to find all usages). Ensure the function signature and any references in tests (e.g., where newTestConfing(...) is invoked) are updated consistently, run go vet/go test to confirm no lingering references, and adjust any imports or comments that mention the old name.

pkg/addonmanager/controllers/certificate/csrsign_test.go (1)
119-120: Consider making addon fixtures beta1-typed in the table.

These stores now back a beta1 lister, but the test still feeds them as `runtime.Object`. Tightening the testcase field to beta1 addons, or asserting the type before `Add`, would catch any stale alpha1 helper at setup time instead of later through the typed lister.

Also applies to: 133-133

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/certificate/csrsign_test.go` around lines 119-120: the test is adding runtime.Object fixtures into a store backed by a beta1 typed lister, which can hide stale alpha1 helpers; update the table fixtures to construct beta1 ManagedClusterAddOn objects (use the v1beta1 API types) instead of generic runtime.Object, or add an explicit type assertion/convert step right before calling addonInformers.Addon().V1beta1().ManagedClusterAddOns().Informer().GetStore().Add(obj) to ensure obj is a *v1beta1.ManagedClusterAddOn and fail the test early if not — focus changes around the test table entries and the Add(obj) call in csrsign_test.go (and the similar case at the other location) so the store always receives beta1-typed objects.
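A sketch of the fail-fast option mentioned above; the fixture slice and store variable names (`c.addons`, `addonStore`) are illustrative, not the actual identifiers in the test:

```go
// Fail at setup time if a fixture is not a v1beta1 ManagedClusterAddOn,
// rather than letting the typed lister surface the mismatch later.
for _, obj := range c.addons {
	addon, ok := obj.(*addonapiv1beta1.ManagedClusterAddOn)
	if !ok {
		t.Fatalf("expected *addonapiv1beta1.ManagedClusterAddOn fixture, got %T", obj)
	}
	if err := addonStore.Add(addon); err != nil {
		t.Fatal(err)
	}
}
```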
pkg/utils/helpers_test.go (1)

70-75: Expose `Group` in `relatedObject` so the de-dupe key is fully tested.

`MergeRelatedObjects` treats `Group` as part of object identity, but every case here leaves it empty. Adding a group-aware helper/case would catch regressions where the same `Resource`/`Name`/`Namespace` exists in different API groups.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/helpers_test.go` around lines 70-75: the test helper relatedObject currently omits Group, so MergeRelatedObjects' de-duplication isn't exercised for API groups; update the helper relatedObject to accept a group parameter (or add a new relatedObjectWithGroup helper) and add a test case that creates two ObjectReference entries with identical Name/Namespace/Resource but different Group values, then assert MergeRelatedObjects treats them as distinct. Update any calls in the test file to supply a group where appropriate so the de-duplication behavior including Group is covered.
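A sketch of the group-aware helper suggested above (the exact `ObjectReference` field set should be checked against the v1beta1 type actually used by `MergeRelatedObjects`):

```go
// Group-aware fixture helper; a test case can then merge two references that
// differ only in Group and assert both survive de-duplication.
func relatedObjectWithGroup(group, resource, namespace, name string) addonapiv1beta1.ObjectReference {
	return addonapiv1beta1.ObjectReference{
		Group:     group,
		Resource:  resource,
		Namespace: namespace,
		Name:      name,
	}
}
```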
pkg/addonmanager/controllers/agentdeploy/utils.go (1)

672-683: Don't serialize incomplete config references as a valid hash set.

Line 673 silently drops refs with `DesiredConfig == nil`, so a partially populated `Status.ConfigReferences` slice becomes a smaller JSON hash map instead of forcing a retry. Returning an error here would preserve the intended "wait until fully configured" behavior.

Based on learnings: in template-based addon mode, controllers are expected to wait until `Status.ConfigReferences` contains the proper AddOnTemplate reference and SpecHash before processing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/utils.go` around lines 672-683: the code currently skips entries with v.DesiredConfig == nil when building specHashMap (loop over configReference), which silently shrinks the resulting hash map; change this to return an error instead of continue so the reconciler retries until all Status.ConfigReferences are fully populated; specifically, in the loop that constructs specHashMap (the block referencing configReference, v.DesiredConfig, resourceStr, and specHashMap), detect v.DesiredConfig == nil and return a descriptive error indicating the missing DesiredConfig for the given resource (so callers will requeue), rather than silently omitting the entry.
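A sketch of the stricter loop (the enclosing return signature and the `resourceStr` key format are illustrative; the real ones should follow the existing code in utils.go):

```go
specHashMap := map[string]string{}
for _, v := range configReference {
	resourceStr := v.Resource + "." + v.Group // illustrative key; keep the existing format
	// Return an error instead of `continue`, so the caller requeues until every
	// reference carries a DesiredConfig with a SpecHash.
	if v.DesiredConfig == nil || v.DesiredConfig.SpecHash == "" {
		return nil, fmt.Errorf("config reference %s/%s has no desired config spec hash yet",
			resourceStr, v.Name)
	}
	specHashMap[resourceStr] = v.DesiredConfig.SpecHash
}
```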
pkg/addonmanager/controllers/registration/controller.go (1)

83-117: Drop the unreachable `CustomSigner` branch.

`findExistingRegistration` is only invoked after the non-`KubeClient` early-continue on lines 123-126, so the `CustomSigner` path and its comments can never execute. Removing it would make the beta1 merge logic easier to reason about.

Also applies to: 123-126

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/registration/controller.go` around lines 83-117: the helper function findExistingRegistration contains an unreachable branch for CustomSigner because callers skip non-KubeClient configs earlier; remove the CustomSigner branch and its comments to avoid dead code and clarify the logic: locate the findExistingRegistration closure and delete the else-if block that checks config.Type == addonapiv1beta1.CustomSigner and its loop over existingRegistrations (and any accompanying comments about matching by SignerName), leaving only the KubeClient matching logic that uses newConfigs, existingRegistrations, and the kubeClientIndex/currentIndex counters.

pkg/utils/addon_config_test.go (1)
98-132: Restore the hash-mismatch case or remove the dead block.

This is still the only negative-path coverage for a mismatched config hash, but it is commented out and still uses the pre-beta1 `ConfigReferent` shape. Leaving it disabled means this migration no longer exercises the stale-config guard.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_config_test.go` around lines 98-132: restore the commented-out negative test case that covers a mismatched config hash: re-enable the test block and update its use of the pre-beta1 ConfigReferent shape to the current addonapiv1beta1.ConfigReference shape used elsewhere in tests. Specifically, keep the test using newTestAddOnDeploymentConfigGetter to return an AddOnDeploymentConfig, attach a ConfigReference entry with Group "addon.open-cluster-management.io", Resource "addondeploymentconfigs", Name "test1" and DesiredConfig.SpecHash set to "wronghash" on the ManagedClusterAddOn.Status.ConfigReferences, and assert the expected result is the empty string; alternatively, if you prefer removal, delete the dead commented block entirely so the stale-config guard negative path is not silently dropped.

pkg/addonfactory/helper.go (1)
24-26: Call out this exported signature break in the upgrade notes.

`GetValuesFromAddonAnnotation` now requires `*addonapiv1beta1.ManagedClusterAddOn`, so downstream addon factories will need source changes as part of the beta1 migration. Please document public API breaks like this in the release/migration notes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonfactory/helper.go` around lines 24-26: document the exported signature change for GetValuesFromAddonAnnotation in the release/migration notes: state that its second parameter now expects *addonapiv1beta1.ManagedClusterAddOn (instead of the previous type), note that downstream addon factories must update their call sites and imports to the beta1 API, and provide a short migration snippet and recommended steps to update code that references GetValuesFromAddonAnnotation so consumers can make the required source changes during the upgrade.

pkg/addonmanager/controllers/registration/controller_test.go (1)
979-1001: Assert `KubeClient.Driver` in the kubeclient cases.

Several table rows here differ only by `Driver` ("csr", "token", empty), but this block never checks it. A regression that drops or rewrites the beta1 driver would still pass these tests.

🧪 Tighten the kubeclient assertions

```diff
 			} else {
+				if result[i].KubeClient.Driver != expected.KubeClient.Driver {
+					t.Errorf("registration[%d].KubeClient.Driver: expected %q, got %q", i, expected.KubeClient.Driver, result[i].KubeClient.Driver)
+				}
 				expectedUser := expected.KubeClient.Subject.User
 				resultUser := result[i].KubeClient.Subject.User
 				if resultUser != expectedUser {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/registration/controller_test.go` around lines 979-1001: the kubeclient branch of the table-driven test fails to assert the KubeClient.Driver field, so add a check comparing expected.KubeClient.Driver and result[i].KubeClient.Driver whenever both KubeClient values are non-nil (mirror the existing Subject.User/Groups checks); if one is nil and the other is not, keep the existing nil-equality check pattern. Locate the assertions around KubeClient handling (references: KubeClient, expected.KubeClient, result[i].KubeClient, KubeClient.Subject.User/Groups) and insert a Driver equality assertion that reports registration[%d].KubeClient.Driver in the test error message.

pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go (1)
499-507: Use `addonapiv1beta1.SchemeGroupVersion.String()` for owner-reference API versions.

Hardcoding `"addon.open-cluster-management.io/v1beta1"` makes these test fixtures brittle on API version changes. Use the constant from the imported package instead so owner refs stay aligned with the v1beta1 package version.

♻️ Suggested cleanup

```diff
-	APIVersion: "addon.open-cluster-management.io/v1beta1",
+	APIVersion: addonapiv1beta1.SchemeGroupVersion.String(),
```

Applies to lines 501 and 568.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go` around lines 499-507: replace the hardcoded owner-reference API version string with the package constant: in the test where work.SetOwnerReferences(...) builds metav1.OwnerReference (the blocks using APIVersion: "addon.open-cluster-management.io/v1beta1"), change APIVersion to use addonapiv1beta1.SchemeGroupVersion.String() (keep Kind "ManagedClusterAddOn", Name "test", UID "", and the Controller/BlockOwnerDeletion pointers such as pTrue unchanged); apply the same replacement for the second occurrence around the other OwnerReference construction so both fixtures use addonapiv1beta1.SchemeGroupVersion.String().

test/integration/kube/agent_deploy_test.go (1)
379-387: Use the beta1 availability constant in the remaining assertions.

This file already uses `addonapiv1beta1.ManagedClusterAddOnConditionAvailable` elsewhere (line 271); the raw `"Available"` literals here should be replaced to maintain consistency and prevent issues with future API changes.

♻️ Suggested cleanup

```diff
-	if !meta.IsStatusConditionFalse(addon.Status.Conditions, "Available") {
+	if !meta.IsStatusConditionFalse(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionAvailable) {
 		return fmt.Errorf("Unexpected addon available condition, %v", addon.Status.Conditions)
 	}
```

```diff
-	if !meta.IsStatusConditionTrue(addon.Status.Conditions, "Available") {
+	if !meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionAvailable) {
 		return fmt.Errorf("Unexpected addon available condition, %v", addon.Status.Conditions)
 	}
```

```diff
-	if !meta.IsStatusConditionPresentAndEqual(addon.Status.Conditions, "Available", metav1.ConditionFalse) {
+	if !meta.IsStatusConditionPresentAndEqual(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionAvailable, metav1.ConditionFalse) {
 		return fmt.Errorf("Unexpected addon available condition, %v", addon.Status.Conditions)
 	}
```

Applies to lines: 385, 443, 563, 657, 761, 819, 931, 1020

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/agent_deploy_test.go` around lines 379-387: replace the string literal "Available" used in status condition checks with the beta API constant addonapiv1beta1.ManagedClusterAddOnConditionAvailable: locate usages around the meta.IsStatusConditionFalse/meta.IsStatusConditionTrue checks (e.g., the closure using hubAddonClient.AddonV1beta1().ManagedClusterAddOns and inspections of addon.Status.Conditions) and swap the raw "Available" argument for addonapiv1beta1.ManagedClusterAddOnConditionAvailable in all assertions listed (lines indicated in the review) so the condition checks consistently use the typed constant.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/addonmanager/controllers/agentdeploy/healthcheck_sync.go`:
- Around line 89-90: The current guard only checks for presence of the
ManagedClusterAddOnManifestApplied condition and still runs the availability
probe when that condition is False/Unknown; change the checks that use
meta.FindStatusCondition(addon.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnManifestApplied) == nil to instead fetch the
condition and require its Status == metav1.ConditionTrue before proceeding
(e.g., cond := meta.FindStatusCondition(...); if cond == nil || cond.Status !=
metav1.ConditionTrue { return nil }). Apply the same change in the second
occurrence that guards the availability probe so the probe never overwrites
state when the manifest has not been applied.
In `@pkg/addonmanager/controllers/cmamanagedby/controller.go`:
- Around line 86-92: The code currently sets the lifecycle annotation key to an
empty string which leaves the key present and still treated as "self-managed";
change the mutation on cmaCopy.Annotations to remove the key entirely: after
confirming cmaCopy.Annotations is non-nil and matches
AddonLifecycleSelfManageAnnotationValue, call delete(cmaCopy.Annotations,
addonapiv1beta1.AddonLifecycleAnnotationKey) before invoking
c.addonPatcher.PatchLabelAnnotations so ManagedByAddonManager's comma-ok check
will treat the annotation as absent; keep the existing nil-check and the
PatchLabelAnnotations call.
In `@pkg/utils/addon_config_test.go`:
- Around line 16-23: The fake getter testadcGetter.Get currently always returns
g.adc which lets tests pass even when the code requests the wrong
AddOnDeploymentConfig; update testadcGetter.Get to validate the requested
namespace and name against g.adc.Namespace and g.adc.Name (and/or
DesiredConfig.ConfigReferent values used by the code under test) and return a
not-found/error when they don't match so tests fail if the wrong referent is
requested; update newTestAddOnDeploymentConfigGetter usage in tests if needed to
construct the expected adc for validation.
In `@pkg/utils/permission.go`:
- Around line 327-330: The ensureAddonOwnerReference helper sets a beta1 owner
APIVersion but existing Role/RoleBinding objects created with v1alpha1 keep the
old ownerRef because ApplyRole and ApplyRoleBinding do not reconcile
OwnerReferences on update; update ApplyRole and ApplyRoleBinding to
merge/replace the resource.OwnerReferences with the result of
ensureAddonOwnerReference (or call ensureAddonOwnerReference before update and
ensure the OwnerReferences slice is updated on both create and update paths) so
OwnerReferences are reconciled on updates as well (referencing
ensureAddonOwnerReference, ApplyRole, and ApplyRoleBinding to locate the code).
- Around line 273-297: The empty-subject comparison in
BuildSubjectsFromRegistration is using the wrong type: change the DeepEqual
comparison to compare *subject against the zero value of
addonapiv1beta1.BaseSubject (not addonapiv1beta1.Subject) so the check can
actually detect an empty BaseSubject; keep the existing nil check and
equality.Semantic.DeepEqual(*subject, addonapiv1beta1.BaseSubject{}) to return
nil for empty registrations.
In `@test/e2e/helloworld_helm_test.go`:
- Around line 296-298: The test currently replaces all annotations via
newAddon.SetAnnotations which can drop preexisting keys (like install namespace)
— instead, read the existing annotations from newAddon (e.g., via
newAddon.GetAnnotations() or the Annotations map), merge or update the
"addon.open-cluster-management.io/values" entry with
`{"global":{"imagePullSecret":"mySecret"}}`, and then call SetAnnotations with
the merged map so other metadata annotations are preserved; reference the
newAddon variable and the SetAnnotations/GetAnnotations methods to locate and
modify this behavior.
In `@test/integration/kube/assertion_test.go`:
- Around line 94-100: The retry body in updateClusterManagementAddOn (and
likewise in updateManagedClusterAddOnStatus) mutates the retrieved object before
checking the Get error; change the gomega.Eventually callback to return error
(func() error) so you first call
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(ctx, new.Name,
metav1.GetOptions{}), check and return that err if non-nil, then mutate
old.Spec/old.Annotations and call Update with the same passed ctx, returning any
update error so Eventually can retry and preserve the real failure.
- Around line 135-157: The current logic only updates an existing KubeClient
registration and silently does nothing if Registrations contains other types;
modify the block handling addon.Status.Registrations so that it preserves all
existing entries and, if no entry with Type == addonapiv1beta1.KubeClient is
found, appends a new addonapiv1beta1.RegistrationConfig with Type:
addonapiv1beta1.KubeClient and KubeClient:
&addonapiv1beta1.KubeClientConfig{Driver: driver}; keep the existing loop that
sets KubeClient.Driver when found, and then call
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).UpdateStatus(...)
as before to persist the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 21a5ea49-1eb6-4f83-bc0c-68933d11634a
⛔ Files ignored due to path filters (2)
vendor/open-cluster-management.io/api/addon/v1beta1/types_clustermanagementaddon.go is excluded by !vendor/**
vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go is excluded by !vendor/**
📒 Files selected for processing (83)
cmd/example/helloworld/main.go
cmd/example/helloworld_helm/main.go
examples/helloworld/helloworld.go
examples/helloworld/helloworld_test.go
examples/helloworld_agent/agent.go
examples/helloworld_helm/helloworld_helm.go
examples/helloworld_helm/helloworld_helm_test.go
examples/rbac/rbac.go
pkg/addonfactory/addondeploymentconfig.go
pkg/addonfactory/addondeploymentconfig_test.go
pkg/addonfactory/addonfactory.go
pkg/addonfactory/helm_agentaddon.go
pkg/addonfactory/helm_agentaddon_test.go
pkg/addonfactory/helper.go
pkg/addonfactory/template_agentaddon.go
pkg/addonfactory/template_agentaddon_test.go
pkg/addonfactory/test_helper.go
pkg/addonmanager/addontesting/helpers.go
pkg/addonmanager/base_manager.go
pkg/addonmanager/cloudevents/manager.go
pkg/addonmanager/constants/constants.go
pkg/addonmanager/controllers/addonconfig/controller.go
pkg/addonmanager/controllers/addonconfig/controller_test.go
pkg/addonmanager/controllers/agentdeploy/controller.go
pkg/addonmanager/controllers/agentdeploy/default_hook_sync.go
pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go
pkg/addonmanager/controllers/agentdeploy/default_sync.go
pkg/addonmanager/controllers/agentdeploy/default_sync_test.go
pkg/addonmanager/controllers/agentdeploy/healthcheck_sync.go
pkg/addonmanager/controllers/agentdeploy/healthcheck_sync_test.go
pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync.go
pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
pkg/addonmanager/controllers/agentdeploy/hosted_sync.go
pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
pkg/addonmanager/controllers/agentdeploy/util_test.go
pkg/addonmanager/controllers/agentdeploy/utils.go
pkg/addonmanager/controllers/certificate/csrapprove.go
pkg/addonmanager/controllers/certificate/csrapprove_test.go
pkg/addonmanager/controllers/certificate/csrsign.go
pkg/addonmanager/controllers/certificate/csrsign_test.go
pkg/addonmanager/controllers/cmaconfig/controller.go
pkg/addonmanager/controllers/cmaconfig/controller_test.go
pkg/addonmanager/controllers/cmamanagedby/controller.go
pkg/addonmanager/controllers/cmamanagedby/controller_test.go
pkg/addonmanager/controllers/registration/controller.go
pkg/addonmanager/controllers/registration/controller_test.go
pkg/addonmanager/manager.go
pkg/agent/interface.go
pkg/index/index.go
pkg/utils/addon.go
pkg/utils/addon_config.go
pkg/utils/addon_config_test.go
pkg/utils/addon_test.go
pkg/utils/csr_helper_test.go
pkg/utils/csr_helpers.go
pkg/utils/gvr.go
pkg/utils/helpers.go
pkg/utils/helpers_test.go
pkg/utils/image.go
pkg/utils/image_test.go
pkg/utils/permission.go
pkg/utils/permission_test.go
pkg/utils/probe_helper.go
test/e2e/helloworld_helm_test.go
test/e2e/helloworld_test.go
test/e2ehosted/helloworld_hosted_test.go
test/e2etoken/helloworld_token_test.go
test/integration/cloudevents/agent_deploy_test.go
test/integration/cloudevents/agent_hook_deploy_test.go
test/integration/cloudevents/agent_hosting_deploy_test.go
test/integration/cloudevents/agent_hosting_hook_deploy_test.go
test/integration/cloudevents/suite_test.go
test/integration/kube/agent_deploy_test.go
test/integration/kube/agent_hook_deploy_test.go
test/integration/kube/agent_hosting_deploy_test.go
test/integration/kube/agent_hosting_hook_deploy_test.go
test/integration/kube/assertion_test.go
test/integration/kube/cluster_management_addon_test.go
test/integration/kube/csr_test.go
test/integration/kube/deleted_managed_cluster_create_addon_test.go
test/integration/kube/multiworks_test.go
test/integration/kube/registration_test.go
test/integration/kube/suite_test.go
| func (g *testadcGetter) Get(ctx context.Context, | ||
| namespace, name string) (*addonapiv1alpha1.AddOnDeploymentConfig, error) { | ||
| namespace, name string) (*addonapiv1beta1.AddOnDeploymentConfig, error) { | ||
| return g.adc, nil | ||
| } | ||
|
|
| // newTestAddOnDeploymentConfigGetter returns a AddOnDeploymentConfigGetter for testing | ||
| func newTestAddOnDeploymentConfigGetter(adc *addonapiv1alpha1.AddOnDeploymentConfig) AddOnDeploymentConfigGetter { | ||
| func newTestAddOnDeploymentConfigGetter(adc *addonapiv1beta1.AddOnDeploymentConfig) AddOnDeploymentConfigGetter { | ||
| return &testadcGetter{adc: adc} |
Make the fake getter validate the requested config.
Get always returns g.adc, so these cases still pass if the beta1 migration reads the wrong referent field or asks for the wrong AddOnDeploymentConfig. That leaves DesiredConfig.ConfigReferent effectively unverified.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/utils/addon_config_test.go` around lines 16 - 23, The fake getter
testadcGetter.Get currently always returns g.adc which lets tests pass even when
the code requests the wrong AddOnDeploymentConfig; update testadcGetter.Get to
validate the requested namespace and name against g.adc.Namespace and g.adc.Name
(and/or DesiredConfig.ConfigReferent values used by the code under test) and
return a not-found/error when they don't match so tests fail if the wrong
referent is requested; update newTestAddOnDeploymentConfigGetter usage in tests
if needed to construct the expected adc for validation.
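To make the suggestion concrete, here is a minimal sketch of a validating fake getter. It is an illustration only: the type name is invented, and the NotFound error construction and the GroupResource value are assumptions rather than the repository's implementation.

```go
package utils

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	addonapiv1beta1 "open-cluster-management.io/api/addon/v1beta1"
)

// validatingADCGetter is a hypothetical test double: it only returns the
// stored AddOnDeploymentConfig when the requested namespace/name match it,
// so a caller that asks for the wrong referent fails the test.
type validatingADCGetter struct {
	adc *addonapiv1beta1.AddOnDeploymentConfig
}

func (g *validatingADCGetter) Get(ctx context.Context,
	namespace, name string) (*addonapiv1beta1.AddOnDeploymentConfig, error) {
	if g.adc == nil || g.adc.Namespace != namespace || g.adc.Name != name {
		// Assumed GroupResource; adjust to whatever the real tests expect.
		gr := schema.GroupResource{Group: "addon.open-cluster-management.io", Resource: "addondeploymentconfigs"}
		return nil, apierrors.NewNotFound(gr, name)
	}
	return g.adc, nil
}
```

With a getter like this, a test that reads the wrong referent field sees a NotFound error instead of silently passing.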
| newAddon.SetAnnotations(map[string]string{ | ||
| "addon.open-cluster-management.io/values": `{"global":{"imagePullSecret":"mySecret"}}`, | ||
| }) |
Merge the new value into the existing annotation map.
In beta1, install namespace and other addon behavior now live in metadata.annotations. Replacing the whole map here can drop those keys and change the addon behavior under test.
🩹 Proposed fix
- newAddon.SetAnnotations(map[string]string{
- "addon.open-cluster-management.io/values": `{"global":{"imagePullSecret":"mySecret"}}`,
- })
+ annotations := newAddon.GetAnnotations()
+ if annotations == nil {
+ annotations = map[string]string{}
+ }
+ annotations["addon.open-cluster-management.io/values"] = `{"global":{"imagePullSecret":"mySecret"}}`
+ newAddon.SetAnnotations(annotations)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| newAddon.SetAnnotations(map[string]string{ | |
| "addon.open-cluster-management.io/values": `{"global":{"imagePullSecret":"mySecret"}}`, | |
| }) | |
| annotations := newAddon.GetAnnotations() | |
| if annotations == nil { | |
| annotations = map[string]string{} | |
| } | |
| annotations["addon.open-cluster-management.io/values"] = `{"global":{"imagePullSecret":"mySecret"}}` | |
| newAddon.SetAnnotations(annotations) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/e2e/helloworld_helm_test.go` around lines 296 - 298, The test currently
replaces all annotations via newAddon.SetAnnotations which can drop preexisting
keys (like install namespace) — instead, read the existing annotations from
newAddon (e.g., via newAddon.GetAnnotations() or the Annotations map), merge or
update the "addon.open-cluster-management.io/values" entry with
`{"global":{"imagePullSecret":"mySecret"}}`, and then call SetAnnotations with
the merged map so other metadata annotations are preserved; reference the
newAddon variable and the SetAnnotations/GetAnnotations methods to locate and
modify this behavior.
| func updateClusterManagementAddOn(ctx context.Context, new *addonapiv1beta1.ClusterManagementAddOn) { | ||
| gomega.Eventually(func() bool { | ||
| old, err := hubAddonClient.AddonV1alpha1().ClusterManagementAddOns().Get(context.Background(), new.Name, metav1.GetOptions{}) | ||
| old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(context.Background(), new.Name, metav1.GetOptions{}) | ||
| old.Spec = new.Spec | ||
| old.Annotations = new.Annotations | ||
| _, err = hubAddonClient.AddonV1alpha1().ClusterManagementAddOns().Update(context.Background(), old, metav1.UpdateOptions{}) | ||
| _, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(context.Background(), old, metav1.UpdateOptions{}) | ||
| if err == nil { |
Check the Get error before mutating old.
Line 96 and Line 164 write into old immediately after Get. If the read fails, these helpers can panic inside Eventually instead of retrying cleanly. Switching the poll body to func() error also preserves the real failure, and lets the API calls use the passed ctx.
🛠️ Safer retry pattern
func updateClusterManagementAddOn(ctx context.Context, new *addonapiv1beta1.ClusterManagementAddOn) {
- gomega.Eventually(func() bool {
- old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(context.Background(), new.Name, metav1.GetOptions{})
- old.Spec = new.Spec
- old.Annotations = new.Annotations
- _, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(context.Background(), old, metav1.UpdateOptions{})
- if err == nil {
- return true
- }
- return false
- }, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue())
+ gomega.Eventually(func() error {
+ old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(ctx, new.Name, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+ old.Spec = new.Spec
+ old.Annotations = new.Annotations
+ _, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(ctx, old, metav1.UpdateOptions{})
+ return err
+ }, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred())
}
Apply the same pattern in updateManagedClusterAddOnStatus.
Also applies to: 162-167
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/kube/assertion_test.go` around lines 94 - 100, The retry
body in updateClusterManagementAddOn (and likewise in
updateManagedClusterAddOnStatus) mutates the retrieved object before checking
the Get error; change the gomega.Eventually callback to return error (func()
error) so you first call
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(ctx, new.Name,
metav1.GetOptions{}), check and return that err if non-nil, then mutate
old.Spec/old.Annotations and call Update with the same passed ctx, returning any
update error so Eventually can retry and preserve the real failure.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
test/e2e/helloworld_helm_test.go (1)
49-57: ⚠️ Potential issue | 🟡 Minor: Return success after the create path.
When Line 52 creates the addon after a
NotFound, Line 57 still returns the original Get error. That makes the first Eventually poll fail even after a successful create.
Suggested fix
 _, err = hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Get(context.Background(), helloWorldHelmAddonName, metav1.GetOptions{})
 if err != nil {
     if errors.IsNotFound(err) {
         _, cerr := hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Create(context.Background(), &helloworldhelmAddon, metav1.CreateOptions{})
         if cerr != nil {
             return cerr
         }
+        return nil
     }
     return err
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/helloworld_helm_test.go` around lines 49 - 57, The Get path for hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Get(...) returns the original Get error even after you successfully create the addon; change the NotFound branch so that after Create(context.Background(), &helloworldhelmAddon, metav1.CreateOptions{}) succeeds you return success (nil) or otherwise clear the error instead of falling through to "return err" — update the logic around helloWorldHelmAddonName and helloworldhelmAddon so the function returns nil right after a successful Create.pkg/agent/interface.go (1)
136-145: ⚠️ Potential issue | 🟡 Minor: Update the install namespace contract text.
These comments still point readers to
ManagedClusterAddOn.spec.installNamespace, but the rest of this migration moves install-namespace handling to annotations. Leaving the old field here will send implementers to the wrong source of truth.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agent/interface.go` around lines 136 - 145, Update the doc comments for the Namespace field and AgentInstallNamespace to stop referencing ManagedClusterAddOn.spec.installNamespace and instead state that install-namespace is now provided via annotations (e.g., the install-namespace annotation on the ManagedClusterAddOn resource); update the wording in the comment on AgentInstallNamespace (and the Deprecated Namespace comment) to explain that the annotation overrides the namespace and that the addon agent manifests must use the returned namespace, and remove or replace any guidance directing readers to ManagedClusterAddOn.spec.installNamespace.pkg/addonmanager/controllers/cmaconfig/controller.go (1)
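For readers following the migration, a small sketch of how an agent implementation might resolve the effective namespace once the contract text points at the annotation. The helper name and the fallback namespace are assumptions for illustration, not something the framework mandates.

```go
package agent

import (
	addonapiv1beta1 "open-cluster-management.io/api/addon/v1beta1"
)

// resolveAgentInstallNamespace is a hypothetical helper: it prefers the
// v1beta1 install-namespace annotation and otherwise falls back to a
// default value that is assumed here.
func resolveAgentInstallNamespace(addon *addonapiv1beta1.ManagedClusterAddOn) string {
	if ns := addon.GetAnnotations()[addonapiv1beta1.InstallNamespaceAnnotation]; ns != "" {
		return ns
	}
	return "open-cluster-management-agent-addon" // assumed default namespace
}
```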
175-178: ⚠️ Potential issue | 🟠 Major: Propagate config lookup failures instead of returning success.
Line 176 and Line 196 convert
getConfigSpecHash failures into nil. That makes sync treat transient lister/read errors as success, skip the retry, and potentially patch a partially updated status.
Suggested fix
 specHash, err := c.getConfigSpecHash(defaultConfigReference.ConfigGroupResource, defaultConfigReference.DesiredConfig.ConfigReferent)
 if err != nil {
-    return nil
+    return err
 }
@@
 specHash, err := c.getConfigSpecHash(configReference.ConfigGroupResource, configReference.DesiredConfig.ConfigReferent)
 if err != nil {
-    return nil
+    return err
 }
Also applies to: 195-198
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller.go` around lines 175 - 178, The code is swallowing errors from getConfigSpecHash by returning nil (success) inside the sync/reconcile path; change the handlers that call getConfigSpecHash (where defaultConfigReference is used) to propagate the error instead of returning nil so transient lister/read errors cause a retry. Specifically, update the branches that call c.getConfigSpecHash(defaultConfigReference.ConfigGroupResource, defaultConfigReference.DesiredConfig.ConfigReferent) (and the other similar call) to return the error up the call stack (e.g., return err) rather than nil, ensuring sync/patch logic only proceeds on true success.pkg/addonmanager/controllers/cmaconfig/controller_test.go (1)
347-354: ⚠️ Potential issue | 🟠 Major: Use the
ClusterManagementAddOns informer in TestEnqueue.
Line 349 wires
ManagedClusterAddOns(), but the test fixtures contain ClusterManagementAddOn objects. The controller indexes ClusterManagementAddOn in production (controller.go:49). This mismatch means the test adds the wrong object type to the cache and can pass while the real wiring is broken.
Suggested fix
- addonInformer := addonInformers.Addon().V1beta1().ManagedClusterAddOns().Informer()
+ addonInformer := addonInformers.Addon().V1beta1().ClusterManagementAddOns().Informer()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller_test.go` around lines 347 - 354, TestEnqueue is using the ManagedClusterAddOns informer but the fixtures are ClusterManagementAddOn objects and the real controller (cmaConfigController) indexes ClusterManagementAddOn; change the informer wiring in the test to use addoninformers.Addon().V1beta1().ClusterManagementAddOns().Informer() so the indexer assigned to cmaConfigController.clusterManagementAddonIndexer matches the fixture type and the production controller behavior.pkg/addonmanager/controllers/agentdeploy/utils.go (1)
60-78: ⚠️ Potential issue | 🟡 Minor: Persist the deprecated-finalizer cleanup even when the target finalizer already exists.
If
addon.Finalizers already contains finalizer, this returns before rst is written back. In the upgrade case where the slice contains both the beta finalizer and deprecated alpha finalizers, the helper reports “no change” and leaves the deprecated entries behind.
🧹 Proposed fix
func addonAddFinalizer(addon *addonapiv1beta1.ManagedClusterAddOn, finalizer string) bool { if addon.Finalizers == nil { addon.SetFinalizers([]string{finalizer}) return true } - var rst []string + var ( + rst []string + hasTarget bool + ) for _, f := range addon.Finalizers { + if f == finalizer { + hasTarget = true + } // remove deprecated finalizers also if f == addonapiv1alpha1.AddonDeprecatedHostingManifestFinalizer || f == addonapiv1alpha1.AddonDeprecatedPreDeleteHookFinalizer || f == addonapiv1alpha1.AddonDeprecatedHostingPreDeleteHookFinalizer { continue @@ - for _, f := range addon.Finalizers { - if f == finalizer { - return false - } - } - - rst = append(rst, finalizer) - addon.SetFinalizers(rst) - return true + if hasTarget { + if len(rst) != len(addon.Finalizers) { + addon.SetFinalizers(rst) + return true + } + return false + } + + addon.SetFinalizers(append(rst, finalizer)) + return true }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/utils.go` around lines 60 - 78, The current logic iterates addon.Finalizers to build rst without deprecated entries but returns early if the target finalizer is already present, so deprecated finalizers aren't persisted; change the flow in the helper that reads addon.Finalizers and uses finalizer (and the deprecated constants addonapiv1alpha1.AddonDeprecatedHostingManifestFinalizer, AddonDeprecatedPreDeleteHookFinalizer, AddonDeprecatedHostingPreDeleteHookFinalizer) to always write back the cleaned rst via addon.SetFinalizers(rst), and only avoid appending a duplicate finalizer if finalizer already exists (i.e., detect presence but do not return before calling addon.SetFinalizers). Ensure you reference addon.Finalizers, finalizer, and addon.SetFinalizers in the fix.pkg/addonfactory/template_agentaddon.go (1)
126-143: ⚠️ Potential issue | 🟠 Major: Empty namespace from
agentInstallNamespace hook cannot signal "use template namespace" due to the length check on line 140.
The code correctly uses
InstallNamespaceAnnotation from the v1beta1 API, but the empty string handling is incorrect. According to the AddOnDeploymentConfigSpec.AgentInstallNamespace documentation, template-type addons should be able to set an empty string "" to use the namespace defined in the addonTemplate. However, the current logic at lines 140-142 ignores empty returns from the agentInstallNamespace hook: if len(ns) > 0 prevents the hook's explicit empty value from overriding the annotation or default. This breaks the intended behavior where returning "" from the hook should mean "keep the template namespace." Fix the condition to distinguish between "hook not called" and "hook returned empty string," allowing empty values to be properly preserved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonfactory/template_agentaddon.go` around lines 126 - 143, The agentInstallNamespace hook return is being ignored when it intentionally returns an empty string because the code checks `if len(ns) > 0` before assigning; update the logic in the block that calls `a.agentInstallNamespace(addon)` so that on successful call (err == nil) you always set `installNamespace = ns` (even when ns == "") instead of only when len(ns) > 0, preserving the hook's explicit empty-string signal to use the template namespace; keep the existing nil-check for `a.agentInstallNamespace` and the error handling around the call.test/integration/kube/assertion_test.go (1)
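A minimal sketch of the corrected flow under the assumptions in the comment above; the helper name and hook signature are hypothetical and only show how an explicit empty string from the hook can be preserved.

```go
package addonfactory

import (
	addonapiv1beta1 "open-cluster-management.io/api/addon/v1beta1"
)

// resolveTemplateInstallNamespace is a hypothetical helper: when the hook is
// set, its result always wins, including an explicit "" that means "keep the
// namespace defined in the addon template".
func resolveTemplateInstallNamespace(
	addon *addonapiv1beta1.ManagedClusterAddOn,
	hook func(*addonapiv1beta1.ManagedClusterAddOn) (string, error),
) (string, error) {
	ns := addon.GetAnnotations()[addonapiv1beta1.InstallNamespaceAnnotation]
	if hook == nil {
		return ns, nil
	}
	hooked, err := hook(addon)
	if err != nil {
		return "", err
	}
	return hooked, nil // "" is a deliberate signal, not a missing value
}
```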
91-102: ⚠️ Potential issue | 🟡 Minor: Check the
Get error before mutating old.
oldimmediately afterGeton line 93. If the read fails, this helper will panic insideEventuallyinstead of retrying cleanly. Theerrvariable is shadowed by the:=on line 93 and then reassigned on line 96.🛠️ Safer retry pattern
func updateClusterManagementAddOn(ctx context.Context, new *addonapiv1beta1.ClusterManagementAddOn) { - gomega.Eventually(func() bool { - old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(context.Background(), new.Name, metav1.GetOptions{}) + gomega.Eventually(func() error { + old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(ctx, new.Name, metav1.GetOptions{}) + if err != nil { + return err + } old.Spec = new.Spec old.Annotations = new.Annotations - _, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(context.Background(), old, metav1.UpdateOptions{}) - if err == nil { - return true - } - return false - }, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue()) + _, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(ctx, old, metav1.UpdateOptions{}) + return err + }, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred()) }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 91 - 102, In updateClusterManagementAddOn, avoid mutating old before checking the Get error and stop shadowing err: call hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get into variables where err is checked immediately (do not use := that shadows outer err), return false and retry if err != nil or old == nil, and only then set old.Spec and old.Annotations and call Update; ensure the Get error path returns false so Eventually will retry rather than panic.
♻️ Duplicate comments (3)
test/e2e/helloworld_helm_test.go (1)
296-298: ⚠️ Potential issue | 🟠 Major: Don't replace the full annotation map.
Line 296 overwrites all existing annotations. In beta1, addon behavior such as install namespace can also live in
metadata.annotations, so this can silently drop required keys and change what the test is exercising.
Suggested fix
- newAddon.SetAnnotations(map[string]string{
-     "addon.open-cluster-management.io/values": `{"global":{"imagePullSecret":"mySecret"}}`,
- })
+ annotations := newAddon.GetAnnotations()
+ if annotations == nil {
+     annotations = map[string]string{}
+ }
+ annotations["addon.open-cluster-management.io/values"] = `{"global":{"imagePullSecret":"mySecret"}}`
+ newAddon.SetAnnotations(annotations)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/helloworld_helm_test.go` around lines 296 - 298, The test currently calls newAddon.SetAnnotations(...) which replaces the entire annotations map and can drop other important keys (e.g., install namespace). Instead merge the new key into the existing annotations: retrieve existing annotations from newAddon (e.g., via newAddon.GetAnnotations()), create or reuse the map, set the "addon.open-cluster-management.io/values" entry to `{"global":{"imagePullSecret":"mySecret"}}`, then call newAddon.SetAnnotations(mergedMap) so only that key is added/updated and no other annotations are lost.pkg/utils/permission.go (1)
273-298: ⚠️ Potential issue | 🟠 Major: The empty-subject check is still comparing mismatched types.
The variable
subject is *addonapiv1beta1.BaseSubject (as assigned on lines 282 and 290), but line 296 compares *subject against addonapiv1beta1.Subject{}. Since BaseSubject and Subject are distinct Go types (Subject embeds BaseSubject), equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) will never be true, even when the subject is empty.
This means empty subjects will not be filtered out as intended.
🐛 Proposed fix
 // If no registration config found or subject is empty, return nil
- if subject == nil || equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) {
+ if subject == nil || (subject.User == "" && len(subject.Groups) == 0) {
     return nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission.go` around lines 273 - 298, In BuildSubjectsFromRegistration, the empty-subject check compares *subject (type addonapiv1beta1.BaseSubject) to addonapiv1beta1.Subject{}, which are different types; change the comparison to compare against addonapiv1beta1.BaseSubject{} (or construct a Subject from the BaseSubject if you intend to compare full Subject) so equality.Semantic.DeepEqual(*subject, addonapiv1beta1.BaseSubject{}) correctly detects empty BaseSubject; keep the existing nil check for subject as-is.test/integration/kube/assertion_test.go (1)
159-169: ⚠️ Potential issue | 🟡 Minor: Check the
Geterror before mutatingold.Same issue as
updateClusterManagementAddOn: Line 162 writes intooldimmediately afterGeton line 161. If the read fails, this helper will panic insideEventuallyinstead of retrying cleanly.🛠️ Safer retry pattern
func updateManagedClusterAddOnStatus(ctx context.Context, new *addonapiv1beta1.ManagedClusterAddOn) { - gomega.Eventually(func() bool { - old, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(new.Namespace).Get(context.Background(), new.Name, metav1.GetOptions{}) + gomega.Eventually(func() error { + old, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(new.Namespace).Get(ctx, new.Name, metav1.GetOptions{}) + if err != nil { + return err + } old.Status = new.Status - _, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(old.Namespace).UpdateStatus(context.Background(), old, metav1.UpdateOptions{}) - if err == nil { - return true - } - return false - }, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue()) + _, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(old.Namespace).UpdateStatus(ctx, old, metav1.UpdateOptions{}) + return err + }, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred()) }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 159 - 169, In updateManagedClusterAddOnStatus, don't mutate old immediately after the Get call; check the error returned by hubAddonClient.AddonV1beta1().ManagedClusterAddOns(...).Get(...) and if err != nil return false so Eventually will retry safely instead of panicking when old is nil, otherwise proceed to set old.Status and call UpdateStatus; update the closure to validate the GET error before modifying or using old.
🧹 Nitpick comments (7)
pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go (1)
44-53: Consider extracting the hardcoded APIVersion string to a constant.
The APIVersion
"addon.open-cluster-management.io/v1beta1"is hardcoded in multiple places (lines 46, 168, 235). While acceptable for test code, extracting this to a constant would reduce maintenance burden if the API version changes again.♻️ Optional: Extract APIVersion constant
+const testAddonAPIVersion = "addon.open-cluster-management.io/v1beta1"
+
 func getDeployWork() *workapiv1.ManifestWork {
     // ...
     work.SetOwnerReferences([]metav1.OwnerReference{
         {
-            APIVersion: "addon.open-cluster-management.io/v1beta1",
+            APIVersion: testAddonAPIVersion,
             Kind:       "ManagedClusterAddOn",
Apply the same change at lines 168 and 235.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go` around lines 44 - 53, Extract the repeated APIVersion string into a shared constant (e.g., const testManagedClusterAddOnAPIVersion = "addon.open-cluster-management.io/v1beta1") and replace the hardcoded literal occurrences where OwnerReferences are constructed in default_hook_sync_test.go (for example in the calls to work.SetOwnerReferences and any other OwnerReference literals used in that test file). Update all uses in the same file to reference the new constant so the APIVersion is defined in one place and easy to change.test/integration/cloudevents/agent_hook_deploy_test.go (1)
259-269: Use the v1beta1 constant instead of hardcoded string.
Line 265 uses the hardcoded string
"HookManifestCompleted"while the constantaddonapiv1beta1.ManagedClusterAddOnHookManifestCompletedexists and is used consistently throughout the codebase for the same condition check. Replace with the constant for consistency and maintainability.♻️ Suggested change
- if !meta.IsStatusConditionFalse(addon.Status.Conditions, "HookManifestCompleted") {
+ if !meta.IsStatusConditionFalse(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/cloudevents/agent_hook_deploy_test.go` around lines 259 - 269, Replace the hardcoded condition string in the test's meta.IsStatusConditionFalse call with the shared constant: use addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted instead of "HookManifestCompleted" inside the Eventually closure that checks addon.Status.Conditions (the call to meta.IsStatusConditionFalse); this keeps the test consistent with the rest of the codebase.pkg/addonmanager/controllers/agentdeploy/default_sync_test.go (1)
189-190: Use addonapiv1beta1.GroupVersion.String() for the owner reference APIVersion.
The three hardcoded
"addon.open-cluster-management.io/v1beta1"literals at lines 189-190, 312-313, and 368-369 create manual migration points. Replace them with the exportedGroupVersionhelper from the addon API package so future API bumps are centralized in the package rather than scattered across test fixtures.Lines to update: 189, 312, 368
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/default_sync_test.go` around lines 189 - 190, Replace the three hardcoded APIVersion string literals used in owner references with the centralized GroupVersion helper from the addon API package: instead of "addon.open-cluster-management.io/v1beta1" set the APIVersion to addonapiv1beta1.GroupVersion.String(); update the three occurrences in default_sync_test.go where an owner reference APIVersion is assigned (the APIVersion field alongside Kind: "ManagedClusterAddOn") so future API version bumps are handled by the addonapiv1beta1.GroupVersion definition.test/integration/kube/cluster_management_addon_test.go (1)
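As a concrete illustration of this suggestion, a hedged sketch of a small fixture helper; the helper name is invented here, and the field values mirror what the test fixtures already set.

```go
package agentdeploy

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	addonapiv1beta1 "open-cluster-management.io/api/addon/v1beta1"
	workapiv1 "open-cluster-management.io/api/work/v1"
)

// setAddonOwnerRef is a hypothetical fixture helper: the APIVersion comes
// from the addon API package, so a future version bump only changes one place.
func setAddonOwnerRef(work *workapiv1.ManifestWork, addon *addonapiv1beta1.ManagedClusterAddOn) {
	work.SetOwnerReferences([]metav1.OwnerReference{{
		APIVersion: addonapiv1beta1.GroupVersion.String(),
		Kind:       "ManagedClusterAddOn",
		Name:       addon.Name,
		UID:        addon.UID,
	}})
}
```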
41-44: Assert the beta1 registration payload directly.
This test now exercises
RegistrationConfig.Type and the kube-client driver, but it only verifies RegistrationApplied and Status.Namespace. Please also assert the returned registration entry carries the expected beta1 values, otherwise a broken field mapping can slip through unnoticed.
Also applies to: 85-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/cluster_management_addon_test.go` around lines 41 - 44, The test currently only checks RegistrationApplied and Status.Namespace but doesn't verify the actual registration payload; update the assertions after the code that sets testAddonImpl.registrations[managedClusterName] (and the analogous block around lines 85-99) to fetch the stored registration entry and assert its fields match the expected addonapiv1beta1.RegistrationConfig (e.g., that Type == addonapiv1beta1.KubeClient and any other populated fields are equal to the expected values). Locate the verification code around testAddonImpl.registrations and managedClusterName and add explicit equality checks on the returned registration object rather than relying solely on RegistrationApplied/Status.Namespace.test/integration/kube/agent_hook_deploy_test.go (2)
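To make the missing assertion concrete, a hedged sketch of a helper the test could call. The Type field and the KubeClient constant are taken from the review prompt above and are assumptions about the v1beta1 shape, not verified API names.

```go
package kube

import (
	"fmt"

	addonapiv1beta1 "open-cluster-management.io/api/addon/v1beta1"
)

// assertKubeClientRegistration is a hypothetical check: it fails unless the
// recorded registration entry carries the expected beta1 registration type.
func assertKubeClientRegistration(regs []addonapiv1beta1.RegistrationConfig) error {
	if len(regs) == 0 {
		return fmt.Errorf("no registration entry recorded")
	}
	if regs[0].Type != addonapiv1beta1.KubeClient {
		return fmt.Errorf("unexpected registration type %v", regs[0].Type)
	}
	return nil
}
```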
122-131: Use a non-default install namespace here.
This sets the beta1 install-namespace annotation to
"default", and this test already includes hook content targeting"default". That means the spec can still pass even if the controller ignores the new annotation path entirely. Use a sentinel namespace here and assert the generated work or addon status reflects it.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/agent_hook_deploy_test.go` around lines 122 - 131, The test currently sets addonapiv1beta1.InstallNamespaceAnnotation to "default", which masks the new annotation-path behavior; update the ManagedClusterAddOn object (the addon variable created for testAddonImpl.name) to use a sentinel namespace (e.g., "sentinel-install-namespace") instead of "default", then adjust the assertions that inspect the generated Work or the ManagedClusterAddOn status to expect that sentinel namespace; ensure createManagedClusterAddOnwithOwnerRefs is still used to create the addon and that any checks referencing the install namespace use addonapiv1beta1.InstallNamespaceAnnotation and the sentinel value.
173-180: Replace raw beta1 condition names with exported constants.
The rest of this migration already moved to beta1 types and clients, but these assertions still depend on string literals.
addon/v1beta1 exports ManagedClusterAddOnManifestApplied and ManagedClusterAddOnHookManifestCompleted, so using them here keeps the test aligned with the API surface. (pkg.go.dev)
♻️ Proposed cleanup
- if !meta.IsStatusConditionTrue(addon.Status.Conditions, "ManifestApplied") {
+ if !meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) {
     return fmt.Errorf("Unexpected addon applied condition, %v", addon.Status.Conditions)
 }
@@
- if !meta.IsStatusConditionFalse(addon.Status.Conditions, "HookManifestCompleted") {
+ if !meta.IsStatusConditionFalse(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted) {
     return fmt.Errorf("Unexpected addon applied condition, %v", addon.Status.Conditions)
 }
Also applies to: 203-210
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/agent_hook_deploy_test.go` around lines 173 - 180, Replace the hard-coded condition name strings used in the meta.IsStatusConditionTrue checks with the exported constants from addon/v1beta1: use addonv1beta1.ManagedClusterAddOnManifestApplied instead of "ManifestApplied" and addonv1beta1.ManagedClusterAddOnHookManifestCompleted for the hook-completed checks; update the meta.IsStatusConditionTrue calls (the checks on addon.Status.Conditions in the Eventually blocks around the meta.IsStatusConditionTrue usages) to reference these constants and add the corresponding import alias (addonv1beta1) where needed; apply the same replacement to the other occurrence mentioned (lines 203-210).hack/fix-crd-storage-version.sh (1)
37-46: The storage version detection relies on ordering assumptions.
The script assumes the first
storage: entry corresponds to v1alpha1 and the last to v1beta1. If the CRD structure changes or versions are reordered, this detection will silently produce incorrect results.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hack/fix-crd-storage-version.sh` around lines 37 - 46, The current check using first_storage and second_storage reads the first and last "storage:" entries from ${CRD_FILE} and assumes they map to v1alpha1 and v1beta1 respectively; update the detection to locate the "storage:" value for each specific version name instead of relying on order (e.g., search for the line(s) containing "name: v1alpha1" and "name: v1beta1" and then read the subsequent/associated "storage:" value), adjust the logic around the variables first_storage and second_storage to reflect version-specific checks, and add a short comment explaining the assumption if you keep any ordering-based logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@hack/fix-crd-storage-version.sh`:
- Around line 50-56: The three in-place sed calls that modify "${CRD_FILE}"
using 'sed -i' (the lines that swap storage: true/false and use
TEMP_TRUE_MARKER) are not portable to macOS; change them to a portable approach
by either (A) detecting Darwin and using the macOS-friendly 'sed -i '' ...'
invocation for those three replacements, or (B) replace each in-place sed call
with a safe temp-file pattern that writes the sed output to a mktemp-created
file and then moves it back into "${CRD_FILE}" (ensuring TEMP_TRUE_MARKER and
the three replacement steps remain identical); update the script to handle
failures and cleanup the temp file.
In `@pkg/addonfactory/test_helper.go`:
- Around line 20-27: The annotations map in test_helper.go redundantly assigns
AnnotationValuesName twice; remove the duplicate assignment and keep a single
initialization of annotations with AnnotationValuesName: values (and the
conditional addition of addonapiv1beta1.InstallNamespaceAnnotation when
installNamespace is non-empty) so only one assignment to AnnotationValuesName
remains.
In `@pkg/addonmanager/controllers/addonconfig/controller.go`:
- Around line 190-202: The loop currently skips entries when
configReference.DesiredConfig is nil; change it to first try
configReference.DesiredConfig and if that's nil then fall back to
configReference.LastAppliedConfig (ensuring LastAppliedConfig != nil) when
forming the lookup Name/Namespace used with lister.Get or
lister.Namespace(...).Get so we still locate previously applied configs and can
update LastObservedGeneration for all status entries; however only perform the
spec-hash/spec-write logic when DesiredConfig was present (i.e., keep spec-hash
writes restricted to spec-matched configs). Update the block around
configReference.DesiredConfig/LastAppliedConfig, the lister lookup, and later
generation/spec-hash handling (references: configReference, DesiredConfig,
LastAppliedConfig, lister, LastObservedGeneration).
In `@pkg/addonmanager/controllers/agentdeploy/default_sync_test.go`:
- Around line 162-163: The test currently only calls meta.IsStatusConditionFalse
on addOn.Status.Conditions for
addonapiv1beta1.ManagedClusterAddOnManifestApplied which will pass if the
condition is missing or Unknown; change the assertions to fetch the exact
condition via meta.FindStatusCondition(addOn.Status.Conditions,
string(addonapiv1beta1.ManagedClusterAddOnManifestApplied)) and assert the
condition is non-nil and that condition.Status equals metav1.ConditionFalse (for
the error-path) and metav1.ConditionTrue (for the success-path) and that
condition.Reason equals
addonapiv1beta1.AddonManifestAppliedReasonWorkApplyFailed in the error case (and
the expected success reason in the success case) instead of relying on
meta.IsStatusConditionFalse.
In `@pkg/addonmanager/controllers/agentdeploy/utils.go`:
- Around line 133-136: The label check is using the annotation constant
addonapiv1beta1.AddonPreDeleteHookAnnotationKey; change the label lookup to use
the v1alpha1 label constant (e.g. addonapiv1alpha1.AddonPreDeleteHookLabelKey)
so legacy label keys like "open-cluster-management.io/addon-pre-delete" are
recognized; update the hasPreDeleteLabel assignment that reads from labels[...]
to use the v1alpha1 label constant while leaving the annotation check
(annotations[...] with addonapiv1beta1.AddonPreDeleteHookAnnotationKey)
unchanged.
In `@test/e2ehosted/helloworld_hosted_test.go`:
- Around line 49-67: The test only adds the beta1 hosting/install-namespace
annotations when creating a ManagedClusterAddOn but neglects to migrate existing
objects; change the Eventually block handling the Get result so that if Get
succeeds but the returned addon is missing
addonapiv1beta1.HostingClusterNameAnnotationKey or
addonapiv1beta1.InstallNamespaceAnnotation, you update/patch that existing
object to include those two annotations (using
hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Update
or Patch and retry-on-conflict logic), instead of skipping migration—refer to
the Get/Create calls and the ManagedClusterAddOn object handling in the
anonymous function to locate where to add this update/patch flow.
In `@test/integration-test.mk`:
- Around line 72-74: The test-integration recipe currently runs
test-kube-integration then runs restore-crd-storage-version but will skip
restoration if the tests fail; update the test-integration target (referencing
test-integration, test-kube-integration, and restore-crd-storage-version) so
restore-crd-storage-version always runs regardless of test outcome—for example
run test-kube-integration inside a subshell with a trap/EXIT handler that
invokes restore-crd-storage-version, or restructure the recipe to capture the
test exit status and always call restore-crd-storage-version before exiting with
that status.
In `@test/integration/cloudevents/agent_deploy_test.go`:
- Around line 277-279: The test currently fails if any
ManagedClusterAddOnConditionProgressing condition exists; update the checks to
only fail when that condition is explicitly True. Replace the current nil-only
check (using meta.FindStatusCondition(addon.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnConditionProgressing)) with a guard that
returns an error only if the found condition is non-nil AND its Status equals
metav1.ConditionTrue (i.e., cond != nil && cond.Status == metav1.ConditionTrue).
Apply the same change for the other occurrences referenced (around the other
line groups) so Progressing=False or absent are both accepted.
- Around line 748-763: The helper createManagedClusterAddOnwithOwnerRefs
currently creates the ManagedClusterAddOn and only then adds the controller
owner reference via MergeOwnerRefs and an Update, which can create a race;
instead set the owner reference on the addon object before calling
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(...).
Concretely, compute the controller owner (the owner metav1.NewControllerRef(cma,
...)) and merge/assign it into addon.OwnerReferences (using the same
MergeOwnerRefs or direct assignment) and then call Create; remove the subsequent
Update path that patches the created object.
In `@test/integration/cloudevents/agent_hook_deploy_test.go`:
- Line 227: Fix the typo in the test error message string: search for the
assertion/log that uses the text "addon is expceted to be deleted" (near the
Delete call on
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Delete
and surrounding assertions in the same test) and change it to "addon is expected
to be deleted"; update all occurrences in that block (lines around 323–331) so
the message is spelled correctly wherever the test reports the expected deletion
of testAddonImpl.name.
In `@test/integration/kube/cluster_management_addon_test.go`:
- Around line 69-70: The test creates a cluster-scoped ClusterManagementAddOn
via hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(...) and only
relies on AfterEach to delete it, which can leave the resource if the test fails
early; fix this by calling ginkgo.DeferCleanup immediately after the Create
returns successfully (i.e., right after cma, err := ... and the Expect that err
is nil) to register deletion of the created cma (use the same hubAddonClient and
cma identifier) instead of waiting for the tail of the test body or AfterEach.
---
Outside diff comments:
In `@pkg/addonfactory/template_agentaddon.go`:
- Around line 126-143: The agentInstallNamespace hook return is being ignored
when it intentionally returns an empty string because the code checks `if
len(ns) > 0` before assigning; update the logic in the block that calls
`a.agentInstallNamespace(addon)` so that on successful call (err == nil) you
always set `installNamespace = ns` (even when ns == "") instead of only when
len(ns) > 0, preserving the hook's explicit empty-string signal to use the
template namespace; keep the existing nil-check for `a.agentInstallNamespace`
and the error handling around the call.
In `@pkg/addonmanager/controllers/agentdeploy/utils.go`:
- Around line 60-78: The current logic iterates addon.Finalizers to build rst
without deprecated entries but returns early if the target finalizer is already
present, so deprecated finalizers aren't persisted; change the flow in the
helper that reads addon.Finalizers and uses finalizer (and the deprecated
constants addonapiv1alpha1.AddonDeprecatedHostingManifestFinalizer,
AddonDeprecatedPreDeleteHookFinalizer,
AddonDeprecatedHostingPreDeleteHookFinalizer) to always write back the cleaned
rst via addon.SetFinalizers(rst), and only avoid appending a duplicate finalizer
if finalizer already exists (i.e., detect presence but do not return before
calling addon.SetFinalizers). Ensure you reference addon.Finalizers, finalizer,
and addon.SetFinalizers in the fix.
In `@pkg/addonmanager/controllers/cmaconfig/controller_test.go`:
- Around line 347-354: TestEnqueue is using the ManagedClusterAddOns informer
but the fixtures are ClusterManagementAddOn objects and the real controller
(cmaConfigController) indexes ClusterManagementAddOn; change the informer wiring
in the test to use
addoninformers.Addon().V1beta1().ClusterManagementAddOns().Informer() so the
indexer assigned to cmaConfigController.clusterManagementAddonIndexer matches
the fixture type and the production controller behavior.
In `@pkg/addonmanager/controllers/cmaconfig/controller.go`:
- Around line 175-178: The code is swallowing errors from getConfigSpecHash by
returning nil (success) inside the sync/reconcile path; change the handlers that
call getConfigSpecHash (where defaultConfigReference is used) to propagate the
error instead of returning nil so transient lister/read errors cause a retry.
Specifically, update the branches that call
c.getConfigSpecHash(defaultConfigReference.ConfigGroupResource,
defaultConfigReference.DesiredConfig.ConfigReferent) (and the other similar
call) to return the error up the call stack (e.g., return err) rather than nil,
ensuring sync/patch logic only proceeds on true success.
In `@pkg/agent/interface.go`:
- Around line 136-145: Update the doc comments for the Namespace field and
AgentInstallNamespace to stop referencing
ManagedClusterAddOn.spec.installNamespace and instead state that
install-namespace is now provided via annotations (e.g., the install-namespace
annotation on the ManagedClusterAddOn resource); update the wording in the
comment on AgentInstallNamespace (and the Deprecated Namespace comment) to
explain that the annotation overrides the namespace and that the addon agent
manifests must use the returned namespace, and remove or replace any guidance
directing readers to ManagedClusterAddOn.spec.installNamespace.
In `@test/e2e/helloworld_helm_test.go`:
- Around line 49-57: The Get path for
hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Get(...)
returns the original Get error even after you successfully create the addon;
change the NotFound branch so that after Create(context.Background(),
&helloworldhelmAddon, metav1.CreateOptions{}) succeeds you return success (nil)
or otherwise clear the error instead of falling through to "return err" — update
the logic around helloWorldHelmAddonName and helloworldhelmAddon so the function
returns nil right after a successful Create.
In `@test/integration/kube/assertion_test.go`:
- Around line 91-102: In updateClusterManagementAddOn, avoid mutating old before
checking the Get error and stop shadowing err: call
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get into variables where
err is checked immediately (do not use := that shadows outer err), return false
and retry if err != nil or old == nil, and only then set old.Spec and
old.Annotations and call Update; ensure the Get error path returns false so
Eventually will retry rather than panic.
---
Duplicate comments:
In `@pkg/utils/permission.go`:
- Around line 273-298: In BuildSubjectsFromRegistration, the empty-subject check
compares *subject (type addonapiv1beta1.BaseSubject) to
addonapiv1beta1.Subject{}, which are different types; change the comparison to
compare against addonapiv1beta1.BaseSubject{} (or construct a Subject from the
BaseSubject if you intend to compare full Subject) so
equality.Semantic.DeepEqual(*subject, addonapiv1beta1.BaseSubject{}) correctly
detects empty BaseSubject; keep the existing nil check for subject as-is.
In `@test/e2e/helloworld_helm_test.go`:
- Around line 296-298: The test currently calls newAddon.SetAnnotations(...)
which replaces the entire annotations map and can drop other important keys
(e.g., install namespace). Instead merge the new key into the existing
annotations: retrieve existing annotations from newAddon (e.g., via
newAddon.GetAnnotations()), create or reuse the map, set the
"addon.open-cluster-management.io/values" entry to
`{"global":{"imagePullSecret":"mySecret"}}`, then call
newAddon.SetAnnotations(mergedMap) so only that key is added/updated and no
other annotations are lost.
In `@test/integration/kube/assertion_test.go`:
- Around line 159-169: In updateManagedClusterAddOnStatus, don't mutate old
immediately after the Get call; check the error returned by
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(...).Get(...) and if err !=
nil return false so Eventually will retry safely instead of panicking when old
is nil, otherwise proceed to set old.Status and call UpdateStatus; update the
closure to validate the GET error before modifying or using old.
---
Nitpick comments:
In `@hack/fix-crd-storage-version.sh`:
- Around line 37-46: The current check using first_storage and second_storage
reads the first and last "storage:" entries from ${CRD_FILE} and assumes they
map to v1alpha1 and v1beta1 respectively; update the detection to locate the
"storage:" value for each specific version name instead of relying on order
(e.g., search for the line(s) containing "name: v1alpha1" and "name: v1beta1"
and then read the subsequent/associated "storage:" value), adjust the logic
around the variables first_storage and second_storage to reflect
version-specific checks, and add a short comment explaining the assumption if
you keep any ordering-based logic.
In `@pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go`:
- Around line 44-53: Extract the repeated APIVersion string into a shared
constant (e.g., const testManagedClusterAddOnAPIVersion =
"addon.open-cluster-management.io/v1beta1") and replace the hardcoded literal
occurrences where OwnerReferences are constructed in default_hook_sync_test.go
(for example in the calls to work.SetOwnerReferences and any other
OwnerReference literals used in that test file). Update all uses in the same
file to reference the new constant so the APIVersion is defined in one place and
easy to change.
In `@pkg/addonmanager/controllers/agentdeploy/default_sync_test.go`:
- Around line 189-190: Replace the three hardcoded APIVersion string literals
used in owner references with the centralized GroupVersion helper from the addon
API package: instead of "addon.open-cluster-management.io/v1beta1" set the
APIVersion to addonapiv1beta1.GroupVersion.String(); update the three
occurrences in default_sync_test.go where an owner reference APIVersion is
assigned (the APIVersion field alongside Kind: "ManagedClusterAddOn") so future
API version bumps are handled by the addonapiv1beta1.GroupVersion definition.
In `@test/integration/cloudevents/agent_hook_deploy_test.go`:
- Around line 259-269: Replace the hardcoded condition string in the test's
meta.IsStatusConditionFalse call with the shared constant: use
addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted instead of
"HookManifestCompleted" inside the Eventually closure that checks
addon.Status.Conditions (the call to meta.IsStatusConditionFalse); this keeps
the test consistent with the rest of the codebase.
In `@test/integration/kube/agent_hook_deploy_test.go`:
- Around line 122-131: The test currently sets
addonapiv1beta1.InstallNamespaceAnnotation to "default", which masks the new
annotation-path behavior; update the ManagedClusterAddOn object (the addon
variable created for testAddonImpl.name) to use a sentinel namespace (e.g.,
"sentinel-install-namespace") instead of "default", then adjust the assertions
that inspect the generated Work or the ManagedClusterAddOn status to expect that
sentinel namespace; ensure createManagedClusterAddOnwithOwnerRefs is still used
to create the addon and that any checks referencing the install namespace use
addonapiv1beta1.InstallNamespaceAnnotation and the sentinel value.
- Around line 173-180: Replace the hard-coded condition name strings used in the
meta.IsStatusConditionTrue checks with the exported constants from
addon/v1beta1: use addonv1beta1.ManagedClusterAddOnManifestApplied instead of
"ManifestApplied" and addonv1beta1.ManagedClusterAddOnHookManifestCompleted for
the hook-completed checks; update the meta.IsStatusConditionTrue calls (the
checks on addon.Status.Conditions in the Eventually blocks around the
meta.IsStatusConditionTrue usages) to reference these constants and add the
corresponding import alias (addonv1beta1) where needed; apply the same
replacement to the other occurrence mentioned (lines 203-210).
In `@test/integration/kube/cluster_management_addon_test.go`:
- Around line 41-44: The test currently only checks RegistrationApplied and
Status.Namespace but doesn't verify the actual registration payload; update the
assertions after the code that sets
testAddonImpl.registrations[managedClusterName] (and the analogous block around
lines 85-99) to fetch the stored registration entry and assert its fields match
the expected addonapiv1beta1.RegistrationConfig (e.g., that Type ==
addonapiv1beta1.KubeClient and any other populated fields are equal to the
expected values). Locate the verification code around
testAddonImpl.registrations and managedClusterName and add explicit equality
checks on the returned registration object rather than relying solely on
RegistrationApplied/Status.Namespace.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c1930747-0d65-4a54-9c5f-d30a5fbb60b2
⛔ Files ignored due to path filters (6)
- go.sum is excluded by !**/*.sum
- vendor/modules.txt is excluded by !vendor/**
- vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go is excluded by !vendor/**
- vendor/open-cluster-management.io/api/operator/v1/0000_01_operator.open-cluster-management.io_clustermanagers.crd.yaml is excluded by !vendor/**
- vendor/open-cluster-management.io/api/operator/v1/types_clustermanager.go is excluded by !vendor/**
- vendor/open-cluster-management.io/api/operator/v1/zz_generated.deepcopy.go is excluded by !vendor/**
📒 Files selected for processing (63)
- README.md
- examples/helloworld/helloworld_test.go
- examples/helloworld_agent/agent.go
- examples/helloworld_helm/helloworld_helm.go
- examples/helloworld_helm/helloworld_helm_test.go
- go.mod
- hack/fix-crd-storage-version.sh
- pkg/addonfactory/helm_agentaddon.go
- pkg/addonfactory/helm_agentaddon_test.go
- pkg/addonfactory/template_agentaddon.go
- pkg/addonfactory/template_agentaddon_test.go
- pkg/addonfactory/test_helper.go
- pkg/addonmanager/addontesting/helpers.go
- pkg/addonmanager/base_manager.go
- pkg/addonmanager/cloudevents/manager.go
- pkg/addonmanager/constants/constants.go
- pkg/addonmanager/controllers/addonconfig/controller.go
- pkg/addonmanager/controllers/addonconfig/controller_test.go
- pkg/addonmanager/controllers/agentdeploy/controller.go
- pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/default_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/util_test.go
- pkg/addonmanager/controllers/agentdeploy/utils.go
- pkg/addonmanager/controllers/certificate/csrapprove_test.go
- pkg/addonmanager/controllers/certificate/csrsign_test.go
- pkg/addonmanager/controllers/cmaconfig/controller.go
- pkg/addonmanager/controllers/cmaconfig/controller_test.go
- pkg/addonmanager/controllers/cmamanagedby/controller.go
- pkg/addonmanager/controllers/cmamanagedby/controller_test.go
- pkg/addonmanager/controllers/registration/controller.go
- pkg/addonmanager/controllers/registration/controller_test.go
- pkg/addonmanager/manager.go
- pkg/agent/interface.go
- pkg/index/index.go
- pkg/utils/addon_config.go
- pkg/utils/addon_config_test.go
- pkg/utils/addon_test.go
- pkg/utils/helpers.go
- pkg/utils/permission.go
- pkg/utils/permission_test.go
- test/e2e/helloworld_helm_test.go
- test/e2e/helloworld_test.go
- test/e2ehosted/helloworld_hosted_test.go
- test/e2etoken/helloworld_token_test.go
- test/integration-test.mk
- test/integration/cloudevents/agent_deploy_test.go
- test/integration/cloudevents/agent_hook_deploy_test.go
- test/integration/cloudevents/agent_hosting_deploy_test.go
- test/integration/cloudevents/agent_hosting_hook_deploy_test.go
- test/integration/cloudevents/suite_test.go
- test/integration/kube/agent_deploy_test.go
- test/integration/kube/agent_hook_deploy_test.go
- test/integration/kube/agent_hosting_deploy_test.go
- test/integration/kube/agent_hosting_hook_deploy_test.go
- test/integration/kube/assertion_test.go
- test/integration/kube/cluster_management_addon_test.go
- test/integration/kube/csr_test.go
- test/integration/kube/deleted_managed_cluster_create_addon_test.go
- test/integration/kube/multiworks_test.go
- test/integration/kube/registration_test.go
- test/integration/kube/suite_test.go
💤 Files with no reviewable changes (2)
- pkg/addonmanager/controllers/cmamanagedby/controller_test.go
- pkg/addonmanager/controllers/cmamanagedby/controller.go
🚧 Files skipped from review as they are similar to previous changes (8)
- pkg/addonmanager/controllers/agentdeploy/util_test.go
- test/integration/kube/csr_test.go
- pkg/utils/helpers.go
- pkg/addonmanager/controllers/certificate/csrsign_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
- examples/helloworld_agent/agent.go
- pkg/addonmanager/controllers/registration/controller_test.go
| sed -i 's/storage: true/storage: TEMP_TRUE_MARKER/g' "${CRD_FILE}" | ||
|
|
| # Step 2: Change "storage: false" to "storage: true" | ||
| sed -i 's/storage: false/storage: true/g' "${CRD_FILE}" | ||
|
|
| # Step 3: Change the marker back to "storage: false" | ||
| sed -i 's/storage: TEMP_TRUE_MARKER/storage: false/g' "${CRD_FILE}" |
sed -i may fail on macOS without an empty string argument.
The sed -i syntax differs between GNU sed (Linux) and BSD sed (macOS). If developers need to run this script locally on macOS, it will fail.
Consider using a portable approach or documenting the Linux-only requirement.
🔧 Portable alternative using a temp file
-sed -i 's/storage: true/storage: TEMP_TRUE_MARKER/g' "${CRD_FILE}"
-sed -i 's/storage: false/storage: true/g' "${CRD_FILE}"
-sed -i 's/storage: TEMP_TRUE_MARKER/storage: false/g' "${CRD_FILE}"
+# Portable sed that works on both Linux and macOS
+sed 's/storage: true/storage: TEMP_TRUE_MARKER/g' "${CRD_FILE}" > "${CRD_FILE}.tmp" && mv "${CRD_FILE}.tmp" "${CRD_FILE}"
+sed 's/storage: false/storage: true/g' "${CRD_FILE}" > "${CRD_FILE}.tmp" && mv "${CRD_FILE}.tmp" "${CRD_FILE}"
+sed 's/storage: TEMP_TRUE_MARKER/storage: false/g' "${CRD_FILE}" > "${CRD_FILE}.tmp" && mv "${CRD_FILE}.tmp" "${CRD_FILE}"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-sed -i 's/storage: true/storage: TEMP_TRUE_MARKER/g' "${CRD_FILE}"
-# Step 2: Change "storage: false" to "storage: true"
-sed -i 's/storage: false/storage: true/g' "${CRD_FILE}"
-# Step 3: Change the marker back to "storage: false"
-sed -i 's/storage: TEMP_TRUE_MARKER/storage: false/g' "${CRD_FILE}"
+# Portable sed that works on both Linux and macOS
+sed 's/storage: true/storage: TEMP_TRUE_MARKER/g' "${CRD_FILE}" > "${CRD_FILE}.tmp" && mv "${CRD_FILE}.tmp" "${CRD_FILE}"
+# Step 2: Change "storage: false" to "storage: true"
+sed 's/storage: false/storage: true/g' "${CRD_FILE}" > "${CRD_FILE}.tmp" && mv "${CRD_FILE}.tmp" "${CRD_FILE}"
+# Step 3: Change the marker back to "storage: false"
+sed 's/storage: TEMP_TRUE_MARKER/storage: false/g' "${CRD_FILE}" > "${CRD_FILE}.tmp" && mv "${CRD_FILE}.tmp" "${CRD_FILE}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@hack/fix-crd-storage-version.sh` around lines 50 - 56, The three in-place sed
calls that modify "${CRD_FILE}" using 'sed -i' (the lines that swap storage:
true/false and use TEMP_TRUE_MARKER) are not portable to macOS; change them to a
portable approach by either (A) detecting Darwin and using the macOS-friendly
'sed -i '' ...' invocation for those three replacements, or (B) replace each
in-place sed call with a safe temp-file pattern that writes the sed output to a
mktemp-created file and then moves it back into "${CRD_FILE}" (ensuring
TEMP_TRUE_MARKER and the three replacement steps remain identical); update the
script to handle failures and cleanup the temp file.
annotations := map[string]string{
	AnnotationValuesName: values,
}
// In v1beta1, InstallNamespace is stored in annotation instead of Spec
if len(installNamespace) > 0 {
	annotations[addonapiv1beta1.InstallNamespaceAnnotation] = installNamespace
}
annotations[AnnotationValuesName] = values
Remove duplicate annotation assignment.
AnnotationValuesName is set twice with the same value - once at line 21 and again at line 27. The second assignment is redundant.
🔧 Proposed fix
func NewFakeManagedClusterAddon(name, clusterName, installNamespace, values string) *addonapiv1beta1.ManagedClusterAddOn {
annotations := map[string]string{
AnnotationValuesName: values,
}
// In v1beta1, InstallNamespace is stored in annotation instead of Spec
if len(installNamespace) > 0 {
annotations[addonapiv1beta1.InstallNamespaceAnnotation] = installNamespace
}
- annotations[AnnotationValuesName] = values
 	return &addonapiv1beta1.ManagedClusterAddOn{
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/addonfactory/test_helper.go` around lines 20 - 27, The annotations map in
test_helper.go redundantly assigns AnnotationValuesName twice; remove the
duplicate assignment and keep a single initialization of annotations with
AnnotationValuesName: values (and the conditional addition of
addonapiv1beta1.InstallNamespaceAnnotation when installNamespace is non-empty)
so only one assignment to AnnotationValuesName remains.
// In v1beta1, ConfigReference doesn't have Name/Namespace directly
// They are in DesiredConfig field
if configReference.DesiredConfig == nil {
	continue
}

var config *unstructured.Unstructured
var err error
-if configReference.Namespace == "" {
-	config, err = lister.Get(configReference.Name)
+if configReference.DesiredConfig.Namespace == "" {
+	config, err = lister.Get(configReference.DesiredConfig.Name)
} else {
-	config, err = lister.Namespace(configReference.Namespace).Get(configReference.Name)
+	config, err = lister.Namespace(configReference.DesiredConfig.Namespace).Get(configReference.DesiredConfig.Name)
}
🧩 Analysis chain
🏁 Scripts executed:
# Find the controller file and examine the method context
find . -path "*/addonmanager/controllers/addonconfig/controller.go" -type f
# Search for the updateConfigSpecHashAndGenerations method to understand the full context
rg -n "updateConfigSpecHashAndGenerations" --type go -A 50 -B 5 | head -100
# Find ConfigReference type definition in v1beta1
find . -name "*.go" -type f | xargs grep -l "type ConfigReference" | head -5
# Search for v1beta1 addon API definitions
find . -path "*/addon/v1beta1*" -type f | head -20
# Read the v1beta1 ConfigReference type definition
cat -n ./vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go | head -300
# Search for ConfigReference struct definition specifically
rg -n "type ConfigReference struct" ./vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go -A 20
# Look for LastAppliedConfig field references in the v1beta1 types file
rg -n "LastAppliedConfig" ./vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go
# Find ConfigSpecHash type definition
rg -n "type ConfigSpecHash struct" ./vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go -A 15
# Also check for ConfigReferent type since it's embedded in AddOnConfig
rg -n "type ConfigReferent" ./vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go -A 10
# Look for ConfigGroupResource as well
rg -n "type ConfigGroupResource" ./vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go -A 10
# Search for ConfigSpecHash definition across all v1beta1 files
rg -n "type ConfigSpecHash" ./vendor/open-cluster-management.io/api/addon/v1beta1/ -A 15
# Search for ConfigReferent and ConfigGroupResource across v1beta1
rg -n "type ConfigReferent|type ConfigGroupResource" ./vendor/open-cluster-management.io/api/addon/v1beta1/ -A 10
# Check v1alpha1 definitions since there may be a conversion
rg -n "type ConfigReference|type ConfigSpecHash|type ConfigReferent|type ConfigGroupResource" ./vendor/open-cluster-management.io/api/addon/v1alpha1/ -A 10
# Check the full context around the method to understand the complete flow
cat -n ./pkg/addonmanager/controllers/addonconfig/controller.go | sed -n '174,240p'
# Verify if there are any tests that might clarify the expected behavior
find . -name "*test*.go" -type f | xargs grep -l "updateConfigSpecHashAndGenerations" 2>/dev/null | head -3
# Check if there are any conversion utilities between v1alpha1 and v1beta1
cat -n ./vendor/open-cluster-management.io/api/addon/v1beta1/conversion.go | head -100
Use LastAppliedConfig as fallback locator for config lookups.
The code currently skips updating LastObservedGeneration when DesiredConfig is nil, even though LastAppliedConfig may contain valid config coordinates. The inline comment states generation should be updated "for all the configs in status", but the early return prevents this for any entry without a DesiredConfig. Use DesiredConfig first and fall back to LastAppliedConfig to ensure generation tracking for configs that were previously applied but no longer appear in the desired spec, while keeping the spec-hash write restricted to spec-matched configs.
Suggested patch
- // In v1beta1, ConfigReference doesn't have Name/Namespace directly
- // They are in DesiredConfig field
- if configReference.DesiredConfig == nil {
+ targetConfig := configReference.DesiredConfig
+ if targetConfig == nil {
+ targetConfig = configReference.LastAppliedConfig
+ }
+ if targetConfig == nil {
continue
}
var config *unstructured.Unstructured
var err error
- if configReference.DesiredConfig.Namespace == "" {
- config, err = lister.Get(configReference.DesiredConfig.Name)
+ if targetConfig.Namespace == "" {
+ config, err = lister.Get(targetConfig.Name)
} else {
- config, err = lister.Namespace(configReference.DesiredConfig.Namespace).Get(configReference.DesiredConfig.Name)
+ config, err = lister.Namespace(targetConfig.Namespace).Get(targetConfig.Name)
 }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-// In v1beta1, ConfigReference doesn't have Name/Namespace directly
-// They are in DesiredConfig field
-if configReference.DesiredConfig == nil {
-	continue
-}
-var config *unstructured.Unstructured
-var err error
-if configReference.Namespace == "" {
-	config, err = lister.Get(configReference.Name)
-if configReference.DesiredConfig.Namespace == "" {
-	config, err = lister.Get(configReference.DesiredConfig.Name)
-} else {
-	config, err = lister.Namespace(configReference.Namespace).Get(configReference.Name)
-	config, err = lister.Namespace(configReference.DesiredConfig.Namespace).Get(configReference.DesiredConfig.Name)
-}
+targetConfig := configReference.DesiredConfig
+if targetConfig == nil {
+	targetConfig = configReference.LastAppliedConfig
+}
+if targetConfig == nil {
+	continue
+}
+var config *unstructured.Unstructured
+var err error
+if targetConfig.Namespace == "" {
+	config, err = lister.Get(targetConfig.Name)
+} else {
+	config, err = lister.Namespace(targetConfig.Namespace).Get(targetConfig.Name)
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 190 -
202, The loop currently skips entries when configReference.DesiredConfig is nil;
change it to first try configReference.DesiredConfig and if that's nil then fall
back to configReference.LastAppliedConfig (ensuring LastAppliedConfig != nil)
when forming the lookup Name/Namespace used with lister.Get or
lister.Namespace(...).Get so we still locate previously applied configs and can
update LastObservedGeneration for all status entries; however only perform the
spec-hash/spec-write logic when DesiredConfig was present (i.e., keep spec-hash
writes restricted to spec-matched configs). Update the block around
configReference.DesiredConfig/LastAppliedConfig, the lister lookup, and later
generation/spec-hash handling (references: configReference, DesiredConfig,
LastAppliedConfig, lister, LastObservedGeneration).
// TODO: deprecate PreDeleteHookLabel in the future release.
-_, hasPreDeleteLabel := labels[addonapiv1alpha1.AddonPreDeleteHookLabelKey]
-_, hasPreDeleteAnnotation := annotations[addonapiv1alpha1.AddonPreDeleteHookAnnotationKey]
+_, hasPreDeleteLabel := labels[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
+_, hasPreDeleteAnnotation := annotations[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
if !hasPreDeleteLabel && !hasPreDeleteAnnotation {
🧩 Analysis chain
🏁 Scripts executed:
# First, let's find the relevant API definition files
find . -type f -name "*.go" | grep -E "(v1alpha1|v1beta1)" | head -20
# Search for the constant definitions
rg "AddonPreDeleteHookLabelKey|AddonPreDeleteHookAnnotationKey" -t go
# Examine the specific file in question
cat -n pkg/addonmanager/controllers/agentdeploy/utils.go | sed -n '125,145p'
# Check the imports in utils.go to see if v1alpha1 is available
head -50 pkg/addonmanager/controllers/agentdeploy/utils.go | grep -E "^import|addonapiv1"
Use the v1alpha1 label key when checking for pre-delete hook labels.
Line 134 uses addonapiv1beta1.AddonPreDeleteHookAnnotationKey to look up labels, but this constant is the annotation key (addon.open-cluster-management.io/addon-pre-delete), not the label key. The v1alpha1 API defines a separate label key (open-cluster-management.io/addon-pre-delete). This mismatch causes existing hook Jobs/Pods marked with the legacy label to be unrecognized, skipping their hook work. The TODO comment on line 133 indicates label support is still intended during the beta1 migration.
Proposed fix
- _, hasPreDeleteLabel := labels[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
+ _, hasPreDeleteLabel := labels[addonapiv1alpha1.AddonPreDeleteHookLabelKey]
 _, hasPreDeleteAnnotation := annotations[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-// TODO: deprecate PreDeleteHookLabel in the future release.
-_, hasPreDeleteLabel := labels[addonapiv1alpha1.AddonPreDeleteHookLabelKey]
-_, hasPreDeleteAnnotation := annotations[addonapiv1alpha1.AddonPreDeleteHookAnnotationKey]
-_, hasPreDeleteLabel := labels[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
-_, hasPreDeleteAnnotation := annotations[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
-if !hasPreDeleteLabel && !hasPreDeleteAnnotation {
+// TODO: deprecate PreDeleteHookLabel in the future release.
+_, hasPreDeleteLabel := labels[addonapiv1alpha1.AddonPreDeleteHookLabelKey]
+_, hasPreDeleteAnnotation := annotations[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
+if !hasPreDeleteLabel && !hasPreDeleteAnnotation {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/addonmanager/controllers/agentdeploy/utils.go` around lines 133 - 136,
The label check is using the annotation constant
addonapiv1beta1.AddonPreDeleteHookAnnotationKey; change the label lookup to use
the v1alpha1 label constant (e.g. addonapiv1alpha1.AddonPreDeleteHookLabelKey)
so legacy label keys like "open-cluster-management.io/addon-pre-delete" are
recognized; update the hasPreDeleteLabel assignment that reads from labels[...]
to use the v1alpha1 label constant while leaving the annotation check
(annotations[...] with addonapiv1beta1.AddonPreDeleteHookAnnotationKey)
unchanged.
test-integration: update-crd-storage-version test-kube-integration
	@$(MAKE) restore-crd-storage-version
.PHONY: test-integration
CRD restoration may be skipped if tests fail.
If test-kube-integration fails, the recipe will exit before executing restore-crd-storage-version, leaving the CRD in a modified state. This could cause confusion during subsequent runs or local development.
Consider using a trap or restructuring to ensure restoration happens regardless of test outcome.
🔧 Suggested fix using a subshell with trap
-test-integration: update-crd-storage-version test-kube-integration
- @$(MAKE) restore-crd-storage-version
+test-integration: update-crd-storage-version
+	@( trap '$(MAKE) restore-crd-storage-version' EXIT; $(MAKE) test-kube-integration )
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-test-integration: update-crd-storage-version test-kube-integration
-	@$(MAKE) restore-crd-storage-version
-.PHONY: test-integration
+test-integration: update-crd-storage-version
+	@( trap '$(MAKE) restore-crd-storage-version' EXIT; $(MAKE) test-kube-integration )
+.PHONY: test-integration
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration-test.mk` around lines 72 - 74, The test-integration recipe
currently runs test-kube-integration then runs restore-crd-storage-version but
will skip restoration if the tests fail; update the test-integration target
(referencing test-integration, test-kube-integration, and
restore-crd-storage-version) so restore-crd-storage-version always runs
regardless of test outcome—for example run test-kube-integration inside a
subshell with a trap/EXIT handler that invokes restore-crd-storage-version, or
restructure the recipe to capture the test exit status and always call
restore-crd-storage-version before exiting with that status.
if cond := meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing); cond != nil {
	return fmt.Errorf("expected no addon progressing condition, %v", addon.Status.Conditions)
}
Allow Progressing=False instead of requiring the condition to be absent.
These checks fail if the controller starts publishing an explicit ManagedClusterAddOnConditionProgressing=False, which is still a healthy terminal state. The test should only reject Progressing=True.
💡 Suggested change
- if cond := meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing); cond != nil {
- return fmt.Errorf("expected no addon progressing condition, %v", addon.Status.Conditions)
- }
+ if meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing) {
+ return fmt.Errorf("unexpected addon progressing condition, %v", addon.Status.Conditions)
+	}
Also applies to: 310-312, 448-450, 578-580, 738-740
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/cloudevents/agent_deploy_test.go` around lines 277 - 279,
The test currently fails if any ManagedClusterAddOnConditionProgressing
condition exists; update the checks to only fail when that condition is
explicitly True. Replace the current nil-only check (using
meta.FindStatusCondition(addon.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnConditionProgressing)) with a guard that
returns an error only if the found condition is non-nil AND its Status equals
metav1.ConditionTrue (i.e., cond != nil && cond.Status == metav1.ConditionTrue).
Apply the same change for the other occurrences referenced (around the other
line groups) so Progressing=False or absent are both accepted.
func createManagedClusterAddOnwithOwnerRefs(namespace string, addon *addonapiv1beta1.ManagedClusterAddOn, cma *addonapiv1beta1.ClusterManagementAddOn) {
	addon, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addon, metav1.CreateOptions{})
	gomega.Expect(err).ToNot(gomega.HaveOccurred())

	addonCopy := addon.DeepCopy()

	// This is to assume that addon-manager has already added the OwnerReferences.
	owner := metav1.NewControllerRef(cma, schema.GroupVersionKind{
-		Group:   addonapiv1alpha1.GroupName,
-		Version: addonapiv1alpha1.GroupVersion.Version,
+		Group:   addonapiv1beta1.GroupName,
+		Version: addonapiv1beta1.GroupVersion.Version,
		Kind: "ClusterManagementAddOn",
	})
	modified := utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
	if modified {
-		_, err = hubAddonClient.AddonV1alpha1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
+		_, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
		gomega.Expect(err).ToNot(gomega.HaveOccurred())
Set the owner reference before Create.
This helper creates the ManagedClusterAddOn first and only adds the controller owner ref in a follow-up Update. That leaves a reconciliation window where controllers can observe the object without the owner ref, so the test no longer matches the “owner controller already ran” assumption and can become flaky.
🔧 Suggested change
func createManagedClusterAddOnwithOwnerRefs(namespace string, addon *addonapiv1beta1.ManagedClusterAddOn, cma *addonapiv1beta1.ClusterManagementAddOn) {
- addon, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addon, metav1.CreateOptions{})
- gomega.Expect(err).ToNot(gomega.HaveOccurred())
-
addonCopy := addon.DeepCopy()
// This is to assume that addon-manager has already added the OwnerReferences.
owner := metav1.NewControllerRef(cma, schema.GroupVersionKind{
Group: addonapiv1beta1.GroupName,
Version: addonapiv1beta1.GroupVersion.Version,
Kind: "ClusterManagementAddOn",
})
- modified := utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
- if modified {
- _, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
- gomega.Expect(err).ToNot(gomega.HaveOccurred())
- }
+ utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
+
+ _, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addonCopy, metav1.CreateOptions{})
+ gomega.Expect(err).ToNot(gomega.HaveOccurred())
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/cloudevents/agent_deploy_test.go` around lines 748 - 763,
The helper createManagedClusterAddOnwithOwnerRefs currently creates the
ManagedClusterAddOn and only then adds the controller owner reference via
MergeOwnerRefs and an Update, which can create a race; instead set the owner
reference on the addon object before calling
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(...).
Concretely, compute the controller owner (the owner metav1.NewControllerRef(cma,
...)) and merge/assign it into addon.OwnerReferences (using the same
MergeOwnerRefs or direct assignment) and then call Create; remove the subsequent
Update path that patches the created object.
// delete addon, hook manifestwork will be applied, addon is deleting and addon status will be updated.
-err = hubAddonClient.AddonV1alpha1().ManagedClusterAddOns(managedClusterName).Delete(context.Background(), testAddonImpl.name, metav1.DeleteOptions{})
+err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Delete(context.Background(), testAddonImpl.name, metav1.DeleteOptions{})
Fix typo in error message.
Line 330 has a typo: "addon is expceted to be deleted" should be "addon is expected to be deleted".
📝 Proposed fix
- return fmt.Errorf("addon is expceted to be deleted")
+ return fmt.Errorf("addon is expected to be deleted")Also applies to: 323-331
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/cloudevents/agent_hook_deploy_test.go` at line 227, Fix the
typo in the test error message string: search for the assertion/log that uses
the text "addon is expceted to be deleted" (near the Delete call on
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Delete
and surrounding assertions in the same test) and change it to "addon is expected
to be deleted"; update all occurrences in that block (lines around 323–331) so
the message is spelled correctly wherever the test reports the expected deletion
of testAddonImpl.name.
cma, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(context.Background(), cma, metav1.CreateOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
Register cleanup immediately after creating the cluster-scoped add-on.
If this spec fails before Line 101, the ClusterManagementAddOn is left behind because AfterEach does not remove it. That can make the next run fail with AlreadyExists. Please use ginkgo.DeferCleanup right after the create succeeds instead of relying on the tail of the test body.
Suggested cleanup pattern
cma, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(context.Background(), cma, metav1.CreateOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
+ginkgo.DeferCleanup(func() {
+ err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Delete(
+ context.Background(), testAddonImpl.name, metav1.DeleteOptions{},
+ )
+ gomega.Expect(err).ToNot(gomega.HaveOccurred())
+})
...
-err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Delete(context.Background(), testAddonImpl.name, metav1.DeleteOptions{})
-gomega.Expect(err).ToNot(gomega.HaveOccurred())
Also applies to: 101-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/kube/cluster_management_addon_test.go` around lines 69 - 70,
The test creates a cluster-scoped ClusterManagementAddOn via
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(...) and only
relies on AfterEach to delete it, which can leave the resource if the test fails
early; fix this by calling ginkgo.DeferCleanup immediately after the Create
returns successfully (i.e., right after cma, err := ... and the Expect that err
is nil) to register deletion of the created cma (use the same hubAddonClient and
cma identifier) instead of waiting for the tail of the test body or AfterEach.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
test/e2ehosted/helloworld_hosted_test.go (1)
124-124: ⚠️ Potential issue | 🟡 Minor
Typo: "addo" → "addon".
📝 Proposed fix
- // Ensure helloworld addo is functioning.
+ // Ensure helloworld addon is functioning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2ehosted/helloworld_hosted_test.go` at line 124, Update the inline comment that currently reads "Ensure helloworld addo is functioning." to correct the typo to "Ensure helloworld addon is functioning."; locate the string in test/e2ehosted/helloworld_hosted_test.go (search for the exact comment text) and replace "addo" with "addon" leaving the rest of the comment unchanged.test/e2e/helloworld_test.go (2)
531-556: ⚠️ Potential issue | 🟡 Minor
Make the deployment-config helpers reconcile existing objects.
Right now an existing AddOnDeploymentConfig short-circuits the helper, so a dirty cluster can make these tests run against stale config instead of the beta1 payload built here. That makes reruns order-dependent and can mask migration regressions.
Suggested direction
-	_, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Get(context.Background(), deployConfigName, metav1.GetOptions{})
-	if errors.IsNotFound(err) {
-		_, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Create(...)
-		return err
-	}
-	return err
+	existing, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Get(context.Background(), deployConfigName, metav1.GetOptions{})
+	if errors.IsNotFound(err) {
+		_, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Create(...)
+		return err
+	}
+	if err != nil {
+		return err
+	}
+
+	desired := existing.DeepCopy()
+	desired.Spec = addonapiv1beta1.AddOnDeploymentConfigSpec{
+		// expected spec for this helper
+	}
+	if !equality.Semantic.DeepEqual(existing.Spec, desired.Spec) {
+		_, err = hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Update(context.Background(), desired, metav1.UpdateOptions{})
+	}
+	return err
Same pattern applies to the other prepare*AddOnDeploymentConfig helpers.
Also applies to: 560-584, 587-610, 613-653, 656-679
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/helloworld_test.go` around lines 531 - 556, The helper currently returns early if an AddOnDeploymentConfig already exists, which leaves stale configs in tests; change the logic in the prepare*AddOnDeploymentConfig helpers so that after fetching via hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(...).Get(...) you either Create when errors.IsNotFound(err) or reconcile by updating the existing object's Spec (NodePlacement, Tolerations, AgentInstallNamespace, etc.) and calling Update (or Patch) on hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Update(...) to ensure the cluster state matches the desired deployConfigName/namespace/managedClusterName payload; apply the same reconciliation pattern to the other helpers mentioned (the blocks around lines 560-584, 587-610, 613-653, 656-679).
533-549: ⚠️ Potential issue | 🟡 Minor
Use the helper argument in the Create calls.
These helpers Get by the caller-supplied namespace but Create via managedClusterName. That only works while both happen to match. If a future case passes a different namespace, the create path will target the wrong resource URL.
Proposed fix
-	if _, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(managedClusterName).Create(
+	if _, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Create(
Apply the same change in all five prepare*AddOnDeploymentConfig helpers.
Also applies to: 563-576, 589-602, 615-645, 659-671
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/helloworld_test.go` around lines 533 - 549, In the prepare*AddOnDeploymentConfig helper functions the Create call uses managedClusterName instead of the helper's namespace parameter, which can target the wrong resource; update the AddonV1beta1().AddOnDeploymentConfigs(...).Create(...) calls to pass the helper's namespace variable (not managedClusterName) for all prepare*AddOnDeploymentConfig helpers (the same change for each of the five helpers), keeping the existing AddOnDeploymentConfig spec (NodePlacement, Tolerations, AgentInstallNamespace) intact.pkg/addonmanager/controllers/cmaconfig/controller.go (1)
161-199: ⚠️ Potential issue | 🟠 Major
Propagate spec-hash lookup failures.
Both getConfigSpecHash error paths return nil, so transient lister/hash failures are treated as a successful sync. That prevents a requeue and can leave DesiredConfig.SpecHash stale until some unrelated event happens.
💡 Minimal fix
-	specHash, err := c.getConfigSpecHash(defaultConfigReference.ConfigGroupResource, defaultConfigReference.DesiredConfig.ConfigReferent)
-	if err != nil {
-		return nil
-	}
+	specHash, err := c.getConfigSpecHash(defaultConfigReference.ConfigGroupResource, defaultConfigReference.DesiredConfig.ConfigReferent)
+	if err != nil {
+		return err
+	}
	cma.Status.DefaultConfigReferences[i].DesiredConfig.SpecHash = specHash
}
@@
-	specHash, err := c.getConfigSpecHash(configReference.ConfigGroupResource, configReference.DesiredConfig.ConfigReferent)
-	if err != nil {
-		return nil
-	}
+	specHash, err := c.getConfigSpecHash(configReference.ConfigGroupResource, configReference.DesiredConfig.ConfigReferent)
+	if err != nil {
+		return err
+	}
	cma.Status.InstallProgressions[i].ConfigReferences[j].DesiredConfig.SpecHash = specHash
	}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller.go` around lines 161 - 199, In updateConfigSpecHash, the error returns inside the loops swallow getConfigSpecHash failures by returning nil; change those to return the actual error so transient lister/hash errors cause a requeue and the controller can retry. Specifically, in cmaConfigController.updateConfigSpecHash where you call c.getConfigSpecHash(...) for DefaultConfigReferences and InstallProgressions, replace the two occurrences of "return nil" on error with "return err" (propagating the error from getConfigSpecHash) so DesiredConfig.SpecHash isn't left stale when lookups fail.
♻️ Duplicate comments (12)
test/integration-test.mk (1)
72-73: ⚠️ Potential issue | 🟡 Minor
CRD restoration will be skipped if tests fail.
The previous review correctly identified this issue. If test-kube-integration fails, the restore recipe won't execute, leaving the CRD in a modified state. The suggested trap-based fix ensures cleanup regardless of test outcome.
🔧 Recommended fix using trap
-test-integration: update-crd-storage-version test-kube-integration
-	@$(MAKE) restore-crd-storage-version
+test-integration: update-crd-storage-version
+	@( trap '$(MAKE) restore-crd-storage-version' EXIT; $(MAKE) test-kube-integration )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration-test.mk` around lines 72 - 73, The test-integration recipe can skip CRD restoration if test-kube-integration fails; wrap the recipe commands so the shell registers a trap that always runs restore-crd-storage-version on EXIT (or ERR/EXIT) before running update-crd-storage-version and test-kube-integration, then clear the trap after successful completion; update the test-integration target to invoke the trap-backed shell sequence so restore-crd-storage-version is executed regardless of test outcome.test/e2ehosted/helloworld_hosted_test.go (1)
49-80: ⚠️ Potential issue | 🟠 Major
Existing addons are not reconciled to beta1 shape.
If a helloworldhosted addon already exists from a prior v1alpha1 run (or a partial test run), the code skips the Create path but never updates the existing object with the required beta1 annotations (HostingClusterNameAnnotationKey, InstallNamespaceAnnotation). The status condition check at line 75 will then poll indefinitely because the controller won't recognize the addon as hosted without those annotations.
Add an update/patch path when the addon exists but is missing the required annotations.
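A minimal sketch of such an update path, assuming the test's hubAddOnClient is in scope; the helper name and its hostingClusterName/installNamespace parameters are illustrative, not the test's actual identifiers:

// ensureHostedAnnotations is a hypothetical helper: when the addon already exists,
// it re-applies the beta1 annotations so the controller treats the addon as hosted.
func ensureHostedAnnotations(existing *addonapiv1beta1.ManagedClusterAddOn, hostingClusterName, installNamespace string) error {
	updated := existing.DeepCopy()
	if updated.Annotations == nil {
		updated.Annotations = map[string]string{}
	}
	updated.Annotations[addonapiv1beta1.HostingClusterNameAnnotationKey] = hostingClusterName
	updated.Annotations[addonapiv1beta1.InstallNamespaceAnnotation] = installNamespace
	if equality.Semantic.DeepEqual(existing.Annotations, updated.Annotations) {
		return nil // already carries the beta1 annotations, nothing to do
	}
	_, err := hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(existing.Namespace).Update(
		context.Background(), updated, metav1.UpdateOptions{})
	return err
}

Calling something like this from the existing Get branch, instead of returning early, would let reruns converge on the beta1 shape rather than polling until timeout.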
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2ehosted/helloworld_hosted_test.go` around lines 49 - 80, The current Get/Create block for helloWorldHostedAddonName skips updating existing addons so prior v1alpha1 objects lacking addonapiv1beta1.HostingClusterNameAnnotationKey and addonapiv1beta1.InstallNamespaceAnnotation never get reconciled; modify the logic after the successful Get (the object returned by hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Get) to detect missing annotations and perform an Update or Patch to add those two annotations (preserving resourceVersion/owner info), then handle and return any update errors; keep the existing Create path unchanged and ensure the function still returns nil on success.pkg/addonmanager/controllers/agentdeploy/utils.go (1)
133-135: ⚠️ Potential issue | 🟠 Major
Use the legacy label key for pre-delete hook detection.
Line 134 still looks up labels with addonapiv1beta1.AddonPreDeleteHookAnnotationKey. That only matches the beta1 annotation name, so objects carrying the legacy pre-delete label will be skipped even though the TODO says label compatibility is still required. Keep the beta1 annotation check on Line 135, but switch the label lookup back to the alpha1 label constant.
Proposed fix
-	_, hasPreDeleteLabel := labels[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
+	_, hasPreDeleteLabel := labels[addonapiv1alpha1.AddonPreDeleteHookLabelKey]
	_, hasPreDeleteAnnotation := annotations[addonapiv1beta1.AddonPreDeleteHookAnnotationKey]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/utils.go` around lines 133 - 135, The label lookup currently uses the beta1 annotation constant (addonapiv1beta1.AddonPreDeleteHookAnnotationKey) against the labels map, so legacy alpha label-bearing objects are skipped; change the first lookup to use the legacy alpha label constant (e.g. addonapiv1alpha1.AddonPreDeleteHookLabel) while keeping the second line that checks annotations using addonapiv1beta1.AddonPreDeleteHookAnnotationKey; update the variables referenced (labels, annotations) in the PreDeleteHook detection code in pkg/addonmanager/controllers/agentdeploy/utils.go accordingly.pkg/utils/permission.go (2)
273-297: ⚠️ Potential issue | 🟡 Minor
Use BaseSubject{} for the empty-subject guard.
subject is populated from KubeClient.Subject.BaseSubject or CustomSigner.Subject.BaseSubject, so its concrete type here is BaseSubject. Comparing it to Subject{} misses the documented empty-subject case for kube-client registrations. (raw.githubusercontent.com)
💡 Minimal fix
-	if subject == nil || equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) {
+	if subject == nil || equality.Semantic.DeepEqual(*subject, addonapiv1beta1.BaseSubject{}) {
		return nil
	}
🤖 Prompt for AI Agents
327-330: ⚠️ Potential issue | 🟠 Major
Existing RBAC objects won't pick up the beta1 ownerRef.
This helper now writes a beta1 APIVersion, but ApplyRole and ApplyRoleBinding still short-circuit on rules/subjects/roleRef only. Roles and RoleBindings created under v1alpha1 will keep the old owner reference forever unless some other field changes.
💡 Minimal fix
// ApplyRole
existingCopy := existing.DeepCopy()
contentSame := equality.Semantic.DeepEqual(existingCopy.Rules, required.Rules)
-if contentSame {
+ownerRefsSame := equality.Semantic.DeepEqual(existingCopy.OwnerReferences, required.OwnerReferences)
+if contentSame && ownerRefsSame {
	return existingCopy, false, nil
}
existingCopy.Rules = required.Rules
+existingCopy.OwnerReferences = required.OwnerReferences

// ApplyRoleBinding
subjectsAreSame := equality.Semantic.DeepEqual(existingCopy.Subjects, requiredCopy.Subjects)
roleRefIsSame := equality.Semantic.DeepEqual(existingCopy.RoleRef, requiredCopy.RoleRef)
-if subjectsAreSame && roleRefIsSame {
+ownerRefsSame := equality.Semantic.DeepEqual(existingCopy.OwnerReferences, requiredCopy.OwnerReferences)
+if subjectsAreSame && roleRefIsSame && ownerRefsSame {
	return existingCopy, false, nil
}
existingCopy.Subjects = requiredCopy.Subjects
existingCopy.RoleRef = requiredCopy.RoleRef
+existingCopy.OwnerReferences = requiredCopy.OwnerReferences
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission.go` around lines 327 - 330, ensureAddonOwnerReference now writes a beta1 APIVersion but ApplyRole and ApplyRoleBinding only diff on rules/subjects/roleRef, so roles/rolebindings created with v1alpha1 keep old ownerRefs; update ApplyRole and ApplyRoleBinding to include OwnerReferences (or the specific ownerRef APIVersion/UID) in the equality check/compare logic and treat a changed ownerRef as a meaningful diff that triggers an update (or explicitly reconcile the ownerRef after create/update), referencing the functions ensureAddonOwnerReference, ApplyRole and ApplyRoleBinding to locate the change.pkg/addonmanager/controllers/agentdeploy/default_sync_test.go (1)
154-164: ⚠️ Potential issue | 🟡 Minor
Assert ManifestApplied status and reason explicitly.
These checks still only gate on meta.IsStatusConditionFalse. The success-path cases pass if the condition is missing or Unknown, and none of the three blocks verifies the beta1 reason mapping this PR is changing.
🔎 Tighten the assertion
-	if meta.IsStatusConditionFalse(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) {
-		t.Errorf("Condition Reason is not correct: %v", addOn.Status.Conditions)
-	}
+	cond := meta.FindStatusCondition(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied)
+	if cond == nil || cond.Status != metav1.ConditionTrue ||
+		cond.Reason != addonapiv1beta1.AddonManifestAppliedReasonManifestsApplied {
+		t.Errorf("unexpected ManifestApplied condition: %#v", cond)
+	}
For the error-path case, assert metav1.ConditionFalse with addonapiv1beta1.AddonManifestAppliedReasonWorkApplyFailed.
Also applies to: 206-216, 247-256
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/default_sync_test.go` around lines 154 - 164, Update the test assertions in validateAddonActions to check the ManifestApplied condition explicitly (presence, status, and reason) instead of using meta.IsStatusConditionFalse alone: unmarshal the Patch into ManagedClusterAddOn, find the status condition with type ManagedClusterAddOnManifestApplied, assert its Status equals metav1.ConditionFalse and its Reason equals addonapiv1beta1.AddonManifestAppliedReasonWorkApplyFailed for the error-path case (and similarly assert metav1.ConditionTrue and the correct success reason for the success-path cases); apply the same stricter assertions in the other two validate blocks referenced (around the other checks at the noted locations) so each test verifies condition existence, exact Status, and expected Reason mapping.pkg/addonmanager/controllers/cmamanagedby/controller.go (1)
87-92: ⚠️ Potential issue | 🔴 Critical
Remove the lifecycle annotation key instead of blanking it.
Assigning "" at Line 91 leaves the annotation present. On the next sync, the guard at Lines 87-89 no longer matches "self", so this controller stops reconciling even though the handoff has not actually completed. Delete the key entirely here.
🛠️ Proposed fix
if cmaCopy.Annotations == nil || cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] != addonapiv1alpha1.AddonLifecycleSelfManageAnnotationValue {
	return nil
}
-cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = ""
+delete(cmaCopy.Annotations, addonapiv1alpha1.AddonLifecycleAnnotationKey)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmamanagedby/controller.go` around lines 87 - 92, The code currently sets cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = "" which leaves the annotation present and causes the guard that checks cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] != addonapiv1alpha1.AddonLifecycleSelfManageAnnotationValue to misbehave; replace that assignment by deleting the key from the map (use delete(cmaCopy.Annotations, addonapiv1alpha1.AddonLifecycleAnnotationKey)) so the annotation is truly removed; ensure this change is made where cmaCopy is modified in the controller code handling the lifecycle annotation.test/integration/kube/cluster_management_addon_test.go (1)
69-70: ⚠️ Potential issue | 🟠 Major
Defer the ClusterManagementAddOn cleanup immediately after create.
If this spec fails before Lines 101-102, the cluster-scoped addon is left behind because AfterEach never removes it. That can make the next run fail with AlreadyExists.
🧹 Proposed cleanup pattern
cma, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(context.Background(), cma, metav1.CreateOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
+defer func() {
+	err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Delete(
+		context.Background(), cma.Name, metav1.DeleteOptions{},
+	)
+	gomega.Expect(err).ToNot(gomega.HaveOccurred())
+}()
...
-err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Delete(context.Background(), testAddonImpl.name, metav1.DeleteOptions{})
-gomega.Expect(err).ToNot(gomega.HaveOccurred())
Also applies to: 101-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/cluster_management_addon_test.go` around lines 69 - 70, After successfully creating the ClusterManagementAddOn (the cma returned by hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create), immediately schedule its deletion with a defer that calls the ClusterManagementAddOns().Delete using the created cma.Name and context to ensure cleanup even if the test fails before the existing AfterEach runs; place the defer directly after the create call so the resource is removed regardless of later failures.test/integration/cloudevents/agent_deploy_test.go (2)
748-763: ⚠️ Potential issue | 🟠 Major
Set the controller owner reference before Create.
This helper creates the ManagedClusterAddOn first and only patches the controller owner ref afterward. Controllers in this package can observe the object in that gap, which makes these integration tests flaky.
🔧 Proposed fix
func createManagedClusterAddOnwithOwnerRefs(namespace string, addon *addonapiv1beta1.ManagedClusterAddOn, cma *addonapiv1beta1.ClusterManagementAddOn) {
-	addon, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addon, metav1.CreateOptions{})
-	gomega.Expect(err).ToNot(gomega.HaveOccurred())
-
	addonCopy := addon.DeepCopy()
	// This is to assume that addon-manager has already added the OwnerReferences.
	owner := metav1.NewControllerRef(cma, schema.GroupVersionKind{
		Group:   addonapiv1beta1.GroupName,
		Version: addonapiv1beta1.GroupVersion.Version,
		Kind: "ClusterManagementAddOn",
	})
-	modified := utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
-	if modified {
-		_, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
-		gomega.Expect(err).ToNot(gomega.HaveOccurred())
-	}
+	utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
+
+	_, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addonCopy, metav1.CreateOptions{})
+	gomega.Expect(err).ToNot(gomega.HaveOccurred())
}
🤖 Prompt for AI Agents
270-279: ⚠️ Potential issue | 🟡 Minor
Accept Progressing=False in these status checks.
All of these polls fail if the controller starts publishing an explicit ManagedClusterAddOnConditionProgressing=False, even though that is a healthy terminal state. They should only reject Progressing=True.
💡 Suggested change
-	if cond := meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing); cond != nil {
-		return fmt.Errorf("expected no addon progressing condition, %v", addon.Status.Conditions)
-	}
+	if meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing) {
+		return fmt.Errorf("unexpected addon progressing condition, %v", addon.Status.Conditions)
+	}
Also applies to: 303-312, 438-450, 568-580, 728-740
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/cloudevents/agent_deploy_test.go` around lines 270 - 279, The status checks currently reject any presence of ManagedClusterAddOnConditionProgressing; update them to only fail when the Progressing condition is present AND its Status is True (i.e., only reject Progressing=True), allowing Progressing=False as a healthy terminal state; specifically, replace the meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing) nil-check logic with a check that inspects cond.Status (compare against metav1.ConditionTrue) and only return an error if cond != nil && cond.Status == metav1.ConditionTrue; apply the same change for all other occurrences that check ManagedClusterAddOnConditionProgressing.test/integration/kube/assertion_test.go (2)
91-97: ⚠️ Potential issue | 🔴 Critical
Check the Get error before mutating the fetched object.
Both polls can panic on a transient read failure, and they bypass the ctx parameter the caller passed in.
🛠️ Safer retry pattern
func updateClusterManagementAddOn(ctx context.Context, new *addonapiv1beta1.ClusterManagementAddOn) {
-	gomega.Eventually(func() bool {
-		old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(context.Background(), new.Name, metav1.GetOptions{})
-		old.Spec = new.Spec
-		old.Annotations = new.Annotations
-		_, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(context.Background(), old, metav1.UpdateOptions{})
-		if err == nil {
-			return true
-		}
-		return false
-	}, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue())
+	gomega.Eventually(func() error {
+		old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(ctx, new.Name, metav1.GetOptions{})
+		if err != nil {
+			return err
+		}
+		old.Spec = new.Spec
+		old.Annotations = new.Annotations
+		_, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(ctx, old, metav1.UpdateOptions{})
+		return err
+	}, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred())
}
Apply the same shape to updateManagedClusterAddOnStatus.
Also applies to: 159-164
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 91 - 97, The code in updateClusterManagementAddOn is mutating the result of a Get without checking its error and it ignores the passed ctx; change the polling lambda to call hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get using the provided ctx, check if err != nil (or if the returned object is nil) and return false so Eventually will retry instead of panicking, then perform the Spec/Annotations mutation and Update; make the identical fixes (use ctx, check Get error/nil) in updateManagedClusterAddOnStatus as well.
132-154: ⚠️ Potential issue | 🟠 Major
Append a KubeClient registration when none matches.
The non-empty branch only updates an existing KubeClient entry. If addon.Status.Registrations already contains other registration types, this helper returns success without ever setting the driver.
🛠️ Preserve existing entries and add the missing `KubeClient` config
-	} else {
-		for i := range addon.Status.Registrations {
+	} else {
+		found := false
+		for i := range addon.Status.Registrations {
			if addon.Status.Registrations[i].Type == addonapiv1beta1.KubeClient {
				if addon.Status.Registrations[i].KubeClient == nil {
					addon.Status.Registrations[i].KubeClient = &addonapiv1beta1.KubeClientConfig{}
				}
				addon.Status.Registrations[i].KubeClient.Driver = driver
+				found = true
				break
			}
		}
+		if !found {
+			addon.Status.Registrations = append(addon.Status.Registrations, addonapiv1beta1.RegistrationConfig{
+				Type: addonapiv1beta1.KubeClient,
+				KubeClient: &addonapiv1beta1.KubeClientConfig{
+					Driver: driver,
+				},
+			})
+		}
	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 132 - 154, The loop that updates addon.Status.Registrations only modifies an existing KubeClient entry but does not append one if none of the existing entries are of Type addonapiv1beta1.KubeClient; update the branch that handles non-empty registrations (the for loop over addon.Status.Registrations) to detect when no KubeClient was found and append a new addonapiv1beta1.RegistrationConfig with Type: addonapiv1beta1.KubeClient and KubeClient: &addonapiv1beta1.KubeClientConfig{Driver: driver}; keep preserving existing entries and then call hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).UpdateStatus(...) as before.
🧹 Nitpick comments (5)
pkg/utils/addon_config_test.go (1)
98-132: Commented-out test case has inconsistent v1beta1 structure.This commented test case uses a different
ConfigReferencestructure than the active tests:
- Here:
ConfigReferentis at theConfigReferencelevel (line 121), andDesiredConfigonly containsSpecHash(lines 124-126).- Active tests:
ConfigReferentis nested insideDesiredConfig(e.g., lines 87-90, 157-159).If this test is still needed, update the structure to match the v1beta1 pattern used elsewhere. Otherwise, consider removing the dead code before merging.
♻️ Suggested structure if keeping the test
- // ConfigReferent: addonapiv1beta1.ConfigReferent{ - // Name: "test1", - // }, // DesiredConfig: &addonapiv1beta1.ConfigSpecHash{ + // ConfigReferent: addonapiv1beta1.ConfigReferent{ + // Name: "test1", + // }, // SpecHash: "wronghash", // },🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_config_test.go` around lines 98 - 132, The commented-out test case uses an inconsistent v1beta1 ConfigReference shape; update or remove it: either remove the dead block, or change the ConfigReference in the test to match the other v1beta1 tests by nesting ConfigReferent inside DesiredConfig (i.e., ensure the ManagedClusterAddOn.Status.ConfigReferences entry places ConfigReferent under DesiredConfig and DesiredConfig includes SpecHash), matching the structure used by other cases and the newTestAddOnDeploymentConfigGetter expectations so the test for ManagedClusterAddOn/ConfigReference/DesiredConfig is consistent.test/integration-test.mk (1)
50-60: Consider extracting the CRD path to a variable.The CRD path is hardcoded. If this path is also used in
hack/fix-crd-storage-version.sh, consider extracting it to a variable (e.g.,CRD_PATH) at the top of the Makefile for maintainability and to keep both targets in sync.♻️ Example refactor
+MANAGED_CLUSTER_ADDON_CRD := vendor/open-cluster-management.io/api/addon/v1beta1/0000_01_addon.open-cluster-management.io_managedclusteraddons.crd.yaml + restore-crd-storage-version: `@echo` "Restoring original CRD..." `@if` [ -f "$(TEST_TMP)/managedclusteraddons.crd.yaml.backup" ]; then \ cp "$(TEST_TMP)/managedclusteraddons.crd.yaml.backup" \ - "vendor/open-cluster-management.io/api/addon/v1beta1/0000_01_addon.open-cluster-management.io_managedclusteraddons.crd.yaml"; \ + "$(MANAGED_CLUSTER_ADDON_CRD)"; \ rm -f "$(TEST_TMP)/managedclusteraddons.crd.yaml.backup"; \ echo "✓ CRD restored and backup removed"; \🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration-test.mk` around lines 50 - 60, The Makefile target restore-crd-storage-version hardcodes the CRD file path; extract that path into a top-level variable (e.g., CRD_PATH) and update the target to use $(CRD_PATH) instead of the literal string so the path is defined in one place; also update any other consumer (notably hack/fix-crd-storage-version.sh) to read or reference the same CRD_PATH variable (or source it) so both the restore-crd-storage-version target and the script remain in sync.pkg/utils/permission_test.go (1)
247-386: Test cases correctly mirror ClusterRole binding tests.The test structure and cases are appropriately duplicated from
TestBindKubeClientClusterRole_PendingErrorto validate the same v1beta1 behavior for Role bindings. While there's notable duplication, the independence and clarity of test cases is reasonable.Consider extracting shared test case definitions if this pattern expands further:
// Potential shared definition (optional, for future consideration): // var kubeClientBindingTestCases = []struct { ... }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission_test.go` around lines 247 - 386, Extract the duplicated test-case slice into a shared variable (e.g., kubeClientBindingTestCases) and have both TestBindKubeClientClusterRole_PendingError and TestBindKubeClientRole_PendingError reference that single definition; move the slice currently inline in the Role test (the anonymous []struct{...} containing cases like "no registrations - returns pending error", "KubeClient is nil - returns pending error", "valid subject - no error", etc.) to the package scope as kubeClientBindingTestCases and update the test functions to iterate over that variable to remove duplication while keeping each test function's assertions intact.test/integration/kube/agent_hook_deploy_test.go (1)
173-180: Prefer beta1 condition constants over raw strings. Now that this test is on addon/v1beta1, using "ManifestApplied" and "HookManifestCompleted" as literals makes the migration easier to drift or typo. The beta1 API exports dedicated constants for both condition types, so wiring the assertions to those names will keep the test pinned to the actual API surface. (pkg.go.dev)
Suggested change
- if !meta.IsStatusConditionTrue(addon.Status.Conditions, "ManifestApplied") {
+ if !meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) {
      return fmt.Errorf("Unexpected addon applied condition, %v", addon.Status.Conditions)
  }
@@
- if !meta.IsStatusConditionFalse(addon.Status.Conditions, "HookManifestCompleted") {
+ if !meta.IsStatusConditionFalse(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted) {
      return fmt.Errorf("Unexpected addon applied condition, %v", addon.Status.Conditions)
  }
Also applies to: 203-210
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/agent_hook_deploy_test.go` around lines 173 - 180, Replace the raw condition name string literals ("ManifestApplied" and "HookManifestCompleted") with the exported constants from the addon/v1beta1 API package (import it, e.g. as addonv1beta1) wherever you call meta.IsStatusConditionTrue against addon.Status.Conditions (notably in the block around ManagedClusterAddOns retrieval using hubAddonClient.AddonV1beta1().ManagedClusterAddOns and the later similar block). Update the two checks to use addonv1beta1.<appropriate constant> instead of the string literals and add the package import so the test is pinned to the v1beta1 condition names.pkg/addonmanager/controllers/addonconfig/controller.go (1)
216-228: Redundant nil check for DesiredConfig at line 217. The check if configReference.DesiredConfig == nil at line 217 is redundant because the loop already continues at line 192-194 when DesiredConfig is nil. At this point in the code, DesiredConfig is guaranteed to be non-nil.
♻️ Proposed fix to remove redundant check
  // update desired spec hash only for the configs in spec
  for _, addonconfig := range addon.Spec.Configs {
-     if configReference.DesiredConfig == nil {
-         continue
-     }
      if configReference.ConfigGroupResource == addonconfig.ConfigGroupResource && configReference.DesiredConfig.ConfigReferent == addonconfig.ConfigReferent {
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 216 - 228, The nil check for configReference.DesiredConfig inside the loop is redundant because earlier code already continues when DesiredConfig is nil; remove the if block `if configReference.DesiredConfig == nil { continue }` (the one at/around the check comparing ConfigGroupResource and ConfigReferent) so the matching logic proceeds directly, keeping the call to utils.GetSpecHash(config) and the assignment to addon.Status.ConfigReferences[index].DesiredConfig.SpecHash unchanged; ensure you only remove that redundant guard and run tests to confirm no behavioral change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/helloworld_agent/agent.go`:
- Around line 176-179: The code currently returns any error from
c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get which causes
a perpetual requeue when the ManagedClusterAddOn is deleted; update the error
handling to treat a NotFound as non-fatal by checking k8s errors.IsNotFound(err)
and simply return nil (or continue without requeuing) instead of returning the
error, while still returning the error for other cases; locate the Get call
(addon, err :=
c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get(...)) and
adjust its error branch to import and use apimachinery errors.IsNotFound.
In `@test/integration/cloudevents/agent_hook_deploy_test.go`:
- Around line 213-221: The test currently fails if any Progressing condition
exists; change the check to only fail when Progressing condition exists with
Status == metav1.ConditionTrue. Locate the block using
meta.FindStatusCondition(addon.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnConditionProgressing) and replace the
existence-only check with a conditional that inspects cond.Status (compare
against metav1.ConditionTrue) and returns the error only when cond.Status ==
metav1.ConditionTrue.
In `@test/integration/kube/agent_hook_deploy_test.go`:
- Around line 122-131: Change the test to actually validate that
InstallNamespaceAnnotation is used: set
addonapiv1beta1.InstallNamespaceAnnotation to a different namespace value (not
"default") when constructing the ManagedClusterAddOn (the variable addon) passed
to createManagedClusterAddOnwithOwnerRefs, then fetch the rendered/applied
ManifestWork (or the function that inspects created ManifestWorks) and assert
that the manifests inside reference the annotation-derived namespace rather than
the hard-coded fixture "default" (use the test helpers that read the
ManifestWork for the ManagedClusterAddOn name to locate the applied manifests
and assert the namespace string).
In `@test/integration/kube/assertion_test.go`:
- Around line 106-123: In createManagedClusterAddOnwithOwnerRefs, set the
controller OwnerReference on the addon object before calling
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(...).Create so there is no
reconciliation window; specifically, build the owner with
metav1.NewControllerRef(cma, schema.GroupVersionKind{...}) and merge it into
addon.OwnerReferences via utils.MergeOwnerRefs (or equivalent) prior to Create,
then call Create and skip the post-Create Update path; apply the same change to
the duplicate function in test/integration/cloudevents/agent_deploy_test.go.
---
Outside diff comments:
In `@pkg/addonmanager/controllers/cmaconfig/controller.go`:
- Around line 161-199: In updateConfigSpecHash, the error returns inside the
loops swallow getConfigSpecHash failures by returning nil; change those to
return the actual error so transient lister/hash errors cause a requeue and the
controller can retry. Specifically, in cmaConfigController.updateConfigSpecHash
where you call c.getConfigSpecHash(...) for DefaultConfigReferences and
InstallProgressions, replace the two occurrences of "return nil" on error with
"return err" (propagating the error from getConfigSpecHash) so
DesiredConfig.SpecHash isn't left stale when lookups fail.
In `@test/e2e/helloworld_test.go`:
- Around line 531-556: The helper currently returns early if an
AddOnDeploymentConfig already exists, which leaves stale configs in tests;
change the logic in the prepare*AddOnDeploymentConfig helpers so that after
fetching via hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(...).Get(...)
you either Create when errors.IsNotFound(err) or reconcile by updating the
existing object's Spec (NodePlacement, Tolerations, AgentInstallNamespace, etc.)
and calling Update (or Patch) on
hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Update(...) to
ensure the cluster state matches the desired
deployConfigName/namespace/managedClusterName payload; apply the same
reconciliation pattern to the other helpers mentioned (the blocks around lines
560-584, 587-610, 613-653, 656-679).
- Around line 533-549: In the prepare*AddOnDeploymentConfig helper functions the
Create call uses managedClusterName instead of the helper's namespace parameter,
which can target the wrong resource; update the
AddonV1beta1().AddOnDeploymentConfigs(...).Create(...) calls to pass the
helper's namespace variable (not managedClusterName) for all
prepare*AddOnDeploymentConfig helpers (the same change for each of the five
helpers), keeping the existing AddOnDeploymentConfig spec (NodePlacement,
Tolerations, AgentInstallNamespace) intact.
In `@test/e2ehosted/helloworld_hosted_test.go`:
- Line 124: Update the inline comment that currently reads "Ensure helloworld
addo is functioning." to correct the typo to "Ensure helloworld addon is
functioning."; locate the string in test/e2ehosted/helloworld_hosted_test.go
(search for the exact comment text) and replace "addo" with "addon" leaving the
rest of the comment unchanged.
---
Duplicate comments:
In `@pkg/addonmanager/controllers/agentdeploy/default_sync_test.go`:
- Around line 154-164: Update the test assertions in validateAddonActions to
check the ManifestApplied condition explicitly (presence, status, and reason)
instead of using meta.IsStatusConditionFalse alone: unmarshal the Patch into
ManagedClusterAddOn, find the status condition with type
ManagedClusterAddOnManifestApplied, assert its Status equals
metav1.ConditionFalse and its Reason equals
addonapiv1beta1.AddonManifestAppliedReasonWorkApplyFailed for the error-path
case (and similarly assert metav1.ConditionTrue and the correct success reason
for the success-path cases); apply the same stricter assertions in the other two
validate blocks referenced (around the other checks at the noted locations) so
each test verifies condition existence, exact Status, and expected Reason
mapping.
In `@pkg/addonmanager/controllers/agentdeploy/utils.go`:
- Around line 133-135: The label lookup currently uses the beta1 annotation
constant (addonapiv1beta1.AddonPreDeleteHookAnnotationKey) against the labels
map, so legacy alpha label-bearing objects are skipped; change the first lookup
to use the legacy alpha label constant (e.g.
addonapiv1alpha1.AddonPreDeleteHookLabel) while keeping the second line that
checks annotations using addonapiv1beta1.AddonPreDeleteHookAnnotationKey; update
the variables referenced (labels, annotations) in the PreDeleteHook detection
code in pkg/addonmanager/controllers/agentdeploy/utils.go accordingly.
In `@pkg/addonmanager/controllers/cmamanagedby/controller.go`:
- Around line 87-92: The code currently sets
cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = "" which
leaves the annotation present and causes the guard that checks
cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] !=
addonapiv1alpha1.AddonLifecycleSelfManageAnnotationValue to misbehave; replace
that assignment by deleting the key from the map (use
delete(cmaCopy.Annotations, addonapiv1alpha1.AddonLifecycleAnnotationKey)) so
the annotation is truly removed; ensure this change is made where cmaCopy is
modified in the controller code handling the lifecycle annotation.
In `@pkg/utils/permission.go`:
- Around line 273-297: The empty-subject guard in BuildSubjectsFromRegistration
is comparing *subject (a BaseSubject) against addonapiv1beta1.Subject{}, which
is the wrong concrete type; update the check to compare against
addonapiv1beta1.BaseSubject{} instead (i.e., replace
equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) with
equality.Semantic.DeepEqual(*subject, addonapiv1beta1.BaseSubject{})) so the
kube-client registration empty case is detected correctly.
- Around line 327-330: ensureAddonOwnerReference now writes a beta1 APIVersion
but ApplyRole and ApplyRoleBinding only diff on rules/subjects/roleRef, so
roles/rolebindings created with v1alpha1 keep old ownerRefs; update ApplyRole
and ApplyRoleBinding to include OwnerReferences (or the specific ownerRef
APIVersion/UID) in the equality check/compare logic and treat a changed ownerRef
as a meaningful diff that triggers an update (or explicitly reconcile the
ownerRef after create/update), referencing the functions
ensureAddonOwnerReference, ApplyRole and ApplyRoleBinding to locate the change.
In `@test/e2ehosted/helloworld_hosted_test.go`:
- Around line 49-80: The current Get/Create block for helloWorldHostedAddonName
skips updating existing addons so prior v1alpha1 objects lacking
addonapiv1beta1.HostingClusterNameAnnotationKey and
addonapiv1beta1.InstallNamespaceAnnotation never get reconciled; modify the
logic after the successful Get (the object returned by
hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Get) to detect missing
annotations and perform an Update or Patch to add those two annotations
(preserving resourceVersion/owner info), then handle and return any update
errors; keep the existing Create path unchanged and ensure the function still
returns nil on success.
In `@test/integration-test.mk`:
- Around line 72-73: The test-integration recipe can skip CRD restoration if
test-kube-integration fails; wrap the recipe commands so the shell registers a
trap that always runs restore-crd-storage-version on EXIT (or ERR/EXIT) before
running update-crd-storage-version and test-kube-integration, then clear the
trap after successful completion; update the test-integration target to invoke
the trap-backed shell sequence so restore-crd-storage-version is executed
regardless of test outcome.
In `@test/integration/cloudevents/agent_deploy_test.go`:
- Around line 748-763: In createManagedClusterAddOnwithOwnerRefs, set the
controller OwnerReference on the ManagedClusterAddOn object before calling
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(...)
instead of creating and patching it afterwards; build the owner via
metav1.NewControllerRef(cma, schema.GroupVersionKind{Group:
addonapiv1beta1.GroupName, Version: addonapiv1beta1.GroupVersion.Version, Kind:
"ClusterManagementAddOn"}) and merge/assign it into addon.OwnerReferences (or
addonCopy.OwnerReferences) prior to the Create call so the created resource
already has the controller owner ref and avoids the observation gap that causes
flakiness.
- Around line 270-279: The status checks currently reject any presence of
ManagedClusterAddOnConditionProgressing; update them to only fail when the
Progressing condition is present AND its Status is True (i.e., only reject
Progressing=True), allowing Progressing=False as a healthy terminal state;
specifically, replace the meta.FindStatusCondition(addon.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnConditionProgressing) nil-check logic with a
check that inspects cond.Status (compare against metav1.ConditionTrue) and only
return an error if cond != nil && cond.Status == metav1.ConditionTrue; apply the
same change for all other occurrences that check
ManagedClusterAddOnConditionProgressing.
In `@test/integration/kube/assertion_test.go`:
- Around line 91-97: The code in updateClusterManagementAddOn is mutating the
result of a Get without checking its error and it ignores the passed ctx; change
the polling lambda to call
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get using the provided
ctx, check if err != nil (or if the returned object is nil) and return false so
Eventually will retry instead of panicking, then perform the Spec/Annotations
mutation and Update; make the identical fixes (use ctx, check Get error/nil) in
updateManagedClusterAddOnStatus as well.
- Around line 132-154: The loop that updates addon.Status.Registrations only
modifies an existing KubeClient entry but does not append one if none of the
existing entries are of Type addonapiv1beta1.KubeClient; update the branch that
handles non-empty registrations (the for loop over addon.Status.Registrations)
to detect when no KubeClient was found and append a new
addonapiv1beta1.RegistrationConfig with Type: addonapiv1beta1.KubeClient and
KubeClient: &addonapiv1beta1.KubeClientConfig{Driver: driver}; keep preserving
existing entries and then call
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).UpdateStatus(...)
as before.
In `@test/integration/kube/cluster_management_addon_test.go`:
- Around line 69-70: After successfully creating the ClusterManagementAddOn (the
cma returned by hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create),
immediately schedule its deletion with a defer that calls the
ClusterManagementAddOns().Delete using the created cma.Name and context to
ensure cleanup even if the test fails before the existing AfterEach runs; place
the defer directly after the create call so the resource is removed regardless
of later failures.
---
Nitpick comments:
In `@pkg/addonmanager/controllers/addonconfig/controller.go`:
- Around line 216-228: The nil check for configReference.DesiredConfig inside
the loop is redundant because earlier code already continues when DesiredConfig
is nil; remove the if block `if configReference.DesiredConfig == nil { continue
}` (the one at/around the check comparing ConfigGroupResource and
ConfigReferent) so the matching logic proceeds directly, keeping the call to
utils.GetSpecHash(config) and the assignment to
addon.Status.ConfigReferences[index].DesiredConfig.SpecHash unchanged; ensure
you only remove that redundant guard and run tests to confirm no behavioral
change.
In `@pkg/utils/addon_config_test.go`:
- Around line 98-132: The commented-out test case uses an inconsistent v1beta1
ConfigReference shape; update or remove it: either remove the dead block, or
change the ConfigReference in the test to match the other v1beta1 tests by
nesting ConfigReferent inside DesiredConfig (i.e., ensure the
ManagedClusterAddOn.Status.ConfigReferences entry places ConfigReferent under
DesiredConfig and DesiredConfig includes SpecHash), matching the structure used
by other cases and the newTestAddOnDeploymentConfigGetter expectations so the
test for ManagedClusterAddOn/ConfigReference/DesiredConfig is consistent.
In `@pkg/utils/permission_test.go`:
- Around line 247-386: Extract the duplicated test-case slice into a shared
variable (e.g., kubeClientBindingTestCases) and have both
TestBindKubeClientClusterRole_PendingError and
TestBindKubeClientRole_PendingError reference that single definition; move the
slice currently inline in the Role test (the anonymous []struct{...} containing
cases like "no registrations - returns pending error", "KubeClient is nil -
returns pending error", "valid subject - no error", etc.) to the package scope
as kubeClientBindingTestCases and update the test functions to iterate over that
variable to remove duplication while keeping each test function's assertions
intact.
In `@test/integration-test.mk`:
- Around line 50-60: The Makefile target restore-crd-storage-version hardcodes
the CRD file path; extract that path into a top-level variable (e.g., CRD_PATH)
and update the target to use $(CRD_PATH) instead of the literal string so the
path is defined in one place; also update any other consumer (notably
hack/fix-crd-storage-version.sh) to read or reference the same CRD_PATH variable
(or source it) so both the restore-crd-storage-version target and the script
remain in sync.
In `@test/integration/kube/agent_hook_deploy_test.go`:
- Around line 173-180: Replace the raw condition name string literals
("ManifestApplied" and "HookManifestCompleted") with the exported constants from
the addon/v1beta1 API package (import it, e.g. as addonv1beta1) wherever you
call meta.IsStatusConditionTrue against addon.Status.Conditions (notably in the
block around ManagedClusterAddOns retrieval using
hubAddonClient.AddonV1beta1().ManagedClusterAddOns and the later similar block).
Update the two checks to use addonv1beta1.<appropriate constant> instead of the
string literals and add the package import so the test is pinned to the v1beta1
condition names.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3e613f30-0d3d-406b-b1a1-075473def7bb
⛔ Files ignored due to path filters (6)
- go.sum is excluded by !**/*.sum
- vendor/modules.txt is excluded by !vendor/**
- vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go is excluded by !vendor/**
- vendor/open-cluster-management.io/api/operator/v1/0000_01_operator.open-cluster-management.io_clustermanagers.crd.yaml is excluded by !vendor/**
- vendor/open-cluster-management.io/api/operator/v1/types_clustermanager.go is excluded by !vendor/**
- vendor/open-cluster-management.io/api/operator/v1/zz_generated.deepcopy.go is excluded by !vendor/**
📒 Files selected for processing (63)
- README.md
- examples/helloworld/helloworld_test.go
- examples/helloworld_agent/agent.go
- examples/helloworld_helm/helloworld_helm.go
- examples/helloworld_helm/helloworld_helm_test.go
- go.mod
- hack/fix-crd-storage-version.sh
- pkg/addonfactory/helm_agentaddon.go
- pkg/addonfactory/helm_agentaddon_test.go
- pkg/addonfactory/template_agentaddon.go
- pkg/addonfactory/template_agentaddon_test.go
- pkg/addonfactory/test_helper.go
- pkg/addonmanager/addontesting/helpers.go
- pkg/addonmanager/base_manager.go
- pkg/addonmanager/cloudevents/manager.go
- pkg/addonmanager/constants/constants.go
- pkg/addonmanager/controllers/addonconfig/controller.go
- pkg/addonmanager/controllers/addonconfig/controller_test.go
- pkg/addonmanager/controllers/agentdeploy/controller.go
- pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/default_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/util_test.go
- pkg/addonmanager/controllers/agentdeploy/utils.go
- pkg/addonmanager/controllers/certificate/csrapprove_test.go
- pkg/addonmanager/controllers/certificate/csrsign_test.go
- pkg/addonmanager/controllers/cmaconfig/controller.go
- pkg/addonmanager/controllers/cmaconfig/controller_test.go
- pkg/addonmanager/controllers/cmamanagedby/controller.go
- pkg/addonmanager/controllers/cmamanagedby/controller_test.go
- pkg/addonmanager/controllers/registration/controller.go
- pkg/addonmanager/controllers/registration/controller_test.go
- pkg/addonmanager/manager.go
- pkg/agent/interface.go
- pkg/index/index.go
- pkg/utils/addon_config.go
- pkg/utils/addon_config_test.go
- pkg/utils/addon_test.go
- pkg/utils/helpers.go
- pkg/utils/permission.go
- pkg/utils/permission_test.go
- test/e2e/helloworld_helm_test.go
- test/e2e/helloworld_test.go
- test/e2ehosted/helloworld_hosted_test.go
- test/e2etoken/helloworld_token_test.go
- test/integration-test.mk
- test/integration/cloudevents/agent_deploy_test.go
- test/integration/cloudevents/agent_hook_deploy_test.go
- test/integration/cloudevents/agent_hosting_deploy_test.go
- test/integration/cloudevents/agent_hosting_hook_deploy_test.go
- test/integration/cloudevents/suite_test.go
- test/integration/kube/agent_deploy_test.go
- test/integration/kube/agent_hook_deploy_test.go
- test/integration/kube/agent_hosting_deploy_test.go
- test/integration/kube/agent_hosting_hook_deploy_test.go
- test/integration/kube/assertion_test.go
- test/integration/kube/cluster_management_addon_test.go
- test/integration/kube/csr_test.go
- test/integration/kube/deleted_managed_cluster_create_addon_test.go
- test/integration/kube/multiworks_test.go
- test/integration/kube/registration_test.go
- test/integration/kube/suite_test.go
✅ Files skipped from review due to trivial changes (1)
- README.md
🚧 Files skipped from review as they are similar to previous changes (6)
- hack/fix-crd-storage-version.sh
- pkg/utils/addon_test.go
- examples/helloworld/helloworld_test.go
- pkg/addonmanager/controllers/agentdeploy/util_test.go
- pkg/addonfactory/template_agentaddon_test.go
- go.mod
addon, err := c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get(ctx, c.addonName, metav1.GetOptions{})
if err != nil {
    return err
}
Ignore missing addons instead of requeueing forever.
If the ManagedClusterAddOn is gone, this controller has nothing to sync. Returning the raw NotFound here turns every hub ConfigMap event into a retry loop during uninstall or creation races.
🩹 Suggested fix
addon, err := c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get(ctx, c.addonName, metav1.GetOptions{})
- if err != nil {
- return err
- }
+ switch {
+ case errors.IsNotFound(err):
+ return nil
+ case err != nil:
+ return err
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/helloworld_agent/agent.go` around lines 176 - 179, The code
currently returns any error from
c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get which causes
a perpetual requeue when the ManagedClusterAddOn is deleted; update the error
handling to treat a NotFound as non-fatal by checking k8s errors.IsNotFound(err)
and simply return nil (or continue without requeuing) instead of returning the
error, while still returning the error for other cases; locate the Get call
(addon, err :=
c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get(...)) and
adjust its error branch to import and use apimachinery errors.IsNotFound.
  addon, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Get(context.Background(), testAddonImpl.name, metav1.GetOptions{})
  if err != nil {
      return err
  }
- if !meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1alpha1.ManagedClusterAddOnManifestApplied) {
+ if !meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) {
      return fmt.Errorf("Unexpected addon applied condition, %v", addon.Status.Conditions)
  }
- if cond := meta.FindStatusCondition(addon.Status.Conditions, addonapiv1alpha1.ManagedClusterAddOnConditionProgressing); cond != nil {
+ if cond := meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing); cond != nil {
      return fmt.Errorf("expected no addon progressing condition, %v", addon.Status.Conditions)
🧩 Analysis chain
🏁 Script executed:
# First, let's read the file at the specified lines
head -221 test/integration/cloudevents/agent_hook_deploy_test.go | tail -30
Repository: open-cluster-management-io/addon-framework
Length of output: 1687
🏁 Script executed:
# Read lines around 213-221 with context
sed -n '200,230p' test/integration/cloudevents/agent_hook_deploy_test.go
Repository: open-cluster-management-io/addon-framework
Length of output: 1677
🏁 Script executed:
# Find where meta.IsStatusConditionTrue and meta.FindStatusCondition are defined/used
rg -A2 "IsStatusConditionTrue|FindStatusCondition" --type go -m 5
Repository: open-cluster-management-io/addon-framework
Length of output: 27738
🏁 Script executed:
# Search for ManagedClusterAddOnConditionProgressing definition
rg "ManagedClusterAddOnConditionProgressing" --type go -B2 -A2Repository: open-cluster-management-io/addon-framework
Length of output: 7564
Check Progressing condition status value, not just existence.
This test fails as soon as the controller publishes an explicit Progressing=False condition, which is still healthy. Only Progressing=True should trigger this failure.
Suggested change
- if cond := meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing); cond != nil {
- return fmt.Errorf("expected no addon progressing condition, %v", addon.Status.Conditions)
- }
+ if meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnConditionProgressing) {
+ return fmt.Errorf("unexpected addon progressing condition, %v", addon.Status.Conditions)
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/cloudevents/agent_hook_deploy_test.go` around lines 213 -
221, The test currently fails if any Progressing condition exists; change the
check to only fail when Progressing condition exists with Status ==
metav1.ConditionTrue. Locate the block using
meta.FindStatusCondition(addon.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnConditionProgressing) and replace the
existence-only check with a conditional that inspects cond.Status (compare
against metav1.ConditionTrue) and returns the error only when cond.Status ==
metav1.ConditionTrue.
  addon := &addonapiv1beta1.ManagedClusterAddOn{
      ObjectMeta: metav1.ObjectMeta{
          Name: testAddonImpl.name,
          Annotations: map[string]string{
              addonapiv1beta1.InstallNamespaceAnnotation: "default",
          },
      },
-     Spec: addonapiv1alpha1.ManagedClusterAddOnSpec{
-         InstallNamespace: "default",
-     },
+     Spec: addonapiv1beta1.ManagedClusterAddOnSpec{},
  }
  createManagedClusterAddOnwithOwnerRefs(managedClusterName, addon, cma)
This migration test doesn't actually prove the beta1 install-namespace annotation is honored.
The new InstallNamespaceAnnotation is set to "default", but the fixture in this file already hard-codes "namespace": "default" and "--addon-namespace=default". A controller regression that ignored the annotation and just applied the manifest as-is would still pass here. Please vary the annotated namespace away from the baked-in fixture value, then assert the rendered/applied ManifestWork reflects the annotation-derived namespace.
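A minimal shape for that stricter check, as a sketch only: the annotated namespace, the hubWorkClient work client, the manifestWorkName helper, and the eventuallyTimeout/eventuallyInterval values are assumptions about the existing test fixtures and would need to match whatever this suite already defines.

	addon.Annotations[addonapiv1beta1.InstallNamespaceAnnotation] = "hook-test-ns"
	createManagedClusterAddOnwithOwnerRefs(managedClusterName, addon, cma)

	gomega.Eventually(func() error {
		work, err := hubWorkClient.WorkV1().ManifestWorks(managedClusterName).Get(
			context.Background(), manifestWorkName(addon), metav1.GetOptions{})
		if err != nil {
			return err
		}
		// The applied manifests should reference the annotation-derived namespace,
		// not the "default" baked into the fixture JSON.
		for _, m := range work.Spec.Workload.Manifests {
			if strings.Contains(string(m.Raw), `"namespace":"hook-test-ns"`) {
				return nil
			}
		}
		return fmt.Errorf("no manifest uses the annotated namespace")
	}, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred())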
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/kube/agent_hook_deploy_test.go` around lines 122 - 131,
Change the test to actually validate that InstallNamespaceAnnotation is used:
set addonapiv1beta1.InstallNamespaceAnnotation to a different namespace value
(not "default") when constructing the ManagedClusterAddOn (the variable addon)
passed to createManagedClusterAddOnwithOwnerRefs, then fetch the
rendered/applied ManifestWork (or the function that inspects created
ManifestWorks) and assert that the manifests inside reference the
annotation-derived namespace rather than the hard-coded fixture "default" (use
the test helpers that read the ManifestWork for the ManagedClusterAddOn name to
locate the applied manifests and assert the namespace string).
  func createManagedClusterAddOnwithOwnerRefs(namespace string, addon *addonapiv1beta1.ManagedClusterAddOn, cma *addonapiv1beta1.ClusterManagementAddOn) {
      addon, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addon, metav1.CreateOptions{})
      gomega.Expect(err).ToNot(gomega.HaveOccurred())

      addonCopy := addon.DeepCopy()

      // This is to assume that addon-manager has already added the OwnerReferences.
      owner := metav1.NewControllerRef(cma, schema.GroupVersionKind{
-         Group:   addonapiv1alpha1.GroupName,
-         Version: addonapiv1alpha1.GroupVersion.Version,
+         Group:   addonapiv1beta1.GroupName,
+         Version: addonapiv1beta1.GroupVersion.Version,
          Kind:    "ClusterManagementAddOn",
      })
      modified := utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
      if modified {
-         _, err = hubAddonClient.AddonV1alpha1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
+         _, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
          gomega.Expect(err).ToNot(gomega.HaveOccurred())
      }
  }
🧩 Analysis chain
🏁 Script executed:
find . -name "assertion_test.go" -path "*/kube/*"
Repository: open-cluster-management-io/addon-framework
Length of output: 127
🏁 Script executed:
cat -n ./test/integration/kube/assertion_test.go | head -150 | tail -50
Repository: open-cluster-management-io/addon-framework
Length of output: 2559
🏁 Script executed:
rg "createManagedClusterAddOnwithOwnerRefs" test/integration/kube/assertion_test.go -A 2 -B 2Repository: open-cluster-management-io/addon-framework
Length of output: 589
🏁 Script executed:
rg "createManagedClusterAddOnwithOwnerRefs" --type go -B 3 -A 3Repository: open-cluster-management-io/addon-framework
Length of output: 16585
🏁 Script executed:
# Check if there are other copies of this function
rg "func createManagedClusterAddOnwithOwnerRefs" --type go -lRepository: open-cluster-management-io/addon-framework
Length of output: 175
🏁 Script executed:
cat -n test/integration/cloudevents/agent_deploy_test.go | grep -A 20 "func createManagedClusterAddOnwithOwnerRefs"
Repository: open-cluster-management-io/addon-framework
Length of output: 1149
🏁 Script executed:
# Check the utils.MergeOwnerRefs function
rg "func MergeOwnerRefs" --type go -A 10Repository: open-cluster-management-io/addon-framework
Length of output: 625
🏁 Script executed:
# Let's check if there are production code patterns similar to this
rg "Create\(" --type go -A 5 | rg -A 5 "Update\(" | head -50Repository: open-cluster-management-io/addon-framework
Length of output: 7924
Set the controller owner reference before Create.
Creating first and patching owner refs afterward leaves a reconciliation window where controllers can observe the addon without its controller owner reference. This pattern appears in both test files and should be fixed to ensure consistent test behavior.
🔧 Proposed fix
func createManagedClusterAddOnwithOwnerRefs(namespace string, addon *addonapiv1beta1.ManagedClusterAddOn, cma *addonapiv1beta1.ClusterManagementAddOn) {
- addon, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addon, metav1.CreateOptions{})
- gomega.Expect(err).ToNot(gomega.HaveOccurred())
-
- addonCopy := addon.DeepCopy()
-
// This is to assume that addon-manager has already added the OwnerReferences.
owner := metav1.NewControllerRef(cma, schema.GroupVersionKind{
Group: addonapiv1beta1.GroupName,
Version: addonapiv1beta1.GroupVersion.Version,
Kind: "ClusterManagementAddOn",
})
- modified := utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
- if modified {
- _, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
- gomega.Expect(err).ToNot(gomega.HaveOccurred())
- }
+ addonCopy := addon.DeepCopy()
+ utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
+
+ _, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addonCopy, metav1.CreateOptions{})
+ gomega.Expect(err).ToNot(gomega.HaveOccurred())
  }
Note: This function also exists in test/integration/cloudevents/agent_deploy_test.go and needs the same fix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/kube/assertion_test.go` around lines 106 - 123, In
createManagedClusterAddOnwithOwnerRefs, set the controller OwnerReference on the
addon object before calling
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(...).Create so there is no
reconciliation window; specifically, build the owner with
metav1.NewControllerRef(cma, schema.GroupVersionKind{...}) and merge it into
addon.OwnerReferences via utils.MergeOwnerRefs (or equivalent) prior to Create,
then call Create and skip the post-Create Update path; apply the same change to
the duplicate function in test/integration/cloudevents/agent_deploy_test.go.
b1a11f7 to 374fd0a
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
test/integration/cloudevents/suite_test.go (1)
223-252:⚠️ Potential issue | 🔴 Critical
Type mismatch detected in method and callback signatures.
The code uses
addonapiv1beta1 types but the agent.AgentAddon interface and callback definitions expect addonapiv1alpha1 types:
- Manifests method parameter: uses addonapiv1beta1.ManagedClusterAddOn (expected: addonapiv1alpha1.ManagedClusterAddOn)
- CSRConfigurations callback: uses addonapiv1beta1.ManagedClusterAddOn and returns addonapiv1beta1.RegistrationConfig (expected: addonapiv1alpha1 types)
- CSRApproveCheck callback: uses addonapiv1beta1.ManagedClusterAddOn (expected: addonapiv1alpha1.ManagedClusterAddOn)
- CSRSign callback: uses addonapiv1beta1.ManagedClusterAddOn (expected: addonapiv1alpha1.ManagedClusterAddOn)
Update all method and callback signatures to use
addonapiv1alpha1 types to match the interface definitions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/cloudevents/suite_test.go` around lines 223 - 252, The Manifests and GetAgentAddonOptions registration callbacks use addonapiv1beta1 types but the agent.AgentAddon interface expects addonapiv1alpha1; update the signatures to use addonapiv1alpha1 throughout: change the Manifests method parameter from addonapiv1beta1.ManagedClusterAddOn to addonapiv1alpha1.ManagedClusterAddOn, update the RegistrationOption.CSRConfigurations callback to accept addonapiv1alpha1.ManagedClusterAddOn and return []addonapiv1alpha1.RegistrationConfig, and change CSRApproveCheck and CSRSign callbacks to use addonapiv1alpha1.ManagedClusterAddOn (and adjust CSRSign return types if needed) so all references in Manifests, GetAgentAddonOptions, RegistrationOption, CSRConfigurations, CSRApproveCheck, and CSRSign consistently use addonapiv1alpha1.pkg/utils/addon_config.go (1)
39-58:⚠️ Potential issue | 🟠 Major
Don't treat a missing deployment config as "ready" in template-based flows.
Line 55 returns
""/nilwhen the config reference is absent, so callers can continue with defaults even if the add-on is only missing its status wiring temporarily. That makes template-based addons vulnerable to being processed before their config references are fully established. Please either surface a retryable error here or verify every caller gates this helper behind the template-based readiness filter.Based on learnings: In template-based addon mode (templateBasedAddOn=true), the FilterTemplateBasedAddOns function intentionally returns false for template-based addons whose Status.ConfigReferences are not yet populated with the proper AddOnTemplate reference and SpecHash. This ensures controllers retry and wait for template configuration to be fully established before processing, preventing premature processing of incompletely configured template addons.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_config.go` around lines 39 - 58, The helper that calls GetDesiredAddOnDeploymentConfig currently returns "", nil when config == nil, which allows processing to continue for template-based addons; instead detect template-based addons (use FilterTemplateBasedAddOns or the templateBasedAddOn predicate) and, if the addon is template-based and config == nil, return a retryable error (e.g., fmt.Errorf("addon deployment config not yet populated for template-based addon %s")), so controllers will retry; otherwise (non-template addons) you may keep returning the default "", nil. Update the anonymous function wrapping GetDesiredAddOnDeploymentConfig and use FilterTemplateBasedAddOns / templateBasedAddOn logic to gate this behavior.
♻️ Duplicate comments (8)
test/e2ehosted/helloworld_hosted_test.go (1)
49-79:⚠️ Potential issue | 🟠 Major
Backfill beta1 annotations on existing
ManagedClusterAddOn objects. If Line 50 finds a leftover
helloworldhosted from a previous run, this path never writes HostingClusterNameAnnotationKey or InstallNamespaceAnnotation, so the later polls can keep waiting on a stale CR. Patch/update the existing object inside this Eventually loop instead of only setting annotations on create. Also return the create result rather than the stale NotFound error.
🔧 Suggested fix
  addon, err := hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Get(
      context.Background(), helloWorldHostedAddonName, metav1.GetOptions{})
  if err != nil {
      if errors.IsNotFound(err) {
          _, cerr := hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Create(
              context.Background(),
              &addonapiv1beta1.ManagedClusterAddOn{
                  ObjectMeta: metav1.ObjectMeta{
                      Namespace: hostedManagedClusterName,
                      Name:      helloWorldHostedAddonName,
                      Annotations: map[string]string{
                          addonapiv1beta1.HostingClusterNameAnnotationKey: hostingClusterName,
                          addonapiv1beta1.InstallNamespaceAnnotation:      addonAgentNamespace,
                      },
                  },
                  Spec: addonapiv1beta1.ManagedClusterAddOnSpec{},
              },
              metav1.CreateOptions{})
-         if cerr != nil {
-             return cerr
-         }
+         return cerr
      }
      return err
  }
+
+ addon = addon.DeepCopy()
+ if addon.Annotations == nil {
+     addon.Annotations = map[string]string{}
+ }
+ needsUpdate := false
+ if addon.Annotations[addonapiv1beta1.HostingClusterNameAnnotationKey] != hostingClusterName {
+     addon.Annotations[addonapiv1beta1.HostingClusterNameAnnotationKey] = hostingClusterName
+     needsUpdate = true
+ }
+ if addon.Annotations[addonapiv1beta1.InstallNamespaceAnnotation] != addonAgentNamespace {
+     addon.Annotations[addonapiv1beta1.InstallNamespaceAnnotation] = addonAgentNamespace
+     needsUpdate = true
+ }
+ if needsUpdate {
+     _, err = hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Update(
+         context.Background(), addon, metav1.UpdateOptions{})
+     return err
+ }
Verify each finding against the current code and only fix it if needed. In `@test/e2ehosted/helloworld_hosted_test.go` around lines 49 - 79, If a leftover ManagedClusterAddOn exists, the test must backfill beta1 annotations instead of leaving a stale CR; inside the Eventually block after hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Get (symbol: addon, err), when err == nil check addon.ObjectMeta.Annotations for addonapiv1beta1.HostingClusterNameAnnotationKey and addonapiv1beta1.InstallNamespaceAnnotation and if missing, set them on addon.ObjectMeta.Annotations and call hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Update(context.Background(), addon, metav1.UpdateOptions{}) (handle and return any update error), and on successful create or update return nil so the Eventually loop proceeds instead of waiting on a stale state.test/integration-test.mk (1)
72-73:⚠️ Potential issue | 🟠 MajorRun update/test/restore in one recipe.
update-crd-storage-versionandtest-kube-integrationare normal prerequisites here, somake -jcan schedule them in parallel, and if the test prerequisite fails the restore step never runs. Keep the full flow in one shell with anEXITtrap so ordering, cleanup, and the final exit code are guaranteed.Proposed fix
-test-integration: update-crd-storage-version test-kube-integration
-	@$(MAKE) restore-crd-storage-version
+test-integration:
+	@( \
+		set -e; \
+		trap 'rc=$$?; $(MAKE) restore-crd-storage-version; exit $$rc' EXIT; \
+		$(MAKE) update-crd-storage-version; \
+		$(MAKE) test-kube-integration; \
+	)
Verify each finding against the current code and only fix it if needed. In `@test/integration-test.mk` around lines 72 - 73, The test-integration recipe currently lists update-crd-storage-version and test-kube-integration as prerequisites so make -j may run them in parallel and restore-crd-storage-version won’t run if a prerequisite fails; change test-integration to run the full sequence in one shell: invoke update-crd-storage-version, run test-kube-integration, and then always run restore-crd-storage-version in a shell-level EXIT trap that captures the first non-zero exit code and re-returns it; update the recipe for test-integration to call those targets serially inside a single sh -c block and set a trap 'EXIT' to run restore-crd-storage-version and exit with the original status.pkg/utils/permission.go (2)
327-335:⚠️ Potential issue | 🟠 MajorExisting
Role/RoleBindingobjects will keep the old ownerRef APIVersion.This helper now writes a beta1 owner ref, but
ApplyRoleandApplyRoleBindingstill ignoreOwnerReferenceson update. Anything created under v1alpha1 stays on the old owner ref unless rules or subjects also change.💡 Reconcile owner refs on update too
// ApplyRole existingCopy := existing.DeepCopy() contentSame := equality.Semantic.DeepEqual(existingCopy.Rules, required.Rules) -if contentSame { +ownerRefsSame := equality.Semantic.DeepEqual(existingCopy.OwnerReferences, required.OwnerReferences) +if contentSame && ownerRefsSame { return existingCopy, false, nil } existingCopy.Rules = required.Rules +existingCopy.OwnerReferences = required.OwnerReferences// ApplyRoleBinding subjectsAreSame := equality.Semantic.DeepEqual(existingCopy.Subjects, requiredCopy.Subjects) roleRefIsSame := equality.Semantic.DeepEqual(existingCopy.RoleRef, requiredCopy.RoleRef) -if subjectsAreSame && roleRefIsSame { +ownerRefsSame := equality.Semantic.DeepEqual(existingCopy.OwnerReferences, requiredCopy.OwnerReferences) +if subjectsAreSame && roleRefIsSame && ownerRefsSame { return existingCopy, false, nil } existingCopy.Subjects = requiredCopy.Subjects existingCopy.RoleRef = requiredCopy.RoleRef +existingCopy.OwnerReferences = requiredCopy.OwnerReferences🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission.go` around lines 327 - 335, Existing Role/RoleBinding objects keep the old v1alpha1 ownerRef because ensureAddonOwnerReference now writes beta1 but ApplyRole/ApplyRoleBinding only set ownerRefs on create; update paths must reconcile owner references too. Modify the update logic in ApplyRole and ApplyRoleBinding so that when an existing object is patched/updated you also compute the desired ownerRefs (using ensureAddonOwnerReference) and replace the object's OwnerReferences if they differ, then persist the update (or include ownerRefs in the patch) so existing resources are migrated to the new addonapiv1beta1 ownerRef; reference the ensureAddonOwnerReference helper and the ApplyRole/ApplyRoleBinding functions to locate where to add this ownerRef reconciliation.
295-297:⚠️ Potential issue | 🟡 MinorThe empty-subject guard is comparing the wrong beta1 type.
subjectis a*addonapiv1beta1.BaseSubject, so this comparison never matches. Current callers only checklen(subjects), but the helper no longer honors its documented nil-on-empty behavior.💡 Minimal fix
- if subject == nil || equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) { + if subject == nil || (subject.User == "" && len(subject.Groups) == 0) { return nil }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission.go` around lines 295 - 297, The guard incorrectly compares subject (a *addonapiv1beta1.BaseSubject) against addonapiv1beta1.Subject{}, so it never detects an empty BaseSubject; update the check to compare *subject to addonapiv1beta1.BaseSubject{} (i.e. use equality.Semantic.DeepEqual(*subject, addonapiv1beta1.BaseSubject{})) or an equivalent zero-value/is-zero check; modify the code around the subject nil/empty guard (the variable subject and the equality.Semantic.DeepEqual call) so the helper returns nil for an empty BaseSubject as documented.pkg/addonmanager/controllers/agentdeploy/default_sync_test.go (1)
154-164:⚠️ Potential issue | 🟡 Minor
Assert the exact
ManifestApplied condition here. These checks are still too weak for the beta1 reason migration. The success paths pass on a missing or
Unknown condition, and none of these blocks verify the expected reason.
💡 Tighten the assertions
- if meta.IsStatusConditionFalse(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) { - t.Errorf("Condition Reason is not correct: %v", addOn.Status.Conditions) - } + cond := meta.FindStatusCondition(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) + if cond == nil || cond.Status != metav1.ConditionTrue || + cond.Reason != addonapiv1beta1.AddonManifestAppliedReasonManifestsApplied { + t.Errorf("unexpected ManifestApplied condition: %#v", cond) + }- if !meta.IsStatusConditionFalse(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) { - t.Errorf("Condition Reason is not correct: %v", addOn.Status.Conditions) - } + cond := meta.FindStatusCondition(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) + if cond == nil || cond.Status != metav1.ConditionFalse || + cond.Reason != addonapiv1beta1.AddonManifestAppliedReasonWorkApplyFailed { + t.Errorf("unexpected ManifestApplied condition: %#v", cond) + }Also applies to: 206-216, 247-256
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/default_sync_test.go` around lines 154 - 164, The test's validateAddonActions is only weakly checking ManifestApplied; update it to assert the exact condition object exists and has the expected status and reason: locate the patch unmarshalling into ManagedClusterAddOn in validateAddonActions, find the ManifestApplied condition (use meta.FindStatusCondition or iterate addOn.Status.Conditions), assert meta.IsStatusConditionTrue(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied) (or equivalent), and assert the condition.Reason equals the canonical reason used by the controller for the beta1 migration (the exact reason constant/string used in your controller), failing the test if the condition is missing, not True, or has the wrong Reason.test/integration/kube/cluster_management_addon_test.go (1)
69-70:⚠️ Potential issue | 🟠 Major
Register CMA cleanup immediately after create.
Lines 69-70 create a cluster-scoped
ClusterManagementAddOn, but the only delete is at Lines 101-102 inside the spec body. If this spec exits early, the resource is left behind and later runs can fail withAlreadyExists. Useginkgo.DeferCleanupright after the create succeeds.🧹 Suggested cleanup pattern
cma, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(context.Background(), cma, metav1.CreateOptions{}) gomega.Expect(err).ToNot(gomega.HaveOccurred()) + ginkgo.DeferCleanup(func() { + cleanupErr := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Delete( + context.Background(), cma.Name, metav1.DeleteOptions{}, + ) + gomega.Expect(cleanupErr).ToNot(gomega.HaveOccurred()) + }) @@ - err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Delete(context.Background(), testAddonImpl.name, metav1.DeleteOptions{}) - gomega.Expect(err).ToNot(gomega.HaveOccurred())Also applies to: 101-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/cluster_management_addon_test.go` around lines 69 - 70, After creating the ClusterManagementAddOn with hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(...), register ginkgo.DeferCleanup immediately (right after the gomega.Expect(err).ToNot(HaveOccurred()) line) to ensure the CMA is always deleted if the test exits early; reference the created variable (cma) and the client method ClusterManagementAddOns().Delete(...) in the cleanup so the resource is removed reliably (also apply the same pattern where deletion currently occurs around the spec body).pkg/addonmanager/controllers/cmamanagedby/controller.go (1)
87-90:⚠️ Potential issue | 🔴 Critical
Fix the lifecycle annotation migration before this can build.
The build is already failing on Lines 87-90:
addonapiv1beta1does not exportAddonLifecycleAnnotationKeyorAddonLifecycleSelfManageAnnotationValue. Also, once this is rewired to the valid constant source, Line 90 should delete the annotation instead of assigning"", otherwise the key is still present.Run this to locate the surviving lifecycle constant source before rewiring the import:
#!/bin/bash set -euo pipefail rg -n --type=go 'AddonLifecycleAnnotationKey|AddonLifecycleSelfManageAnnotationValue'🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmamanagedby/controller.go` around lines 87 - 90, The code is referencing non-exported constants addonapiv1beta1.AddonLifecycleAnnotationKey and AddonLifecycleSelfManageAnnotationValue and is deleting the annotation incorrectly by setting it to an empty string; find the correct package that exports AddonLifecycleAnnotationKey and AddonLifecycleSelfManageAnnotationValue (use the suggested ripgrep command to locate the canonical source), update the import to that package, and replace the line that sets cmaCopy.Annotations[...]="” with code that actually deletes the key (e.g., delete(cmaCopy.Annotations, AddonLifecycleAnnotationKey)) so the annotation is removed rather than left as an empty value, keeping references to the same symbol names (AddonLifecycleAnnotationKey, AddonLifecycleSelfManageAnnotationValue) in your changes.test/e2e/helloworld_helm_test.go (1)
296-298:⚠️ Potential issue | 🟡 Minor
Merge the values annotation instead of replacing the map.
Line 296 overwrites every existing annotation on the live
ManagedClusterAddOn. In beta1 that can drop install-namespace and other addon metadata that now lives in annotations, changing the behavior under test.🩹 Suggested fix
- newAddon.SetAnnotations(map[string]string{ - "addon.open-cluster-management.io/values": `{"global":{"imagePullSecret":"mySecret"}}`, - }) + annotations := newAddon.GetAnnotations() + if annotations == nil { + annotations = map[string]string{} + } + annotations["addon.open-cluster-management.io/values"] = `{"global":{"imagePullSecret":"mySecret"}}` + newAddon.SetAnnotations(annotations)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/helloworld_helm_test.go` around lines 296 - 298, The test currently calls newAddon.SetAnnotations(...) which replaces the entire annotations map and can drop existing keys; instead merge the new values into the existing annotation map: retrieve the current annotations via newAddon.GetAnnotations() (handle nil by creating a map), set or overwrite the "addon.open-cluster-management.io/values" key with the JSON string, and then call newAddon.SetAnnotations(mergedMap) so only that annotation is updated and other existing annotations (like install-namespace) are preserved.
🧹 Nitpick comments (8)
pkg/utils/addon_config_test.go (1)
98-132: Commented-out test case has inconsistent beta1 structure.The commented test case "addon deployment config reference spec hash not match" appears to have a partially migrated structure:
- Line 121 places
ConfigReferentat theConfigReferencelevel (alpha1 pattern)- Line 124 has
DesiredConfigwith onlySpecHash, missing the nestedConfigReferentThis is inconsistent with the beta1 structure used in other test cases where
ConfigReferentis nested insideDesiredConfig.ConfigSpecHash.Consider either:
- Removing this commented code if the test case is no longer needed
- Fixing the structure to match beta1 and uncommenting it
♻️ Suggested fix if uncommenting
- // ConfigReferent: addonapiv1beta1.ConfigReferent{ - // Name: "test1", - // }, // DesiredConfig: &addonapiv1beta1.ConfigSpecHash{ + // ConfigReferent: addonapiv1beta1.ConfigReferent{ + // Name: "test1", + // }, // SpecHash: "wronghash", // },🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_config_test.go` around lines 98 - 132, The commented test case "addon deployment config reference spec hash not match" contains an inconsistent beta1 structure; either delete the commented block or restore beta1 fields and uncomment: ensure ManagedClusterAddOn.Status.ConfigReferences uses entries where ConfigReference contains DesiredConfig of type ConfigSpecHash which itself includes ConfigReferent{Name: "..."} and SpecHash:"wronghash" (matching beta1 nesting), and that AddOnDeploymentConfig.Spec remains as in other tests; update the ConfigReference/DesiredConfig/ConfigSpecHash/ConfigReferent nesting to match the other test cases before re-enabling the test.pkg/utils/permission_test.go (3)
544-563: Verify the user subject explicitly.These assertions only prove group filtering. A bug in the beta1 user-field mapping could still pass here if the group count stays the same. Add an
expectedUserfield and assert that arbacv1.UserKindsubject with that name is present.Suggested direction
+ foundUser := false for _, subject := range subjects { + if subject.Kind == rbacv1.UserKind && subject.Name == tt.expectedUser { + foundUser = true + } if subject.Kind == rbacv1.GroupKind && subject.Name == tt.shouldContain { foundExpected = true } if subject.Kind == rbacv1.GroupKind && subject.Name == tt.shouldNotContain { foundFiltered = true } } + assert.True(t, foundUser, "expected user %s not found in subjects", tt.expectedUser) assert.True(t, foundExpected, "expected group %s not found in subjects", tt.shouldContain) assert.False(t, foundFiltered, "group %s should have been filtered out but was found", tt.shouldNotContain)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission_test.go` around lines 544 - 563, Add an expectedUser field to the test case struct and, inside the t.Run where BuildSubjectsFromRegistration is called, iterate subjects to assert there is a subject with Kind == rbacv1.UserKind and Name == tt.expectedUser; keep the existing group existence/filtered assertions but add a boolean like foundUser and assert.True(t, foundUser, "expected user %s not found in subjects", tt.expectedUser). This change targets the test harness around BuildSubjectsFromRegistration to explicitly validate the user subject mapping.
72-212: Consider sharing the case matrix between the role and cluster-role tests.

Both suites duplicate the same registration fixtures almost verbatim. Pulling the addon builders or test cases into a shared helper would make future API migrations less error-prone; a sketch of such a builder follows.

Also applies to: 247-387
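A minimal sketch of what such a shared builder could look like; the helper name and the exact placement of registrations on the beta1 object are assumptions to adapt to the real test fixtures:

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	addonv1beta1 "open-cluster-management.io/api/addon/v1beta1"
)

// newTestAddon is a hypothetical shared builder for both permission test
// tables: it creates a ManagedClusterAddOn for the given cluster namespace
// and attaches the registration configs the case under test needs.
func newTestAddon(name, namespace string, regs ...addonv1beta1.RegistrationConfig) *addonv1beta1.ManagedClusterAddOn {
	addon := &addonv1beta1.ManagedClusterAddOn{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
	}
	// Field placement is an assumption; mirror wherever the existing cases
	// put their registrations (for example the addon status).
	addon.Status.Registrations = regs
	return addon
}
```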
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission_test.go` around lines 72 - 212, The tests variable in permission_test.go duplicates registration fixtures used elsewhere (also at lines 247-387); refactor by extracting shared test builders/helpers (e.g., NewManagedClusterAddOn builder functions that construct addonv1beta1.ManagedClusterAddOn with RegistrationConfig/KubeClientConfig/CustomSignerConfig/Subject variants) and use those builders in both the role and cluster-role test suites instead of inlining fixtures, so tests reuse the same constructors for cases like "no registrations", "KubeClient is nil", "empty subject", and the valid-subject variants.
233-236: Assert the actual RBAC subjects, not just their count.

These success cases still pass if the wrong user/group is bound as long as the cardinality matches. After the move to `KubeClient.Subject.BaseSubject`, it would be safer to table-drive the expected `[]rbacv1.Subject` and compare the full set.

Suggested direction

```diff
-	assert.Len(t, binding.Subjects, tt.expectedSubLen)
+	assert.ElementsMatch(t, tt.expectedSubjects, binding.Subjects)
```

Add `expectedSubjects []rbacv1.Subject` to each test case so the assertions catch field-mapping regressions, not just count changes.

Also applies to: 408-411
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission_test.go` around lines 233 - 236, Tests currently only assert the length of binding.Subjects which misses incorrect subject mappings after the KubeClientSubject.BaseSubject change; update the table-driven tests that call fakeKubeClient.RbacV1().ClusterRoleBindings().Get and inspect binding.Subjects to include an expectedSubjects []rbacv1.Subject per test case and assert the actual subjects match (use deep equality or element-wise comparison like assert.ElementsMatch/assert.Equal) so the full Subject fields (Kind, Name, Namespace, APIGroup) are validated rather than just the count.

pkg/addonmanager/controllers/addonconfig/controller_test.go (1)
558-558: Consider fixing the typo in function name.

The helper function is named `newTestConfing` but should likely be `newTestConfig`. This is a pre-existing typo, but could be addressed while making changes to this file.

✏️ Suggested rename

```diff
-func newTestConfing(name, namespace string, generation int64) *unstructured.Unstructured {
+func newTestConfig(name, namespace string, generation int64) *unstructured.Unstructured {
```

Note: Also update all call sites (lines 52, 93, 147, 176, 215, 257, 301, 381, 427, 479, 523).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller_test.go` at line 558, Rename the helper function newTestConfing to newTestConfig and update every call site to use the new name; specifically change the function declaration named newTestConfing(name, namespace string, generation int64) to newTestConfig and replace all usages in the test file so they compile and run (update imports or references if any helper aliasing was used). Ensure the symbol newTestConfig is consistently used across tests (where newTestConfing was previously called) and run the test suite to confirm no remaining references to the old misspelled name.

pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go (1)
44-53: Consider extracting the repeated APIVersion string to a constant.

The APIVersion string `"addon.open-cluster-management.io/v1beta1"` appears in multiple OwnerReference definitions (lines 46, 168, 235). Extracting it to a test constant would reduce duplication and simplify future API version migrations.

♻️ Suggested refactor

```diff
+const testAddonAPIVersion = "addon.open-cluster-management.io/v1beta1"
+
 func getDeployWork() *workapiv1.ManifestWork {
 	// ...
 	work.SetOwnerReferences([]metav1.OwnerReference{
 		{
-			APIVersion: "addon.open-cluster-management.io/v1beta1",
+			APIVersion: testAddonAPIVersion,
 			Kind:       "ManagedClusterAddOn",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go` around lines 44 - 53, The APIVersion string "addon.open-cluster-management.io/v1beta1" is duplicated across multiple OwnerReference uses; introduce a test-level constant (e.g. const testAddonAPIVersion = "addon.open-cluster-management.io/v1beta1") and replace the literal occurrences in the OwnerReference constructions used by SetOwnerReferences in default_hook_sync_test.go (and any other tests referencing the same literal) with that constant to centralize the value for easier updates.

pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go (1)
499-507: Avoid hardcoded addon API version literals in these owner refs.

Lines 501 and 568 duplicate `"addon.open-cluster-management.io/v1beta1"`. Since this PR is already doing an API-version migration, deriving `OwnerReference.APIVersion` from the beta1 package's exported group/version constant would keep these fixtures aligned with the import and make the next version bump less error-prone. Please verify the exported symbol name in `open-cluster-management.io/api/addon/v1beta1` before swapping the literal; a sketch follows.

Also applies to: 566-574
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go` around lines 499 - 507, Replace the hardcoded "addon.open-cluster-management.io/v1beta1" APIVersion in the OwnerReference set by work.SetOwnerReferences with the exported group/version constant from the addon v1beta1 package (import open-cluster-management.io/api/addon/v1beta1 and use its exported GroupVersion constant like v1beta1.SchemeGroupVersion.String() or the correct exported symbol name after verification); update both occurrences (the OwnerReference.APIVersion in the work.SetOwnerReferences call and the similar OwnerReference block later) so the fixtures derive APIVersion from the imported package rather than a literal.

pkg/addonmanager/controllers/addonconfig/controller.go (1)
217-228: Redundant nil check for `DesiredConfig`.

Line 217 checks `if configReference.DesiredConfig == nil` again, but this condition can never be true at this point since the loop already continues at line 192-194 when `DesiredConfig` is nil.

Suggested fix

```diff
 	// update desired spec hash only for the configs in spec
 	for _, addonconfig := range addon.Spec.Configs {
-		if configReference.DesiredConfig == nil {
-			continue
-		}
 		if configReference.ConfigGroupResource == addonconfig.ConfigGroupResource &&
 			configReference.DesiredConfig.ConfigReferent == addonconfig.ConfigReferent {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 217 - 228, The nil check for configReference.DesiredConfig is redundant because the loop already continues when DesiredConfig is nil; remove the if configReference.DesiredConfig == nil { continue } check at lines 217-220 and simply perform the resource/type comparison and spec hash update directly (keep the existing checks for ConfigGroupResource and ConfigReferent), calling utils.GetSpecHash(config) and assigning addon.Status.ConfigReferences[index].DesiredConfig.SpecHash as before to avoid unnecessary branches; reference symbols: configReference, DesiredConfig, ConfigGroupResource, ConfigReferent, utils.GetSpecHash, addon.Status.ConfigReferences[index].DesiredConfig.SpecHash, config.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@hack/fix-crd-storage-version.sh`:
- Around line 9-10: The current BACKUP_FILE uses a fixed name in BACKUP_DIR
(BACKUP_DIR="${TEST_TMP:-/tmp}" and
BACKUP_FILE="${BACKUP_DIR}/managedclusteraddons.crd.yaml.backup") which can
collide across checkouts/runs; change BACKUP_FILE to be unique per run by using
a repo-local path or a unique suffix (e.g., include the current PID, timestamp,
git commit hash, or use mktemp) so that scripts like restore-crd-storage-version
read the correct file; update references to BACKUP_FILE/ BACKUP_DIR accordingly
and ensure cleanup still works for the new unique filename.
- Around line 58-63: The verify block currently only prints lines with
"storage:" from ${CRD_FILE} and always succeeds; update it to assert that the
expected storage value(s) are present and exit non‑zero when not. After listing
the grep output for "storage:" against ${CRD_FILE}, run a test (e.g., grep -q
for the exact expected storage string or count matches) and if the pattern is
not found, print a clear error and exit 1 so the script fails when the swap did
not take.
In `@pkg/addonmanager/controllers/addonconfig/controller_test.go`:
- Around line 61-62: The test fails because addontesting.NewAddon currently
returns *addonv1alpha1.ManagedClusterAddOn while the test (anonymous func in
controller_test.go) expects *addonapiv1beta1.ManagedClusterAddOn; update the
addontesting.NewAddon function signature and its return value to return
*addonapiv1beta1.ManagedClusterAddOn, adjust any constructors/struct literals
inside NewAddon to use the beta API types (and update imports to include
addonapiv1beta1 instead of addonv1alpha1), and ensure callers are updated or
type-converted where necessary so the helper now provides the correct type.
In `@pkg/addonmanager/controllers/cmamanagedby/controller_test.go`:
- Around line 83-94: The test's patch assertion in validateAddonActions is too
strict and has a mismatched error message; instead of asserting
len(cma.Annotations) == 1, change the check to verify only that the addon
lifecycle annotation key (addonapiv1beta1.AddonLifecycleAnnotationKey) is
present in cma.Annotations and that its value is the empty string, and update
the error message to state the expectation of an empty lifecycle annotation;
keep the rest of the unmarshalling and patch extraction (patch :=
actions[0].(clienttesting.PatchActionImpl).Patch, cma :=
&addonapiv1beta1.ClusterManagementAddOn{}) the same.
In `@pkg/utils/addon_config.go`:
- Around line 67-71: GetDesiredAddOnDeploymentConfig currently dereferences
addon.Status and can panic when addon is nil; add an explicit nil check at the
top of GetDesiredAddOnDeploymentConfig (guarding the addon parameter) and return
a descriptive error instead of proceeding (e.g., "addon is nil" or similar).
Ensure the function returns that error immediately before any access to
addon.Status or other fields, preserving existing return types and behavior for
non-nil inputs.
---
Outside diff comments:
In `@pkg/utils/addon_config.go`:
- Around line 39-58: The helper that calls GetDesiredAddOnDeploymentConfig
currently returns "", nil when config == nil, which allows processing to
continue for template-based addons; instead detect template-based addons (use
FilterTemplateBasedAddOns or the templateBasedAddOn predicate) and, if the addon
is template-based and config == nil, return a retryable error (e.g.,
fmt.Errorf("addon deployment config not yet populated for template-based addon
%s")), so controllers will retry; otherwise (non-template addons) you may keep
returning the default "", nil. Update the anonymous function wrapping
GetDesiredAddOnDeploymentConfig and use FilterTemplateBasedAddOns /
templateBasedAddOn logic to gate this behavior.
In `@test/integration/cloudevents/suite_test.go`:
- Around line 223-252: The Manifests and GetAgentAddonOptions registration
callbacks use addonapiv1beta1 types but the agent.AgentAddon interface expects
addonapiv1alpha1; update the signatures to use addonapiv1alpha1 throughout:
change the Manifests method parameter from addonapiv1beta1.ManagedClusterAddOn
to addonapiv1alpha1.ManagedClusterAddOn, update the
RegistrationOption.CSRConfigurations callback to accept
addonapiv1alpha1.ManagedClusterAddOn and return
[]addonapiv1alpha1.RegistrationConfig, and change CSRApproveCheck and CSRSign
callbacks to use addonapiv1alpha1.ManagedClusterAddOn (and adjust CSRSign return
types if needed) so all references in Manifests, GetAgentAddonOptions,
RegistrationOption, CSRConfigurations, CSRApproveCheck, and CSRSign consistently
use addonapiv1alpha1.
---
Duplicate comments:
In `@pkg/addonmanager/controllers/agentdeploy/default_sync_test.go`:
- Around line 154-164: The test's validateAddonActions is only weakly checking
ManifestApplied; update it to assert the exact condition object exists and has
the expected status and reason: locate the patch unmarshalling into
ManagedClusterAddOn in validateAddonActions, find the ManifestApplied condition
(use meta.FindStatusCondition or iterate addOn.Status.Conditions), assert
meta.IsStatusConditionTrue(addOn.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnManifestApplied) (or equivalent), and assert
the condition.Reason equals the canonical reason used by the controller for the
beta1 migration (the exact reason constant/string used in your controller),
failing the test if the condition is missing, not True, or has the wrong Reason.
In `@pkg/addonmanager/controllers/cmamanagedby/controller.go`:
- Around line 87-90: The code is referencing non-exported constants
addonapiv1beta1.AddonLifecycleAnnotationKey and
AddonLifecycleSelfManageAnnotationValue and is deleting the annotation
incorrectly by setting it to an empty string; find the correct package that
exports AddonLifecycleAnnotationKey and AddonLifecycleSelfManageAnnotationValue
(use the suggested ripgrep command to locate the canonical source), update the
import to that package, and replace the line that sets
cmaCopy.Annotations[...]="" with code that actually deletes the key (e.g.,
delete(cmaCopy.Annotations, AddonLifecycleAnnotationKey)) so the annotation is
removed rather than left as an empty value, keeping references to the same
symbol names (AddonLifecycleAnnotationKey,
AddonLifecycleSelfManageAnnotationValue) in your changes.
In `@pkg/utils/permission.go`:
- Around line 327-335: Existing Role/RoleBinding objects keep the old v1alpha1
ownerRef because ensureAddonOwnerReference now writes beta1 but
ApplyRole/ApplyRoleBinding only set ownerRefs on create; update paths must
reconcile owner references too. Modify the update logic in ApplyRole and
ApplyRoleBinding so that when an existing object is patched/updated you also
compute the desired ownerRefs (using ensureAddonOwnerReference) and replace the
object's OwnerReferences if they differ, then persist the update (or include
ownerRefs in the patch) so existing resources are migrated to the new
addonapiv1beta1 ownerRef; reference the ensureAddonOwnerReference helper and the
ApplyRole/ApplyRoleBinding functions to locate where to add this ownerRef
reconciliation.
- Around line 295-297: The guard incorrectly compares subject (a
*addonapiv1beta1.BaseSubject) against addonapiv1beta1.Subject{}, so it never
detects an empty BaseSubject; update the check to compare *subject to
addonapiv1beta1.BaseSubject{} (i.e. use equality.Semantic.DeepEqual(*subject,
addonapiv1beta1.BaseSubject{})) or an equivalent zero-value/is-zero check;
modify the code around the subject nil/empty guard (the variable subject and the
equality.Semantic.DeepEqual call) so the helper returns nil for an empty
BaseSubject as documented.
In `@test/e2e/helloworld_helm_test.go`:
- Around line 296-298: The test currently calls newAddon.SetAnnotations(...)
which replaces the entire annotations map and can drop existing keys; instead
merge the new values into the existing annotation map: retrieve the current
annotations via newAddon.GetAnnotations() (handle nil by creating a map), set or
overwrite the "addon.open-cluster-management.io/values" key with the JSON
string, and then call newAddon.SetAnnotations(mergedMap) so only that annotation
is updated and other existing annotations (like install-namespace) are
preserved.
In `@test/e2ehosted/helloworld_hosted_test.go`:
- Around line 49-79: If a leftover ManagedClusterAddOn exists, the test must
backfill beta1 annotations instead of leaving a stale CR; inside the Eventually
block after hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Get (symbol:
addon, err), when err == nil check addon.ObjectMeta.Annotations for
addonapiv1beta1.HostingClusterNameAnnotationKey and
addonapiv1beta1.InstallNamespaceAnnotation and if missing, set them on
addon.ObjectMeta.Annotations and call
hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Update(context.Background(),
addon, metav1.UpdateOptions{}) (handle and return any update error), and on
successful create or update return nil so the Eventually loop proceeds instead
of waiting on a stale state.
In `@test/integration-test.mk`:
- Around line 72-73: The test-integration recipe currently lists
update-crd-storage-version and test-kube-integration as prerequisites so make -j
may run them in parallel and restore-crd-storage-version won’t run if a
prerequisite fails; change test-integration to run the full sequence in one
shell: invoke update-crd-storage-version, run test-kube-integration, and then
always run restore-crd-storage-version in a shell-level EXIT trap that captures
the first non-zero exit code and re-returns it; update the recipe for
test-integration to call those targets serially inside a single sh -c block and
set a trap 'EXIT' to run restore-crd-storage-version and exit with the original
status.
In `@test/integration/kube/cluster_management_addon_test.go`:
- Around line 69-70: After creating the ClusterManagementAddOn with
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Create(...), register
ginkgo.DeferCleanup immediately (right after the
gomega.Expect(err).ToNot(HaveOccurred()) line) to ensure the CMA is always
deleted if the test exits early; reference the created variable (cma) and the
client method ClusterManagementAddOns().Delete(...) in the cleanup so the
resource is removed reliably (also apply the same pattern where deletion
currently occurs around the spec body).
---
Nitpick comments:
In `@pkg/addonmanager/controllers/addonconfig/controller_test.go`:
- Line 558: Rename the helper function newTestConfing to newTestConfig and
update every call site to use the new name; specifically change the function
declaration named newTestConfing(name, namespace string, generation int64) to
newTestConfig and replace all usages in the test file so they compile and run
(update imports or references if any helper aliasing was used). Ensure the
symbol newTestConfig is consistently used across tests (where newTestConfing was
previously called) and run the test suite to confirm no remaining references to
the old misspelled name.
In `@pkg/addonmanager/controllers/addonconfig/controller.go`:
- Around line 217-228: The nil check for configReference.DesiredConfig is
redundant because the loop already continues when DesiredConfig is nil; remove
the if configReference.DesiredConfig == nil { continue } check at lines 217-220
and simply perform the resource/type comparison and spec hash update directly
(keep the existing checks for ConfigGroupResource and ConfigReferent), calling
utils.GetSpecHash(config) and assigning
addon.Status.ConfigReferences[index].DesiredConfig.SpecHash as before to avoid
unnecessary branches; reference symbols: configReference, DesiredConfig,
ConfigGroupResource, ConfigReferent, utils.GetSpecHash,
addon.Status.ConfigReferences[index].DesiredConfig.SpecHash, config.
In `@pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go`:
- Around line 44-53: The APIVersion string
"addon.open-cluster-management.io/v1beta1" is duplicated across multiple
OwnerReference uses; introduce a test-level constant (e.g. const
testAddonAPIVersion = "addon.open-cluster-management.io/v1beta1") and replace
the literal occurrences in the OwnerReference constructions used by
SetOwnerReferences in default_hook_sync_test.go (and any other tests referencing
the same literal) with that constant to centralize the value for easier updates.
In `@pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go`:
- Around line 499-507: Replace the hardcoded
"addon.open-cluster-management.io/v1beta1" APIVersion in the OwnerReference set
by work.SetOwnerReferences with the exported group/version constant from the
addon v1beta1 package (import open-cluster-management.io/api/addon/v1beta1 and
use its exported GroupVersion constant like v1beta1.SchemeGroupVersion.String()
or the correct exported symbol name after verification); update both occurrences
(the OwnerReference.APIVersion in the work.SetOwnerReferences call and the
similar OwnerReference block later) so the fixtures derive APIVersion from the
imported package rather than a literal.
In `@pkg/utils/addon_config_test.go`:
- Around line 98-132: The commented test case "addon deployment config reference
spec hash not match" contains an inconsistent beta1 structure; either delete the
commented block or restore beta1 fields and uncomment: ensure
ManagedClusterAddOn.Status.ConfigReferences uses entries where ConfigReference
contains DesiredConfig of type ConfigSpecHash which itself includes
ConfigReferent{Name: "..."} and SpecHash:"wronghash" (matching beta1 nesting),
and that AddOnDeploymentConfig.Spec remains as in other tests; update the
ConfigReference/DesiredConfig/ConfigSpecHash/ConfigReferent nesting to match the
other test cases before re-enabling the test.
In `@pkg/utils/permission_test.go`:
- Around line 544-563: Add an expectedUser field to the test case struct and,
inside the t.Run where BuildSubjectsFromRegistration is called, iterate subjects
to assert there is a subject with Kind == rbacv1.UserKind and Name ==
tt.expectedUser; keep the existing group existence/filtered assertions but add a
boolean like foundUser and assert.True(t, foundUser, "expected user %s not found
in subjects", tt.expectedUser). This change targets the test harness around
BuildSubjectsFromRegistration to explicitly validate the user subject mapping.
- Around line 72-212: The tests variable in permission_test.go duplicates
registration fixtures used elsewhere (also at lines 247-387); refactor by
extracting shared test builders/helpers (e.g., NewManagedClusterAddOn builder
functions that construct addonv1beta1.ManagedClusterAddOn with
RegistrationConfig/KubeClientConfig/CustomSignerConfig/Subject variants) and use
those builders in both the role and cluster-role test suites instead of inlining
fixtures, so tests reuse the same constructors for cases like "no
registrations", "KubeClient is nil", "empty subject", and the valid-subject
variants.
- Around line 233-236: Tests currently only assert the length of
binding.Subjects which misses incorrect subject mappings after the
KubeClientSubject.BaseSubject change; update the table-driven tests that call
fakeKubeClient.RbacV1().ClusterRoleBindings().Get and inspect binding.Subjects
to include an expectedSubjects []rbacv1.Subject per test case and assert the
actual subjects match (use deep equality or element-wise comparison like
assert.ElementsMatch/assert.Equal) so the full Subject fields (Kind, Name,
Namespace, APIGroup) are validated rather than just the count.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7deca7e1-7da2-4008-b5f4-18e42d90388e
⛔ Files ignored due to path filters (6)
- go.sum is excluded by `!**/*.sum`
- vendor/modules.txt is excluded by `!vendor/**`
- vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go is excluded by `!vendor/**`
- vendor/open-cluster-management.io/api/operator/v1/0000_01_operator.open-cluster-management.io_clustermanagers.crd.yaml is excluded by `!vendor/**`
- vendor/open-cluster-management.io/api/operator/v1/types_clustermanager.go is excluded by `!vendor/**`
- vendor/open-cluster-management.io/api/operator/v1/zz_generated.deepcopy.go is excluded by `!vendor/**`
📒 Files selected for processing (62)
- README.md
- examples/helloworld/helloworld_test.go
- examples/helloworld_agent/agent.go
- examples/helloworld_helm/helloworld_helm.go
- examples/helloworld_helm/helloworld_helm_test.go
- go.mod
- hack/fix-crd-storage-version.sh
- pkg/addonfactory/helm_agentaddon.go
- pkg/addonfactory/helm_agentaddon_test.go
- pkg/addonfactory/template_agentaddon.go
- pkg/addonfactory/template_agentaddon_test.go
- pkg/addonfactory/test_helper.go
- pkg/addonmanager/addontesting/helpers.go
- pkg/addonmanager/base_manager.go
- pkg/addonmanager/cloudevents/manager.go
- pkg/addonmanager/constants/constants.go
- pkg/addonmanager/controllers/addonconfig/controller.go
- pkg/addonmanager/controllers/addonconfig/controller_test.go
- pkg/addonmanager/controllers/agentdeploy/controller.go
- pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/default_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/util_test.go
- pkg/addonmanager/controllers/agentdeploy/utils.go
- pkg/addonmanager/controllers/certificate/csrapprove_test.go
- pkg/addonmanager/controllers/certificate/csrsign_test.go
- pkg/addonmanager/controllers/cmaconfig/controller.go
- pkg/addonmanager/controllers/cmaconfig/controller_test.go
- pkg/addonmanager/controllers/cmamanagedby/controller.go
- pkg/addonmanager/controllers/cmamanagedby/controller_test.go
- pkg/addonmanager/controllers/registration/controller.go
- pkg/addonmanager/controllers/registration/controller_test.go
- pkg/addonmanager/manager.go
- pkg/agent/interface.go
- pkg/index/index.go
- pkg/utils/addon_config.go
- pkg/utils/addon_config_test.go
- pkg/utils/addon_test.go
- pkg/utils/permission.go
- pkg/utils/permission_test.go
- test/e2e/helloworld_helm_test.go
- test/e2e/helloworld_test.go
- test/e2ehosted/helloworld_hosted_test.go
- test/e2etoken/helloworld_token_test.go
- test/integration-test.mk
- test/integration/cloudevents/agent_deploy_test.go
- test/integration/cloudevents/agent_hook_deploy_test.go
- test/integration/cloudevents/agent_hosting_deploy_test.go
- test/integration/cloudevents/agent_hosting_hook_deploy_test.go
- test/integration/cloudevents/suite_test.go
- test/integration/kube/agent_deploy_test.go
- test/integration/kube/agent_hook_deploy_test.go
- test/integration/kube/agent_hosting_deploy_test.go
- test/integration/kube/agent_hosting_hook_deploy_test.go
- test/integration/kube/assertion_test.go
- test/integration/kube/cluster_management_addon_test.go
- test/integration/kube/csr_test.go
- test/integration/kube/deleted_managed_cluster_create_addon_test.go
- test/integration/kube/multiworks_test.go
- test/integration/kube/registration_test.go
- test/integration/kube/suite_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
- test/integration/kube/deleted_managed_cluster_create_addon_test.go
- test/integration/kube/suite_test.go
- examples/helloworld_agent/agent.go
- test/e2e/helloworld_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
```sh
BACKUP_DIR="${TEST_TMP:-/tmp}"
BACKUP_FILE="${BACKUP_DIR}/managedclusteraddons.crd.yaml.backup"
```
Avoid a shared /tmp backup filename.
BACKUP_FILE is fixed for every checkout on the host, so two local runs or CI jobs can overwrite each other's backup. When restore-crd-storage-version later copies that file back, one run can restore the other run's CRD. Use a repo-local backup path, or make the temp path unique per checkout/run.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@hack/fix-crd-storage-version.sh` around lines 9 - 10, The current BACKUP_FILE
uses a fixed name in BACKUP_DIR (BACKUP_DIR="${TEST_TMP:-/tmp}" and
BACKUP_FILE="${BACKUP_DIR}/managedclusteraddons.crd.yaml.backup") which can
collide across checkouts/runs; change BACKUP_FILE to be unique per run by using
a repo-local path or a unique suffix (e.g., include the current PID, timestamp,
git commit hash, or use mktemp) so that scripts like restore-crd-storage-version
read the correct file; update references to BACKUP_FILE/ BACKUP_DIR accordingly
and ensure cleanup still works for the new unique filename.
```sh
# Verify the changes
echo ""
echo "✓ Storage settings updated:"
grep -n "storage:" "${CRD_FILE}" | while read -r line; do
  echo "  $line"
done
```
Fail if the storage swap did not take.
This "verify" block only prints the current storage: lines and always exits 0. If the CRD format drifts and one of the replacements stops matching, the script still reports success and the integration test proceeds against the wrong storage version.
Proposed minimal assertion
```diff
 # Verify the changes
+first_storage=$(grep -m1 "storage:" "${CRD_FILE}" | grep -o "storage: [a-z]*")
+second_storage=$(grep "storage:" "${CRD_FILE}" | tail -1 | grep -o "storage: [a-z]*")
+if [ "$first_storage" != "storage: false" ] || [ "$second_storage" != "storage: true" ]; then
+  echo "Error: failed to switch storage version"
+  exit 1
+fi
+
 echo ""
 echo "✓ Storage settings updated:"
 grep -n "storage:" "${CRD_FILE}" | while read -r line; do
   echo "  $line"
 done
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sh
# Verify the changes
first_storage=$(grep -m1 "storage:" "${CRD_FILE}" | grep -o "storage: [a-z]*")
second_storage=$(grep "storage:" "${CRD_FILE}" | tail -1 | grep -o "storage: [a-z]*")
if [ "$first_storage" != "storage: false" ] || [ "$second_storage" != "storage: true" ]; then
  echo "Error: failed to switch storage version"
  exit 1
fi

echo ""
echo "✓ Storage settings updated:"
grep -n "storage:" "${CRD_FILE}" | while read -r line; do
  echo "  $line"
done
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@hack/fix-crd-storage-version.sh` around lines 58 - 63, The verify block
currently only prints lines with "storage:" from ${CRD_FILE} and always
succeeds; update it to assert that the expected storage value(s) are present and
exit non‑zero when not. After listing the grep output for "storage:" against
${CRD_FILE}, run a test (e.g., grep -q for the exact expected storage string or
count matches) and if the pattern is not found, print a clear error and exit 1
so the script fails when the swap did not take.
```diff
 	addon *addonapiv1beta1.ManagedClusterAddOn,
 	adcgetter AddOnDeploymentConfigGetter,
-) (*addonapiv1alpha1.AddOnDeploymentConfig, error) {
+) (*addonapiv1beta1.AddOnDeploymentConfig, error) {

 	ok, configRef := GetAddOnConfigRef(addon.Status.ConfigReferences,
```
Guard addon before reading addon.Status.
GetDesiredAddOnDeploymentConfig(nil, ...) panics on Line 71. Since this is an exported helper, it should fail with an error instead of crashing the caller.
Proposed fix
```diff
 func GetDesiredAddOnDeploymentConfig(
 	addon *addonapiv1beta1.ManagedClusterAddOn,
 	adcgetter AddOnDeploymentConfigGetter,
 ) (*addonapiv1beta1.AddOnDeploymentConfig, error) {
+	if addon == nil {
+		return nil, fmt.Errorf("addon is nil")
+	}
 	ok, configRef := GetAddOnConfigRef(addon.Status.ConfigReferences,
 		AddOnDeploymentConfigGVR.Group, AddOnDeploymentConfigGVR.Resource)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func GetDesiredAddOnDeploymentConfig(
	addon *addonapiv1beta1.ManagedClusterAddOn,
	adcgetter AddOnDeploymentConfigGetter,
) (*addonapiv1beta1.AddOnDeploymentConfig, error) {
	if addon == nil {
		return nil, fmt.Errorf("addon is nil")
	}
	ok, configRef := GetAddOnConfigRef(addon.Status.ConfigReferences,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/utils/addon_config.go` around lines 67 - 71,
GetDesiredAddOnDeploymentConfig currently dereferences addon.Status and can
panic when addon is nil; add an explicit nil check at the top of
GetDesiredAddOnDeploymentConfig (guarding the addon parameter) and return a
descriptive error instead of proceeding (e.g., "addon is nil" or similar).
Ensure the function returns that error immediately before any access to
addon.Status or other fields, preserving existing return types and behavior for
non-nil inputs.
Actionable comments posted: 4
♻️ Duplicate comments (11)
pkg/utils/permission.go (2)
327-330: ⚠️ Potential issue | 🟠 Major

Existing `Role` and `RoleBinding` objects still keep the alpha ownerRef.

Line 330 updates the create path to beta1, but `ApplyRole` and `ApplyRoleBinding` never reconcile `OwnerReferences` on update. After upgrade, objects created under `v1alpha1` stay pinned to the old owner ref whenever rules/subjects already match.

💡 Minimal fix

```go
// ApplyRole
ownerRefsSame := equality.Semantic.DeepEqual(existingCopy.OwnerReferences, required.OwnerReferences)
if contentSame && ownerRefsSame {
	return existingCopy, false, nil
}
existingCopy.Rules = required.Rules
existingCopy.OwnerReferences = required.OwnerReferences

// ApplyRoleBinding
ownerRefsSame := equality.Semantic.DeepEqual(existingCopy.OwnerReferences, requiredCopy.OwnerReferences)
if subjectsAreSame && roleRefIsSame && ownerRefsSame {
	return existingCopy, false, nil
}
existingCopy.Subjects = requiredCopy.Subjects
existingCopy.RoleRef = requiredCopy.RoleRef
existingCopy.OwnerReferences = requiredCopy.OwnerReferences
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission.go` around lines 327 - 330, ApplyRole and ApplyRoleBinding never reconcile OwnerReferences on updates, leaving objects created with the alpha ownerRef pinned after upgrade; modify ApplyRole and ApplyRoleBinding to include OwnerReferences in the equality checks (use equality.Semantic.DeepEqual to compare existingCopy.OwnerReferences with required.OwnerReferences / requiredCopy.OwnerReferences) and, when differences are detected, assign existingCopy.OwnerReferences = required.OwnerReferences (or requiredCopy.OwnerReferences) along with updating Rules/Subjects/RoleRef so the ownerRef is reconciled during updates.
295-297: ⚠️ Potential issue | 🟡 Minor

Compare against `BaseSubject{}` here.

Line 296 is checking a `BaseSubject` value against `addonapiv1beta1.Subject{}`. Those are distinct Go types, so this branch never matches and empty registrations fall through instead of returning `nil`.

💡 Minimal fix

```diff
-	if subject == nil || equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) {
+	if subject == nil || equality.Semantic.DeepEqual(*subject, addonapiv1beta1.BaseSubject{}) {
 		return nil
 	}
```

In open-cluster-management.io/api/addon/v1beta1, are BaseSubject and Subject distinct Go types, and would equality.Semantic.DeepEqual(addonapiv1beta1.BaseSubject{}, addonapiv1beta1.Subject{}) ever return true?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission.go` around lines 295 - 297, The branch checking for an empty subject is comparing against the wrong type: change the DeepEqual comparison to compare the subject value to addonapiv1beta1.BaseSubject{} (or otherwise assert/convert the subject to BaseSubject before comparing) so the condition in the function that uses the subject variable and equality.Semantic.DeepEqual correctly detects an empty BaseSubject and returns nil; update the equality check where subject == nil || equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) is used to compare against addonapiv1beta1.BaseSubject{} instead.pkg/addonmanager/controllers/cmamanagedby/controller.go (1)
87-93: ⚠️ Potential issue | 🔴 Critical

Delete the lifecycle annotation instead of blanking it.

`ManagedByAddonManager` treats a missing lifecycle key and an empty value differently. Setting the annotation to `""` leaves the key present, so the addon still looks self-managed and the handoff back to addon-manager never happens.

🔧 Proposed fix

```diff
-	cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = ""
+	delete(cmaCopy.Annotations, addonapiv1alpha1.AddonLifecycleAnnotationKey)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmamanagedby/controller.go` around lines 87 - 93, The code is incorrectly blanking the lifecycle annotation instead of removing it, so finish the handoff by deleting the annotation key from cmaCopy.Annotations when its value equals addonapiv1alpha1.AddonLifecycleSelfManageAnnotationValue; replace the assignment cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = "" with a deletion (delete(cmaCopy.Annotations, addonapiv1alpha1.AddonLifecycleAnnotationKey)) and then call c.addonPatcher.PatchLabelAnnotations(ctx, cmaCopy, cmaCopy.ObjectMeta, cma.ObjectMeta) as before.pkg/addonmanager/controllers/addonconfig/controller.go (1)
190-202: ⚠️ Potential issue | 🟠 Major

Use `LastAppliedConfig` as fallback for config lookups.

The code currently skips updating `LastObservedGeneration` when `DesiredConfig` is nil, even though `LastAppliedConfig` may contain valid config coordinates for previously applied configs. This could prevent generation tracking for configs that were applied but no longer appear in the desired spec.

🔧 Proposed fix to add fallback

```diff
-	// In v1beta1, ConfigReference doesn't have Name/Namespace directly
-	// They are in DesiredConfig field
-	if configReference.DesiredConfig == nil {
+	// In v1beta1, ConfigReference uses DesiredConfig for current configs
+	// Fall back to LastAppliedConfig for previously applied configs
+	targetConfig := configReference.DesiredConfig
+	if targetConfig == nil {
+		targetConfig = configReference.LastAppliedConfig
+	}
+	if targetConfig == nil {
 		continue
 	}

 	var config *unstructured.Unstructured
 	var err error
-	if configReference.DesiredConfig.Namespace == "" {
-		config, err = lister.Get(configReference.DesiredConfig.Name)
+	if targetConfig.Namespace == "" {
+		config, err = lister.Get(targetConfig.Name)
 	} else {
-		config, err = lister.Namespace(configReference.DesiredConfig.Namespace).Get(configReference.DesiredConfig.Name)
+		config, err = lister.Namespace(targetConfig.Namespace).Get(targetConfig.Name)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 190 - 202, The code currently skips when configReference.DesiredConfig is nil; instead, if DesiredConfig is nil attempt to use configReference.LastAppliedConfig as a fallback for the config coordinates and only continue when both are nil. Update the lookup logic around configReference (the block using DesiredConfig, lister.Get, and lister.Namespace(...).Get) to select coords := configReference.DesiredConfig; if coords == nil { coords = configReference.LastAppliedConfig }; if coords == nil { continue }; then use coords.Name and coords.Namespace with lister (same namespace-conditional logic) so LastObservedGeneration can be updated for previously applied configs.test/e2ehosted/helloworld_hosted_test.go (1)
49-80: ⚠️ Potential issue | 🟠 Major

Reconcile existing `ManagedClusterAddOn` objects to the beta1 shape.

The test only adds beta1 annotations (`HostingClusterNameAnnotationKey`, `InstallNamespaceAnnotation`) on the Create path. If an existing addon from a prior test run lacks these annotations, the test will hang waiting for conditions that depend on them.

🔧 Proposed fix to update existing addons

```diff
 if err != nil {
 	if errors.IsNotFound(err) {
 		_, cerr := hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Create(
 			// ... create logic unchanged
 		if cerr != nil {
 			return cerr
 		}
+		return nil
 	}
 	return err
 }
+
+// Ensure existing addon has required beta1 annotations
+needsUpdate := false
+if addon.Annotations == nil {
+	addon.Annotations = map[string]string{}
+	needsUpdate = true
+}
+if addon.Annotations[addonapiv1beta1.HostingClusterNameAnnotationKey] != hostingClusterName {
+	addon.Annotations[addonapiv1beta1.HostingClusterNameAnnotationKey] = hostingClusterName
+	needsUpdate = true
+}
+if addon.Annotations[addonapiv1beta1.InstallNamespaceAnnotation] != addonAgentNamespace {
+	addon.Annotations[addonapiv1beta1.InstallNamespaceAnnotation] = addonAgentNamespace
+	needsUpdate = true
+}
+if needsUpdate {
+	_, err = hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(hostedManagedClusterName).Update(
+		context.Background(), addon, metav1.UpdateOptions{})
+	return err
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2ehosted/helloworld_hosted_test.go` around lines 49 - 80, When retrieving the ManagedClusterAddOn inside the Eventually block (the hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Get call), if the addon exists but is missing the beta1 annotations (addonapiv1beta1.HostingClusterNameAnnotationKey or addonapiv1beta1.InstallNamespaceAnnotation), update its ObjectMeta.Annotations with those keys (set hostingClusterName and addonAgentNamespace) and call Update (hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Update) before continuing; keep the existing Create path intact for IsNotFound, and then proceed to check the hosting cluster validity condition as before.examples/helloworld_agent/agent.go (1)
176-179: ⚠️ Potential issue | 🟡 Minor

Handle the `NotFound` error to avoid perpetual requeue.

When the `ManagedClusterAddOn` is deleted, returning the raw `NotFound` error causes this controller to retry indefinitely. Consider treating `NotFound` as a non-error case.

🩹 Suggested fix

```diff
 addon, err := c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get(ctx, c.addonName, metav1.GetOptions{})
-if err != nil {
-	return err
-}
+switch {
+case errors.IsNotFound(err):
+	return nil
+case err != nil:
+	return err
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/helloworld_agent/agent.go` around lines 176 - 179, The Get call retrieving the ManagedClusterAddOn (addon := c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get(...)) should treat a NotFound as non-fatal to avoid perpetual requeue: detect the Kubernetes "not found" error (e.g., using apierrors.IsNotFound(err)) and return nil (or otherwise exit successfully) instead of returning the raw error; keep returning other errors as before and keep references to c.addonName and the same Get invocation when implementing the check.test/e2e/helloworld_helm_test.go (1)
296-298: ⚠️ Potential issue | 🟡 Minor

Merge annotations instead of replacing to preserve `InstallNamespaceAnnotation`.

Using `SetAnnotations` with a new map replaces all existing annotations, potentially dropping the `InstallNamespaceAnnotation` and other addon-behavior-related keys set during creation.

🩹 Suggested fix

```diff
-newAddon.SetAnnotations(map[string]string{
-	"addon.open-cluster-management.io/values": `{"global":{"imagePullSecret":"mySecret"}}`,
-})
+annotations := newAddon.GetAnnotations()
+if annotations == nil {
+	annotations = map[string]string{}
+}
+annotations["addon.open-cluster-management.io/values"] = `{"global":{"imagePullSecret":"mySecret"}}`
+newAddon.SetAnnotations(annotations)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/helloworld_helm_test.go` around lines 296 - 298, The test currently calls newAddon.SetAnnotations(...) which replaces all annotations and can drop InstallNamespaceAnnotation; instead merge the new annotations into the existing map: use newAddon.GetAnnotations() (or check for nil), copy/modify that map to add/overwrite the "addon.open-cluster-management.io/values" entry, then call newAddon.SetAnnotations(mergedMap) so InstallNamespaceAnnotation and other existing keys are preserved when updating annotations on newAddon.test/integration/kube/assertion_test.go (3)
95-106: ⚠️ Potential issue | 🟠 Major

Check the `Get` error before mutating `old`.

Line 97 retrieves the object but Line 98 immediately mutates `old.Spec` without checking if `Get` succeeded. If `Get` fails, this will cause a nil pointer dereference or operate on stale data.

🛠️ Safer retry pattern

```diff
 func updateClusterManagementAddOn(ctx context.Context, new *addonapiv1beta1.ClusterManagementAddOn) {
-	gomega.Eventually(func() bool {
-		old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(context.Background(), new.Name, metav1.GetOptions{})
-		old.Spec = new.Spec
-		old.Annotations = new.Annotations
-		_, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(context.Background(), old, metav1.UpdateOptions{})
-		if err == nil {
-			return true
-		}
-		return false
-	}, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue())
+	gomega.Eventually(func() error {
+		old, err := hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(ctx, new.Name, metav1.GetOptions{})
+		if err != nil {
+			return err
+		}
+		old.Spec = new.Spec
+		old.Annotations = new.Annotations
+		_, err = hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Update(ctx, old, metav1.UpdateOptions{})
+		return err
+	}, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 95 - 106, In updateClusterManagementAddOn, avoid mutating old before verifying the Get succeeded: after calling hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(...) check the returned err and that old is non-nil before setting old.Spec and old.Annotations; if Get returned an error or old is nil, return false so the Eventually retry loop will retry, otherwise proceed to call Update and return true on success. Ensure you reference the existing hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get and Update calls when adding the error/nil checks.
163-173: ⚠️ Potential issue | 🟠 Major

Check the `Get` error before mutating `old`.

Same issue as `updateClusterManagementAddOn`: Line 165 retrieves the object but Line 166 immediately mutates `old.Status` without checking if `Get` succeeded.

🛠️ Safer retry pattern

```diff
 func updateManagedClusterAddOnStatus(ctx context.Context, new *addonapiv1beta1.ManagedClusterAddOn) {
-	gomega.Eventually(func() bool {
-		old, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(new.Namespace).Get(context.Background(), new.Name, metav1.GetOptions{})
-		old.Status = new.Status
-		_, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(old.Namespace).UpdateStatus(context.Background(), old, metav1.UpdateOptions{})
-		if err == nil {
-			return true
-		}
-		return false
-	}, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue())
+	gomega.Eventually(func() error {
+		old, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(new.Namespace).Get(ctx, new.Name, metav1.GetOptions{})
+		if err != nil {
+			return err
+		}
+		old.Status = new.Status
+		_, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(old.Namespace).UpdateStatus(ctx, old, metav1.UpdateOptions{})
+		return err
+	}, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 163 - 173, In updateManagedClusterAddOnStatus, check the error returned by the Get call before mutating old: after calling hubAddonClient.AddonV1beta1().ManagedClusterAddOns(new.Namespace).Get(...), if err != nil return false from the Eventually lambda (or otherwise retry) and only assign old.Status = new.Status and call UpdateStatus when err == nil; this prevents dereferencing a nil/invalid old object and matches the safer retry pattern used elsewhere (referencing function updateManagedClusterAddOnStatus, variables old/new, and the Get/UpdateStatus calls).
110-127: ⚠️ Potential issue | 🟠 Major

Set the controller owner reference before `Create`.

Creating first and patching owner refs afterward leaves a reconciliation window where controllers can observe the addon without its controller owner reference.

🔧 Proposed fix

```diff
 func createManagedClusterAddOnwithOwnerRefs(namespace string, addon *addonapiv1beta1.ManagedClusterAddOn, cma *addonapiv1beta1.ClusterManagementAddOn) {
+	// Set owner reference before Create to avoid reconciliation race
+	owner := metav1.NewControllerRef(cma, schema.GroupVersionKind{
+		Group:   addonapiv1beta1.GroupName,
+		Version: addonapiv1beta1.GroupVersion.Version,
+		Kind:    "ClusterManagementAddOn",
+	})
+	addonCopy := addon.DeepCopy()
+	utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
+
-	addon, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addon, metav1.CreateOptions{})
+	_, err := hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(context.Background(), addonCopy, metav1.CreateOptions{})
 	gomega.Expect(err).ToNot(gomega.HaveOccurred())
-
-	addonCopy := addon.DeepCopy()
-
-	// This is to assume that addon-manager has already added the OwnerReferences.
-	owner := metav1.NewControllerRef(cma, schema.GroupVersionKind{
-		Group:   addonapiv1beta1.GroupName,
-		Version: addonapiv1beta1.GroupVersion.Version,
-		Kind:    "ClusterManagementAddOn",
-	})
-	modified := utils.MergeOwnerRefs(&addonCopy.OwnerReferences, *owner, false)
-	if modified {
-		_, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(addonCopy.Namespace).Update(context.Background(), addonCopy, metav1.UpdateOptions{})
-		gomega.Expect(err).ToNot(gomega.HaveOccurred())
-	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/assertion_test.go` around lines 110 - 127, The test currently creates the ManagedClusterAddOn and only then adds the controller OwnerReference (in createManagedClusterAddOnwithOwnerRefs), which leaves a race window; instead, build and attach the controller OwnerReference (use metav1.NewControllerRef with the cma GroupVersionKind) to addon.OwnerReferences (or call utils.MergeOwnerRefs on the addon object) before calling hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(...), then create the resource and assert no error — remove the post-create Update path so the OwnerReference exists at creation time.test/integration/kube/agent_hook_deploy_test.go (1)
122-131: ⚠️ Potential issue | 🟡 Minor

Test doesn't prove the beta1 InstallNamespaceAnnotation is honored.

The `InstallNamespaceAnnotation` is set to `"default"`, but the `hookJobJson` fixture already hard-codes `"namespace": "default"` and `"--addon-namespace=default"`. A controller regression that ignored the annotation would still pass. Consider using a distinct namespace value to validate the annotation is actually used; a sketch follows after the prompt below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/agent_hook_deploy_test.go` around lines 122 - 131, The test sets InstallNamespaceAnnotation to "default" while hookJobJson already hardcodes "default", so it doesn't verify the annotation is used; update the ManagedClusterAddOn in the test (created via createManagedClusterAddOnwithOwnerRefs) to use a distinct namespace value (e.g., "addon-namespace-test") for addonapiv1beta1.InstallNamespaceAnnotation and adjust the hookJobJson fixture or the test's expectations that assert on pod/job namespace and the --addon-namespace CLI arg to match that distinct value so the test fails if the controller ignores the annotation.
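A minimal sketch of the distinct-namespace idea from the comment above; the addon name, cluster variable, and namespace value are illustrative assumptions, not the repository's actual fixture:

```go
// Pick a namespace that never appears in hookJobJson, so the assertions can
// only pass if the controller really reads the annotation.
const hookInstallNamespace = "addon-ns-from-annotation" // hypothetical value

addon := &addonapiv1beta1.ManagedClusterAddOn{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "test",             // assumed addon name
		Namespace: managedClusterName, // assumed cluster namespace variable
		Annotations: map[string]string{
			addonapiv1beta1.InstallNamespaceAnnotation: hookInstallNamespace,
		},
	},
}
// ...then assert that the hook Job and the --addon-namespace argument use
// hookInstallNamespace rather than "default".
```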
🧹 Nitpick comments (6)
pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go (1)
243-243: Pre-existing typo in variable name.
`manifestAppliyedCondition` has a typo ("Appliyed" instead of "Applied"). This is not introduced by this PR but could be cleaned up opportunistically.

✏️ Optional fix

```diff
-	manifestAppliyedCondition := meta.FindStatusCondition(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied)
-	if manifestAppliyedCondition == nil {
+	manifestAppliedCondition := meta.FindStatusCondition(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnManifestApplied)
+	if manifestAppliedCondition == nil {
 		t.Fatal("manifestapplied condition should not be nil")
 	}
-	if manifestAppliyedCondition.Reason != addonapiv1beta1.AddonManifestAppliedReasonManifestsApplied {
-		t.Errorf("Condition Reason is not correct: %v", manifestAppliyedCondition.Reason)
+	if manifestAppliedCondition.Reason != addonapiv1beta1.AddonManifestAppliedReasonManifestsApplied {
+		t.Errorf("Condition Reason is not correct: %v", manifestAppliedCondition.Reason)
 	}
-	if manifestAppliyedCondition.Message != "no manifest need to apply" {
-		t.Errorf("Condition Message is not correct: %v", manifestAppliyedCondition.Message)
+	if manifestAppliedCondition.Message != "no manifest need to apply" {
+		t.Errorf("Condition Message is not correct: %v", manifestAppliedCondition.Message)
 	}
-	if manifestAppliyedCondition.Status != metav1.ConditionTrue {
-		t.Errorf("Condition Status is not correct: %v", manifestAppliyedCondition.Status)
+	if manifestAppliedCondition.Status != metav1.ConditionTrue {
+		t.Errorf("Condition Status is not correct: %v", manifestAppliedCondition.Status)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go` at line 243, Rename the misspelled local variable manifestAppliyedCondition to manifestAppliedCondition wherever it's declared and used (the value returned from meta.FindStatusCondition called with addOn.Status.Conditions and addonapiv1beta1.ManagedClusterAddOnManifestApplied); update all references in the test (assertions, comparisons, etc.) to use the corrected identifier to keep naming consistent and avoid confusion.pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go (1)
130-132: Consider tightening condition assertions.

Similar to the default sync tests, these assertions use `meta.IsStatusConditionFalse`, which doesn't verify the exact reason. For thoroughness, using `meta.FindStatusCondition` and asserting both status and reason would provide stronger guarantees, as in the sketch below.

Also applies to: 204-206
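A minimal sketch of the tighter assertion; the condition type comes from the discussion above, while the expected reason is a placeholder to replace with the controller's actual constant:

```go
cond := meta.FindStatusCondition(addOn.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted)
if cond == nil {
	t.Fatal("expected the HookManifestCompleted condition to be set")
}
if cond.Status != metav1.ConditionFalse {
	t.Errorf("unexpected condition status: %v", cond.Status)
}
// Replace "<expected-reason>" with the reason constant the controller sets.
if cond.Reason != "<expected-reason>" {
	t.Errorf("unexpected condition reason: %v", cond.Reason)
}
```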
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go` around lines 130 - 132, Replace the loose meta.IsStatusConditionFalse checks with explicit lookups using meta.FindStatusCondition on addOn.Status.Conditions for the ManagedClusterAddOnHookManifestCompleted condition, assert the returned condition is non-nil, assert condition.Status == metav1.ConditionFalse, and assert condition.Reason equals the expected reason string (replace "<expected-reason>" with the test's expected constant); do the same replacement for the similar assertions at the other location (lines ~204-206) so both status and reason are validated.pkg/agent/interface.go (1)
42-42: Document the source-breaking beta1 API migration.

These exported callbacks and helpers now require `addon/v1beta1` types end-to-end, so any out-of-tree addon implementation importing `pkg/agent` will stop compiling until it migrates too. Please call this out in release/migration notes, or keep a compatibility layer if this is intended to land in a non-breaking release; a hedged adapter sketch follows.

Also applies to: 78-78, 113-123, 190-190, 254-255, 285-285
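If a compatibility layer is the chosen route, a hedged sketch of a conversion helper is below; treating `Spec.InstallNamespace` as the beta1 install-namespace annotation is an assumption drawn from this migration, and the alpha field names should be checked against the published v1alpha1 types:

```go
// toBeta1 is a hypothetical adapter that out-of-tree addons could use while
// migrating: it copies the object metadata of a v1alpha1 ManagedClusterAddOn
// and carries the old install namespace forward as the beta1 annotation.
func toBeta1(old *addonapiv1alpha1.ManagedClusterAddOn) *addonapiv1beta1.ManagedClusterAddOn {
	converted := &addonapiv1beta1.ManagedClusterAddOn{
		ObjectMeta: *old.ObjectMeta.DeepCopy(),
	}
	if old.Spec.InstallNamespace != "" { // assumed v1alpha1 field
		if converted.Annotations == nil {
			converted.Annotations = map[string]string{}
		}
		converted.Annotations[addonapiv1beta1.InstallNamespaceAnnotation] = old.Spec.InstallNamespace
	}
	return converted
}
```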
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agent/interface.go` at line 42, The exported interface change (e.g., the Manifests(cluster *clusterv1.ManagedCluster, addon *addonapiv1beta1.ManagedClusterAddOn) signature and other exported callbacks/helpers) now requires addon/v1beta1 types end-to-end and is source-breaking for out-of-tree addons; either add a clear migration note in your release/migration docs calling out every changed exported symbol (mentioning Manifests and the other exported callbacks/helpers) and instructions to migrate to addon/v1beta1, or implement a compatibility layer/shim inside pkg/agent that accepts the older addon API types and converts them to addon/v1beta1 before calling the new functions so consumers don’t break. Ensure the chosen fix covers all affected exported functions/callbacks referenced in the diff.pkg/utils/permission_test.go (1)
72-212: Consider extracting the repeated beta1 addon fixtures.
These two tables are the same scenario matrix exercised through different binders. A small helper for building `addonv1beta1.ManagedClusterAddOn` fixtures would reduce duplication and make future API-shape updates less likely to drift.
Also applies to: 247-387
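A possible shape for that helper, as a sketch only; the exact nesting of `Registrations` and the `RegistrationConfig` field set should be confirmed against the generated v1beta1 types:

```go
package utils

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	addonv1beta1 "open-cluster-management.io/api/addon/v1beta1"
)

// newAddonFixture centralizes construction of ManagedClusterAddOn test
// objects so each table entry only states the registrations that vary.
func newAddonFixture(cluster, name string, regs ...addonv1beta1.RegistrationConfig) *addonv1beta1.ManagedClusterAddOn {
	return &addonv1beta1.ManagedClusterAddOn{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: cluster,
		},
		Status: addonv1beta1.ManagedClusterAddOnStatus{
			Registrations: regs,
		},
	}
}
```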
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission_test.go` around lines 72 - 212, The test file repeats the same ManagedClusterAddOn fixture matrix across multiple binders (the tests slice of structs and the ManagedClusterAddOn objects), leading to duplication; create a small helper function (e.g., buildTestAddon or newManagedClusterAddOnFixture) that accepts identifying parameters (name/namespace, slice of addonv1beta1.RegistrationConfig or fields to vary like Type, KubeClient.Subject.User, Groups, CustomSigner) and returns a *addonv1beta1.ManagedClusterAddOn, then replace the repeated inline constructions in the tests variable and the other table (lines ~247-387) to call that helper from the test cases to centralize fixture creation and reduce duplication.test/e2etoken/helloworld_token_test.go (1)
163-188: Copy the matched `BaseSubject` value instead of storing the address of the loop variable.
Line 187 stores a pointer to the `BaseSubject` field on the loop variable `reg`. Since `regSubject` is used later in the test (lines 207–211) after the loop exits, the pointer ends up aliasing the loop variable rather than the matched element, which is fragile and easy to misread. Copying the value is clearer and safer.
Suggested cleanup
```diff
-	var regSubject *addonapiv1beta1.BaseSubject
+	var regSubject addonapiv1beta1.BaseSubject
```
Then at line 187:
```diff
-	regSubject = &reg.KubeClient.Subject.BaseSubject
+	regSubject = reg.KubeClient.Subject.BaseSubject
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2etoken/helloworld_token_test.go` around lines 163 - 188, The test stores the address of the loop variable's field which is unsafe; instead of assigning regSubject = ®.KubeClient.Subject.BaseSubject, copy the value into regSubject (e.g., assign reg.KubeClient.Subject.BaseSubject to regSubject variable or set regSubject = new(BaseSubject) and copy the struct) so regSubject holds its own copy independent of the loop-scoped reg; update references to reg.KubeClient.Subject.BaseSubject, reg, and regSubject accordingly.pkg/addonmanager/controllers/addonconfig/controller.go (1)
216-228: Redundant nil check for `DesiredConfig`.
The nil check at line 217 is redundant since entries with a nil `DesiredConfig` are already skipped at line 192; this inner loop only runs for entries that passed the earlier check.
♻️ Remove redundant check
```diff
 			// update desired spec hash only for the configs in spec
 			for _, addonconfig := range addon.Spec.Configs {
-				if configReference.DesiredConfig == nil {
-					continue
-				}
 				if configReference.ConfigGroupResource == addonconfig.ConfigGroupResource && configReference.DesiredConfig.ConfigReferent == addonconfig.ConfigReferent {
```
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 216 - 228, Remove the redundant nil check for configReference.DesiredConfig inside the inner loop that iterates addon.Spec.Configs; the outer validation already skips ConfigReferences with nil DesiredConfig, so delete the "if configReference.DesiredConfig == nil { continue }" block and keep the matching logic that compares configReference.ConfigGroupResource and configReference.DesiredConfig.ConfigReferent to addonconfig.ConfigReferent, then compute the spec hash via utils.GetSpecHash(config) and assign it to addon.Status.ConfigReferences[index].DesiredConfig.SpecHash as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/addonmanager/controllers/cmaconfig/controller_test.go`:
- Line 349: The test is creating the wrong informer: it uses
addonInformers.Addon().V1beta1().ManagedClusterAddOns().Informer() but the
clusterManagementAddonIndexer and the index.IndexClusterManagementAddonByConfig
indexer expect ClusterManagementAddOn resources; replace the
ManagedClusterAddOns() informer call with the ClusterManagementAddOns() informer
so clusterManagementAddonIndexer is built from
addonInformers.Addon().V1beta1().ClusterManagementAddOns().Informer(), ensuring
the index.IndexClusterManagementAddonByConfig is registered against the correct
informer/resource.
In `@pkg/agent/interface.go`:
- Line 123: Update the GoDoc for AgentInstallNamespaceFunc and
RegistrationOption.AgentInstallNamespace to reflect the beta1 API contract:
remove the incorrect statement about overriding
ManagedClusterAddOn.spec.installNamespace and instead document that the function
should return the namespace to use for installing/ registering the agent under
the beta1 registration contract (i.e., the namespace used in the registration
payload), and describe the expected return values and error behavior; update the
comments above AgentInstallNamespaceFunc and
RegistrationOption.AgentInstallNamespace accordingly.
In `@pkg/utils/permission_test.go`:
- Around line 506-540: Update the mixed-registrations subtest to assert the
actual subjects returned by BuildSubjectsFromRegistration rather than only count
and group presence: call BuildSubjectsFromRegistration with the test addon's
Registrations and assert that the resulting subject list contains "kube-user"
and "kube-group" and does not contain "custom-user" nor "custom-group" (or
replace with a single equality assertion against the exact expected subject
set). Also apply the same stronger assertions to the other test case mentioned
(lines 548-563) so the test verifies both user and group subjects for KubeClient
and that CustomSigner subjects are excluded.
- Around line 233-236: The test currently only checks len(binding.Subjects)
after calling fakeKubeClient.RbacV1().ClusterRoleBindings().Get, which allows
wrong subject values to pass; update the assertions in the ClusterRoleBinding
checks (the block that reads binding, err :=
fakeKubeClient.RbacV1().ClusterRoleBindings().Get(...)) to compare
binding.Subjects against the test-case's expected subject slice (e.g.,
replace/augment the assert.Len using tt.expectedSubLen with an
equality/assert.ElementsMatch/assert.Equal assertion against tt.expectedSubjects
or similar), and make the same change for the other occurrence noted (the block
around lines 408-411) so tests validate the actual Subject objects (Kind, Name,
Namespace) rather than only the count.
---
Duplicate comments:
In `@examples/helloworld_agent/agent.go`:
- Around line 176-179: The Get call retrieving the ManagedClusterAddOn (addon :=
c.addonClient.AddonV1beta1().ManagedClusterAddOns(clusterName).Get(...)) should
treat a NotFound as non-fatal to avoid perpetual requeue: detect the Kubernetes
"not found" error (e.g., using apierrors.IsNotFound(err)) and return nil (or
otherwise exit successfully) instead of returning the raw error; keep returning
other errors as before and keep references to c.addonName and the same Get
invocation when implementing the check.
In `@pkg/addonmanager/controllers/addonconfig/controller.go`:
- Around line 190-202: The code currently skips when
configReference.DesiredConfig is nil; instead, if DesiredConfig is nil attempt
to use configReference.LastAppliedConfig as a fallback for the config
coordinates and only continue when both are nil. Update the lookup logic around
configReference (the block using DesiredConfig, lister.Get, and
lister.Namespace(...).Get) to select coords := configReference.DesiredConfig; if
coords == nil { coords = configReference.LastAppliedConfig }; if coords == nil {
continue }; then use coords.Name and coords.Namespace with lister (same
namespace-conditional logic) so LastObservedGeneration can be updated for
previously applied configs.
In `@pkg/addonmanager/controllers/cmamanagedby/controller.go`:
- Around line 87-93: The code is incorrectly blanking the lifecycle annotation
instead of removing it, so finish the handoff by deleting the annotation key
from cmaCopy.Annotations when its value equals
addonapiv1alpha1.AddonLifecycleSelfManageAnnotationValue; replace the assignment
cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = "" with a
deletion (delete(cmaCopy.Annotations,
addonapiv1alpha1.AddonLifecycleAnnotationKey)) and then call
c.addonPatcher.PatchLabelAnnotations(ctx, cmaCopy, cmaCopy.ObjectMeta,
cma.ObjectMeta) as before.
In `@pkg/utils/permission.go`:
- Around line 327-330: ApplyRole and ApplyRoleBinding never reconcile
OwnerReferences on updates, leaving objects created with the alpha ownerRef
pinned after upgrade; modify ApplyRole and ApplyRoleBinding to include
OwnerReferences in the equality checks (use equality.Semantic.DeepEqual to
compare existingCopy.OwnerReferences with required.OwnerReferences /
requiredCopy.OwnerReferences) and, when differences are detected, assign
existingCopy.OwnerReferences = required.OwnerReferences (or
requiredCopy.OwnerReferences) along with updating Rules/Subjects/RoleRef so the
ownerRef is reconciled during updates.
- Around line 295-297: The branch checking for an empty subject is comparing
against the wrong type: change the DeepEqual comparison to compare the subject
value to addonapiv1beta1.BaseSubject{} (or otherwise assert/convert the subject
to BaseSubject before comparing) so the condition in the function that uses the
subject variable and equality.Semantic.DeepEqual correctly detects an empty
BaseSubject and returns nil; update the equality check where subject == nil ||
equality.Semantic.DeepEqual(*subject, addonapiv1beta1.Subject{}) is used to
compare against addonapiv1beta1.BaseSubject{} instead.
In `@test/e2e/helloworld_helm_test.go`:
- Around line 296-298: The test currently calls newAddon.SetAnnotations(...)
which replaces all annotations and can drop InstallNamespaceAnnotation; instead
merge the new annotations into the existing map: use newAddon.GetAnnotations()
(or check for nil), copy/modify that map to add/overwrite the
"addon.open-cluster-management.io/values" entry, then call
newAddon.SetAnnotations(mergedMap) so InstallNamespaceAnnotation and other
existing keys are preserved when updating annotations on newAddon.
In `@test/e2ehosted/helloworld_hosted_test.go`:
- Around line 49-80: When retrieving the ManagedClusterAddOn inside the
Eventually block (the
hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Get call), if the addon
exists but is missing the beta1 annotations
(addonapiv1beta1.HostingClusterNameAnnotationKey or
addonapiv1beta1.InstallNamespaceAnnotation), update its ObjectMeta.Annotations
with those keys (set hostingClusterName and addonAgentNamespace) and call Update
(hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(...).Update) before
continuing; keep the existing Create path intact for IsNotFound, and then
proceed to check the hosting cluster validity condition as before.
In `@test/integration/kube/agent_hook_deploy_test.go`:
- Around line 122-131: The test sets InstallNamespaceAnnotation to "default"
while hookJobJson already hardcodes "default", so it doesn't verify the
annotation is used; update the ManagedClusterAddOn in the test (created via
createManagedClusterAddOnwithOwnerRefs) to use a distinct namespace value (e.g.,
"addon-namespace-test") for addonapiv1beta1.InstallNamespaceAnnotation and
adjust the hookJobJson fixture or the test's expectations that assert on pod/job
namespace and the --addon-namespace CLI arg to match that distinct value so the
test fails if the controller ignores the annotation.
In `@test/integration/kube/assertion_test.go`:
- Around line 95-106: In updateClusterManagementAddOn, avoid mutating old before
verifying the Get succeeded: after calling
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get(...) check the
returned err and that old is non-nil before setting old.Spec and
old.Annotations; if Get returned an error or old is nil, return false so the
Eventually retry loop will retry, otherwise proceed to call Update and return
true on success. Ensure you reference the existing
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get and Update calls
when adding the error/nil checks.
- Around line 163-173: In updateManagedClusterAddOnStatus, check the error
returned by the Get call before mutating old: after calling
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(new.Namespace).Get(...), if
err != nil return false from the Eventually lambda (or otherwise retry) and only
assign old.Status = new.Status and call UpdateStatus when err == nil; this
prevents dereferencing a nil/invalid old object and matches the safer retry
pattern used elsewhere (referencing function updateManagedClusterAddOnStatus,
variables old/new, and the Get/UpdateStatus calls).
- Around line 110-127: The test currently creates the ManagedClusterAddOn and
only then adds the controller OwnerReference (in
createManagedClusterAddOnwithOwnerRefs), which leaves a race window; instead,
build and attach the controller OwnerReference (use metav1.NewControllerRef with
the cma GroupVersionKind) to addon.OwnerReferences (or call utils.MergeOwnerRefs
on the addon object) before calling
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).Create(...), then
create the resource and assert no error — remove the post-create Update path so
the OwnerReference exists at creation time.
---
Nitpick comments:
In `@pkg/addonmanager/controllers/addonconfig/controller.go`:
- Around line 216-228: Remove the redundant nil check for
configReference.DesiredConfig inside the inner loop that iterates
addon.Spec.Configs; the outer validation already skips ConfigReferences with nil
DesiredConfig, so delete the "if configReference.DesiredConfig == nil { continue
}" block and keep the matching logic that compares
configReference.ConfigGroupResource and
configReference.DesiredConfig.ConfigReferent to addonconfig.ConfigReferent, then
compute the spec hash via utils.GetSpecHash(config) and assign it to
addon.Status.ConfigReferences[index].DesiredConfig.SpecHash as before.
In `@pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go`:
- Around line 130-132: Replace the loose meta.IsStatusConditionFalse checks with
explicit lookups using meta.FindStatusCondition on addOn.Status.Conditions for
the ManagedClusterAddOnHookManifestCompleted condition, assert the returned
condition is non-nil, assert condition.Status == metav1.ConditionFalse, and
assert condition.Reason equals the expected reason string (replace
"<expected-reason>" with the test's expected constant); do the same replacement
for the similar assertions at the other location (lines ~204-206) so both status
and reason are validated.
In `@pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go`:
- Line 243: Rename the misspelled local variable manifestAppliyedCondition to
manifestAppliedCondition wherever it's declared and used (the value returned
from meta.FindStatusCondition called with addOn.Status.Conditions and
addonapiv1beta1.ManagedClusterAddOnManifestApplied); update all references in
the test (assertions, comparisons, etc.) to use the corrected identifier to keep
naming consistent and avoid confusion.
In `@pkg/agent/interface.go`:
- Line 42: The exported interface change (e.g., the Manifests(cluster
*clusterv1.ManagedCluster, addon *addonapiv1beta1.ManagedClusterAddOn) signature
and other exported callbacks/helpers) now requires addon/v1beta1 types
end-to-end and is source-breaking for out-of-tree addons; either add a clear
migration note in your release/migration docs calling out every changed exported
symbol (mentioning Manifests and the other exported callbacks/helpers) and
instructions to migrate to addon/v1beta1, or implement a compatibility
layer/shim inside pkg/agent that accepts the older addon API types and converts
them to addon/v1beta1 before calling the new functions so consumers don’t break.
Ensure the chosen fix covers all affected exported functions/callbacks
referenced in the diff.
In `@pkg/utils/permission_test.go`:
- Around line 72-212: The test file repeats the same ManagedClusterAddOn fixture
matrix across multiple binders (the tests slice of structs and the
ManagedClusterAddOn objects), leading to duplication; create a small helper
function (e.g., buildTestAddon or newManagedClusterAddOnFixture) that accepts
identifying parameters (name/namespace, slice of addonv1beta1.RegistrationConfig
or fields to vary like Type, KubeClient.Subject.User, Groups, CustomSigner) and
returns a *addonv1beta1.ManagedClusterAddOn, then replace the repeated inline
constructions in the tests variable and the other table (lines ~247-387) to call
that helper from the test cases to centralize fixture creation and reduce
duplication.
In `@test/e2etoken/helloworld_token_test.go`:
- Around line 163-188: The test stores the address of the loop variable's field
which is unsafe; instead of assigning regSubject =
®.KubeClient.Subject.BaseSubject, copy the value into regSubject (e.g.,
assign reg.KubeClient.Subject.BaseSubject to regSubject variable or set
regSubject = new(BaseSubject) and copy the struct) so regSubject holds its own
copy independent of the loop-scoped reg; update references to
reg.KubeClient.Subject.BaseSubject, reg, and regSubject accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4de018ab-1961-4f2d-bbc0-a17e87dc962b
⛔ Files ignored due to path filters (6)
- `go.sum` is excluded by `!**/*.sum`
- `vendor/modules.txt` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/operator/v1/0000_01_operator.open-cluster-management.io_clustermanagers.crd.yaml` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/operator/v1/types_clustermanager.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/operator/v1/zz_generated.deepcopy.go` is excluded by `!vendor/**`
📒 Files selected for processing (63)
- README.md
- examples/helloworld/helloworld_test.go
- examples/helloworld_agent/agent.go
- examples/helloworld_helm/helloworld_helm.go
- examples/helloworld_helm/helloworld_helm_test.go
- go.mod
- hack/fix-crd-storage-version.sh
- pkg/addonfactory/helm_agentaddon.go
- pkg/addonfactory/helm_agentaddon_test.go
- pkg/addonfactory/template_agentaddon.go
- pkg/addonfactory/template_agentaddon_test.go
- pkg/addonfactory/test_helper.go
- pkg/addonmanager/addontesting/helpers.go
- pkg/addonmanager/base_manager.go
- pkg/addonmanager/cloudevents/manager.go
- pkg/addonmanager/constants/constants.go
- pkg/addonmanager/controllers/addonconfig/controller.go
- pkg/addonmanager/controllers/addonconfig/controller_test.go
- pkg/addonmanager/controllers/agentdeploy/controller.go
- pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/default_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/util_test.go
- pkg/addonmanager/controllers/agentdeploy/utils.go
- pkg/addonmanager/controllers/certificate/csrapprove_test.go
- pkg/addonmanager/controllers/certificate/csrsign_test.go
- pkg/addonmanager/controllers/cmaconfig/controller.go
- pkg/addonmanager/controllers/cmaconfig/controller_test.go
- pkg/addonmanager/controllers/cmamanagedby/controller.go
- pkg/addonmanager/controllers/cmamanagedby/controller_test.go
- pkg/addonmanager/controllers/registration/controller.go
- pkg/addonmanager/controllers/registration/controller_test.go
- pkg/addonmanager/manager.go
- pkg/agent/interface.go
- pkg/index/index.go
- pkg/utils/addon_config.go
- pkg/utils/addon_config_test.go
- pkg/utils/addon_test.go
- pkg/utils/helpers.go
- pkg/utils/permission.go
- pkg/utils/permission_test.go
- test/e2e/helloworld_helm_test.go
- test/e2e/helloworld_test.go
- test/e2ehosted/helloworld_hosted_test.go
- test/e2etoken/helloworld_token_test.go
- test/integration-test.mk
- test/integration/cloudevents/agent_deploy_test.go
- test/integration/cloudevents/agent_hook_deploy_test.go
- test/integration/cloudevents/agent_hosting_deploy_test.go
- test/integration/cloudevents/agent_hosting_hook_deploy_test.go
- test/integration/cloudevents/suite_test.go
- test/integration/kube/agent_deploy_test.go
- test/integration/kube/agent_hook_deploy_test.go
- test/integration/kube/agent_hosting_deploy_test.go
- test/integration/kube/agent_hosting_hook_deploy_test.go
- test/integration/kube/assertion_test.go
- test/integration/kube/cluster_management_addon_test.go
- test/integration/kube/csr_test.go
- test/integration/kube/deleted_managed_cluster_create_addon_test.go
- test/integration/kube/multiworks_test.go
- test/integration/kube/registration_test.go
- test/integration/kube/suite_test.go
🚧 Files skipped from review as they are similar to previous changes (11)
- go.mod
- pkg/addonmanager/controllers/cmaconfig/controller.go
- test/integration-test.mk
- examples/helloworld_helm/helloworld_helm_test.go
- test/integration/kube/csr_test.go
- test/integration/cloudevents/agent_hosting_hook_deploy_test.go
- hack/fix-crd-storage-version.sh
- pkg/addonmanager/manager.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go
- test/integration/cloudevents/agent_deploy_test.go
- test/integration/cloudevents/agent_hook_deploy_test.go
```diff
 	binding, err := fakeKubeClient.RbacV1().ClusterRoleBindings().Get(context.TODO(), "test-role", metav1.GetOptions{})
 	assert.NoError(t, err)
 	assert.NotNil(t, binding)
-	assert.Len(t, binding.Subjects, 2) // user + group
+	assert.Len(t, binding.Subjects, tt.expectedSubLen)
```
Assert the bound subject set, not only its size.
Both success branches only verify len(binding.Subjects). After the v1beta1 subject-model migration, a binding with the wrong user/group/kind can still pass as long as it contains 1 or 2 subjects. Please compare against the expected subjects for each case.
Also applies to: 408-411
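A hedged sketch of the stronger check; the `expectedSubjects` field is an illustrative addition to the test table, not something that exists in the file today, and the user/group names mirror the kube-user/kube-group values mentioned elsewhere in this review:

```go
package utils

import (
	"testing"

	"github.com/stretchr/testify/assert"
	rbacv1 "k8s.io/api/rbac/v1"
)

// assertBindingSubjects compares the actual subject set of a binding against
// the expected subjects instead of only asserting on the count.
func assertBindingSubjects(t *testing.T, expected []rbacv1.Subject, binding *rbacv1.ClusterRoleBinding) {
	t.Helper()
	assert.ElementsMatch(t, expected, binding.Subjects)
}
```

Each table entry would then carry something like `expectedSubjects: []rbacv1.Subject{{Kind: rbacv1.UserKind, APIGroup: rbacv1.GroupName, Name: "kube-user"}, {Kind: rbacv1.GroupKind, APIGroup: rbacv1.GroupName, Name: "kube-group"}}` in place of the bare length.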
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/utils/permission_test.go` around lines 233 - 236, The test currently only
checks len(binding.Subjects) after calling
fakeKubeClient.RbacV1().ClusterRoleBindings().Get, which allows wrong subject
values to pass; update the assertions in the ClusterRoleBinding checks (the
block that reads binding, err :=
fakeKubeClient.RbacV1().ClusterRoleBindings().Get(...)) to compare
binding.Subjects against the test-case's expected subject slice (e.g.,
replace/augment the assert.Len using tt.expectedSubLen with an
equality/assert.ElementsMatch/assert.Equal assertion against tt.expectedSubjects
or similar), and make the same change for the other occurrence noted (the block
around lines 408-411) so tests validate the actual Subject objects (Kind, Name,
Namespace) rather than only the count.
Force-pushed from 70300f1 to 0d1a472
Force-pushed from fefa7de to a1545dd
Force-pushed from b4a13fc to d1e1226
/assign @qiujian16
@haoqing0110 this change looks good to me, just a question though - for addon-framework users, is it intended that we set our own
Signed-off-by: Qing Hao <qhao@redhat.com>
Signed-off-by: Qing Hao <qhao@redhat.com>
Signed-off-by: Jian Qiu <jqiu@redhat.com>
Signed-off-by: Qing Hao <qhao@redhat.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: haoqing0110
The full list of commands accepted by this bot can be found here.
Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@tesshuflower yes, you can still use the function provided by addon-framework. The only change to "install namespace" from v1alpha1 to v1beta1 is at the API level: the mca spec.installNamespace field is removed in v1beta1, and you should use addondeploymentconfig instead.
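For readers following along, a minimal sketch of carrying the install namespace in an AddOnDeploymentConfig instead of the removed spec field; the group/version and field names follow the current v1alpha1 AddOnDeploymentConfig API, and the object names and namespace are placeholders:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	addonv1alpha1 "open-cluster-management.io/api/addon/v1alpha1"
)

// newDeploymentConfig builds an AddOnDeploymentConfig that pins the agent
// install namespace, taking over the role of the removed
// ManagedClusterAddOn spec.installNamespace field.
func newDeploymentConfig() *addonv1alpha1.AddOnDeploymentConfig {
	return &addonv1alpha1.AddOnDeploymentConfig{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "helloworld-deploy-config",
			Namespace: "cluster1",
		},
		Spec: addonv1alpha1.AddOnDeploymentConfigSpec{
			AgentInstallNamespace: "open-cluster-management-agent-addon",
		},
	}
}
```

The ManagedClusterAddOn (or ClusterManagementAddOn default) would then reference this config in its configs list, and the framework resolves the namespace from there.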
Summary
Related issue(s)
Fixes #
Summary by CodeRabbit
Chores
Behavioral
Tools
Documentation