🌱 Addon v1beta1 upgrade#379
qiujian16 wants to merge 3 commits into open-cluster-management-io:main from
Conversation
Signed-off-by: Qing Hao <qhao@redhat.com>
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: qiujian16. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Walkthrough: Migrates the addon framework from addon/v1alpha1 to addon/v1beta1: API types, clients, informers, and constants updated; install-namespace moved to annotations; callbacks and manifest/health APIs made context-aware; registration model refactored to a polymorphic RegistrationConfig; a v1alpha1 adapter added for compatibility.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Force-pushed: 2e07604 to 84cffe7
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
pkg/addonmanager/controllers/cmamanagedby/controller.go (1)
84-93: ⚠️ Potential issue | 🟠 Major: Delete the lifecycle annotation instead of blanking it.

Line 91 sets `addon.open-cluster-management.io/lifecycle` to `""`. In the current tree, `ManagedByAddonManager` only treats the addon as manager-owned when that key is absent or explicitly `addon-manager`, so this patch can leave the CMA in a state that is neither self-managed nor picked up by the addon manager.

Suggested fix

```diff
 if cmaCopy.Annotations == nil || cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] != addonapiv1alpha1.AddonLifecycleSelfManageAnnotationValue {
 	return nil
 }
-cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = ""
+delete(cmaCopy.Annotations, addonapiv1alpha1.AddonLifecycleAnnotationKey)
+if len(cmaCopy.Annotations) == 0 {
+	cmaCopy.Annotations = nil
+}
 _, err = c.addonPatcher.PatchLabelAnnotations(ctx, cmaCopy, cmaCopy.ObjectMeta, cma.ObjectMeta)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmamanagedby/controller.go` around lines 84 - 93, The patch currently clears the lifecycle annotation value to an empty string which leaves the CMA in an ambiguous state; update the logic in the block that operates on cmaCopy (the DeepCopy) so that instead of setting cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = "" you remove the key entirely (e.g., delete(cmaCopy.Annotations, addonapiv1alpha1.AddonLifecycleAnnotationKey)) before calling c.addonPatcher.PatchLabelAnnotations(ctx, cmaCopy, cmaCopy.ObjectMeta, cma.ObjectMeta) so the addon manager will correctly recognize the resource as manager-owned.

pkg/utils/addon_config_test.go (1)
98-132: ⚠️ Potential issue | 🟡 Minor: Restore the mismatched spec-hash case.

Right now the suite only proves "hash absent" and "hash matches". If `AgentInstallNamespaceFromDeploymentConfigFunc` accidentally stopped honoring `DesiredConfig.SpecHash`, every active case here would still pass. The commented block is the negative-path coverage that catches that regression.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_config_test.go` around lines 98 - 132, Restore the commented negative-path test case in pkg/utils/addon_config_test.go: uncomment the test block that constructs a AddOnDeploymentConfig via newTestAddOnDeploymentConfigGetter and a ManagedClusterAddOn with Status.ConfigReferences containing DesiredConfig.SpecHash set to "wronghash", and assert AgentInstallNamespaceFromDeploymentConfigFunc (the function under test) returns the empty string (expected ""). This re-adds the mismatched-spec-hash case so the test suite fails if AgentInstallNamespaceFromDeploymentConfigFunc ignores DesiredConfig.SpecHash.

pkg/addonmanager/controllers/cmaconfig/controller_test.go (1)
347-353: ⚠️ Potential issue | 🟡 Minor: Use the `ClusterManagementAddOns` informer here.

This test is populating `ClusterManagementAddOn` objects and assigning the result to `clusterManagementAddonIndexer`, but the changed line builds a `ManagedClusterAddOns` informer. Because the cache is untyped, the test can still pass while not matching the production wiring.

Suggested fix

```diff
-	addonInformer := addonInformers.Addon().V1beta1().ManagedClusterAddOns().Informer()
+	addonInformer := addonInformers.Addon().V1beta1().ClusterManagementAddOns().Informer()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller_test.go` around lines 347 - 353, The test sets c.cmas (ClusterManagementAddOn objects) but uses the ManagedClusterAddOns informer; update the informer to the ClusterManagementAddOns informer so the test mirrors production wiring. In the setup where addonInformers is created (using addoninformers.NewSharedInformerFactory and assigned to addonInformer), replace Addon().V1beta1().ManagedClusterAddOns().Informer() with Addon().V1beta1().ClusterManagementAddOns().Informer() so cmaConfigController.clusterManagementAddonIndexer receives the correct informer indexer.

test/e2e/helloworld_helm_test.go (1)
49-58: ⚠️ Potential issue | 🟡 Minor: Don’t return the stale `NotFound` after a successful create.

The create path succeeds, but this block still returns the original `Get` error. That makes the success path look like a failure, so first-time setup always burns an extra `Eventually` poll.

Suggested fix

```diff
 _, err = hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Get(context.Background(), helloWorldHelmAddonName, metav1.GetOptions{})
 if err != nil {
 	if errors.IsNotFound(err) {
 		_, cerr := hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName).Create(context.Background(), &helloworldhelmAddon, metav1.CreateOptions{})
-		if cerr != nil {
+		if cerr != nil && !errors.IsAlreadyExists(cerr) {
 			return cerr
 		}
+		return nil
 	}
 	return err
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/helloworld_helm_test.go` around lines 49 - 58, The Get/Create block returns the original Get error even when Create succeeds; update the logic in the hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName) Get/Create handling so that when errors.IsNotFound(err) and the subsequent Create(...) returns nil (cerr == nil) you do not return the stale err—either return nil immediately after successful Create or set err = nil/continue execution—only return err for non-NotFound failures or if Create fails (cerr != nil); reference the Get, Create calls and variables err, cerr, managedClusterName, helloWorldHelmAddonName, helloworldhelmAddon to locate the code to change.

pkg/addonmanager/controllers/cmaconfig/controller.go (1)
175-179: ⚠️ Potential issue | 🟠 Major: Error is silently swallowed instead of being returned.

When `getConfigSpecHash` returns an error, the function returns `nil` instead of the error. This masks potential issues with config retrieval and makes debugging difficult. The same pattern appears on lines 196-198.

🐛 Proposed fix to propagate errors

```diff
 specHash, err := c.getConfigSpecHash(defaultConfigReference.ConfigGroupResource, defaultConfigReference.DesiredConfig.ConfigReferent)
 if err != nil {
-	return nil
+	return err
 }
```

And similarly for the second occurrence:

```diff
 specHash, err := c.getConfigSpecHash(configReference.ConfigGroupResource, configReference.DesiredConfig.ConfigReferent)
 if err != nil {
-	return nil
+	return err
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/cmaconfig/controller.go` around lines 175 - 179, The code is swallowing errors from c.getConfigSpecHash (called when setting cma.Status.DefaultConfigReferences[i].DesiredConfig.SpecHash) by returning nil instead of propagating the error; update both occurrences (the one assigning SpecHash at the first call and the second occurrence around the later assignment) to return the encountered error (e.g., return err) so the caller sees failures from getConfigSpecHash rather than masking them, and ensure the function signature allows returning the error up the call chain.

pkg/addonmanager/controllers/agentdeploy/hosted_sync.go (1)
47-54: ⚠️ Potential issue | 🟡 Minor: Clear hosted-only conditions when the addon leaves hosted mode.

This branch tears down hosted work and removes the finalizer, but it leaves `ManagedClusterAddOnHostingClusterValidity` and `ManagedClusterAddOnHostingManifestApplied` on the addon. After a hosted addon switches back to default mode, users will still see stale hosted status forever.

Suggested fix

```diff
 if installMode != constants.InstallModeHosted {
 	if err := s.cleanupDeployWork(ctx, addon); err != nil {
 		return addon, err
 	}
+	meta.RemoveStatusCondition(&addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnHostingClusterValidity)
+	meta.RemoveStatusCondition(&addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnHostingManifestApplied)
 	addonRemoveFinalizer(addon, addonapiv1beta1.AddonHostingManifestFinalizer)
 	return addon, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/hosted_sync.go` around lines 47 - 54, When installMode changes from hosted to default in the hosted-mode check inside HostedModeInfoFunc handling, also clear the hosted-only status conditions ManagedClusterAddOnHostingClusterValidity and ManagedClusterAddOnHostingManifestApplied from the addon before returning; locate the branch that calls s.cleanupDeployWork(ctx, addon) and addonRemoveFinalizer(addon, addonapiv1beta1.AddonHostingManifestFinalizer) and add logic to remove or update those two conditions on the addon status (e.g., via the same status condition helpers your codebase uses) so stale hosted status is not left on the resource.

pkg/addonmanager/controllers/registration/controller.go (1)
299-299:⚠️ Potential issue | 🟡 MinorError message references old field name
CSRConfigurations.The error message still refers to
CSRConfigurationsbut the field has been renamed toConfigurationsin v1beta1.📝 Suggested fix
- return fmt.Errorf("failed to patch status condition(CSRConfigurations nil) of managedclusteraddon: %w", err) + return fmt.Errorf("failed to patch status condition(Configurations nil) of managedclusteraddon: %w", err)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/registration/controller.go` at line 299, Update the error message that still references the old field name "CSRConfigurations" to use the new v1beta1 name "Configurations" in the fmt.Errorf call used when patching the managedclusteraddon status; locate the fmt.Errorf(...) that returns "failed to patch status condition(CSRConfigurations nil) of managedclusteraddon: %w" and change the literal to "failed to patch status condition(Configurations nil) of managedclusteraddon: %w" so logs reflect the correct field name.

pkg/agent/interface.go (1)
101-107: ⚠️ Potential issue | 🟡 Minor: Comment references removed "Updaters" field.

The comment on line 102 states ManifestConfigs will "override the update strategy set by the 'Updaters' field", but the `Updaters` field has been removed from `AgentAddonOptions` in this v1beta1 interface. The comment should be updated to reflect the current behavior.

📝 Suggested fix

```diff
 // ManifestConfigs represents the configurations of manifests defined in workload field. It will:
-// - override the update strategy set by the "Updaters" field
 // - merge the feedback rules set by the "HealthProber" field if they have the same resource identifier,
 //   when merging the feedback rules, if the rule configured here is json path type, will ignore the
 //   json path which is already in the existing rules, compare by the path name.
 // +optional
 ManifestConfigs []workapiv1.ManifestConfigOption
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agent/interface.go` around lines 101 - 107, The comment for ManifestConfigs incorrectly references a removed "Updaters" field; update the doc comment for ManifestConfigs (type ManifestConfigs []workapiv1.ManifestConfigOption in AgentAddonOptions) to remove the sentence about overriding the "Updaters" field and instead describe the current behavior (e.g., that ManifestConfigs override any update strategy defined on the manifest itself or other remaining fields) or simply remove that clause so the comment only documents merging feedback rules and JSONPath deduplication.
🧹 Nitpick comments (11)
pkg/addonmanager/controllers/addonconfig/controller.go (1)
216-219: Redundant nil check for `DesiredConfig`.

The check at line 217 is unnecessary because the loop already continues at lines 192-194 if `configReference.DesiredConfig` is nil. This check will never be true at this point.

🔧 Suggested cleanup

```diff
 	// update desired spec hash only for the configs in spec
 	for _, addonconfig := range addon.Spec.Configs {
-		if configReference.DesiredConfig == nil {
-			continue
-		}
-
 		if configReference.ConfigGroupResource == addonconfig.ConfigGroupResource && configReference.DesiredConfig.ConfigReferent == addonconfig.ConfigReferent {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 216 - 219, Remove the redundant nil check for configReference.DesiredConfig inside the inner loop: the loop already continues when configReference.DesiredConfig is nil earlier, so delete the conditional "if configReference.DesiredConfig == nil { continue }" (or its equivalent) within the loop that iterates over addon.Spec.Configs (variables: addonconfig, configReference, DesiredConfig) to avoid unreachable code and keep behavior unchanged.

pkg/addonfactory/test_helper.go (1)
20-27: Duplicate annotation assignment.

`AnnotationValuesName` is set twice: at line 21 and again at line 27. The second assignment is redundant.

🔧 Remove duplicate assignment

```diff
 annotations := map[string]string{
 	AnnotationValuesName: values,
 }
 // In v1beta1, InstallNamespace is stored in annotation instead of Spec
 if len(installNamespace) > 0 {
 	annotations[addonapiv1beta1.InstallNamespaceAnnotation] = installNamespace
 }
-annotations[AnnotationValuesName] = values
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonfactory/test_helper.go` around lines 20 - 27, The code creates an annotations map and assigns AnnotationValuesName twice; remove the redundant second assignment so AnnotationValuesName is only set once when initializing annotations. Update the block that builds annotations (the map literal and subsequent conditional setting of addonapiv1beta1.InstallNamespaceAnnotation) to omit the extra annotations[AnnotationValuesName] = values line so only the initial map literal assignment remains.

test/integration/cloudevents/agent_hosting_hook_deploy_test.go (1)
178-214: Assert the annotation-driven namespace directly instead of raw-byte inequality.

This migration moves the install namespace source of truth into `metadata.annotations`, but both manifest checks still pass on any reserialization difference. That will not catch a controller that ignores `addonapiv1beta1.InstallNamespaceAnnotation`. The same issue applies to the hook-work assertion later in the test.

Example assertion update

```diff
-	if apiequality.Semantic.DeepEqual(work.Spec.Workload.Manifests[0].Raw, []byte(deploymentHostingJson)) {
-		return fmt.Errorf("expected manifest is no correct, get %v", work.Spec.Workload.Manifests[0].Raw)
-	}
+	rendered := &unstructured.Unstructured{}
+	if err := rendered.UnmarshalJSON(work.Spec.Workload.Manifests[0].Raw); err != nil {
+		return err
+	}
+	if rendered.GetNamespace() != "default" {
+		return fmt.Errorf("expected rendered namespace default, got %s", rendered.GetNamespace())
+	}
```

Also applies to: 298-325
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/cloudevents/agent_hosting_hook_deploy_test.go` around lines 178 - 214, The test currently asserts manifest equality by raw-byte comparison to deploymentHostingJson which can miss controllers that ignore addonapiv1beta1.InstallNamespaceAnnotation; instead parse the work.Spec.Workload.Manifests[0].Raw (and the hook-work manifest later) into an object (e.g., unstructured map or Unstructured) and assert that metadata.annotations[addonapiv1beta1.InstallNamespaceAnnotation] equals the expected install namespace ("default") and/or that metadata.name/namespace fields match expected values; update the checks around agentWorkLister -> works[0] and the later hook-work assertion to inspect parsed manifest annotations rather than raw-byte inequality so the test fails if the controller does not set the InstallNamespaceAnnotation.

test/integration/kube/csr_test.go (1)
11-14: Import ordering should follow Go conventions.

The `agent` import at line 12 is placed within the standard library import block. Go convention groups imports as: (1) standard library, (2) external packages, (3) internal packages, with blank lines separating groups.

🔧 Suggested import grouping

```diff
 import (
 	"context"
 	"crypto/ecdsa"
 	"crypto/elliptic"
 	"crypto/x509"
 	"crypto/x509/pkix"
 	"encoding/pem"
 	"fmt"
 	"net"
-	"open-cluster-management.io/addon-framework/pkg/agent"
 	"time"

 	mathrand "math/rand"

 	"github.com/onsi/ginkgo"
 	"github.com/onsi/gomega"

+	"open-cluster-management.io/addon-framework/pkg/agent"
 	"k8s.io/apimachinery/pkg/util/rand"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/csr_test.go` around lines 11 - 14, Reorder the import block so it follows Go conventions: group standard library imports ("net", "time") together, add a blank line, then place external package imports like "open-cluster-management.io/addon-framework/pkg/agent" in their own group; update the import section in the csr_test.go import block accordingly to maintain grouping and spacing.

test/integration/kube/agent_hosting_hook_deploy_test.go (1)
271-279: Use the v1beta1 hook condition constant in this assertion too.

This block still hard-codes `"HookManifestCompleted"` while the same file already uses `addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted` later. Reusing the constant keeps the migration consistent and avoids silent drift if the API value changes.

♻️ Proposed fix

```diff
-	if !meta.IsStatusConditionFalse(addon.Status.Conditions, "HookManifestCompleted") {
+	if !meta.IsStatusConditionFalse(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/kube/agent_hosting_hook_deploy_test.go` around lines 271 - 279, Replace the hard-coded string "HookManifestCompleted" with the v1beta1 constant addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted in the assertion that checks addon.Status.Conditions (the call to meta.IsStatusConditionFalse) so the test uses the same ManagedClusterAddOnHookManifestCompleted constant used elsewhere (e.g., later in this file) when evaluating the addon retrieved via hubAddonClient.AddonV1beta1().ManagedClusterAddOns(...).Get(...).

pkg/addonfactory/template_agentaddon_test.go (1)
34-35: Fail fast if scheme registration breaks.

Swallowing `clusterv1alpha1.Install` errors makes this test fail later during manifest decoding with a much less local symptom.

♻️ Proposed fix

```diff
 scheme := runtime.NewScheme()
-_ = clusterv1alpha1.Install(scheme)
+if err := clusterv1alpha1.Install(scheme); err != nil {
+	t.Fatal(err)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonfactory/template_agentaddon_test.go` around lines 34 - 35, The test currently swallows the error from clusterv1alpha1.Install(scheme) which defers a failure to manifest decoding; change the call to check and fail immediately—capture the returned error from clusterv1alpha1.Install(scheme) and call t.Fatalf (or use your test helper like require.NoError) with a clear message if err != nil so the test fails fast; update the code around the scheme := runtime.NewScheme() and clusterv1alpha1.Install(scheme) references to perform this explicit error check.

pkg/addonfactory/addonfactory.go (1)
156-163: Refresh the `WithAgentInstallNamespace` GoDoc.

The comment still describes `registrationOption.AgentInstallNamespace`, but this method now wires the top-level `AgentAddonOptions.AgentInstallNamespace` callback directly. Keeping the old wording makes the migration path harder to follow for downstream callers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonfactory/addonfactory.go` around lines 156 - 163, Update the GoDoc for WithAgentInstallNamespace on AgentAddonFactory to reflect that it now sets the top-level AgentAddonOptions.AgentInstallNamespace callback directly (not registrationOption.AgentInstallNamespace); mention the signature nsFunc func(ctx context.Context, addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) and describe that it overrides the default built-in namespace value by assigning it to f.agentAddonOptions.AgentInstallNamespace and returning the factory for chaining.

pkg/utils/addon_config.go (1)
39-63: Context parameter is accepted but not propagated.

`AgentInstallNamespaceFromDeploymentConfigFunc` returns a function that accepts `ctx context.Context`, but this context is not passed to `GetDesiredAddOnDeploymentConfig`, which internally uses `context.TODO()` on line 83. This means cancellation signals and deadlines won't be respected during the addon deployment config fetch.

♻️ Suggested fix

```diff
 func AgentInstallNamespaceFromDeploymentConfigFunc(
 	adcgetter AddOnDeploymentConfigGetter,
 ) func(ctx context.Context, addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) {
 	return func(ctx context.Context, addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) {
 		if addon == nil {
 			return "", fmt.Errorf("failed to get addon install namespace, addon is nil")
 		}
-		config, err := GetDesiredAddOnDeploymentConfig(addon, adcgetter)
+		config, err := GetDesiredAddOnDeploymentConfigWithContext(ctx, addon, adcgetter)
```

This would require adding a context-aware variant of `GetDesiredAddOnDeploymentConfig`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_config.go` around lines 39 - 63, AgentInstallNamespaceFromDeploymentConfigFunc currently ignores the incoming ctx when calling GetDesiredAddOnDeploymentConfig; update GetDesiredAddOnDeploymentConfig to add a context-aware variant (e.g., GetDesiredAddOnDeploymentConfigWithContext(ctx, ...)) or overload it to accept ctx, then call that new context-aware function from inside the returned closure (in AgentInstallNamespaceFromDeploymentConfigFunc) passing the received ctx so cancellation/deadlines are respected; update all references and tests to use the new function name where appropriate.

pkg/addonmanager/controllers/registration/controller.go (2)
139-142: Driver inheritance from existing registration may cause unexpected behavior.

The logic prioritizes the driver from the existing registration over the one from `newConfig`. This means once an agent sets a driver, the hub-side config's driver is effectively ignored. While this appears intentional for token-based flows where the agent controls the driver, the comment should clarify this behavior to prevent confusion when addon developers expect their config to take precedence.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/registration/controller.go` around lines 139 - 142, Update the inline comment above the driver selection logic to explicitly state that existingReg.KubeClient.Driver takes precedence over config.KubeClient.Driver (i.e., once an agent has set a driver the hub-side config is ignored), explain this is intentional for token-based flows where the agent should control the driver, and note the potential for confusion for addon authors expecting hub config precedence; reference the variables used (driver, existingReg, existingReg.KubeClient.Driver, config.KubeClient.Driver) so future readers understand why the override is implemented in the controller.go registration logic.
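A minimal sketch of the precedence described above, using hypothetical simplified stand-ins for the v1beta1 registration types (the real field names in the framework may differ): once the existing registration carries a driver, it wins over the freshly computed hub-side config.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the v1beta1 registration types.
type KubeClientConfig struct{ Driver string }
type RegistrationConfig struct{ KubeClient *KubeClientConfig }

// mergeDriver illustrates the precedence the review describes: a driver
// already set on the existing registration (by the agent) takes priority
// over the driver from the newly computed hub-side config.
func mergeDriver(existing, newConfig *RegistrationConfig) string {
	if existing != nil && existing.KubeClient != nil && existing.KubeClient.Driver != "" {
		return existing.KubeClient.Driver
	}
	if newConfig != nil && newConfig.KubeClient != nil {
		return newConfig.KubeClient.Driver
	}
	return ""
}

func main() {
	existing := &RegistrationConfig{KubeClient: &KubeClientConfig{Driver: "awsirsa"}}
	fresh := &RegistrationConfig{KubeClient: &KubeClientConfig{Driver: "csr"}}
	fmt.Println(mergeDriver(existing, fresh)) // existing driver wins
	fmt.Println(mergeDriver(nil, fresh))      // falls back to hub-side config
}
```

This is why addon developers who change the driver in their config will not see it take effect on an addon that already registered, which is the confusion the comment should call out.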
71-107: Registration matching by index may be fragile if config order changes.

The `findExistingRegistration` function matches `KubeClient` registrations by positional index. If the order of `KubeClient` configs changes between reconciliations (e.g., due to addon code changes or dynamic config generation), this could incorrectly match registrations, potentially preserving the wrong driver or subject from an unrelated existing registration.

Consider whether matching by a more stable identifier (if available) would be safer, or document this ordering constraint clearly for addon developers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/registration/controller.go` around lines 71 - 107, The current findExistingRegistration matches KubeClient entries purely by positional index (in newConfigs vs existingRegistrations), which is fragile; change findExistingRegistration to prefer matching KubeClient registrations by a stable identifier extracted from the concrete *agent.KubeClientRegistration (e.g., a client/cluster/name/serviceAccount field on the KubeClientRegistration struct) by first casting newConfigs[index] to *agent.KubeClientRegistration and comparing that stable key against each existingRegistrations[j] with Type == KubeClient (and existingRegistrations[j].KubeClient-equivalent field), and only fall back to the current index-based logic if no stable key exists; update comments in findExistingRegistration and add/adjust tests to cover key-based matching and the fallback path.

pkg/agent/adapter.go (1)
196-199: Conversion errors are silently ignored.

The conversion functions `Convert_v1beta1_ManagedClusterAddOnSpec_To_v1alpha1_...` and similar return errors that are being discarded with `_ =`. While the comment states "nil scope is safe", if the underlying API types evolve and these functions start returning errors for other reasons, bugs could go undetected.

Consider at minimum logging conversion errors at a debug level, or returning an error if conversion fails.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agent/adapter.go` around lines 196 - 199, The conversion calls currently swallow returned errors from Convert_v1beta1_ManagedClusterAddOnSpec_To_v1alpha1_ManagedClusterAddOnSpec and Convert_v1beta1_ManagedClusterAddOnStatus_To_v1alpha1_ManagedClusterAddOnStatus; change the code to capture and handle those errors instead of assigning to `_`. Specifically, replace the `_ =` calls with error checks (e.g. err := addonv1beta1.Convert_v1beta1_ManagedClusterAddOnSpec_To_v1alpha1_ManagedClusterAddOnSpec(...); if err != nil { return nil, fmt.Errorf("spec conversion failed: %w", err) }) and do the same for the status conversion, updating the enclosing function signature to return an error (or, if changing the signature is infeasible, log the error at debug level using the package logger) and update callers accordingly.
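The propagation pattern the prompt describes can be sketched as follows; `convertSpec` is a hypothetical stand-in for the generated `Convert_v1beta1_..._To_v1alpha1_...` functions, which take the same fill-the-output-pointer shape and return an error.

```go
package main

import (
	"errors"
	"fmt"
)

// convertSpec stands in for a generated conversion function whose error
// the adapter currently discards with `_ =`.
func convertSpec(in string, out *string) error {
	if in == "" {
		return errors.New("empty spec")
	}
	*out = in
	return nil
}

// toV1alpha1 shows the suggested handling: propagate conversion failures
// instead of ignoring them, wrapping with %w for context.
func toV1alpha1(spec string) (string, error) {
	var out string
	if err := convertSpec(spec, &out); err != nil {
		return "", fmt.Errorf("spec conversion failed: %w", err)
	}
	return out, nil
}

func main() {
	if _, err := toV1alpha1(""); err != nil {
		fmt.Println(err) // failure surfaces instead of vanishing
	}
	got, _ := toV1alpha1("v1beta1-spec")
	fmt.Println(got)
}
```

Because `%w` wraps the original error, callers can still inspect it with `errors.Is`/`errors.As` after the adapter returns it.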
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/addonfactory/template_agentaddon.go`:
- Around line 134-135: The code uses context.TODO() instead of the propagated
ctx, breaking cancellation propagation; update the relevant method signatures to
accept ctx where missing (e.g., ensure getBuiltinValues and any helper methods
accept a context.Context parameter), then replace context.TODO() with the ctx
passed into Manifests and thread that ctx through calls to AgentInstallNamespace
and getBuiltinValues (referencing AgentInstallNamespace, getBuiltinValues, and
Manifests) so all calls use the same context for timeouts/cancellation.
In `@pkg/addonmanager/controllers/registration/controller.go`:
- Around line 314-320: Update the priority comment above the namespace
assignment to reflect the current logic: the priority is default < annotation <
AgentInstallNamespace function (remove the outdated reference to
registrationOption.Namespace). Mention the specific symbols:
managedClusterAddonCopy.Status.Namespace,
addonapiv1beta1.InstallNamespaceAnnotation, and the AgentInstallNamespace
function so reviewers can locate the logic and confirm the comment matches the
code.
In `@pkg/agent/adapter.go`:
- Around line 165-175: The code in the PermissionConfig error handling block
returns the original err even after converting a v1alpha1 SubjectNotReadyError;
change the logic in the adapter.go function that calls reg.PermissionConfig
(which uses ToV1alpha1Addon) so that if errors.As(err,
&agentv1alpha1.SubjectNotReadyError) is true you return the converted
&SubjectNotReadyError{} immediately, otherwise return the original err; update
the if/return flow in that block to return the converted error instead of
falling through to return err.
In `@pkg/utils/addon_config_test.go`:
- Around line 170-174: The test is discarding the returned error from
AgentInstallNamespaceFromDeploymentConfigFunc; change the call in the subtest to
capture both return values (ns, err := nsFunc(context.TODO(), c.mca)) and assert
the error is nil before comparing namespace (use assert.NoError(t, err) or
require.NoError(t, err)), then keep assert.Equal(t, c.expected, ns) to ensure
failures surface correctly; update the subtests that use nsFunc to follow this
pattern.
In `@README.md`:
- Around line 23-24: The README links for ClusterManagementAddOn and
ManagedClusterAddOn point to non-existent addon/v1beta1 files; update both
Markdown link URLs to reference the existing addon/v1alpha1 API files (replace
addon/v1beta1/.../types_clustermanagementaddon.go and
types_managedclusteraddon.go with
addon/v1alpha1/.../types_clustermanagementaddon.go and
types_managedclusteraddon.go) so the ClusterManagementAddOn and
ManagedClusterAddOn links resolve to the current repository files.
In `@test/e2e/helloworld_test.go`:
- Around line 531-533: The Create calls for AddOnDeploymentConfigs are using
managedClusterName instead of the helper's namespace, causing a namespace
mismatch; update the Create(...) invocations to use the same namespace variable
passed to the existence check (i.e., call
hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Create(...)) and
make the same change for the other similar blocks that currently use
managedClusterName (the other AddOnDeploymentConfigs Create usages referenced in
the comment).
In `@test/integration-test.mk`:
- Around line 1-6: The Makefile currently streams and executes a remote script
via ENSURE_ENVTEST_SCRIPT in the envtest-setup target (setting
KUBEBUILDER_ASSETS by piping curl to bash); replace that pattern by first
pinning ENSURE_ENVTEST_SCRIPT to a specific tag/commit (not main) or vendor the
script into the repo, then change envtest-setup to curl -fsSL to a local file,
verify its checksum/signature, and only then execute it to populate
KUBEBUILDER_ASSETS (update references to ENSURE_ENVTEST_SCRIPT, envtest-setup,
and KUBEBUILDER_ASSETS accordingly).
In `@test/integration/kube/assertion_test.go`:
- Around line 95-100: The closure inside updateClusterManagementAddOn mutates
the fetched object (old) before checking the error from hubAddonClient...Get,
causing panics on transient read failures; change the closure to first check err
(return false so Eventually retries) and only mutate old when err == nil, and
apply the same fix to the other identical closure in this file that uses
hubAddonClient.AddonV1beta1().ClusterManagementAddOns().Get/Update.
- Around line 136-158: The current logic that updates addon.Status.Registrations
only sets Driver when it finds an existing KubeClient entry and silently does
nothing if registrations are present but contain no KubeClient; change the
routine that iterates addon.Status.Registrations to track whether a KubeClient
was found, and after the loop append a new
addonapiv1beta1.RegistrationConfig{Type: addonapiv1beta1.KubeClient, KubeClient:
&addonapiv1beta1.KubeClientConfig{Driver: driver}} if none was found, then call
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).UpdateStatus(...)
as before; ensure the code still handles the empty-slice case as it currently
does.
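The track-then-append flow described above can be sketched with simplified stand-in types; the type and field names here are assumptions for illustration, not the exact v1beta1 API surface.

```go
package main

import "fmt"

// Simplified stand-ins for the v1beta1 registration types.
type KubeClientConfig struct{ Driver string }
type RegistrationConfig struct {
	Type       string
	KubeClient *KubeClientConfig
}

const KubeClient = "KubeClient"

// ensureKubeClientDriver sets Driver on every existing KubeClient entry, or
// appends a new one if none is present, so the update never silently no-ops.
func ensureKubeClientDriver(regs []RegistrationConfig, driver string) []RegistrationConfig {
	found := false
	for i := range regs {
		if regs[i].Type == KubeClient && regs[i].KubeClient != nil {
			regs[i].KubeClient.Driver = driver
			found = true
		}
	}
	if !found {
		regs = append(regs, RegistrationConfig{
			Type:       KubeClient,
			KubeClient: &KubeClientConfig{Driver: driver},
		})
	}
	return regs
}

func main() {
	regs := ensureKubeClientDriver(nil, "csr")
	fmt.Println(len(regs), regs[0].KubeClient.Driver) // 1 csr
}
```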
---
Outside diff comments:
In `@pkg/addonmanager/controllers/agentdeploy/hosted_sync.go`:
- Around line 47-54: When installMode changes from hosted to default in the
hosted-mode check inside HostedModeInfoFunc handling, also clear the hosted-only
status conditions ManagedClusterAddOnHostingClusterValidity and
ManagedClusterAddOnHostingManifestApplied from the addon before returning;
locate the branch that calls s.cleanupDeployWork(ctx, addon) and
addonRemoveFinalizer(addon, addonapiv1beta1.AddonHostingManifestFinalizer) and
add logic to remove or update those two conditions on the addon status (e.g.,
via the same status condition helpers your codebase uses) so stale hosted status
is not left on the resource.
In `@pkg/addonmanager/controllers/cmaconfig/controller_test.go`:
- Around line 347-353: The test sets c.cmas (ClusterManagementAddOn objects) but
uses the ManagedClusterAddOns informer; update the informer to the
ClusterManagementAddOns informer so the test mirrors production wiring. In the
setup where addonInformers is created (using
addoninformers.NewSharedInformerFactory and assigned to addonInformer), replace
Addon().V1beta1().ManagedClusterAddOns().Informer() with
Addon().V1beta1().ClusterManagementAddOns().Informer() so
cmaConfigController.clusterManagementAddonIndexer receives the correct informer
indexer.
In `@pkg/addonmanager/controllers/cmaconfig/controller.go`:
- Around line 175-179: The code is swallowing errors from c.getConfigSpecHash
(called when setting
cma.Status.DefaultConfigReferences[i].DesiredConfig.SpecHash) by returning nil
instead of propagating the error; update both occurrences (the one assigning
SpecHash at the first call and the second occurrence around the later
assignment) to return the encountered error (e.g., return err) so the caller
sees failures from getConfigSpecHash rather than masking them, and ensure the
function signature allows returning the error up the call chain.
In `@pkg/addonmanager/controllers/cmamanagedby/controller.go`:
- Around line 84-93: The patch currently clears the lifecycle annotation value
to an empty string which leaves the CMA in an ambiguous state; update the logic
in the block that operates on cmaCopy (the DeepCopy) so that instead of setting
cmaCopy.Annotations[addonapiv1alpha1.AddonLifecycleAnnotationKey] = "" you
remove the key entirely (e.g., delete(cmaCopy.Annotations,
addonapiv1alpha1.AddonLifecycleAnnotationKey)) before calling
c.addonPatcher.PatchLabelAnnotations(ctx, cmaCopy, cmaCopy.ObjectMeta,
cma.ObjectMeta) so the addon manager will correctly recognize the resource as
manager-owned.
In `@pkg/addonmanager/controllers/registration/controller.go`:
- Line 299: Update the error message that still references the old field name
"CSRConfigurations" to use the new v1beta1 name "Configurations" in the
fmt.Errorf call used when patching the managedclusteraddon status; locate the
fmt.Errorf(...) that returns "failed to patch status condition(CSRConfigurations
nil) of managedclusteraddon: %w" and change the literal to "failed to patch
status condition(Configurations nil) of managedclusteraddon: %w" so logs reflect
the correct field name.
In `@pkg/agent/interface.go`:
- Around line 101-107: The comment for ManifestConfigs incorrectly references a
removed "Updaters" field; update the doc comment for ManifestConfigs (type
ManifestConfigs []workapiv1.ManifestConfigOption in AgentAddonOptions) to remove
the sentence about overriding the "Updaters" field and instead describe the
current behavior (e.g., that ManifestConfigs override any update strategy
defined on the manifest itself or other remaining fields) or simply remove that
clause so the comment only documents merging feedback rules and JSONPath
deduplication.
In `@pkg/utils/addon_config_test.go`:
- Around line 98-132: Restore the commented negative-path test case in
pkg/utils/addon_config_test.go: uncomment the test block that constructs a
AddOnDeploymentConfig via newTestAddOnDeploymentConfigGetter and a
ManagedClusterAddOn with Status.ConfigReferences containing
DesiredConfig.SpecHash set to "wronghash", and assert
AgentInstallNamespaceFromDeploymentConfigFunc (the function under test) returns
the empty string (expected ""). This re-adds the mismatched-spec-hash case so
the test suite fails if AgentInstallNamespaceFromDeploymentConfigFunc ignores
DesiredConfig.SpecHash.
In `@test/e2e/helloworld_helm_test.go`:
- Around line 49-58: The Get/Create block returns the original Get error even
when Create succeeds; update the logic in the
hubAddOnClient.AddonV1beta1().ManagedClusterAddOns(managedClusterName)
Get/Create handling so that when errors.IsNotFound(err) and the subsequent
Create(...) returns nil (cerr == nil) you do not return the stale err—either
return nil immediately after successful Create or set err = nil/continue
execution—only return err for non-NotFound failures or if Create fails (cerr !=
nil); reference the Get, Create calls and variables err, cerr,
managedClusterName, helloWorldHelmAddonName, helloworldhelmAddon to locate the
code to change.
---
Nitpick comments:
In `@pkg/addonfactory/addonfactory.go`:
- Around line 156-163: Update the GoDoc for WithAgentInstallNamespace on
AgentAddonFactory to reflect that it now sets the top-level
AgentAddonOptions.AgentInstallNamespace callback directly (not
registrationOption.AgentInstallNamespace); mention the signature nsFunc func(ctx
context.Context, addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) and
describe that it overrides the default built-in namespace value by assigning it
to f.agentAddonOptions.AgentInstallNamespace and returning the factory for
chaining.
In `@pkg/addonfactory/template_agentaddon_test.go`:
- Around line 34-35: The test currently swallows the error from
clusterv1alpha1.Install(scheme) which defers a failure to manifest decoding;
change the call to check and fail immediately—capture the returned error from
clusterv1alpha1.Install(scheme) and call t.Fatalf (or use your test helper like
require.NoError) with a clear message if err != nil so the test fails fast;
update the code around the scheme := runtime.NewScheme() and
clusterv1alpha1.Install(scheme) references to perform this explicit error check.
In `@pkg/addonfactory/test_helper.go`:
- Around line 20-27: The code creates an annotations map and assigns
AnnotationValuesName twice; remove the redundant second assignment so
AnnotationValuesName is only set once when initializing annotations. Update the
block that builds annotations (the map literal and subsequent conditional
setting of addonapiv1beta1.InstallNamespaceAnnotation) to omit the extra
annotations[AnnotationValuesName] = values line so only the initial map literal
assignment remains.
In `@pkg/addonmanager/controllers/addonconfig/controller.go`:
- Around line 216-219: Remove the redundant nil check for
configReference.DesiredConfig inside the inner loop: the loop already continues
when configReference.DesiredConfig is nil earlier, so delete the conditional "if
configReference.DesiredConfig == nil { continue }" (or its equivalent) within
the loop that iterates over addon.Spec.Configs (variables: addonconfig,
configReference, DesiredConfig) to avoid unreachable code and keep behavior
unchanged.
In `@pkg/addonmanager/controllers/registration/controller.go`:
- Around line 139-142: Update the inline comment above the driver selection
logic to explicitly state that existingReg.KubeClient.Driver takes precedence
over config.KubeClient.Driver (i.e., once an agent has set a driver the hub-side
config is ignored), explain this is intentional for token-based flows where the
agent should control the driver, and note the potential for confusion for addon
authors expecting hub config precedence; reference the variables used (driver,
existingReg, existingReg.KubeClient.Driver, config.KubeClient.Driver) so future
readers understand why the override is implemented in the controller.go
registration logic.
- Around line 71-107: The current findExistingRegistration matches KubeClient
entries purely by positional index (in newConfigs vs existingRegistrations),
which is fragile; change findExistingRegistration to prefer matching KubeClient
registrations by a stable identifier extracted from the concrete
*agent.KubeClientRegistration (e.g., a client/cluster/name/serviceAccount field
on the KubeClientRegistration struct) by first casting newConfigs[index] to
*agent.KubeClientRegistration and comparing that stable key against each
existingRegistrations[j] with Type == KubeClient (and
existingRegistrations[j].KubeClient-equivalent field), and only fall back to the
current index-based logic if no stable key exists; update comments in
findExistingRegistration and add/adjust tests to cover key-based matching and
the fallback path.
In `@pkg/agent/adapter.go`:
- Around line 196-199: The conversion calls currently swallow returned errors
from Convert_v1beta1_ManagedClusterAddOnSpec_To_v1alpha1_ManagedClusterAddOnSpec
and
Convert_v1beta1_ManagedClusterAddOnStatus_To_v1alpha1_ManagedClusterAddOnStatus;
change the code to capture and handle those errors instead of assigning to `_`.
Specifically, replace the `_ =` calls with error checks (e.g. err :=
addonv1beta1.Convert_v1beta1_ManagedClusterAddOnSpec_To_v1alpha1_ManagedClusterAddOnSpec(...);
if err != nil { return nil, fmt.Errorf("spec conversion failed: %w", err) }) and
do the same for the status conversion, updating the enclosing function signature
to return an error (or, if changing the signature is infeasible, log the error
at debug level using the package logger) and update callers accordingly.
In `@pkg/utils/addon_config.go`:
- Around line 39-63: AgentInstallNamespaceFromDeploymentConfigFunc currently
ignores the incoming ctx when calling GetDesiredAddOnDeploymentConfig; update
GetDesiredAddOnDeploymentConfig to add a context-aware variant (e.g.,
GetDesiredAddOnDeploymentConfigWithContext(ctx, ...)) or overload it to accept
ctx, then call that new context-aware function from inside the returned closure
(in AgentInstallNamespaceFromDeploymentConfigFunc) passing the received ctx so
cancellation/deadlines are respected; update all references and tests to use the
new function name where appropriate.
In `@test/integration/cloudevents/agent_hosting_hook_deploy_test.go`:
- Around line 178-214: The test currently asserts manifest equality by raw-byte
comparison to deploymentHostingJson which can miss controllers that ignore
addonapiv1beta1.InstallNamespaceAnnotation; instead parse the
work.Spec.Workload.Manifests[0].Raw (and the hook-work manifest later) into an
object (e.g., unstructured map or Unstructured) and assert that
metadata.annotations[addonapiv1beta1.InstallNamespaceAnnotation] equals the
expected install namespace ("default") and/or that metadata.name/namespace
fields match expected values; update the checks around agentWorkLister ->
works[0] and the later hook-work assertion to inspect parsed manifest
annotations rather than raw-byte inequality so the test fails if the controller
does not set the InstallNamespaceAnnotation.
In `@test/integration/kube/agent_hosting_hook_deploy_test.go`:
- Around line 271-279: Replace the hard-coded string "HookManifestCompleted"
with the v1beta1 constant
addonapiv1beta1.ManagedClusterAddOnHookManifestCompleted in the assertion that
checks addon.Status.Conditions (the call to meta.IsStatusConditionFalse) so the
test uses the same ManagedClusterAddOnHookManifestCompleted constant used
elsewhere (e.g., later in this file) when evaluating the addon retrieved via
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(...).Get(...).
In `@test/integration/kube/csr_test.go`:
- Around line 11-14: Reorder the import block so it follows Go conventions:
group standard library imports ("net", "time") together, add a blank line, then
place external package imports like
"open-cluster-management.io/addon-framework/pkg/agent" in their own group;
update the import section in the csr_test.go import block accordingly to
maintain grouping and spacing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1e37b713-0269-41ca-ae1a-25a287ea8feb
⛔ Files ignored due to path filters (12)
- `go.sum` is excluded by `!**/*.sum`
- `vendor/modules.txt` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/addon/v1beta1/types_managedclusteraddon.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/operator/v1/0000_01_operator.open-cluster-management.io_clustermanagers.crd.yaml` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/operator/v1/types_clustermanager.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/api/operator/v1/zz_generated.deepcopy.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/sdk-go/pkg/cloudevents/clients/errors/errors.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/sdk-go/pkg/cloudevents/clients/store/watcher.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/sdk-go/pkg/cloudevents/clients/work/agent/client/manifestwork.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/sdk-go/pkg/cloudevents/clients/work/clientholder.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/sdk-go/pkg/cloudevents/clients/work/store/local.go` is excluded by `!vendor/**`
- `vendor/open-cluster-management.io/sdk-go/pkg/cloudevents/generic/options/v2/pubsub/transport.go` is excluded by `!vendor/**`
📒 Files selected for processing (90)
README.md, cmd/example/helloworld/main.go, cmd/example/helloworld_helm/main.go, examples/helloworld/helloworld.go, examples/helloworld/helloworld_test.go, examples/helloworld_agent/agent.go, examples/helloworld_helm/helloworld_helm.go, examples/helloworld_helm/helloworld_helm_test.go, examples/rbac/rbac.go, go.mod, pkg/addonfactory/addondeploymentconfig.go, pkg/addonfactory/addondeploymentconfig_test.go, pkg/addonfactory/addonfactory.go, pkg/addonfactory/helm_agentaddon.go, pkg/addonfactory/helm_agentaddon_test.go, pkg/addonfactory/helper.go, pkg/addonfactory/template_agentaddon.go, pkg/addonfactory/template_agentaddon_test.go, pkg/addonfactory/test_helper.go, pkg/addonmanager/addontesting/helpers.go, pkg/addonmanager/base_manager.go, pkg/addonmanager/cloudevents/manager.go, pkg/addonmanager/constants/constants.go, pkg/addonmanager/controllers/addonconfig/controller.go, pkg/addonmanager/controllers/addonconfig/controller_test.go, pkg/addonmanager/controllers/agentdeploy/controller.go, pkg/addonmanager/controllers/agentdeploy/default_hook_sync.go, pkg/addonmanager/controllers/agentdeploy/default_hook_sync_test.go, pkg/addonmanager/controllers/agentdeploy/default_sync.go, pkg/addonmanager/controllers/agentdeploy/default_sync_test.go, pkg/addonmanager/controllers/agentdeploy/healthcheck_sync.go, pkg/addonmanager/controllers/agentdeploy/healthcheck_sync_test.go, pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync.go, pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync_test.go, pkg/addonmanager/controllers/agentdeploy/hosted_sync.go, pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go, pkg/addonmanager/controllers/agentdeploy/util_test.go, pkg/addonmanager/controllers/agentdeploy/utils.go, pkg/addonmanager/controllers/certificate/csrapprove.go, pkg/addonmanager/controllers/certificate/csrapprove_test.go, pkg/addonmanager/controllers/certificate/csrsign.go, pkg/addonmanager/controllers/certificate/csrsign_test.go, pkg/addonmanager/controllers/cmaconfig/controller.go, pkg/addonmanager/controllers/cmaconfig/controller_test.go, pkg/addonmanager/controllers/cmamanagedby/controller.go, pkg/addonmanager/controllers/cmamanagedby/controller_test.go, pkg/addonmanager/controllers/registration/controller.go, pkg/addonmanager/controllers/registration/controller_test.go, pkg/addonmanager/manager.go, pkg/agent/adapter.go, pkg/agent/adapter_test.go, pkg/agent/interface.go, pkg/agent/registration.go, pkg/agent/v1alpha1/interface.go, pkg/index/index.go, pkg/utils/addon.go, pkg/utils/addon_config.go, pkg/utils/addon_config_test.go, pkg/utils/addon_test.go, pkg/utils/csr_helper_test.go, pkg/utils/csr_helpers.go, pkg/utils/gvr.go, pkg/utils/helpers.go, pkg/utils/helpers_test.go, pkg/utils/image.go, pkg/utils/image_test.go, pkg/utils/permission.go, pkg/utils/permission_test.go, pkg/utils/probe_helper.go, test/e2e/helloworld_helm_test.go, test/e2e/helloworld_test.go, test/e2ehosted/helloworld_hosted_test.go, test/e2etoken/helloworld_token_test.go, test/integration-test.mk, test/integration/cloudevents/agent_deploy_test.go, test/integration/cloudevents/agent_hook_deploy_test.go, test/integration/cloudevents/agent_hosting_deploy_test.go, test/integration/cloudevents/agent_hosting_hook_deploy_test.go, test/integration/cloudevents/suite_test.go, test/integration/kube/agent_deploy_test.go, test/integration/kube/agent_hook_deploy_test.go, test/integration/kube/agent_hosting_deploy_test.go, test/integration/kube/agent_hosting_hook_deploy_test.go, test/integration/kube/assertion_test.go, test/integration/kube/cluster_management_addon_test.go, test/integration/kube/csr_test.go, test/integration/kube/deleted_managed_cluster_create_addon_test.go, test/integration/kube/multiworks_test.go, test/integration/kube/registration_test.go, test/integration/kube/suite_test.go
```diff
+	// Priority (lowest to highest): default < annotation < registrationOption.Namespace < AgentInstallNamespace function
+	managedClusterAddonCopy.Status.Namespace = "open-cluster-management-agent-addon"

-	// Set the default namespace to registrationOption.Namespace
-	if len(registrationOption.Namespace) > 0 {
-		managedClusterAddonCopy.Status.Namespace = registrationOption.Namespace
-	}

-	if len(managedClusterAddonCopy.Spec.InstallNamespace) > 0 {
-		managedClusterAddonCopy.Status.Namespace = managedClusterAddonCopy.Spec.InstallNamespace
+	// Override with annotation if present
+	if installNs, ok := managedClusterAddonCopy.Annotations[addonapiv1beta1.InstallNamespaceAnnotation]; ok && len(installNs) > 0 {
+		managedClusterAddonCopy.Status.Namespace = installNs
	}
```
Namespace priority comment is incomplete.
The comment on line 314 mentions registrationOption.Namespace in the priority chain, but this field no longer exists in the v1beta1 RegistrationOption. The actual priority is: default < annotation < AgentInstallNamespace function.
📝 Suggested comment fix
// explicitly set the default namespace value, since the mca.spec.installNamespace is deprceated and
// the addonDeploymentConfig.spec.agentInstallNamespace could be empty
- // Priority (lowest to highest): default < annotation < registrationOption.Namespace < AgentInstallNamespace function
+ // Priority (lowest to highest): default < annotation < AgentInstallNamespace function
      managedClusterAddonCopy.Status.Namespace = "open-cluster-management-agent-addon"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/addonmanager/controllers/registration/controller.go` around lines 314 -
320, Update the priority comment above the namespace assignment to reflect the
current logic: the priority is default < annotation < AgentInstallNamespace
function (remove the outdated reference to registrationOption.Namespace).
Mention the specific symbols: managedClusterAddonCopy.Status.Namespace,
addonapiv1beta1.InstallNamespaceAnnotation, and the AgentInstallNamespace
function so reviewers can locate the logic and confirm the comment matches the
code.
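The corrected priority chain (default < annotation < AgentInstallNamespace function) can be sketched as a small helper; the annotation key and the function signature here are simplified assumptions, not the framework's real API.

```go
package main

import "fmt"

const defaultNS = "open-cluster-management-agent-addon"

// installNSAnnotation is a stand-in for addonapiv1beta1.InstallNamespaceAnnotation.
const installNSAnnotation = "addon.open-cluster-management.io/namespace"

// resolveInstallNamespace applies the priority: default < annotation <
// AgentInstallNamespace function. Later sources override earlier ones only
// when they yield a non-empty value.
func resolveInstallNamespace(annotations map[string]string, nsFunc func() (string, error)) string {
	ns := defaultNS
	if v, ok := annotations[installNSAnnotation]; ok && v != "" {
		ns = v
	}
	if nsFunc != nil {
		if v, err := nsFunc(); err == nil && v != "" {
			ns = v
		}
	}
	return ns
}

func main() {
	fmt.Println(resolveInstallNamespace(nil, nil)) // open-cluster-management-agent-addon
	fmt.Println(resolveInstallNamespace(map[string]string{installNSAnnotation: "from-annotation"}, nil))
	fmt.Println(resolveInstallNamespace(map[string]string{installNSAnnotation: "from-annotation"},
		func() (string, error) { return "from-func", nil })) // from-func
}
```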
```go
	err := reg.PermissionConfig(cluster, ToV1alpha1Addon(addon))
	if err != nil {
		// Convert v1alpha1 SubjectNotReadyError to v1beta1 SubjectNotReadyError so
		// controllers that check for the v1beta1 type work correctly.
		var v1alpha1SubjectErr *agentv1alpha1.SubjectNotReadyError
		if errors.As(err, &v1alpha1SubjectErr) {
			return &SubjectNotReadyError{}
		}
	}
	return err
}
```
Restructure the error return for clarity.
The `errors.As` branch already returns the converted v1beta1 `SubjectNotReadyError` early, so the behavior is correct; however, both the non-matching-error path and the success path (err == nil) fall through to the shared `return err` on line 174. Returning `err` inside the `if err != nil` block and `nil` at the end makes the two outcomes explicit.
🐛 Proposed fix
func (a *v1alpha1Adapter) SetHubPermissions(_ context.Context, cluster *clusterv1.ManagedCluster,
addon *addonapiv1beta1.ManagedClusterAddOn) error {
reg := a.inner.GetAgentAddonOptions().Registration
if reg == nil || reg.PermissionConfig == nil {
return nil
}
err := reg.PermissionConfig(cluster, ToV1alpha1Addon(addon))
if err != nil {
// Convert v1alpha1 SubjectNotReadyError to v1beta1 SubjectNotReadyError so
// controllers that check for the v1beta1 type work correctly.
var v1alpha1SubjectErr *agentv1alpha1.SubjectNotReadyError
if errors.As(err, &v1alpha1SubjectErr) {
return &SubjectNotReadyError{}
}
+ return err
}
- return err
+ return nil
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	err := reg.PermissionConfig(cluster, ToV1alpha1Addon(addon))
	if err != nil {
		// Convert v1alpha1 SubjectNotReadyError to v1beta1 SubjectNotReadyError so
		// controllers that check for the v1beta1 type work correctly.
		var v1alpha1SubjectErr *agentv1alpha1.SubjectNotReadyError
		if errors.As(err, &v1alpha1SubjectErr) {
			return &SubjectNotReadyError{}
		}
		return err
	}
	return nil
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agent/adapter.go` around lines 165 - 175, The code in the
PermissionConfig error handling block returns the original err even after
converting a v1alpha1 SubjectNotReadyError; change the logic in the adapter.go
function that calls reg.PermissionConfig (which uses ToV1alpha1Addon) so that if
errors.As(err, &agentv1alpha1.SubjectNotReadyError) is true you return the
converted &SubjectNotReadyError{} immediately, otherwise return the original
err; update the if/return flow in that block to return the converted error
instead of falling through to return err.
- **[ClusterManagementAddOn](https://github.com/open-cluster-management-io/api/blob/main/addon/v1beta1/types_clustermanagementaddon.go)**: Hub cluster resource that defines addon metadata and installation strategy
- **[ManagedClusterAddOn](https://github.com/open-cluster-management-io/api/blob/main/addon/v1beta1/types_managedclusteraddon.go)**: Managed cluster resource that represents addon installation state
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Verify that the v1beta1 API definition files exist in the api repository
echo "Checking ClusterManagementAddOn v1beta1 type definition..."
curl -s -o /dev/null -w "%{http_code}" \
"https://github.com/open-cluster-management-io/api/blob/main/addon/v1beta1/types_clustermanagementaddon.go"
echo ""
echo "Checking ManagedClusterAddOn v1beta1 type definition..."
curl -s -o /dev/null -w "%{http_code}" \
"https://github.com/open-cluster-management-io/api/blob/main/addon/v1beta1/types_managedclusteraddon.go"
echo ""
echo "Expected: HTTP 200 for both URLs"

Repository: open-cluster-management-io/addon-framework
Length of output: 1850
🌐 Web query:
open-cluster-management-io/api addon v1beta1 types_clustermanagementaddon.go site:github.com
💡 Result:
The file open-cluster-management-io/api addon v1beta1 types_clustermanagementaddon.go likely refers to the Go source file defining types for the ClusterManagementAddon resource in the addon API group at version v1beta1 within the Open Cluster Management (OCM) project's api repository (https://github.com/open-cluster-management-io/api). This file would contain struct definitions such as type ClusterManagementAddon struct { ... } with fields like Spec, Status, etc., along with supporting types for addon metadata, install strategies (e.g., Manual, Placements), supported configs, and conditions. It follows standard Kubernetes API conventions using metav1.TypeMeta, ObjectMeta, etc. Evidence shows:
- The api repo defines core OCM APIs including addon/v1alpha1 with types_managedclusteraddon.go for ManagedClusterAddOn and similar for ClusterManagementAddOn.
- Multiple issues/PRs reference upgrading addon APIs from v1alpha1 to v1beta1, including conversion webhooks (ocm#1288, #826), confirming v1beta1 exists for addon types like ClusterManagementAddOn.
- Examples use apiVersion: addon.open-cluster-management.io/v1alpha1 for ClusterManagementAddOn, but v1beta1 is in development/upgraded state as of 2025-2026 releases.
- No direct snippet of the exact file found, but structure inferred from related v1alpha1 files, framework imports (addonapiv1alpha1 "open-cluster-management.io/api/addon/v1alpha1"), and OCM addon docs.

The exact file content is in https://github.com/open-cluster-management-io/api/tree/main/addon/v1beta1/types_clustermanagementaddon.go (presumed path based on convention; v1alpha1 uses types_managedclusteraddon.go). For latest, check the main branch of the api repo.
Citations:
- 1: https://github.com/open-cluster-management-io/api
- 2: https://github.com/open-cluster-management-io/api/blob/main/addon/v1alpha1/types_managedclusteraddon.go
- 3: Add addon conversion webhook for v1alpha1/v1beta1 API migration ocm#1288
- 4: https://github.com/open-cluster-management-io/addon-framework/blob/main/pkg/addonfactory/addonfactory.go
- 5: Allow addons to be installed by label, but also manually ocm#616
🌐 Web query:
open-cluster-management-io/api addon v1beta1 types_managedclusteraddon.go site:github.com
💡 Result:
No file named types_managedclusteraddon.go exists in v1beta1 path in the open-cluster-management-io/api repository. The ManagedClusterAddon types are defined in addon/v1alpha1/types_managedclusteraddon.go at https://github.com/open-cluster-management-io/api/blob/main/addon/v1alpha1/types_managedclusteraddon.go. All references in the addon-framework and documentation use addonapiv1alpha1 "open-cluster-management.io/api/addon/v1alpha1". Other APIs like ManagedServiceAccount have v1beta1 versions in different groups (authentication.open-cluster-management.io/v1beta1), but the core addon API remains v1alpha1. The queried file likely refers to this v1alpha1 file, as no v1beta1 equivalent was found across searches of the repo, releases, and related projects.
Citations:
- 1: https://github.com/open-cluster-management-io/api/blob/main/addon/v1alpha1/types_managedclusteraddon.go
- 2: https://github.com/open-cluster-management-io/addon-framework/blob/main/pkg/addonfactory/addonfactory.go
- 3: https://github.com/open-cluster-management-io/api
- 4: Upgrade the managedserviceaccount API to v1beta1 managed-serviceaccount#77
Update documentation links to reference existing v1alpha1 API files, not v1beta1.
The GitHub links in lines 23-24 point to v1beta1 files (types_clustermanagementaddon.go and types_managedclusteraddon.go in addon/v1beta1/) that do not exist in the open-cluster-management-io/api repository. Current references throughout the codebase use addon/v1alpha1/ versions. Either update the links to reference the existing v1alpha1 files or wait until v1beta1 API definitions are available in the repository before merging this documentation change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 23 - 24, The README links for ClusterManagementAddOn
and ManagedClusterAddOn point to non-existent addon/v1beta1 files; update both
Markdown link URLs to reference the existing addon/v1alpha1 API files (replace
addon/v1beta1/.../types_clustermanagementaddon.go and
types_managedclusteraddon.go with
addon/v1alpha1/.../types_clustermanagementaddon.go and
types_managedclusteraddon.go) so the ClusterManagementAddOn and
ManagedClusterAddOn links resolve to the current repository files.
I believe there are v1beta1 definitions.
🧩 Analysis chain
Let me directly verify the URLs to confirm the files exist.
🏁 Script executed:
#!/bin/bash
echo "Checking ClusterManagementAddOn v1beta1..."
STATUS1=$(curl -s -o /dev/null -w "%{http_code}" \
"https://raw.githubusercontent.com/open-cluster-management-io/api/main/addon/v1beta1/types_clustermanagementaddon.go")
echo "HTTP status: $STATUS1"
echo "Checking ManagedClusterAddOn v1beta1..."
STATUS2=$(curl -s -o /dev/null -w "%{http_code}" \
"https://raw.githubusercontent.com/open-cluster-management-io/api/main/addon/v1beta1/types_managedclusteraddon.go")
echo "HTTP status: $STATUS2"
Length of output: 116
@qiujian16 You are correct — both files exist and return HTTP 200:
My earlier review comment was mistaken — the previous web search failed to confirm their existence, but the files are present in the open-cluster-management-io/api repository. The README links are valid and no changes are needed. Apologies for the false alarm!
```diff
 	_, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Get(context.Background(), deployConfigName, metav1.GetOptions{})
 	if errors.IsNotFound(err) {
-		if _, err := hubAddOnClient.AddonV1alpha1().AddOnDeploymentConfigs(managedClusterName).Create(
+		if _, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(managedClusterName).Create(
```
Use the helper's namespace for the create calls.
Each helper does the existence check with AddOnDeploymentConfigs(namespace) but creates with AddOnDeploymentConfigs(managedClusterName). If these helpers are ever reused with a different namespace, the request path and ObjectMeta.Namespace diverge and the create will fail.
Suggested fix
- if _, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(managedClusterName).Create(
+ if _, err := hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Create(Also applies to: 560-563, 587-590, 613-616, 656-660
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/e2e/helloworld_test.go` around lines 531 - 533, The Create calls for
AddOnDeploymentConfigs are using managedClusterName instead of the helper's
namespace, causing a namespace mismatch; update the Create(...) invocations to
use the same namespace variable passed to the existence check (i.e., call
hubAddOnClient.AddonV1beta1().AddOnDeploymentConfigs(namespace).Create(...)) and
make the same change for the other similar blocks that currently use
managedClusterName (the other AddOnDeploymentConfigs Create usages referenced in
the comment).
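The invariant behind this fix (probe and create under one namespace variable) can be shown in isolation. A reduced sketch with a map-based fake store; ensureDeployConfig and fakeStore are hypothetical stand-ins, not the generated AddOnDeploymentConfigs client:

```go
package main

import "fmt"

// key is namespace/name, mirroring how a namespaced client scopes requests.
type fakeStore map[string]string

// ensureDeployConfig checks and creates under the SAME namespace argument,
// so the request path and the object's namespace cannot diverge.
func ensureDeployConfig(store fakeStore, namespace, name, spec string) {
	k := namespace + "/" + name
	if _, found := store[k]; !found {
		store[k] = spec // create in the namespace we just probed
	}
}

func main() {
	store := fakeStore{}
	ensureDeployConfig(store, "cluster1", "deploy-config", "nodeSelector: {}")
	ensureDeployConfig(store, "cluster1", "deploy-config", "ignored") // already exists
	fmt.Println(store["cluster1/deploy-config"]) // nodeSelector: {}
}
```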
ENSURE_ENVTEST_SCRIPT := https://raw.githubusercontent.com/open-cluster-management-io/sdk-go/main/ci/envtest/ensure-envtest.sh

export KUBEBUILDER_ASSETS ?=$(TEST_TMP)/kubebuilder/bin

K8S_VERSION ?=1.30.0
SETUP_ENVTEST := $(shell go env GOPATH)/bin/setup-envtest

# download the kubebuilder-tools to get kube-apiserver binaries from it
ensure-kubebuilder-tools:
ifeq "" "$(wildcard $(KUBEBUILDER_ASSETS))"
	$(info Downloading kube-apiserver into '$(KUBEBUILDER_ASSETS)')
	mkdir -p '$(KUBEBUILDER_ASSETS)'
ifeq "" "$(wildcard $(SETUP_ENVTEST))"
	$(info Installing setup-envtest into '$(SETUP_ENVTEST)')
	go install sigs.k8s.io/controller-runtime/tools/setup-envtest@release-0.22
endif
	ENVTEST_K8S_PATH=$$($(SETUP_ENVTEST) use $(K8S_VERSION) --bin-dir $(KUBEBUILDER_ASSETS) -p path); \
	if [ -z "$$ENVTEST_K8S_PATH" ]; then \
		echo "Error: setup-envtest returned empty path"; \
		exit 1; \
	fi; \
	if [ ! -d "$$ENVTEST_K8S_PATH" ]; then \
		echo "Error: setup-envtest path does not exist: $$ENVTEST_K8S_PATH"; \
		exit 1; \
	fi; \
	cp -r $$ENVTEST_K8S_PATH/* $(KUBEBUILDER_ASSETS)/
else
	$(info Using existing kube-apiserver from "$(KUBEBUILDER_ASSETS)")
endif
.PHONY: ensure-kubebuilder-tools
.PHONY: envtest-setup
envtest-setup:
	$(eval export KUBEBUILDER_ASSETS=$(shell curl -fsSL $(ENSURE_ENVTEST_SCRIPT) | bash))
	@echo "KUBEBUILDER_ASSETS=$(KUBEBUILDER_ASSETS)"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the remote script URL is accessible and inspect its content
curl -fsSL "https://raw.githubusercontent.com/open-cluster-management-io/sdk-go/main/ci/envtest/ensure-envtest.sh" | head -50
echo "---"
echo "Checking if there's a tagged/versioned alternative..."
# Check releases/tags in the sdk-go repo
gh api repos/open-cluster-management-io/sdk-go/tags --jq '.[].name' | head -10
Repository: open-cluster-management-io/addon-framework
Length of output: 1850
🌐 Web query:
open-cluster-management-io/sdk-go repository GitHub releases tags
💡 Result:
The GitHub repository https://github.com/open-cluster-management-io/sdk-go does not have any formal GitHub Releases published. It is an active Go module repository (last push 2026-04-02) with versions managed via Go modules. Go module versions (from pkg.go.dev, as of 2026-04-03): - Stable releases: v1.2.0 (2026-01-28), v1.1.1 (2026-01-06) - Recent pre-releases: v1.2.1-0.20260323031834-e885ccee3f1b (2026-03-23, latest), v1.2.1-0.20260317083210-e8cd33ebb7cf (2026-03-17), and others up to March 2026. To view all tags/versions: https://github.com/open-cluster-management-io/sdk-go/tags or https://pkg.go.dev/open-cluster-management.io/sdk-go?tab=versions. This aligns with OCM project patterns where core repos like ocm and api use Go modules without GitHub release assets, while others like clusteradm have both.
Citations:
- 1: https://github.com/open-cluster-management-io/sdk-go
- 2: https://pkg.go.dev/open-cluster-management.io/sdk-go@v1.2.0
- 3: https://github.com/open-cluster-management-io/ocm
🌐 Web query:
curl bash security best practices supply chain risk mitigation
💡 Result:
Security best practices for using curl in bash scripts, with emphasis on mitigating supply chain risks, are derived from official curl documentation and expert analyses. ## General curl Security Best Practices (from official docs) - Always use HTTPS for transfers to ensure encryption and server authentication. Never use HTTP or unauthenticated protocols. {value: Always use HTTPS, citations: [5,12,15,16], confidence: high} - Verify TLS certificates: Do not use --insecure (-k) or disable verification. Use --cacert or custom CA bundles if needed. Restrict protocols with --proto. {value: Verify TLS certificates; avoid --insecure, citations: [5,12,15,16], confidence: high} - Limit redirects: Use --location-trusted sparingly; restrict protocols in redirects (default: HTTP/HTTPS/FTP/FTPS). {value: Limit redirects with --proto and avoid unrestricted auth, citations: [5,12], confidence: high} - Avoid command-line sensitive data: Use config files (-K) for URLs/passwords to prevent ps leaks. {value: Use -K for sensitive command-line args, citations: [5,12,15], confidence: high} - Restrict URL schemes: Use --proto to limit allowed protocols (e.g., --proto =https). Never trust user-provided URLs. {value: Use --proto to restrict schemes, citations: [5,12,16], confidence: high} - Disable unnecessary features: Avoid --anyauth; prefer specific safe auth like Digest. No file:// if untrusted input. {value: Prefer specific auth; disable risky options, citations: [5,12], confidence: high} - Timeouts: Set --max-time, --connect-timeout to prevent hangs. {value: Use timeouts, citations: [5,15], confidence: high} - Update curl regularly: Monitor CVEs (e.g., recent OpenSSL chain bypass CVE-2025-14819 fixed in 8.18.0). {value: Keep curl updated, citations: [8,22], confidence: high} ## Bash Scripting Specific Practices - Quote variables properly: Use double quotes for expansion, escape JSON carefully, or use jq for data: curl ... --data "$(jq -n --arg msg "$var" '{text: $msg}')". Avoid eval. 
{value: Proper quoting; use jq for JSON, citations: [17,20,21], confidence: high} - Arrays for args: Store curl args in arrays: args=(--header "Foo: bar" --data "$data"); "${args[@]}". Prevents word-splitting. {value: Use arrays for multi-word args, citations: [18,19], confidence: high} - Heredocs for data: --data @- <<EOF {json with "$var"} EOF. {value: Heredocs/printf for safe data, citations: [17,20], confidence: high} ## Supply Chain Risk Mitigation (curl | bash pattern) Avoid curl URL | bash entirely - it's a major supply chain vector: - Servers can detect curl|bash (timing, User-Agent) and serve malicious payloads different from browser view. Interrupted downloads execute partial malicious code. No integrity check. {value: Never pipe untrusted curl to bash, citations: [1,2,3,4], confidence: high} - Instead: 1. curl -O URL; review script; verify checksum (SHA256 from trusted source via different channel); bash script. {value: Download, inspect, checksum verify before execute, citations: [2,4,11], confidence: high} 2. Prefer package managers (apt/yum) with signatures. {value: Use signed packages over scripts, citations: [4,11], confidence: high} 3. For tools: Use signed releases, reproducible builds, SLSA attestations (curl project does reviews, 2FA, signed tags). Generate/publish SBOMs; use tools like GUAC for vuln blast radius. {value: Verify releases with signatures/SBOMs/patch planning, citations: [6,8,23], confidence: medium} 4. Sandbox: Run in containers/VMs; short-lived creds; monitor dev workflows. {value: Isolate execution, citations: [1,23], confidence: high} 5. Continuous monitoring: Scan deps, revoke unused tokens, MFA, branch protection. {value: MFA, token expiry, code scanning, citations: [23], confidence: high}
Citations:
- 1: https://javapro.io/2026/03/25/curl-bash-hacked-the-unseen-dangers-in-your-dev-lifecycle/
- 2: https://news.ycombinator.com/item?id=36697533
- 3: https://lukespademan.com/blog/the-dangers-of-curlbash
- 4: https://security.stackexchange.com/questions/213401/is-curl-something-sudo-bash-a-reasonably-safe-installation-method
- 5: https://curl.se/libcurl/c/libcurl-security.html
- 6: https://www.kusari.dev/blog/terror-of-curl
- 7: https://everything.curl.dev/project/security.html
- 8: https://medium.com/@esotericmeans/the-truth-about-curl-and-installing-software-securely-on-linux-63cd12e7befd
- 9: https://curl.se/docs/knownrisks.html
- 10: https://manpages.ubuntu.com/manpages/questing/man3/libcurl-security.3.html
- 11: https://curl.se/docs/manpage.html
- 12: https://stackoverflow.com/questions/17029902/using-curl-post-with-variables-defined-in-bash-script-functions
- 13: https://stackoverflow.com/questions/45948172/executing-a-curl-request-through-bash-script
- 14: https://stackoverflow.com/questions/37489695/how-to-use-a-variable-in-a-curl-request-with-bash
- 15: https://stackoverflow.com/questions/40852784/how-can-i-use-a-variable-in-a-curl-call-within-a-bash-script
- 16: http://curl.haxx.se/docs/CVE-2025-14819.html
- 17: https://github.blog/security/supply-chain-security/strengthening-supply-chain-security-preparing-for-the-next-malware-campaign/
Remote script execution via curl | bash introduces supply chain risk.
Fetching and executing a script from a remote URL at makefile parse time, particularly from the main branch, is a recognized security concern. This pattern allows servers to detect and serve different payloads specifically to curl|bash commands, and incomplete downloads can execute partial malicious code with no integrity verification.
Consider:
- Download to a file first, verify a checksum, then execute separately
- Pin to a specific commit or tag instead of main
- Vendor/check in the script directly with your codebase
This follows supply chain security best practices recommended for remote script execution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration-test.mk` around lines 1 - 6, The Makefile currently streams
and executes a remote script via ENSURE_ENVTEST_SCRIPT in the envtest-setup
target (setting KUBEBUILDER_ASSETS by piping curl to bash); replace that pattern
by first pinning ENSURE_ENVTEST_SCRIPT to a specific tag/commit (not main) or
vendor the script into the repo, then change envtest-setup to curl -fsSL to a
local file, verify its checksum/signature, and only then execute it to populate
KUBEBUILDER_ASSETS (update references to ENSURE_ENVTEST_SCRIPT, envtest-setup,
and KUBEBUILDER_ASSETS accordingly).
// Wait for registration controller to set registrations first
if len(addon.Status.Registrations) == 0 {
	addon.Status.Registrations = []addonapiv1beta1.RegistrationConfig{
		{
			Type: addonapiv1beta1.KubeClient,
			KubeClient: &addonapiv1beta1.KubeClientConfig{
				Driver: driver,
			},
		},
	}
} else {
	for i := range addon.Status.Registrations {
		if addon.Status.Registrations[i].Type == addonapiv1beta1.KubeClient {
			if addon.Status.Registrations[i].KubeClient == nil {
				addon.Status.Registrations[i].KubeClient = &addonapiv1beta1.KubeClientConfig{}
			}
			addon.Status.Registrations[i].KubeClient.Driver = driver
			break
		}
	}
}

_, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).UpdateStatus(context.Background(), addon, metav1.UpdateOptions{})
Append a kube-client registration when the slice already contains other types.
If addon.Status.Registrations is non-empty but has no KubeClient entry, this helper exits the loop without changing anything and still returns success. That makes the driver update a silent no-op as soon as another registration type is present.
Suggested fix
if len(addon.Status.Registrations) == 0 {
addon.Status.Registrations = []addonapiv1beta1.RegistrationConfig{
{
Type: addonapiv1beta1.KubeClient,
KubeClient: &addonapiv1beta1.KubeClientConfig{
Driver: driver,
},
},
}
} else {
+ found := false
for i := range addon.Status.Registrations {
- if addon.Status.Registrations[i].Type == addonapiv1beta1.KubeClient {
- if addon.Status.Registrations[i].KubeClient == nil {
- addon.Status.Registrations[i].KubeClient = &addonapiv1beta1.KubeClientConfig{}
- }
- addon.Status.Registrations[i].KubeClient.Driver = driver
- break
- }
+ if addon.Status.Registrations[i].Type != addonapiv1beta1.KubeClient {
+ continue
+ }
+ if addon.Status.Registrations[i].KubeClient == nil {
+ addon.Status.Registrations[i].KubeClient = &addonapiv1beta1.KubeClientConfig{}
+ }
+ addon.Status.Registrations[i].KubeClient.Driver = driver
+ found = true
+ break
+ }
+ if !found {
+ addon.Status.Registrations = append(addon.Status.Registrations, addonapiv1beta1.RegistrationConfig{
+ Type: addonapiv1beta1.KubeClient,
+ KubeClient: &addonapiv1beta1.KubeClientConfig{
+ Driver: driver,
+ },
+ })
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Wait for registration controller to set registrations first
if len(addon.Status.Registrations) == 0 {
	addon.Status.Registrations = []addonapiv1beta1.RegistrationConfig{
		{
			Type: addonapiv1beta1.KubeClient,
			KubeClient: &addonapiv1beta1.KubeClientConfig{
				Driver: driver,
			},
		},
	}
} else {
	found := false
	for i := range addon.Status.Registrations {
		if addon.Status.Registrations[i].Type != addonapiv1beta1.KubeClient {
			continue
		}
		if addon.Status.Registrations[i].KubeClient == nil {
			addon.Status.Registrations[i].KubeClient = &addonapiv1beta1.KubeClientConfig{}
		}
		addon.Status.Registrations[i].KubeClient.Driver = driver
		found = true
		break
	}
	if !found {
		addon.Status.Registrations = append(addon.Status.Registrations, addonapiv1beta1.RegistrationConfig{
			Type: addonapiv1beta1.KubeClient,
			KubeClient: &addonapiv1beta1.KubeClientConfig{
				Driver: driver,
			},
		})
	}
}

_, err = hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).UpdateStatus(context.Background(), addon, metav1.UpdateOptions{})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/kube/assertion_test.go` around lines 136 - 158, The current
logic that updates addon.Status.Registrations only sets Driver when it finds an
existing KubeClient entry and silently does nothing if registrations are present
but contain no KubeClient; change the routine that iterates
addon.Status.Registrations to track whether a KubeClient was found, and after
the loop append a new addonapiv1beta1.RegistrationConfig{Type:
addonapiv1beta1.KubeClient, KubeClient:
&addonapiv1beta1.KubeClientConfig{Driver: driver}} if none was found, then call
hubAddonClient.AddonV1beta1().ManagedClusterAddOns(namespace).UpdateStatus(...)
as before; ensure the code still handles the empty-slice case as it currently
does.
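The update-or-append invariant in the suggestion is easy to test in isolation. A self-contained sketch; RegistrationConfig and KubeClientConfig below only mimic the shape of the v1beta1 types:

```go
package main

import "fmt"

type KubeClientConfig struct{ Driver string }

type RegistrationConfig struct {
	Type       string
	KubeClient *KubeClientConfig
}

// setKubeClientDriver updates the existing kubeClient registration or
// appends one, so the call is never a silent no-op when other
// registration types are present.
func setKubeClientDriver(regs []RegistrationConfig, driver string) []RegistrationConfig {
	for i := range regs {
		if regs[i].Type != "KubeClient" {
			continue
		}
		if regs[i].KubeClient == nil {
			regs[i].KubeClient = &KubeClientConfig{}
		}
		regs[i].KubeClient.Driver = driver
		return regs
	}
	return append(regs, RegistrationConfig{
		Type:       "KubeClient",
		KubeClient: &KubeClientConfig{Driver: driver},
	})
}

func main() {
	// A slice holding only another registration type still gains an entry.
	regs := []RegistrationConfig{{Type: "CustomSigner"}}
	regs = setKubeClientDriver(regs, "csr")
	fmt.Println(len(regs), regs[1].KubeClient.Driver) // 2 csr
}
```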
	RegistrationAPI() addonv1beta1.RegistrationConfig
}

// KubeClientRegistration configures kubeClient-type registration.
Add a comment that user/groups might be overridden when the platform uses token as the driver.
@@ -131,19 +134,7 @@ type RegistrationOption struct {
	// CSRConfigurations returns a list of csr configuration for the adddon agent in a managed cluster.
Update the comment; also explain that this replaces the previous CSRConfigurations.
Signed-off-by: Jian Qiu <jqiu@redhat.com>
84cffe7 to
5d42776
Compare
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
pkg/addonfactory/template_agentaddon.go (1)
121-149: ⚠️ Potential issue | 🟠 Major
Avoid a nil-function panic in
getBuiltinValues.
Line 148 dereferences
HostedModeInfoFunc unconditionally, but this option is optional elsewhere in the codebase. A template addon without hosted-mode wiring will panic during manifest rendering instead of defaulting to normal install mode.
🔧 Proposed fix
- builtinValues.InstallMode, _ = a.agentAddonOptions.HostedModeInfoFunc(addon, cluster)
+ builtinValues.InstallMode = constants.InstallModeDefault
+ if a.agentAddonOptions.HostedModeInfoFunc != nil {
+ 	builtinValues.InstallMode, _ = a.agentAddonOptions.HostedModeInfoFunc(addon, cluster)
+ }
Also add open-cluster-management.io/addon-framework/pkg/addonmanager/constants to the import block.
🤖 Prompt for AI Agents
open-cluster-management.io/addon-framework/pkg/addonmanager/constantsto the import block.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonfactory/template_agentaddon.go` around lines 121 - 149, The getBuiltinValues function unconditionally calls a.agentAddonOptions.HostedModeInfoFunc causing a nil-function panic when that option is not set; modify getBuiltinValues to check if a.agentAddonOptions.HostedModeInfoFunc != nil before invoking it and, if nil, set builtinValues.InstallMode to the normal/default install mode (use the constant from open-cluster-management.io/addon-framework/pkg/addonmanager/constants); also add that constants package to the import block so the default install mode symbol is available.pkg/addonmanager/controllers/registration/controller.go (1)
247-259:⚠️ Potential issue | 🟠 MajorClear stale registration fields in the nil-registration fast paths.
Both early returns patch only the condition on top of a
DeepCopyof the existing addon. If an addon previously publishedStatus.RegistrationsorStatus.Namespaceand later dropsRegistration/Configurations, those old values will stay visible indefinitely.💡 Suggested fix
if registrationOption == nil { + managedClusterAddonCopy.Status.Registrations = nil + managedClusterAddonCopy.Status.Namespace = "" meta.SetStatusCondition(&managedClusterAddonCopy.Status.Conditions, metav1.Condition{ Type: addonapiv1beta1.ManagedClusterAddOnRegistrationApplied, Status: metav1.ConditionTrue, @@ if registrationOption.Configurations == nil { + managedClusterAddonCopy.Status.Registrations = nil + managedClusterAddonCopy.Status.Namespace = "" meta.SetStatusCondition(&managedClusterAddonCopy.Status.Conditions, metav1.Condition{ Type: addonapiv1beta1.ManagedClusterAddOnRegistrationApplied, Status: metav1.ConditionTrue,Also applies to: 290-301
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/registration/controller.go` around lines 247 - 259, The early-return path that handles registrationOption == nil must clear any stale status fields so old Status.Registrations and Status.Namespace (and any registration-related fields) are removed from managedClusterAddonCopy before calling addonPatcher.PatchStatus; update the block that reads registrationOption := agentAddon.GetAgentAddonOptions().Registration to explicitly set managedClusterAddonCopy.Status.Registrations = nil and managedClusterAddonCopy.Status.Namespace = "" (and clear other registration/configuration related status fields if present) prior to setting the condition and calling addonPatcher.PatchStatus, and apply the same clearing logic in the other nil-registration fast path referenced around the second early return (the block around lines 290-301) so stale values are not preserved.pkg/utils/addon_config.go (1)
37-47:⚠️ Potential issue | 🟠 MajorPass the caller
ctxinto deployment-config resolution.The new callback signature takes
ctx, but the lookup still goes through GetDesiredAddOnDeploymentConfig(...) and then adcgetter.Get(context.TODO(), ...). That drops caller cancellation and timeouts on every AddOnDeploymentConfig read.
💡 Suggested fix
 func AgentInstallNamespaceFromDeploymentConfigFunc(
 	adcgetter AddOnDeploymentConfigGetter,
 ) func(ctx context.Context, addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) {
 	return func(ctx context.Context, addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) {
@@
-		config, err := GetDesiredAddOnDeploymentConfig(addon, adcgetter)
+		config, err := GetDesiredAddOnDeploymentConfig(ctx, addon, adcgetter)
 		if err != nil {
 			return "", fmt.Errorf("failed to get deployment config for addon %s: %v", addon.Name, err)
 		}
@@
 func GetDesiredAddOnDeploymentConfig(
+	ctx context.Context,
 	addon *addonapiv1beta1.ManagedClusterAddOn,
 	adcgetter AddOnDeploymentConfigGetter,
 ) (*addonapiv1beta1.AddOnDeploymentConfig, error) {
@@
-	adc, err := adcgetter.Get(context.TODO(), desiredConfig.Namespace, desiredConfig.Name)
+	adc, err := adcgetter.Get(ctx, desiredConfig.Namespace, desiredConfig.Name)
Also applies to: 66-84
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/addon_config.go` around lines 37 - 47, The deploy-config lookup is using context.TODO() and discards the caller's ctx; update AgentInstallNamespaceFromDeploymentConfigFunc so that when it calls GetDesiredAddOnDeploymentConfig it passes the incoming ctx, and ensure GetDesiredAddOnDeploymentConfig and the underlying AddOnDeploymentConfigGetter.Get invocation use that ctx (replace context.TODO()/hardcoded contexts with the supplied ctx). Locate AgentInstallNamespaceFromDeploymentConfigFunc and the helper GetDesiredAddOnDeploymentConfig and modify their calls to propagate ctx (also fix the other similar block at the second occurrence around the 66-84 region).pkg/addonmanager/controllers/certificate/csrapprove.go (1)
147-155: ⚠️ Potential issue | 🟠 Major
Deep-copy the CSR before appending approval conditions.
csrLister.Get returns cache-backed objects. Mutating csr.Status.Conditions directly in line 170 taints the shared informer cache. Use csr.DeepCopy() before modification, consistent with the pattern already used in csrsign.go (line 92) in the same package.
Suggested fix
 approve := registrationOption.CSRApproveCheck(ctx, managedCluster, managedClusterAddon, csr)
 if !approve {
 	klog.V(4).Infof("addon csr %q cannont be auto approved due to approve check fails", csr.GetName())
 	return nil
 }
-csr.Status.Conditions = append(csr.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
+csrToUpdate := csr.DeepCopy()
+csrToUpdate.Status.Conditions = append(csrToUpdate.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
 	Type:    certificatesv1.CertificateApproved,
 	Status:  corev1.ConditionTrue,
 	Reason:  "AutoApprovedByHubCSRApprovingController",
 	Message: "Auto approving addon agent certificate.",
 })
-_, err := c.kubeClient.CertificatesV1().CertificateSigningRequests().UpdateApproval(ctx, csr.GetName(), csr, metav1.UpdateOptions{})
+_, err := c.kubeClient.CertificatesV1().CertificateSigningRequests().UpdateApproval(ctx, csrToUpdate.GetName(), csrToUpdate, metav1.UpdateOptions{})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/certificate/csrapprove.go` around lines 147 - 155, The CSR returned by csrApprovingController.getCSR is a cache-backed object and must be deep-copied before any mutation; update getCSR to return csr.DeepCopy() (or nil on NotFound) so callers that append approval conditions (the approving path that currently mutates csr.Status.Conditions) operate on a copy. Follow the same pattern used in csrsign.go (use CertificateSigningRequest.DeepCopy()) to avoid tainting the shared informer cache.pkg/addonmanager/controllers/agentdeploy/utils.go (1)
61-79: ⚠️ Potential issue | 🟡 Minor
Persist the cleaned finalizer list even when the target finalizer already exists.
If
addon.Finalizers contains both finalizer and any deprecated v1alpha1 finalizer, Line 74 returns before rst is written back. That leaves deprecated finalizers on upgraded addons even though this helper is supposed to scrub them.
🐛 Proposed fix
 func addonAddFinalizer(addon *addonapiv1beta1.ManagedClusterAddOn, finalizer string) bool {
 	if addon.Finalizers == nil {
 		addon.SetFinalizers([]string{finalizer})
 		return true
 	}
-	var rst []string
+	var (
+		rst    []string
+		exists bool
+	)
 	for _, f := range addon.Finalizers {
 		// remove deperecated finalizers also
 		if f == addonapiv1alpha1.AddonDeprecatedHostingManifestFinalizer ||
 			f == addonapiv1alpha1.AddonDeprecatedPreDeleteHookFinalizer ||
 			f == addonapiv1alpha1.AddonDeprecatedHostingPreDeleteHookFinalizer {
 			continue
 		}
+		if f == finalizer {
+			exists = true
+		}
 		rst = append(rst, f)
 	}
-	for _, f := range addon.Finalizers {
-		if f == finalizer {
-			return false
-		}
+	if exists {
+		if len(rst) != len(addon.Finalizers) {
+			addon.SetFinalizers(rst)
+			return true
+		}
+		return false
 	}
 	rst = append(rst, finalizer)
 	addon.SetFinalizers(rst)
 	return true
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/agentdeploy/utils.go` around lines 61 - 79, The loop builds a cleaned finalizer slice (rst) but returns early if the target `finalizer` already exists, so deprecated finalizers never get persisted; modify the logic around the second loop that checks `for _, f := range addon.Finalizers { if f == finalizer { return false } }` so that instead of returning immediately you first call `addon.SetFinalizers(rst)` to persist the scrubbed list, then if the original list contained `finalizer` skip appending it (or re-append if absent) — use the existing symbols `addon.Finalizers`, `finalizer`, the deprecated constants `addonapiv1alpha1.AddonDeprecatedHostingManifestFinalizer`/`AddonDeprecatedPreDeleteHookFinalizer`/`AddonDeprecatedHostingPreDeleteHookFinalizer`, and `addon.SetFinalizers(rst)` to locate and implement this change.
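The single-pass shape this fix aims for (scrub deprecated names and ensure the target in one walk, reporting whether anything changed) can be shown standalone; the deprecated names below are illustrative strings, not the real v1alpha1 constants:

```go
package main

import "fmt"

// ensureFinalizer drops deprecated finalizers and guarantees target is
// present, returning the new slice and whether it changed (and so needs
// to be persisted).
func ensureFinalizer(finalizers []string, target string, deprecated map[string]bool) ([]string, bool) {
	var out []string
	exists := false
	for _, f := range finalizers {
		if deprecated[f] {
			continue // scrub old names even when target already exists
		}
		if f == target {
			exists = true
		}
		out = append(out, f)
	}
	if !exists {
		out = append(out, target)
	}
	changed := len(out) != len(finalizers) || !exists
	return out, changed
}

func main() {
	dep := map[string]bool{"old-hook": true}
	out, changed := ensureFinalizer([]string{"old-hook", "new-hook"}, "new-hook", dep)
	fmt.Println(out, changed) // [new-hook] true
}
```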
♻️ Duplicate comments (3)
pkg/addonmanager/controllers/registration/controller.go (1)
312-313: ⚠️ Potential issue | 🟡 Minor
Fix the namespace precedence comment.
The comment above
managedClusterAddonCopy.Status.Namespace still mentions registrationOption.Namespace, but this code now resolves namespace as default < addonapiv1beta1.InstallNamespaceAnnotation < AgentInstallNamespace.
📝 Suggested comment fix
- // Priority (lowest to highest): default < annotation < registrationOption.Namespace < AgentInstallNamespace function
+ // Priority (lowest to highest): default < annotation < AgentInstallNamespace function
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/registration/controller.go` around lines 312 - 313, Update the outdated comment above managedClusterAddonCopy.Status.Namespace to reflect the current namespace precedence: default < addonapiv1beta1.InstallNamespaceAnnotation < AgentInstallNamespace; remove the reference to registrationOption.Namespace and mention the AgentInstallNamespace function as the highest priority source to match the implementation in the AgentInstallNamespace(...) call and the code path that sets the namespace.pkg/agent/adapter.go (1)
159-167: ⚠️ Potential issue | 🔴 Critical
Return
&SubjectNotReadyError{} after a successful conversion.
errors.Assucceeds, Line 165 still returns the original v1alpha1 error. The registration controller will treat that as a hard failure instead of the pending state.💡 Suggested fix
if err != nil { // Convert v1alpha1 SubjectNotReadyError to v1beta1 SubjectNotReadyError so // controllers that check for the v1beta1 type work correctly. var v1alpha1SubjectErr *agentv1alpha1.SubjectNotReadyError if errors.As(err, &v1alpha1SubjectErr) { - return err + return &SubjectNotReadyError{} } return err }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agent/adapter.go` around lines 159 - 167, The code currently detects a v1alpha1 SubjectNotReadyError via errors.As but returns the original error; change the branch that handles var v1alpha1SubjectErr *agentv1alpha1.SubjectNotReadyError to construct and return the equivalent v1beta1 error (e.g. create and return an &agentv1beta1.SubjectNotReadyError populated from v1alpha1SubjectErr's fields) so callers get the converted pending-state error instead of the original v1alpha1 error; locate this logic around reg.PermissionConfig(...) and ToV1alpha1Addon(...) and replace the current return err inside the errors.As branch with returning the new v1beta1 error instance.pkg/agent/interface.go (1)
134-137:⚠️ Potential issue | 🟡 MinorRefresh the godoc for
Configurations.Line 137 renamed the field, but the comment still describes the removed
CSRConfigurations/CSRConfigurationcontract. That makes the v1beta1 public API misleading in generated docs.📝 Suggested wording
- // CSRConfigurations returns a list of csr configuration for the adddon agent in a managed cluster. - // A csr will be created from the managed cluster for addon agent with each CSRConfiguration. + // Configurations returns the registration configurations for the addon agent in a managed cluster. + // This replaces the previous CSRConfigurations callback. // +required Configurations RegistrationConfigurationsFunc🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agent/interface.go` around lines 134 - 137, The godoc above the Configurations field is stale (mentions CSRConfigurations/CSRConfiguration) and must be updated to describe the new field and its contract: replace the comment to explain that Configurations is a required RegistrationConfigurationsFunc which returns the registration configurations used by the managed-cluster addon agent (one configuration per registration scenario) and that a CSR will be created by the managed cluster agent according to each returned configuration; keep the +required marker and ensure the wording matches the semantics of RegistrationConfigurationsFunc and the current public API.
🧹 Nitpick comments (4)
pkg/utils/permission_test.go (1)
72-212: Consider extracting shared table fixtures to reduce duplication.
TestBindKubeClientClusterRole_PendingError and TestBindKubeClientRole_PendingError duplicate most case construction. A shared addon fixture helper (or reusable case generator) would make future v1beta1 field changes safer and cheaper to maintain.
Also applies to: 247-387
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/utils/permission_test.go` around lines 72 - 212, The tests duplicate ManagedClusterAddOn fixtures across TestBindKubeClientClusterRole_PendingError and TestBindKubeClientRole_PendingError (the tests' local variable "tests" building addonv1beta1.ManagedClusterAddOn with varied Status.Registrations); refactor by extracting a helper factory (e.g., newTestAddon or buildAddonWithRegistrations) that accepts registration configs (or preset names like "empty-subject", "custom-signer", "valid-user", "valid-groups") and returns a ManagedClusterAddOn, then replace the repeated inline constructions in both TestBindKubeClientClusterRole_PendingError and TestBindKubeClientRole_PendingError with calls to that helper to reduce duplication and make future v1beta1 field changes easier to update.pkg/addonmanager/base_manager.go (1)
201-220: Stale comment no longer reflects the implementation.
The comment on lines 201-204 states that CSR controllers should only spawn "if v1 CSR api is supported" and references v1beta1 CSR API considerations. However, the conditional check (
utils.IsCSRSupported) has been removed and the controllers are now always started unconditionally.Either update the comment to reflect the new unconditional behavior, or remove it entirely since v1beta1 CSR is no longer supported in modern Kubernetes versions.
📝 Suggested comment update or removal
-	// Spawn the following controllers only if v1 CSR api is supported in the
-	// hub cluster. Under v1beta1 CSR api, all the CSR objects will be signed
-	// by the kube-controller-manager so custom CSR controller should be
-	// disabled to avoid conflict.
+	// Start CSR approving and signing controllers for addon certificate management.
	csrApproveController := certificate.NewCSRApprovingController(
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/base_manager.go` around lines 201 - 220, The existing comment above the csrApproveController and csrSignController creation (referencing v1 vs v1beta1 CSR API and conditional spawning) is stale because the code no longer checks utils.IsCSRSupported and these controllers are always created; update or remove that comment so it accurately states that NewCSRApprovingController and NewCSRSignController are now instantiated unconditionally (or simply delete the block of explanatory text about v1beta1 support).pkg/addonmanager/controllers/addonconfig/controller.go (1)
216-219: Redundant nil check for `DesiredConfig`.

The check `if configReference.DesiredConfig == nil` on line 217 is redundant since we already `continue` on lines 192-194 when `DesiredConfig` is nil. This inner check will never be true.

🧹 Remove redundant check
	// update desired spec hash only for the configs in spec
	for _, addonconfig := range addon.Spec.Configs {
-		if configReference.DesiredConfig == nil {
-			continue
-		}
-
		if configReference.ConfigGroupResource == addonconfig.ConfigGroupResource && configReference.DesiredConfig.ConfigReferent == addonconfig.ConfigReferent {
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/addonconfig/controller.go` around lines 216 - 219, Remove the redundant nil check for configReference.DesiredConfig inside the inner loop: since the outer loop already continues when DesiredConfig is nil, delete the inner "if configReference.DesiredConfig == nil { continue }" check in the loop iterating over addon.Spec.Configs (the loop using the addonconfig variable and configReference.DesiredConfig) so the code relies on the prior guard and avoids unreachable branches.pkg/addonmanager/controllers/certificate/csrsign_test.go (1)
86-100: Test case has duplicate name.

The test case on line 87 has the name "csr with cert already", which duplicates the test case on line 67. While this appears to be pre-existing, it makes test output confusing. The second test case (line 87) actually tests signing an approved CSR without an existing certificate, so the name should reflect that.

📝 Fix duplicate test name
	{
-		name:    "csr with cert already",
+		name:    "sign approved csr",
		cluster: []runtime.Object{addontesting.NewManagedCluster("cluster1")},
		addon:   []runtime.Object{addontesting.NewAddon("test", "cluster1")},
		csr:     []runtime.Object{addontesting.NewApprovedCSR("test", "cluster1")},
Verify each finding against the current code and only fix it if needed. In `@pkg/addonmanager/controllers/certificate/csrsign_test.go` around lines 86 - 100, The duplicate test name "csr with cert already" should be changed to accurately describe the scenario being tested; update the test case name in csrsign_test.go (the table entry that uses testSignAgent{name: "test", cert: []byte("test")} and validateCSRActions that checks CSR.Status.Certificate) to something like "sign approved CSR without existing cert" or "approved CSR gets signed" so it no longer duplicates the earlier test and reflects that this case verifies signing an approved CSR that lacked a certificate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/addonfactory/helm_agentaddon.go`:
- Around line 71-75: The Manifests method now receives a ctx but you dropped it
inside the helper chain; update the code so HelmAgentAddon.Manifests passes the
incoming ctx into renderManifests (and any downstream calls) and change any
context.TODO() usages in resolveInstallNamespace, resolveHostingCluster and
related helpers to accept and use that ctx so cancellations/deadlines propagate;
locate functions renderManifests, resolveInstallNamespace, resolveHostingCluster
(and their callers at the other mentioned blocks) and add a ctx parameter where
missing, thread the ctx through callers, and replace context.TODO() with the
propagated ctx.
In `@pkg/addonmanager/controllers/agentdeploy/controller.go`:
- Around line 246-248: The current gate only checks for existence of the
RegistrationApplied condition; update the check in controller.go to fetch the
condition via meta.FindStatusCondition(addon.Status.Conditions,
addonapiv1beta1.ManagedClusterAddOnRegistrationApplied) and return early unless
the condition is present AND its Status equals metav1.ConditionTrue (i.e., if
cond == nil || cond.Status != metav1.ConditionTrue { return nil }). Ensure
metav1 is imported/used for ConditionTrue and reference the same
addon.Status.Conditions and the ManagedClusterAddOnRegistrationApplied constant.
In `@pkg/agent/adapter.go`:
- Around line 184-194: ToV1alpha1Addon is failing to restore
Spec.InstallNamespace because ToV1beta1Addon stores it in
addonv1beta1.InstallNamespaceAnnotation but ToV1alpha1Addon never decodes that
annotation; update ToV1alpha1Addon to read
addonv1beta1.InstallNamespaceAnnotation from in.ObjectMeta.Annotations (or
in.GetAnnotations()) and set out.Spec.InstallNamespace accordingly after the
Convert_* calls, and make the analogous change in the companion conversion
function referenced at lines 210-228 so both single-item and list conversions
restore Spec.InstallNamespace.
- Around line 80-101: GetAgentAddonOptions currently wraps only
AgentInstallNamespace/Namespace into cfg.AgentInstallNamespace and never assigns
cfg.Registration, so the registration callbacks (RegistrationConfigs,
ApproveCSR, Sign, SetHubPermissions) are not exposed; set cfg.Registration to a
new registration struct that mirrors opts.Registration but wraps each v1alpha1
callback into the v1beta1 shape (use ToV1alpha1Addon(addon) where callbacks
expect a v1alpha1 addon) and preserve nils/defaults. Specifically, inside the
block that checks opts.Registration, construct cfg.Registration with fields
RegistrationConfigs, ApproveCSR, Sign, SetHubPermissions (and any other
non-wired fields) and for each non-nil function in opts.Registration assign a
wrapper that converts the addon parameter via ToV1alpha1Addon before invoking
the original v1alpha1 callback so v1beta1 controllers see the wrapped
registration behavior.
In `@pkg/agent/registration.go`:
- Around line 24-39: RegistrationAPI currently returns registration with Type:
KubeClient but leaves registration.KubeClient nil when c.User and c.Groups are
empty, which prevents later fields like Driver (now under KubeClientConfig) from
being set; change RegistrationAPI (method on KubeClientRegistration) to always
initialize registration.KubeClient = &addonv1beta1.KubeClientConfig{} and then,
if c.User or c.Groups are present, populate the KubeClient.Subject.BaseSubject
with User and Groups (i.e., always create the KubeClientConfig struct but
conditionally set its Subject).
In `@test/integration/cloudevents/suite_test.go`:
- Around line 136-153: The loop that updates CRD storage versions (using
apiExtensionsClient and iterating over the slice of crdName strings) is missing
the addondeploymentconfigs CRD; add
"addondeploymentconfigs.addon.open-cluster-management.io" to the []string{...}
list alongside "managedclusteraddons.addon.open-cluster-management.io" and
"clustermanagementaddons.addon.open-cluster-management.io" so the code that
toggles crd.Spec.Versions[*].Storage and sets crd.Status.StoredVersions =
[]string{"v1beta1"} also runs for addondeploymentconfigs.
---
Outside diff comments:
In `@pkg/addonfactory/template_agentaddon.go`:
- Around line 121-149: The getBuiltinValues function unconditionally calls
a.agentAddonOptions.HostedModeInfoFunc causing a nil-function panic when that
option is not set; modify getBuiltinValues to check if
a.agentAddonOptions.HostedModeInfoFunc != nil before invoking it and, if nil,
set builtinValues.InstallMode to the normal/default install mode (use the
constant from
open-cluster-management.io/addon-framework/pkg/addonmanager/constants); also add
that constants package to the import block so the default install mode symbol is
available.
In `@pkg/addonmanager/controllers/agentdeploy/utils.go`:
- Around line 61-79: The loop builds a cleaned finalizer slice (rst) but returns
early if the target `finalizer` already exists, so deprecated finalizers never
get persisted; modify the logic around the second loop that checks `for _, f :=
range addon.Finalizers { if f == finalizer { return false } }` so that instead
of returning immediately you first call `addon.SetFinalizers(rst)` to persist
the scrubbed list, then if the original list contained `finalizer` skip
appending it (or re-append if absent) — use the existing symbols
`addon.Finalizers`, `finalizer`, the deprecated constants
`addonapiv1alpha1.AddonDeprecatedHostingManifestFinalizer`/`AddonDeprecatedPreDeleteHookFinalizer`/`AddonDeprecatedHostingPreDeleteHookFinalizer`,
and `addon.SetFinalizers(rst)` to locate and implement this change.
In `@pkg/addonmanager/controllers/certificate/csrapprove.go`:
- Around line 147-155: The CSR returned by csrApprovingController.getCSR is a
cache-backed object and must be deep-copied before any mutation; update getCSR
to return csr.DeepCopy() (or nil on NotFound) so callers that append approval
conditions (the approving path that currently mutates csr.Status.Conditions)
operate on a copy. Follow the same pattern used in csrsign.go (use
CertificateSigningRequest.DeepCopy()) to avoid tainting the shared informer
cache.
In `@pkg/addonmanager/controllers/registration/controller.go`:
- Around line 247-259: The early-return path that handles registrationOption ==
nil must clear any stale status fields so old Status.Registrations and
Status.Namespace (and any registration-related fields) are removed from
managedClusterAddonCopy before calling addonPatcher.PatchStatus; update the
block that reads registrationOption :=
agentAddon.GetAgentAddonOptions().Registration to explicitly set
managedClusterAddonCopy.Status.Registrations = nil and
managedClusterAddonCopy.Status.Namespace = "" (and clear other
registration/configuration related status fields if present) prior to setting
the condition and calling addonPatcher.PatchStatus, and apply the same clearing
logic in the other nil-registration fast path referenced around the second early
return (the block around lines 290-301) so stale values are not preserved.
In `@pkg/utils/addon_config.go`:
- Around line 37-47: The deploy-config lookup is using context.TODO() and
discards the caller's ctx; update AgentInstallNamespaceFromDeploymentConfigFunc
so that when it calls GetDesiredAddOnDeploymentConfig it passes the incoming
ctx, and ensure GetDesiredAddOnDeploymentConfig and the underlying
AddOnDeploymentConfigGetter.Get invocation use that ctx (replace
context.TODO()/hardcoded contexts with the supplied ctx). Locate
AgentInstallNamespaceFromDeploymentConfigFunc and the helper
GetDesiredAddOnDeploymentConfig and modify their calls to propagate ctx (also
fix the other similar block at the second occurrence around the 66-84 region).
---
Duplicate comments:
In `@pkg/addonmanager/controllers/registration/controller.go`:
- Around line 312-313: Update the outdated comment above
managedClusterAddonCopy.Status.Namespace to reflect the current namespace
precedence: default < addonapiv1beta1.InstallNamespaceAnnotation <
AgentInstallNamespace; remove the reference to registrationOption.Namespace and
mention the AgentInstallNamespace function as the highest priority source to
match the implementation in the AgentInstallNamespace(...) call and the code
path that sets the namespace.
In `@pkg/agent/adapter.go`:
- Around line 159-167: The code currently detects a v1alpha1
SubjectNotReadyError via errors.As but returns the original error; change the
branch that handles var v1alpha1SubjectErr *agentv1alpha1.SubjectNotReadyError
to construct and return the equivalent v1beta1 error (e.g. create and return an
&agentv1beta1.SubjectNotReadyError populated from v1alpha1SubjectErr's fields)
so callers get the converted pending-state error instead of the original
v1alpha1 error; locate this logic around reg.PermissionConfig(...) and
ToV1alpha1Addon(...) and replace the current return err inside the errors.As
branch with returning the new v1beta1 error instance.
---
Nitpick comments:
In `@pkg/addonmanager/base_manager.go`:
- Around line 201-220: The existing comment above the csrApproveController and
csrSignController creation (referencing v1 vs v1beta1 CSR API and conditional
spawning) is stale because the code no longer checks utils.IsCSRSupported and
these controllers are always created; update or remove that comment so it
accurately states that NewCSRApprovingController and NewCSRSignController are
now instantiated unconditionally (or simply delete the block of explanatory text
about v1beta1 support).
In `@pkg/addonmanager/controllers/addonconfig/controller.go`:
- Around line 216-219: Remove the redundant nil check for
configReference.DesiredConfig inside the inner loop: since the outer loop
already continues when DesiredConfig is nil, delete the inner "if
configReference.DesiredConfig == nil { continue }" check in the loop iterating
over addon.Spec.Configs (the loop using the addonconfig variable and
configReference.DesiredConfig) so the code relies on the prior guard and avoids
unreachable branches.
In `@pkg/addonmanager/controllers/certificate/csrsign_test.go`:
- Around line 86-100: The duplicate test name "csr with cert already" should be
changed to accurately describe the scenario being tested; update the test case
name in csrsign_test.go (the table entry that uses testSignAgent{name: "test",
cert: []byte("test")} and validateCSRActions that checks CSR.Status.Certificate)
to something like "sign approved CSR without existing cert" or "approved CSR
gets signed" so it no longer duplicates the earlier test and reflects that this
case verifies signing an approved CSR that lacked a certificate.
In `@pkg/utils/permission_test.go`:
- Around line 72-212: The tests duplicate ManagedClusterAddOn fixtures across
TestBindKubeClientClusterRole_PendingError and
TestBindKubeClientRole_PendingError (the tests' local variable "tests" building
addonv1beta1.ManagedClusterAddOn with varied Status.Registrations); refactor by
extracting a helper factory (e.g., newTestAddon or buildAddonWithRegistrations)
that accepts registration configs (or preset names like "empty-subject",
"custom-signer", "valid-user", "valid-groups") and returns a
ManagedClusterAddOn, then replace the repeated inline constructions in both
TestBindKubeClientClusterRole_PendingError and
TestBindKubeClientRole_PendingError with calls to that helper to reduce
duplication and make future v1beta1 field changes easier to update.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6eb7a618-b39e-4e7d-a67d-3a94b04c48de
📒 Files selected for processing (56)
- cmd/example/helloworld/main.go
- cmd/example/helloworld_helm/main.go
- examples/helloworld/helloworld.go
- examples/helloworld/helloworld_test.go
- examples/helloworld_agent/agent.go
- examples/helloworld_helm/helloworld_helm_test.go
- examples/rbac/rbac.go
- pkg/addonfactory/addondeploymentconfig.go
- pkg/addonfactory/addonfactory.go
- pkg/addonfactory/helm_agentaddon.go
- pkg/addonfactory/helm_agentaddon_test.go
- pkg/addonfactory/template_agentaddon.go
- pkg/addonfactory/template_agentaddon_test.go
- pkg/addonmanager/base_manager.go
- pkg/addonmanager/cloudevents/manager.go
- pkg/addonmanager/controllers/addonconfig/controller.go
- pkg/addonmanager/controllers/agentdeploy/controller.go
- pkg/addonmanager/controllers/agentdeploy/default_hook_sync.go
- pkg/addonmanager/controllers/agentdeploy/default_sync.go
- pkg/addonmanager/controllers/agentdeploy/default_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/healthcheck_sync.go
- pkg/addonmanager/controllers/agentdeploy/healthcheck_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/hosted_hook_sync.go
- pkg/addonmanager/controllers/agentdeploy/hosted_sync.go
- pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
- pkg/addonmanager/controllers/agentdeploy/util_test.go
- pkg/addonmanager/controllers/agentdeploy/utils.go
- pkg/addonmanager/controllers/certificate/csrapprove.go
- pkg/addonmanager/controllers/certificate/csrapprove_test.go
- pkg/addonmanager/controllers/certificate/csrsign.go
- pkg/addonmanager/controllers/certificate/csrsign_test.go
- pkg/addonmanager/controllers/cmaconfig/controller.go
- pkg/addonmanager/controllers/cmamanagedby/controller.go
- pkg/addonmanager/controllers/cmamanagedby/controller_test.go
- pkg/addonmanager/controllers/registration/controller.go
- pkg/addonmanager/controllers/registration/controller_test.go
- pkg/addonmanager/manager.go
- pkg/agent/adapter.go
- pkg/agent/adapter_test.go
- pkg/agent/interface.go
- pkg/agent/registration.go
- pkg/agent/v1alpha1/interface.go
- pkg/utils/addon_config.go
- pkg/utils/addon_config_test.go
- pkg/utils/csr_helper_test.go
- pkg/utils/csr_helpers.go
- pkg/utils/permission.go
- pkg/utils/permission_test.go
- test/e2e/helloworld_test.go
- test/integration-test.mk
- test/integration/cloudevents/suite_test.go
- test/integration/kube/assertion_test.go
- test/integration/kube/cluster_management_addon_test.go
- test/integration/kube/csr_test.go
- test/integration/kube/registration_test.go
- test/integration/kube/suite_test.go
✅ Files skipped from review due to trivial changes (6)
- pkg/utils/csr_helper_test.go
- pkg/agent/adapter_test.go
- test/e2e/helloworld_test.go
- pkg/addonmanager/controllers/agentdeploy/healthcheck_sync.go
- pkg/addonmanager/controllers/cmaconfig/controller.go
- pkg/addonmanager/controllers/cmamanagedby/controller_test.go
🚧 Files skipped from review as they are similar to previous changes (12)
- test/integration/kube/csr_test.go
- examples/rbac/rbac.go
- test/integration/kube/suite_test.go
- pkg/addonfactory/helm_agentaddon_test.go
- examples/helloworld/helloworld_test.go
- pkg/addonmanager/cloudevents/manager.go
- pkg/addonmanager/controllers/agentdeploy/healthcheck_sync_test.go
- examples/helloworld_helm/helloworld_helm_test.go
- pkg/addonmanager/controllers/agentdeploy/default_sync.go
- test/integration-test.mk
- pkg/addonmanager/controllers/agentdeploy/hosted_sync_test.go
- pkg/utils/permission.go
func (a *HelmAgentAddon) Manifests(
	ctx context.Context,
	cluster *clusterv1.ManagedCluster,
-	addon *addonapiv1alpha1.ManagedClusterAddOn) ([]runtime.Object, error) {
+	addon *addonapiv1beta1.ManagedClusterAddOn) ([]runtime.Object, error) {
	objects, err := a.renderManifests(cluster, addon)
Thread the caller ctx through manifest rendering.
Manifests now accepts a context, but the helper chain still drops it and resolves install namespace / hosting-cluster data with context.TODO(). That means deadlines and cancellation are ignored on the two callback/client paths this migration just made context-aware.
💡 Suggested fix
func (a *HelmAgentAddon) Manifests(
ctx context.Context,
cluster *clusterv1.ManagedCluster,
addon *addonapiv1beta1.ManagedClusterAddOn) ([]runtime.Object, error) {
- objects, err := a.renderManifests(cluster, addon)
+ objects, err := a.renderManifests(ctx, cluster, addon)
if err != nil {
return nil, err
}
@@
-func (a *HelmAgentAddon) renderManifests(
- cluster *clusterv1.ManagedCluster,
+func (a *HelmAgentAddon) renderManifests(
+ ctx context.Context,
+ cluster *clusterv1.ManagedCluster,
addon *addonapiv1beta1.ManagedClusterAddOn) ([]runtime.Object, error) {
@@
- values, err := a.getValues(cluster, addon)
+ values, err := a.getValues(ctx, cluster, addon)
@@
-func (a *HelmAgentAddon) getValueAgentInstallNamespace(addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) {
+func (a *HelmAgentAddon) getValueAgentInstallNamespace(ctx context.Context, addon *addonapiv1beta1.ManagedClusterAddOn) (string, error) {
@@
- ns, err := a.agentAddonOptions.AgentInstallNamespace(context.TODO(), addon)
+ ns, err := a.agentAddonOptions.AgentInstallNamespace(ctx, addon)
@@
- Get(context.TODO(), hostingClusterName, metav1.GetOptions{})
+		Get(ctx, hostingClusterName, metav1.GetOptions{})

Also applies to: 218-228, 261-277
	// to deploy agents if there is RegistrationApplied condition.
-	if meta.FindStatusCondition(addon.Status.Conditions, addonapiv1alpha1.ManagedClusterAddOnRegistrationApplied) == nil {
+	if meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnRegistrationApplied) == nil {
		return nil
Require RegistrationApplied=True before reconciling works.
Line 247 only checks whether the condition exists. A ManagedClusterAddOnRegistrationApplied=False status still passes this gate, so deploy and hook syncers can run before registration has actually succeeded.
🔧 Proposed fix
- if meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnRegistrationApplied) == nil {
+ if !meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnRegistrationApplied) {
return nil
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
	// to deploy agents if there is RegistrationApplied condition.
-	if meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnRegistrationApplied) == nil {
+	if !meta.IsStatusConditionTrue(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnRegistrationApplied) {
		return nil
yes, we just need to ensure that registration condition is added.
@qiujian16, understood — the check is intentional: as long as the ManagedClusterAddOnRegistrationApplied condition has been added (regardless of its status value), it signals that registration has been processed and deploy reconciliation should proceed. I'll keep the note for future context.
✏️ Learnings added
Learnt from: qiujian16
Repo: open-cluster-management-io/addon-framework PR: 379
File: pkg/addonmanager/controllers/agentdeploy/controller.go:246-248
Timestamp: 2026-04-07T13:35:50.099Z
Learning: In `pkg/addonmanager/controllers/agentdeploy/controller.go` (addon-framework), checking `meta.FindStatusCondition(addon.Status.Conditions, addonapiv1beta1.ManagedClusterAddOnRegistrationApplied) == nil` to gate deploy reconciliation is intentional. The design only requires that the condition *exists* (has been set by the registration controller), not that it is True. Both True and False values are valid to proceed with deployment.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
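The presence-vs-truth distinction recorded above can be sketched with local stand-ins that mirror the semantics of apimachinery's `meta.FindStatusCondition` and `meta.IsStatusConditionTrue` (the types and helpers below are simplified, not the real imports):

```go
package main

import "fmt"

type Condition struct {
	Type   string
	Status string // "True" or "False"
}

// findStatusCondition mirrors meta.FindStatusCondition: it returns the
// condition regardless of its status value.
func findStatusCondition(conds []Condition, t string) *Condition {
	for i := range conds {
		if conds[i].Type == t {
			return &conds[i]
		}
	}
	return nil
}

// isStatusConditionTrue mirrors meta.IsStatusConditionTrue: it also
// requires Status == "True".
func isStatusConditionTrue(conds []Condition, t string) bool {
	c := findStatusCondition(conds, t)
	return c != nil && c.Status == "True"
}

func main() {
	conds := []Condition{{Type: "RegistrationApplied", Status: "False"}}
	// Presence check (the gate the PR keeps): passes even when False.
	fmt.Println(findStatusCondition(conds, "RegistrationApplied") != nil)
	// Truth check (the reviewer's stricter alternative): fails here.
	fmt.Println(isStatusConditionTrue(conds, "RegistrationApplied"))
}
```

With the presence check, a `RegistrationApplied=False` addon still proceeds to deploy reconciliation; the stricter truth check would block it until registration succeeds.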
if reg := opts.Registration; reg != nil {
	switch {
	case reg.AgentInstallNamespace != nil:
		fn := reg.AgentInstallNamespace
		cfg.AgentInstallNamespace = func(_ context.Context, addon *addonv1beta1.ManagedClusterAddOn) (string, error) {
			return fn(ToV1alpha1Addon(addon))
		}
	case reg.Namespace != "":
		ns := reg.Namespace
		cfg.AgentInstallNamespace = func(_ context.Context, addon *addonv1beta1.ManagedClusterAddOn) (string, error) {
			// Preserve v1alpha1 behavior: Spec.InstallNamespace (deprecated but still
			// supported) takes priority over the static Registration.Namespace.
			// Spec.InstallNamespace is encoded in a well-known annotation by ToV1beta1Addon.
			if addon != nil {
				if v, ok := addon.Annotations[addonv1beta1.InstallNamespaceAnnotation]; ok && len(v) > 0 {
					return v, nil
				}
			}
			return ns, nil
		}
	}
}
Wire the wrapped registration callbacks into cfg.Registration.
The adapter defines RegistrationConfigs, ApproveCSR, Sign, and SetHubPermissions below, but GetAgentAddonOptions never exposes them via cfg.Registration. Wrapped v1alpha1 addons will therefore look unregistered to the v1beta1 controllers.
💡 Suggested fix
if reg := opts.Registration; reg != nil {
+ if reg.CSRConfigurations != nil || reg.CSRApproveCheck != nil || reg.CSRSign != nil || reg.PermissionConfig != nil {
+ cfg.Registration = &RegistrationOption{}
+ if reg.CSRConfigurations != nil {
+ cfg.Registration.Configurations = a.RegistrationConfigs
+ }
+ if reg.CSRApproveCheck != nil {
+ cfg.Registration.CSRApproveCheck = a.ApproveCSR
+ }
+ if reg.CSRSign != nil {
+ cfg.Registration.CSRSign = a.Sign
+ }
+ if reg.PermissionConfig != nil {
+ cfg.Registration.PermissionConfig = a.SetHubPermissions
+ }
+ }
+
switch {
case reg.AgentInstallNamespace != nil:
		fn := reg.AgentInstallNamespace

📝 Committable suggestion
if reg := opts.Registration; reg != nil {
	if reg.CSRConfigurations != nil || reg.CSRApproveCheck != nil || reg.CSRSign != nil || reg.PermissionConfig != nil {
		cfg.Registration = &RegistrationOption{}
		if reg.CSRConfigurations != nil {
			cfg.Registration.Configurations = a.RegistrationConfigs
		}
		if reg.CSRApproveCheck != nil {
			cfg.Registration.CSRApproveCheck = a.ApproveCSR
		}
		if reg.CSRSign != nil {
			cfg.Registration.CSRSign = a.Sign
		}
		if reg.PermissionConfig != nil {
			cfg.Registration.PermissionConfig = a.SetHubPermissions
		}
	}

	switch {
	case reg.AgentInstallNamespace != nil:
		fn := reg.AgentInstallNamespace
		cfg.AgentInstallNamespace = func(_ context.Context, addon *addonv1beta1.ManagedClusterAddOn) (string, error) {
			return fn(ToV1alpha1Addon(addon))
		}
	case reg.Namespace != "":
		ns := reg.Namespace
		cfg.AgentInstallNamespace = func(_ context.Context, addon *addonv1beta1.ManagedClusterAddOn) (string, error) {
			// Preserve v1alpha1 behavior: Spec.InstallNamespace (deprecated but still
			// supported) takes priority over the static Registration.Namespace.
			// Spec.InstallNamespace is encoded in a well-known annotation by ToV1beta1Addon.
			if addon != nil {
				if v, ok := addon.Annotations[addonv1beta1.InstallNamespaceAnnotation]; ok && len(v) > 0 {
					return v, nil
				}
			}
			return ns, nil
		}
	}
}
func ToV1alpha1Addon(in *addonv1beta1.ManagedClusterAddOn) *addonv1alpha1.ManagedClusterAddOn {
	if in == nil {
		return nil
	}
	out := &addonv1alpha1.ManagedClusterAddOn{
		ObjectMeta: in.ObjectMeta,
	}
	// nil scope is safe: none of the reachable Convert_* functions dereference it.
	_ = addonv1beta1.Convert_v1beta1_ManagedClusterAddOnSpec_To_v1alpha1_ManagedClusterAddOnSpec(&in.Spec, &out.Spec, nil)
	_ = addonv1beta1.Convert_v1beta1_ManagedClusterAddOnStatus_To_v1alpha1_ManagedClusterAddOnStatus(&in.Status, &out.Status, nil)
	return out
Restore the deprecated install namespace on v1alpha1 conversion.

`ToV1beta1Addon` persists `Spec.InstallNamespace` in `addonv1beta1.InstallNamespaceAnnotation`, but `ToV1alpha1Addon` never decodes it back. Any wrapped v1alpha1 addon that still reads `addon.Spec.InstallNamespace` will always see `""`.
💡 Suggested fix

```diff
 func ToV1alpha1Addon(in *addonv1beta1.ManagedClusterAddOn) *addonv1alpha1.ManagedClusterAddOn {
 	if in == nil {
 		return nil
 	}
 	out := &addonv1alpha1.ManagedClusterAddOn{
 		ObjectMeta: in.ObjectMeta,
 	}
 	// nil scope is safe: none of the reachable Convert_* functions dereference it.
 	_ = addonv1beta1.Convert_v1beta1_ManagedClusterAddOnSpec_To_v1alpha1_ManagedClusterAddOnSpec(&in.Spec, &out.Spec, nil)
 	_ = addonv1beta1.Convert_v1beta1_ManagedClusterAddOnStatus_To_v1alpha1_ManagedClusterAddOnStatus(&in.Status, &out.Status, nil)
+	//nolint:staticcheck
+	if ns := in.Annotations[addonv1beta1.InstallNamespaceAnnotation]; len(ns) > 0 {
+		out.Spec.InstallNamespace = ns
+	}
 	return out
 }
```

Also applies to: 210-228
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agent/adapter.go` around lines 184 - 194, ToV1alpha1Addon is failing to
restore Spec.InstallNamespace because ToV1beta1Addon stores it in
addonv1beta1.InstallNamespaceAnnotation but ToV1alpha1Addon never decodes that
annotation; update ToV1alpha1Addon to read
addonv1beta1.InstallNamespaceAnnotation from in.ObjectMeta.Annotations (or
in.GetAnnotations()) and set out.Spec.InstallNamespace accordingly after the
Convert_* calls, and make the analogous change in the companion conversion
function referenced at lines 210-228 so both single-item and list conversions
restore Spec.InstallNamespace.
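The encode/decode symmetry the prompt asks for can be sketched with minimal stand-in types. The annotation key string below is illustrative only; the real code would use the `addonv1beta1.InstallNamespaceAnnotation` constant, whose actual value may differ:

```go
package main

import "fmt"

// installNamespaceAnnotation stands in for addonv1beta1.InstallNamespaceAnnotation;
// the literal key here is a placeholder, not the real constant's value.
const installNamespaceAnnotation = "addon.open-cluster-management.io/install-namespace"

type V1alpha1Spec struct{ InstallNamespace string }
type V1beta1Addon struct{ Annotations map[string]string }

// encode mirrors ToV1beta1Addon: the deprecated Spec.InstallNamespace is
// carried in a well-known annotation on the v1beta1 object.
func encode(spec V1alpha1Spec) V1beta1Addon {
	out := V1beta1Addon{Annotations: map[string]string{}}
	if spec.InstallNamespace != "" {
		out.Annotations[installNamespaceAnnotation] = spec.InstallNamespace
	}
	return out
}

// decode mirrors the suggested fix for ToV1alpha1Addon: restore the field
// from the annotation so v1alpha1 consumers still see it.
func decode(in V1beta1Addon) V1alpha1Spec {
	var spec V1alpha1Spec
	if ns := in.Annotations[installNamespaceAnnotation]; ns != "" {
		spec.InstallNamespace = ns
	}
	return spec
}

func main() {
	orig := V1alpha1Spec{InstallNamespace: "open-cluster-management-agent-addon"}
	fmt.Println(decode(encode(orig)).InstallNamespace)
}
```

Without the decode half, the round trip is lossy, which is exactly the bug the review flags.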
```go
func (c *KubeClientRegistration) RegistrationAPI() addonv1beta1.RegistrationConfig {
	registration := addonv1beta1.RegistrationConfig{
		Type: addonv1beta1.KubeClient,
	}

	if c.User != "" || len(c.Groups) > 0 {
		registration.KubeClient = &addonv1beta1.KubeClientConfig{
			Subject: addonv1beta1.KubeClientSubject{
				BaseSubject: addonv1beta1.BaseSubject{
					User:   c.User,
					Groups: c.Groups,
				},
			},
		}
	}
	return registration
}
```
Always emit a `KubeClientConfig` for kubeclient registrations.

The new default path in this PR uses `&agent.KubeClientRegistration{}`. Here that serializes to `Type: KubeClient` with `KubeClient == nil`, so there is no nested object available to carry `Driver` later, even though `Driver` now lives under `KubeClientConfig`.
🔧 Proposed fix

```diff
 func (c *KubeClientRegistration) RegistrationAPI() addonv1beta1.RegistrationConfig {
-	registration := addonv1beta1.RegistrationConfig{
-		Type: addonv1beta1.KubeClient,
-	}
-
-	if c.User != "" || len(c.Groups) > 0 {
-		registration.KubeClient = &addonv1beta1.KubeClientConfig{
-			Subject: addonv1beta1.KubeClientSubject{
-				BaseSubject: addonv1beta1.BaseSubject{
-					User:   c.User,
-					Groups: c.Groups,
-				},
-			},
-		}
-	}
-	return registration
+	return addonv1beta1.RegistrationConfig{
+		Type: addonv1beta1.KubeClient,
+		KubeClient: &addonv1beta1.KubeClientConfig{
+			Subject: addonv1beta1.KubeClientSubject{
+				BaseSubject: addonv1beta1.BaseSubject{
+					User:   c.User,
+					Groups: c.Groups,
+				},
+			},
+		},
+	}
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/agent/registration.go` around lines 24 - 39, RegistrationAPI currently
returns registration with Type: KubeClient but leaves registration.KubeClient
nil when c.User and c.Groups are empty, which prevents later fields like Driver
(now under KubeClientConfig) from being set; change RegistrationAPI (method on
KubeClientRegistration) to always initialize registration.KubeClient =
&addonv1beta1.KubeClientConfig{} and then, if c.User or c.Groups are present,
populate the KubeClient.Subject.BaseSubject with User and Groups (i.e., always
create the KubeClientConfig struct but conditionally set its Subject).
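The "always allocate the nested config" pattern the prompt describes can be sketched with simplified stand-in types (these structs mimic, but are not, the real v1beta1 registration types):

```go
package main

import "fmt"

// Simplified stand-ins for the v1beta1 registration types.
type KubeClientConfig struct {
	User   string
	Groups []string
	Driver string // in v1beta1, Driver lives under the nested config
}

type RegistrationConfig struct {
	Type       string
	KubeClient *KubeClientConfig // pointer: nil means "no nested config at all"
}

// buildRegistration always allocates the nested KubeClientConfig, so later
// code can set fields like Driver without a nil check, even on the default
// path where no user or groups are supplied.
func buildRegistration(user string, groups []string) RegistrationConfig {
	return RegistrationConfig{
		Type: "KubeClient",
		KubeClient: &KubeClientConfig{
			User:   user,
			Groups: groups,
		},
	}
}

func main() {
	reg := buildRegistration("", nil) // the default-path case from the review
	fmt.Println(reg.KubeClient != nil)
	reg.KubeClient.Driver = "csr" // safe: the nested config is always present
	fmt.Println(reg.KubeClient.Driver)
}
```

Leaving the pointer nil on the default path is exactly what the review flags: any later write to a nested field would then panic or be silently skipped behind a nil guard.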
PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Interface is the most important and needs to be reviewed. Whether to keep it in the v1.3.0 release depends on whether the interface is good.

@qiujian16 I was looking through PR #368 from @haoqing0110 instead of this one - is that the correct thing to do? From what I can tell the interface changes there are good; I just had some questions about how the new agentinstallns field works.
yes, please take a look at that one. I will close this PR.

/close
@qiujian16: Closed this PR.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Summary
Related issue(s)
Fixes #
Summary by CodeRabbit
New Features
Refactor
Documentation