
HYPERFLEET-999 - refactor: standardize adapter conditions for deletions #40

Open
ldornele wants to merge 3 commits into openshift-hyperfleet:main from ldornele:HYPERFLEET-999

Conversation

ldornele commented Apr 30, 2026

Summary

Adds deletion lifecycle support to all three adapter task configurations (adapter1, adapter2, adapter3), enabling proper resource cleanup when clusters/nodepools are deleted.

Changes

Preconditions

  • Added an is_deleting variable that checks for the deleted_time field in the cluster/nodepool status
  • Updated the validationCheck expression to trigger adapter execution when the resource is being deleted (is_deleting || existing_conditions); see the sketch below
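
A minimal sketch of how this precondition could look, written in the CEL style used elsewhere in these configs; the exact field path for deleted_time and the surrounding YAML keys are illustrative assumptions, not the literal schema from this PR:

    variables:
      - name: is_deleting
        # Assumed field path: treat the resource as deleting once deleted_time is set.
        expression: |
          cluster.?status.deleted_time.orValue("") != ""
    validationCheck:
      # Run the adapter when prior conditions exist or when the resource is being deleted.
      expression: |
        is_deleting || existing_conditions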

Resource Lifecycle

  • Added lifecycle.delete configuration with propagation policies:
    • adapter1 (ConfigMap): Background propagation
    • adapter2 (ManifestWork): Foreground propagation to ensure nested resources deleted first
    • adapter3 (ConfigMap): Background propagation
  • The delete lifecycle triggers when the is_deleting expression evaluates to true (see the sketch below)
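
As a rough illustration, the delete lifecycle for adapter2's ManifestWork might be shaped like this; the key names (lifecycle, delete, trigger, propagationPolicy) follow the description above rather than the exact schema in the config:

    lifecycle:
      delete:
        # Only run the delete path while the owning cluster/nodepool is being deleted.
        trigger:
          expression: |
            is_deleting
        # Foreground for the ManifestWork so nested resources are removed first;
        # adapter1 and adapter3 use Background for their ConfigMaps.
        propagationPolicy: Foreground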

Status Conditions

  • Updated Applied condition to return status: False with reason: ResourceDeleted during deletion
  • Updated Available condition to return status: False with reason: ResourceDeleted during deletion
  • Added a new Finalized condition type (a rough sketch follows this list):
    • Tracks deletion completion by checking if resources are successfully removed
    • status: True with reason: CleanupConfirmed when all resources deleted
    • status: False with reason: CleanupInProgress while deletion ongoing
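
A rough sketch of the Finalized condition in the same CEL style as the other conditions (the resources.?resource0 handle is borrowed from the snippets quoted later in this review; the real expressions also take adapter execution status into account):

    - type: "Finalized"
      status:
        expression: |
          is_deleting && !resources.?resource0.hasValue() ? "True" : "False"
      reason:
        expression: |
          !is_deleting
            ? "ResourceActive"
            : (!resources.?resource0.hasValue() ? "CleanupConfirmed" : "CleanupInProgress")
      message:
        expression: |
          !is_deleting
            ? "Resource is not being deleted"
            : (!resources.?resource0.hasValue()
                ? "All managed resources have been removed"
                : "Deletion of managed resources is still in progress")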

Why This Matters

Without proper deletion lifecycle handling:

  • Resources could be orphaned when clusters/nodepools are deleted
  • No visibility into deletion progress or completion
  • Adapters wouldn't execute cleanup logic during resource deletion

This change ensures:

  • Clean resource removal without orphans
  • Observable deletion state through Finalized condition
  • Consistent deletion handling across all adapters

Test Plan

  • Deploy adapters with updated configurations
  • Create test cluster/nodepool
  • Delete cluster/nodepool and verify:
    • Adapter execution triggers (precondition passes with is_deleting)
    • Applied/Available conditions transition to False with ResourceDeleted reason
    • Finalized condition progresses from CleanupInProgress to CleanupConfirmed
    • All managed resources (ConfigMaps, ManifestWork) are removed
    • No orphaned resources remain in target namespaces

Summary by CodeRabbit

  • New Features
    • Improved deletion handling across adapters: conditional deletion hooks and foreground cleanup ensure nested resources are removed when clusters are being deleted.
    • Deletion-aware status reporting: Applied and Available explicitly reflect deletion progress with dedicated reasons/messages, and a new Finalized condition indicates cleanup success, in-progress, or failure.

openshift-ci Bot requested review from jsell-rh and pnguyen44 on April 30, 2026 03:14

openshift-ci Bot commented Apr 30, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign crizzo71 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


coderabbitai Bot commented Apr 30, 2026

Walkthrough

This PR makes three Helm adapter task configs deletion-aware. Each task computes an is_deleting flag from the relevant resource's deleted_time, permits validation during deletion, and adds deletion lifecycle behavior for managed manifests (ConfigMap deletion hooks in adapter1/3; ManifestWork foreground deletion in adapter2). Post-processing condition logic is updated so Applied and Available report deletion-specific statuses/messages and Health/Finalized reflect adapter execution and cleanup progress or failure. Finalized is introduced to indicate cleanup completion, in-progress deletion, or cleanup failure based on is_deleting, resource existence, and adapter execution status.

Sequence Diagram(s)

sequenceDiagram
    participant Resource as Cluster/Nodepool
    participant Adapter as Adapter Task
    participant K8s as Kubernetes API (ManifestWork/ConfigMap)
    participant Store as Adapter Execution Status

    Resource->>Adapter: deleted_time present -> compute is_deleting=true
    Adapter->>Adapter: validationCheck allows run when is_deleting
    Adapter->>K8s: apply manifests or trigger deletion lifecycle (delete hook / foreground)
    alt is_deleting == true
        K8s->>K8s: perform deletion (ConfigMap delete / ManifestWork foreground)
        K8s-->>Adapter: resource missing or deletion in progress
    else
        K8s-->>Adapter: resource applied / available
    end
    Adapter->>Store: update adapter.executionStatus, errorReason/errorMessage
    Adapter->>Adapter: evaluate conditions (Applied, Available, Health, Finalized)
    Adapter-->>Resource: report status with deletion-aware reasons/messages and Finalized state

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly summarizes the main change: adding deletion lifecycle support with standardized conditions across three adapter configurations, directly reflecting the PR's core objective.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Linked Issues Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Review rate limit: 9/10 reviews remaining, refill in 6 minutes.

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@helm/adapter1/adapter-task-config.yaml`:
- Around line 151-163: The current expressions (using is_deleting and
adapter.?executionStatus.orValue("") == "success") gate all deletion semantics
on executionStatus == "success", causing in-flight or failed deletions to be
reported as "ResourceActive"; update the three expressions (the boolean
Finalized expression, reason.expression, and message.expression) to first check
is_deleting and then branch by executionStatus: when is_deleting is false return
ResourceActive/False as before, when is_deleting is true then if
resources.?resource0.hasValue() is false return CleanupConfirmed/True with a
success message, else if adapter.?executionStatus.orValue("") == "failed" return
CleanupFailed/False with an appropriate failure message, else (executionStatus
empty/in-progress) return CleanupInProgress/False with an in-progress message —
implement these changes in the expressions referencing is_deleting,
adapter.?executionStatus, and resources.?resource0.

In `@helm/adapter2/adapter-task-config.yaml`:
- Around line 302-328: The Finalized condition now reports deletion completion
but the existing Applied and Available condition expressions still evaluate as
True during deletion; update the Applied and Available condition expressions
(the same blocks that currently compute "Applied"/"Available") to short-circuit
when is_deleting is true or when adapter.?executionStatus.orValue("") ==
"success" during deletion so they return "False" (and appropriate
reason/message) while cleanup is in progress; ensure you reference the same
resource presence checks (resources.?resource0, resources.?namespace0,
resources.?configmap0) and the adapter.?executionStatus check so
Applied/Available become false during deletion and only Finalized indicates
completion.

In `@helm/adapter3/adapter-task-config.yaml`:
- Around line 149-161: The current logic checks is_deleting &&
adapter.?executionStatus.orValue("") == "success" which hides deletion progress
when executionStatus is non-success (failed/retrying); update the three
expressions (the condition that produces the boolean, the reason, and the
message) to instead check is_deleting && adapter.?executionStatus.orValue("") !=
"" so any executionStatus shows deletion state, then branch on
resources.?resource0.hasValue() to emit "CleanupConfirmed"/"CleanupInProgress"
(or "CleanupFailed" when adapter.?executionStatus.orValue("") != "success") and
include the actual adapter.?executionStatus.orValue("") in the message to
surface failure/retry status rather than always returning "ResourceActive".
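
For illustration, the branching that the adapter1 and adapter3 prompts describe could look roughly like this for the Finalized reason expression (a sketch only; the surrounding YAML keys are placeholders, while the reason strings come from the prompts above):

    reason:
      expression: |
        !is_deleting
          ? "ResourceActive"
          : (!resources.?resource0.hasValue()
              ? "CleanupConfirmed"
              : (adapter.?executionStatus.orValue("") == "failed"
                  ? "CleanupFailed"
                  : "CleanupInProgress"))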

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: 221dae4f-5227-4537-a6c9-fcd89b17cd9a

📥 Commits

Reviewing files that changed from the base of the PR and between c8bfa3c and 0054ca9.

📒 Files selected for processing (3)
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-task-config.yaml

coderabbitai Bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
helm/adapter2/adapter-task-config.yaml (1)

45-53: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Deletion can still be blocked by the earlier precondition.

Line 45 still hard-requires readyConditionStatus == "False", so a cluster that is Ready=True and being deleted will fail clusterStatus before the new validationCheck can allow cleanup. Fold is_deleting into that gate, or move the readiness check entirely into validationCheck.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-task-config.yaml` around lines 45 - 53, The current
precondition under conditions uses readyConditionStatus == "False" which blocks
deletion flows because it runs before validationCheck; update the readiness gate
so it also allows deletion by including is_deleting in the same check (e.g.,
change the conditions entry for readyConditionStatus to include an OR with
is_deleting) or remove the standalone readiness condition and fold the readiness
logic entirely into the validationCheck expression (readyConditionStatus ==
"False" || is_deleting) so a cluster with Ready=True but is_deleting can pass
and proceed to cleanup; adjust the block containing conditions,
readyConditionStatus, and validationCheck accordingly.
♻️ Duplicate comments (1)
helm/adapter2/adapter-task-config.yaml (1)

302-334: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Finalized is deletion-aware, but Applied/Available can still stay True.

This still produces contradictory status during teardown: the new Finalized block reports cleanup progress, while the existing Applied/Available expressions at Line 229 onward can continue reflecting the pre-delete state. Those two conditions should also short-circuit on is_deleting and report False/ResourceDeleted.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-task-config.yaml` around lines 302 - 334, The Applied
and Available status expressions must short-circuit on is_deleting to avoid
contradictory statuses during teardown: update the existing Applied and
Available expression blocks (the same style used in the Finalized block) so they
first check !is_deleting and return "True" (or the existing active message) when
not deleting, but when is_deleting return status "False" and set reason to
"ResourceDeleted" (and message to something like "Resource is being deleted" or
"Resource deleted" depending on whether nested resources remain), and reuse
adapter.?executionStatus.orValue("") == "failed" to switch to
"DeletionFailed"/"Deletion in progress" as appropriate; locate and adjust the
Applied and Available expressions that reference resources.?resource0,
resources.?namespace0, resources.?configmap0 and adapter.?executionStatus to
implement this short-circuiting.
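
A sketch of the requested short-circuit applied to the Applied condition (the non-deletion branch is a simplified stand-in for adapter2's existing resource-presence checks):

    - type: "Applied"
      status:
        expression: |
          is_deleting
            ? "False"
            : (resources.?resource0.hasValue() ? "True" : "False")
      reason:
        expression: |
          is_deleting
            ? (adapter.?executionStatus.orValue("") == "failed" ? "DeletionFailed" : "ResourceDeleted")
            : "ResourceActive"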

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: 91efedfb-ae5d-417f-ada8-d656aaa6350a

📥 Commits

Reviewing files that changed from the base of the PR and between 0054ca9 and a3b7875.

📒 Files selected for processing (3)
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-task-config.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • helm/adapter1/adapter-task-config.yaml

coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@helm/adapter3/adapter-task-config.yaml`:
- Around line 142-148: The status defaults to "Unknown" while reason/message
still default to "Healthy"/success text; update the expressions for reason
(adapter.?errorReason) and message (adapter.?errorMessage) to return a neutral
in-progress/unknown value when adapter.?executionStatus is neither "success" nor
"failed" so they align with the status. Specifically, change the default
branches used alongside adapter.?executionStatus.orValue("") in the status
expression so that when executionStatus != "success" && != "failed" the reason
becomes e.g. "InProgress" or "Unknown" and the message becomes a matching
neutral text like "Adapter execution in progress or state unknown" instead of
"Healthy" / success text.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: 384ca211-ad95-4cd8-9a71-427638496933

📥 Commits

Reviewing files that changed from the base of the PR and between a3b7875 and 981ddd1.

📒 Files selected for processing (3)
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-task-config.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-task-config.yaml

Comment on lines +142 to +148
             status:
               expression: |
                 adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
             reason:
               expression: |
-                has(resources.resource0.data.nodepoolId)
-                  ? "ConfigMap data available"
-                  : "ConfigMap data not yet available"
+                adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
             message:
               expression: |
-                toJson(resources.resource0)
+                adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Health status is inconsistent with its own reason/message defaults.

When execution is neither success nor failed, status is "Unknown" but reason/message still default to "Healthy" and a success message. This can mask in-progress/retrying states in monitoring and automation.

Suggested fix
           - type: "Health"
             status:
               expression: |
                 adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
             reason:
               expression: |
-                adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
+                adapter.?executionStatus.orValue("") == "failed"
+                  ? adapter.?errorReason.orValue("ExecutionFailed")
+                  : (adapter.?executionStatus.orValue("") == "success"
+                      ? "Healthy"
+                      : "ExecutionInProgress")
             message:
               expression: |
-                adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"
+                adapter.?executionStatus.orValue("") == "failed"
+                  ? adapter.?errorMessage.orValue("Adapter execution failed")
+                  : (adapter.?executionStatus.orValue("") == "success"
+                      ? "All adapter operations completed successfully"
+                      : "Adapter execution is still in progress")

             status:
               expression: |
-                has(resources.resource0.metadata.creationTimestamp) ? "True" : "False"
+                is_deleting
rh-amarin (Contributor) commented Apr 30, 2026

Being in the deletion phase is not a direct indication that Applied should no longer be True.

The natural condition is still whether the resource is found in the discovery phase, IMO.

It would be the same for the reason, and for the Available condition values:

!is_deleting
? (resources.?resource0.hasValue() && has(resources.resource0.status) && has(resources.resource0.status.conditions) && resources.resource0.status.conditions.filter(c, has(c.type) && c.type == "Applied").size() > 0 ? resources.resource0.status.conditions.filter(c, c.type == "Applied")[0].reason : "ManifestWorkNotDiscovered")
: (!resources.?resource0.hasValue()
? "ResourceDeleted"
Contributor

Tip

nit — non-blocking suggestion

Category: Inconsistency

During deletion, adapter2 reports granular reasons for Applied/Available ("ResourceDeleted" / "DeletionFailed" / "DeletionInProgress"), but adapter1/adapter3 always report just "ResourceDeleted" regardless of actual deletion state. Since the PR title is "standardize adapter conditions," the reasons should be consistent across all three adapters.

If you keep the is_deleting wrapper on Applied/Available (see @rh-amarin's comment above), consider making adapter1/3 match adapter2's granularity — or simplifying adapter2 to match adapter1/3. If you remove the wrapper per Angel's suggestion, this resolves naturally.
