
Create cluster w/ 2 node pools, delete cluster by deleting cx rg#4004

Open
mgahagan73 wants to merge 11 commits into main from cluster-delete-by-cx-rg

Conversation


@mgahagan73 mgahagan73 commented Feb 4, 2026

ARO-23928

What

See if we can delete a cluster with 2 node pools by deleting the customer resource group.


@openshift-ci openshift-ci bot requested review from deads2k and patriksuba February 4, 2026 21:18
@mgahagan73 mgahagan73 marked this pull request as draft February 4, 2026 21:19
@mgahagan73 mgahagan73 added ai-assisted AI/LLM tool was used to help create this MR and removed do-not-merge/hold labels Feb 4, 2026
@mgahagan73 (Collaborator Author)

/test integration

@mgahagan73 (Collaborator Author)

/test e2e-parallel

@mgahagan73 (Collaborator Author)

/test e2e-parallel

2 similar comments
@mgahagan73 (Collaborator Author)

/test e2e-parallel

@mgahagan73 (Collaborator Author)

/test e2e-parallel

@mgahagan73 (Collaborator Author)

/test stage-e2e-parallel

@mgahagan73 (Collaborator Author)

Currently this job cannot run properly in the pre-merge PR testing environment. I confirmed that it works as expected in stage. Is the best course of action here to label it so it only runs in stage? That way it would not block merge requests as we currently run them (MR test jobs run on INT, but under a dev cluster). At any rate, we can probably start reviewing it now.

@mgahagan73 mgahagan73 marked this pull request as ready for review February 11, 2026 18:30
@mgahagan73 (Collaborator Author)

/test e2e-parallel

@mgahagan73 (Collaborator Author)

/test e2e-parallel

@mbukatov (Collaborator) left a comment

Proposing two changes, otherwise it looks good.

labels.RequireNothing,
labels.Critical,
labels.Positive,
labels.AroRpApiCompatible,
Collaborator

This should not have the labels.AroRpApiCompatible label, since this use case won't work with the ARO RP API alone; it relies on ARM integration.

Collaborator

This means that the first environment this test can be executed in is INT.

Collaborator Author

AroRpApiCompatible removed

Collaborator Author

This should run on int, stage and prod but I'll kick off a run on each to double-check.

labels.Critical,
labels.Positive,
labels.AroRpApiCompatible,
labels.TeardownValidation,
Collaborator

Also, you don't need labels.TeardownValidation, since it is meant for per-run cluster tests that execute after a cluster has been deleted.

Collaborator Author

TeardownValidation removed

rgClient := tc.GetARMResourcesClientFactoryOrDie(ctx).NewResourceGroupsClient()
networkClient, err := tc.GetARMNetworkClientFactory(ctx)
Expect(err).NotTo(HaveOccurred())
err = framework.DeleteResourceGroup(ctx, rgClient, networkClient, *resourceGroup.Name, false, 45*time.Minute)
Collaborator

I just wonder if 45 minutes is a reasonable timeout for a full deletion.

Collaborator Author

That's a good point. Since we already allow 45 minutes for the cluster to be removed through the usual means, I'll add 15 extra minutes.

@mgahagan73 (Collaborator Author)

/test stage-e2e-parallel prod-e2e-parallel integration-e2e-parallel


openshift-ci bot commented Feb 25, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mgahagan73
Once this PR has been reviewed and has the lgtm label, please assign raelga for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

