75 changes: 58 additions & 17 deletions CLAUDE.md
Original file line number Diff line number Diff line change
@@ -139,7 +139,8 @@ Use `ginkgo.By()` for major steps ONLY. Do NOT use inside `Eventually` closures:
```go
// CORRECT
ginkgo.By("waiting for cluster to become Ready")
err := h.WaitForClusterPhase(ctx, clusterID, openapi.Ready, timeout)
Eventually(h.PollCluster(ctx, clusterID), timeout, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReady, openapi.ResourceConditionStatusTrue))

// INCORRECT - never do this
Eventually(func() {
@@ -148,18 +149,55 @@ Eventually(func() {
}).Should(Succeed())
```

### Async Operations
### Async Operations — Pollers + Custom Matchers

Use `Eventually` with `g.Expect()` (not `Expect()`):
Use **pollers** (thin functions returning current state) with **custom matchers** (reusable assertions). This keeps `Eventually` visible at the call site and avoids a combinatorial explosion of helper functions.

**Wait for a resource condition** (cluster or nodepool):
```go
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))

Eventually(h.PollNodePool(ctx, clusterID, npID), h.Cfg.Timeouts.NodePool.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReady, openapi.ResourceConditionStatusTrue))
```

**Wait for adapter conditions** (works for both cluster and nodepool adapters):
```go
Eventually(h.PollClusterAdapterStatuses(ctx, clusterID), timeout, h.Cfg.Polling.Interval).
Should(helper.HaveAllAdaptersWithCondition(h.Cfg.Adapters.Cluster, client.ConditionTypeFinalized, openapi.AdapterConditionStatusTrue))

Eventually(h.PollNodePoolAdapterStatuses(ctx, clusterID, npID), timeout, h.Cfg.Polling.Interval).
Should(helper.HaveAllAdaptersAtGeneration(h.Cfg.Adapters.NodePool, expectedGen))
```

**Wait for hard-delete** (resource returns 404):
```go
Eventually(h.PollClusterHTTPStatus(ctx, clusterID), timeout, h.Cfg.Polling.Interval).
Should(Equal(http.StatusNotFound))
```

**Wait for namespace cleanup**:
```go
Eventually(h.PollNamespacesByPrefix(ctx, clusterID), timeout, h.Cfg.Polling.Interval).
Should(BeEmpty())
```

**For one-off complex assertions**, use `Eventually` with `func(g Gomega)` and `g.Expect()` (not `Expect()`):
```go
Eventually(func(g Gomega) {
cluster, err := h.Client.GetCluster(ctx, clusterID)
statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
g.Expect(err).NotTo(HaveOccurred())
g.Expect(cluster.Status.Phase).To(Equal(openapi.Ready))
}, timeout, pollInterval).Should(Succeed())
// complex multi-field validation...
}, timeout, h.Cfg.Polling.Interval).Should(Succeed())
```

Available pollers: `PollCluster`, `PollNodePool`, `PollClusterAdapterStatuses`, `PollNodePoolAdapterStatuses`, `PollClusterHTTPStatus`, `PollNodePoolHTTPStatus`, `PollNamespacesByPrefix` — see `pkg/helper/pollers.go`.

Available matchers: `HaveResourceCondition`, `HaveAllAdaptersWithCondition`, `HaveAllAdaptersAtGeneration` — see `pkg/helper/matchers.go`.
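For contributors adding a new matcher: Gomega matchers implement the `types.GomegaMatcher` interface. The sketch below redeclares that interface locally so it runs without dependencies; the stand-in models and this `HaveResourceCondition` are illustrative only, not the real implementation in `pkg/helper/matchers.go`:

```go
package main

import "fmt"

// GomegaMatcher mirrors gomega's types.GomegaMatcher interface,
// redeclared here so the sketch is dependency-free.
type GomegaMatcher interface {
	Match(actual interface{}) (success bool, err error)
	FailureMessage(actual interface{}) string
	NegatedFailureMessage(actual interface{}) string
}

// Condition is a stand-in for the openapi condition model.
type Condition struct {
	Type   string
	Status string
}

// Resource is a stand-in for a cluster or nodepool with status conditions.
type Resource struct {
	Conditions []Condition
}

type haveResourceConditionMatcher struct {
	condType, status string
}

// HaveResourceCondition succeeds when the resource carries the given
// condition type with the given status.
func HaveResourceCondition(condType, status string) GomegaMatcher {
	return &haveResourceConditionMatcher{condType: condType, status: status}
}

func (m *haveResourceConditionMatcher) Match(actual interface{}) (bool, error) {
	r, ok := actual.(Resource)
	if !ok {
		return false, fmt.Errorf("expected Resource, got %T", actual)
	}
	for _, c := range r.Conditions {
		if c.Type == m.condType && c.Status == m.status {
			return true, nil
		}
	}
	return false, nil
}

func (m *haveResourceConditionMatcher) FailureMessage(actual interface{}) string {
	return fmt.Sprintf("expected %v to have condition %s=%s", actual, m.condType, m.status)
}

func (m *haveResourceConditionMatcher) NegatedFailureMessage(actual interface{}) string {
	return fmt.Sprintf("expected %v not to have condition %s=%s", actual, m.condType, m.status)
}

func main() {
	r := Resource{Conditions: []Condition{{Type: "Ready", Status: "True"}}}
	ok, _ := HaveResourceCondition("Ready", "True").Match(r)
	fmt.Println(ok) // true
}
```

`FailureMessage` is what `Eventually` prints on timeout, so invest in a descriptive message there.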

**Do NOT** create `WaitFor*` wrapper functions that hide `Eventually` inside helpers.

### Resource Cleanup

ALWAYS implement cleanup in `AfterEach`:
@@ -202,10 +240,11 @@ Available variables: `.Random`, `.Timestamp`. See `pkg/client/payload.go`.
- **Use `ginkgo.By()` in `Eventually`**: Only use at top-level test steps
- **Import test packages**: Do NOT import `e2e/*` packages in production code
- **Edit OpenAPI schema**: Schema is maintained in hyperfleet-api repo
- **Create `WaitFor*` wrapper functions**: Use pollers + custom matchers instead (see Async Operations)

### DO

- **Use helper functions**: Prefer `h.WaitForClusterPhase()` over manual polling
- **Use pollers + matchers**: Prefer `Eventually(h.PollCluster(...)).Should(helper.HaveResourceCondition(...))` over raw `Eventually` with inline closures
- **Use config values**: `h.Cfg.Timeouts.*` for timeouts, `h.Cfg.Polling.*` for intervals
- **Store resource IDs**: Save IDs in variables for cleanup
- **Check errors**: Use `Expect(err).NotTo(HaveOccurred())`
@@ -268,23 +307,25 @@ Expect(err).NotTo(HaveOccurred())
clusterID = *cluster.Id
```

### Wait for Phase
### Wait for Condition

```go
err = h.WaitForClusterPhase(ctx, clusterID, openapi.Ready, h.Cfg.Timeouts.Cluster.Ready)
Expect(err).NotTo(HaveOccurred())
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
```

### Verify Conditions
### Wait for All Adapters

```go
statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
Expect(err).NotTo(HaveOccurred())
Eventually(h.PollClusterAdapterStatuses(ctx, clusterID), h.Cfg.Timeouts.Adapter.Processing, h.Cfg.Polling.Interval).
Should(helper.HaveAllAdaptersAtGeneration(h.Cfg.Adapters.Cluster, expectedGen))
```

for _, adapter := range statuses.Items {
hasApplied := h.HasCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.True)
Expect(hasApplied).To(BeTrue())
}
### Verify Conditions (synchronous)

```go
hasReconciled := h.HasResourceCondition(cluster.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
Expect(hasReconciled).To(BeTrue())
```

## Documentation
106 changes: 69 additions & 37 deletions docs/development.md
@@ -72,6 +72,7 @@ import (
. "github.com/onsi/gomega"

"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/api/openapi"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/client"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/helper"
"github.com/openshift-hyperfleet/hyperfleet-e2e/pkg/labels"
)
@@ -90,13 +91,13 @@ var _ = ginkgo.Describe(testName,

ginkgo.It("should create cluster successfully", func(ctx context.Context) {
ginkgo.By("submitting cluster creation request")
cluster, err := h.Client.CreateClusterFromPayload(ctx, "testdata/payloads/clusters/cluster-request.json")
cluster, err := h.Client.CreateClusterFromPayload(ctx, h.TestDataPath("payloads/clusters/cluster-request.json"))
Expect(err).NotTo(HaveOccurred())
clusterID = *cluster.Id

ginkgo.By("waiting for cluster to become Ready")
err = h.WaitForClusterPhase(ctx, clusterID, openapi.Ready, h.Cfg.Timeouts.Cluster.Ready)
Expect(err).NotTo(HaveOccurred())
ginkgo.By("waiting for cluster to become Reconciled")
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
})

ginkgo.AfterEach(func(ctx context.Context) {
@@ -209,40 +210,70 @@ ginkgo.AfterEach(func(ctx context.Context) {
```go
// Basic assertions
Expect(err).NotTo(HaveOccurred())
Expect(cluster.ID).NotTo(BeEmpty())
Expect(cluster.Status.Phase).To(Equal(openapi.Ready))
Expect(cluster.Id).NotTo(BeNil())
Expect(cluster.Generation).To(Equal(int32(1)))

// Async: use pollers + custom matchers (preferred)
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))

// Eventually for async operations
// Async: use func(g Gomega) for complex one-off assertions
Eventually(func(g Gomega) {
cluster, err := h.Client.GetCluster(ctx, clusterID)
statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
g.Expect(err).NotTo(HaveOccurred())
g.Expect(cluster.Status.Phase).To(Equal(openapi.Ready))
}, h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).Should(Succeed())
// multi-field validation...
}, timeout, h.Cfg.Polling.Interval).Should(Succeed())
```

**Important**: Inside `Eventually` closures, use `g.Expect()` instead of `Expect()`. A failed `g.Expect()` fails only the current polling attempt, so `Eventually` can retry; a bare `Expect()` aborts the entire test on the first failed poll.

## Using Helper Functions
## Using Pollers and Matchers

### Wait for Cluster Ready
The framework uses **pollers** (functions that fetch current state) and **custom matchers** (reusable Gomega assertions) to compose async checks. This avoids a combinatorial explosion of `WaitFor*` helper functions.
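To make the "combinatorial explosion" concrete, here is a toy sketch (plain Go, all names hypothetical): two pollers and two matchers yield four distinct checks through one generic composer, whereas a `WaitFor*` style would need a dedicated helper for every resource × condition pair:

```go
package main

import "fmt"

// Poller fetches a snapshot of some resource's state.
type Poller[T any] func() T

// Matcher is a reusable assertion over one snapshot.
type Matcher[T any] func(T) bool

// check composes any poller with any matcher; a real suite would wrap
// this composition in a retry loop like Gomega's Eventually.
func check[T any](poll Poller[T], match Matcher[T]) bool { return match(poll()) }

type resource struct{ conditions map[string]string }

func main() {
	// Two pollers...
	pollCluster := Poller[resource](func() resource {
		return resource{conditions: map[string]string{"Ready": "True", "Reconciled": "True"}}
	})
	pollNodePool := Poller[resource](func() resource {
		return resource{conditions: map[string]string{"Ready": "False"}}
	})

	// ...two matchers...
	ready := Matcher[resource](func(r resource) bool { return r.conditions["Ready"] == "True" })
	reconciled := Matcher[resource](func(r resource) bool { return r.conditions["Reconciled"] == "True" })

	// ...four combinations, no new helpers needed. A WaitFor* style would
	// need WaitForClusterReady, WaitForClusterReconciled,
	// WaitForNodePoolReady, WaitForNodePoolReconciled, and so on (N×M funcs).
	fmt.Println(check(pollCluster, ready))       // true
	fmt.Println(check(pollCluster, reconciled))  // true
	fmt.Println(check(pollNodePool, ready))      // false
	fmt.Println(check(pollNodePool, reconciled)) // false
}
```

Adding a new condition means writing one matcher, not one helper per resource type.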

### Wait for Resource Condition

```go
err = h.WaitForClusterPhase(ctx, clusterID, openapi.Ready, h.Cfg.Timeouts.Cluster.Ready)
Expect(err).NotTo(HaveOccurred())
// Cluster
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))

// NodePool (same matcher, different poller)
Eventually(h.PollNodePool(ctx, clusterID, npID), h.Cfg.Timeouts.NodePool.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReady, openapi.ResourceConditionStatusTrue))
```

### Check Adapter Conditions
### Wait for Adapter Conditions

```go
statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
Expect(err).NotTo(HaveOccurred())
// All adapters finalized
Eventually(h.PollClusterAdapterStatuses(ctx, clusterID), timeout, h.Cfg.Polling.Interval).
Should(helper.HaveAllAdaptersWithCondition(h.Cfg.Adapters.Cluster, client.ConditionTypeFinalized, openapi.AdapterConditionStatusTrue))

for _, adapter := range statuses.Items {
hasApplied := h.HasCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.True)
Expect(hasApplied).To(BeTrue())
}
// All adapters at a specific generation with Applied+Available+Health=True
Eventually(h.PollClusterAdapterStatuses(ctx, clusterID), timeout, h.Cfg.Polling.Interval).
Should(helper.HaveAllAdaptersAtGeneration(h.Cfg.Adapters.Cluster, expectedGen))
```

### Wait for Hard-Delete

```go
Eventually(h.PollClusterHTTPStatus(ctx, clusterID), timeout, h.Cfg.Polling.Interval).
Should(Equal(http.StatusNotFound))
```

### Check Conditions Synchronously

```go
hasReconciled := h.HasResourceCondition(cluster.Status.Conditions, client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
Expect(hasReconciled).To(BeTrue())

hasApplied := h.HasAdapterCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.AdapterConditionStatusTrue)
Expect(hasApplied).To(BeTrue())
```

Available pollers: see `pkg/helper/pollers.go`. Available matchers: see `pkg/helper/matchers.go`.

## Best Practices

### DO ✅
@@ -253,7 +284,7 @@
- Clean up resources in `AfterEach`
- Use timeout values from config
- Store resource IDs for cleanup
- Use helper functions when available
- Use pollers + custom matchers for async waits (see `pkg/helper/pollers.go`, `pkg/helper/matchers.go`)

### DON'T ❌

@@ -262,6 +293,7 @@ for _, adapter := range statuses.Items {
- Don't hardcode timeouts (use config values)
- Don't skip cleanup (unless debugging)
- Don't ignore errors
- Don't create `WaitFor*` wrapper functions that hide `Eventually` — use pollers + matchers instead

## Adding New Tests

@@ -321,31 +353,31 @@ cluster, err := h.Client.CreateClusterFromPayload(ctx, "testdata/payloads/cluste
Expect(err).NotTo(HaveOccurred())
```

### Wait for Phase Transition
### Wait for Condition

```go
Eventually(func(g Gomega) {
cluster, err := h.Client.GetCluster(ctx, clusterID)
g.Expect(err).NotTo(HaveOccurred())
g.Expect(cluster.Status.Phase).To(Equal(openapi.Ready))
}, timeout, pollInterval).Should(Succeed())
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue))
```

### Verify All Adapter Conditions
### Wait for All Adapters at Generation

```go
Eventually(h.PollClusterAdapterStatuses(ctx, clusterID), h.Cfg.Timeouts.Adapter.Processing, h.Cfg.Polling.Interval).
Should(helper.HaveAllAdaptersAtGeneration(h.Cfg.Adapters.Cluster, expectedGen))
```

### Verify Adapter Conditions Synchronously

```go
statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
Expect(err).NotTo(HaveOccurred())

for _, adapter := range statuses.Items {
adapterName := adapter.Adapter
ginkgo.By(fmt.Sprintf("verifying adapter %s conditions", adapterName))

hasApplied := h.HasCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.True)
Expect(hasApplied).To(BeTrue(), "adapter %s should have Applied=True", adapterName)

hasAvailable := h.HasCondition(adapter.Conditions, client.ConditionTypeAvailable, openapi.True)
Expect(hasAvailable).To(BeTrue(), "adapter %s should have Available=True", adapterName)
Expect(h.HasAdapterCondition(adapter.Conditions, client.ConditionTypeApplied, openapi.AdapterConditionStatusTrue)).To(BeTrue(),
"adapter %s should have Applied=True", adapter.Adapter)
Expect(h.HasAdapterCondition(adapter.Conditions, client.ConditionTypeAvailable, openapi.AdapterConditionStatusTrue)).To(BeTrue(),
"adapter %s should have Available=True", adapter.Adapter)
}
```

10 changes: 2 additions & 8 deletions e2e/cluster/concurrent_creation.go
@@ -87,14 +87,8 @@ var _ = ginkgo.Describe("[Suite: cluster][concurrent] System can process concurr
ginkgo.By("Wait for all clusters to reach Ready=True and Available=True")
for i, clusterID := range clusterIDs {
ginkgo.GinkgoWriter.Printf("Waiting for cluster %d (%s) to become Ready...\n", i, clusterID)
err := h.WaitForClusterCondition(
ctx,
clusterID,
client.ConditionTypeReady,
openapi.ResourceConditionStatusTrue,
h.Cfg.Timeouts.Cluster.Ready,
)
Expect(err).NotTo(HaveOccurred(), "cluster %d (%s) should reach Ready=True", i, clusterID)
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReady, openapi.ResourceConditionStatusTrue))

cluster, err := h.Client.GetCluster(ctx, clusterID)
Expect(err).NotTo(HaveOccurred(), "failed to get cluster %d (%s)", i, clusterID)
20 changes: 4 additions & 16 deletions e2e/cluster/creation.go
@@ -130,14 +130,8 @@ var _ = ginkgo.Describe("[Suite: cluster][baseline] Cluster Resource Type Lifecy
ginkgo.By("Verify final cluster state")
// Wait for cluster Ready condition and verify both Ready and Available conditions are True
// This confirms the cluster has reached the desired end state
err = h.WaitForClusterCondition(
ctx,
clusterID,
client.ConditionTypeReady,
openapi.ResourceConditionStatusTrue,
h.Cfg.Timeouts.Cluster.Ready,
)
Expect(err).NotTo(HaveOccurred(), "cluster Ready condition should transition to True")
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReady, openapi.ResourceConditionStatusTrue))

finalCluster, err := h.Client.GetCluster(ctx, clusterID)
Expect(err).NotTo(HaveOccurred(), "failed to get final cluster state")
@@ -238,14 +232,8 @@ var _ = ginkgo.Describe("[Suite: cluster][baseline] Cluster Resource Type Lifecy
ginkgo.By("Verify final cluster state to ensure Ready before cleanup")
// Wait for cluster Ready condition to prevent namespace deletion conflicts
// Without this, adapters may still be creating resources during cleanup
err := h.WaitForClusterCondition(
ctx,
clusterID,
client.ConditionTypeReady,
openapi.ResourceConditionStatusTrue,
h.Cfg.Timeouts.Cluster.Ready,
)
Expect(err).NotTo(HaveOccurred(), "cluster Ready condition should transition to True before cleanup")
Eventually(h.PollCluster(ctx, clusterID), h.Cfg.Timeouts.Cluster.Ready, h.Cfg.Polling.Interval).
Should(helper.HaveResourceCondition(client.ConditionTypeReady, openapi.ResourceConditionStatusTrue))
})
})
