Update lastTransitionTime for volume and nic in the machineCondition #1428
Conversation
Force-pushed from 2e148a0 to 6c44f35
Force-pushed from 6c44f35 to 30733fa
@coderabbitai review

✅ Actions performed: Review triggered.
Walkthrough
Machine status updates now merge newly computed conditions with existing ones via mergeMachineConditions.

Changes
Sequence Diagram(s)
No sequence diagrams generated — changes are internal condition merging and per-item timestamp aggregation, not a new multi-component control flow.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @poollet/machinepoollet/controllers/machine_controller.go:
- Around line 628-635: When status == corev1.ConditionTrue and iterating
nicStatuses to set lastTransitionTime, the code currently picks the first
non-nil LastStateTransitionTime; instead, choose the most recent timestamp (the
max) among all nicStatus.LastStateTransitionTime values, as
computeVolumesReadyCondition does. Update the loop over nicStatuses to compare each
non-nil nicStatus.LastStateTransitionTime and assign lastTransitionTime the
latest (largest) time, leaving lastTransitionTime nil if none is found (see the
sketch after this list).
- Around line 651-671: The mergeMachineConditions function currently appends new
conditions and never removes old ones, causing unbounded growth; change it to
replace or update the existing condition entry for the same Type instead of
appending: build a map from condition Type to its index from the existing slice,
then for each newCond in current either overwrite the existing entry at that
index (updating Status, Reason, Message, and LastTransitionTime appropriately)
or insert if not present; ensure you update lastStatus bookkeeping (the same map
or recompute) so duplicate types are not left behind and the Conditions slice
remains deduplicated per Type.
- Around line 586-593: The current block that sets lastTransitionTime when
status == corev1.ConditionTrue takes the first non-nil
vs.LastStateTransitionTime from volumeStatuses, which can understate when the
condition actually transitioned; instead, iterate all volumeStatuses and compute
the most recent (maximum) LastStateTransitionTime (comparing dereferenced
times safely, skipping nils) and assign that max to lastTransitionTime so it
reflects the latest volume transition when the overall condition is true (the
sketch after this list applies here as well).
- Around line 554-556: Change the hard-coded string Type for the MachineReady
condition to use the explicit enum type like the other conditions: replace the
current Type: "MachineReady" in the Machine condition construction with Type:
computev1alpha1.MachineConditionType("MachineReady") so it matches the pattern
used for VolumesReady and NetworkInterfacesReady within the same controller.
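A minimal sketch of the max-timestamp selection that the first and third items describe, pulled into a hypothetical helper (`latestTransitionTime` is an illustrative name, not one from the PR; the `*metav1.Time` field shape is assumed from the review context):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// latestTransitionTime returns the most recent of the given timestamps,
// skipping nil entries, or nil if none is set. The caller would pass the
// LastStateTransitionTime of every volume/NIC status when the aggregate
// condition is True.
func latestTransitionTime(times []*metav1.Time) *metav1.Time {
	var latest *metav1.Time
	for _, t := range times {
		if t == nil {
			continue
		}
		if latest == nil || latest.Before(t) {
			latest = t
		}
	}
	return latest
}
```

With a helper like this, computeVolumesReadyCondition and the NIC counterpart can share one code path, and a nil result still allows the existing fallback to the current time.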
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
poollet/machinepoollet/controllers/machine_controller.go
poollet/machinepoollet/controllers/machine_controller_test.go
🧰 Additional context used
🧬 Code graph analysis (2)
poollet/machinepoollet/controllers/machine_controller.go (3)
iri/apis/machine/v1alpha1/api.pb.go (3)
Conditions (991-1000), Conditions (1013-1013), Conditions (1028-1030)
api/compute/v1alpha1/machine_types.go (4)
VolumeStateAttached (186-186), MachineCondition (193-204), MachineConditionType (190-190), NetworkInterfaceStateAttached (162-162)
internal/apis/compute/machine_types.go (4)
VolumeStateAttached (174-174), MachineCondition (181-192), MachineConditionType (178-178), NetworkInterfaceStateAttached (150-150)
poollet/machinepoollet/controllers/machine_controller_test.go (2)
api/compute/v1alpha1/machine_types.go (1)
MachineConditionType (190-190)
internal/apis/compute/machine_types.go (1)
MachineConditionType (178-178)
🔇 Additional comments (4)
poollet/machinepoollet/controllers/machine_controller_test.go (2)
199-220: LGTM! Test expectations correctly updated. The test assertions properly align with the controller changes:
- Condition type renamed to "MachineReady"
- Volume and NIC readiness messages now use dynamic resource names (volume.Name, nic.Name)
(A sketch of such an assertion follows after these comments.)

508-536: Condition type updates are consistent with the controller changes.

poollet/machinepoollet/controllers/machine_controller.go (2)

507-508: Condition handling refactored to use incremental merging. The status update now computes conditions and merges them with existing conditions, supporting condition history persistence as per the PR objectives.

1022-1022: Good catch fixing the log message. The log message now correctly reflects that this handler deals with network interfaces, not secrets.
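For reference, a hedged sketch of what such a condition assertion can look like in a Ginkgo/Gomega test (matcher structure and variable names are illustrative, not copied from the repo):

```go
// Inside an It(...) block, with gomega dot-imported.
Expect(machine.Status.Conditions).To(ContainElement(SatisfyAll(
	HaveField("Type", computev1alpha1.MachineConditionType("MachineReady")),
	HaveField("Status", corev1.ConditionTrue),
)))
```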
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @poollet/machinepoollet/controllers/machine_controller.go:
- Around line 649-669: mergeMachineConditions currently appends a new entry each
time a condition status changes, causing duplicate entries per condition type;
change it to update conditions in-place and return exactly one condition per
type: iterate current conditions and for each newCond find the existing
condition in the result (by comparing condition Type), if found update its
Status, Reason, Message (and set LastTransitionTime to
newCond.LastTransitionTime or now if Status changed), otherwise append newCond;
ensure you preserve other fields and do not keep multiple entries for the same
Type so the returned slice contains a single, updated condition per type (refer
to mergeMachineConditions, MachineCondition, and corev1.ConditionStatus); a sketch of this merge follows below.
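A hedged sketch of the in-place merge this prompt asks for, assuming the MachineCondition fields referenced elsewhere in the review (Type, Status, Reason, Message, LastTransitionTime) and the usual ironcore import alias:

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	computev1alpha1 "github.com/ironcore-dev/ironcore/api/compute/v1alpha1"
)

// mergeConditionsInPlace keeps exactly one condition per type: existing
// entries are updated, new types are appended, and LastTransitionTime is
// bumped only when the status actually changes. Illustrative only.
func mergeConditionsInPlace(
	existing, current []computev1alpha1.MachineCondition,
	now metav1.Time,
) []computev1alpha1.MachineCondition {
	merged := append([]computev1alpha1.MachineCondition{}, existing...)
	indexByType := make(map[computev1alpha1.MachineConditionType]int, len(merged))
	for i, c := range merged {
		indexByType[c.Type] = i
	}
	for _, newCond := range current {
		i, ok := indexByType[newCond.Type]
		if !ok {
			merged = append(merged, newCond)
			indexByType[newCond.Type] = len(merged) - 1
			continue
		}
		if merged[i].Status != newCond.Status {
			// Prefer the incoming transition time; fall back to now.
			lt := newCond.LastTransitionTime
			if lt.IsZero() {
				lt = now
			}
			merged[i].LastTransitionTime = lt
		}
		merged[i].Status = newCond.Status
		merged[i].Reason = newCond.Reason
		merged[i].Message = newCond.Message
	}
	return merged
}
```

Note this yields a single entry per type, the usual Kubernetes convention; the revision under review later in the thread instead keeps a capped history per type (up to 10 entries).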
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
poollet/machinepoollet/controllers/machine_controller.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: check-codegen
- GitHub Check: test
- GitHub Check: lint
🔇 Additional comments (4)
poollet/machinepoollet/controllers/machine_controller.go (4)
507-508: Status update flow looks correct. The approach of computing conditions then merging with existing ones is sound. The correctness depends on the mergeMachineConditions implementation — see comments on that function below.

554-561: LGTM: Condition type rename is appropriate. Using "MachineReady" is more explicit and avoids ambiguity with generic "Ready" conditions. Ensure any external consumers or dashboards querying this condition type are updated accordingly.

563-606: Per-volume transition time logic is well implemented. The approach correctly:
- Uses the specific volume's LastStateTransitionTime when not ready
- Finds the maximum transition time when all volumes are ready (representing when the condition became True)
- Falls back to now when no transition time is available
- Handles local-disk volumes without a VolumeRef.Name

608-647: Network interface transition time logic mirrors volume logic correctly. Consistent implementation with computeVolumesReadyCondition. The per-NIC transition time handling follows the same correct pattern.
Force-pushed from f115742 to a5f1d86
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @poollet/machinepoollet/controllers/machine_controller.go:
- Around line 650-689: The mergeMachineConditions function produces
non-deterministic ordering because it iterates over the conditionsByType map; to
fix, gather the map keys into a slice, sort the keys (e.g., lexicographically),
then iterate over the sorted keys to append each type's conditions to result
(keeping the existing per-type sort by LastTransitionTime and the
maxConditionsPerType trimming). Update the loop that currently does "for _,
conds := range conditionsByType" to instead iterate "for _, typ := range
sortedKeys { conds := conditionsByType[typ]; ... }" so result ordering is
deterministic across reconciles (see the sketch below).
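As a standalone illustration of that fix (names mirror the prompt; the surrounding helper context is assumed):

```go
import (
	"sort"

	computev1alpha1 "github.com/ironcore-dev/ironcore/api/compute/v1alpha1"
)

// flattenDeterministic ranges over sorted map keys instead of the map
// itself, so the resulting slice order is stable across reconciles.
func flattenDeterministic(
	conditionsByType map[string][]computev1alpha1.MachineCondition,
) []computev1alpha1.MachineCondition {
	types := make([]string, 0, len(conditionsByType))
	for t := range conditionsByType {
		types = append(types, t)
	}
	sort.Strings(types)

	var result []computev1alpha1.MachineCondition
	for _, t := range types {
		// The per-type sort by LastTransitionTime and the
		// maxConditionsPerType trimming from the original function
		// would run here before appending.
		result = append(result, conditionsByType[t]...)
	}
	return result
}
```

This is the shape the function takes in the later revision quoted further down the thread.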
🧹 Nitpick comments (1)
poollet/machinepoollet/controllers/machine_controller.go (1)
556-556: Consider defining constants for condition types. Using string literals for condition types like "MachineReady", "VolumesReady", and "NetworkInterfacesReady" works but reduces maintainability. Define constants in the API types package for consistency and to prevent typos.

```go
// In api/compute/v1alpha1/machine_types.go
const (
	MachineConditionMachineReady           MachineConditionType = "MachineReady"
	MachineConditionVolumesReady           MachineConditionType = "VolumesReady"
	MachineConditionNetworkInterfacesReady MachineConditionType = "NetworkInterfacesReady"
)
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
poollet/machinepoollet/controllers/machine_controller.go
🧰 Additional context used
🧬 Code graph analysis (1)
poollet/machinepoollet/controllers/machine_controller.go (3)
iri/apis/machine/v1alpha1/api.pb.go (3)
Conditions (991-1000), Conditions (1013-1013), Conditions (1028-1030)
api/compute/v1alpha1/machine_types.go (4)
MachineConditionType (190-190), VolumeStateAttached (186-186), MachineCondition (193-204), NetworkInterfaceStateAttached (162-162)
internal/apis/compute/machine_types.go (4)
MachineConditionType (178-178), VolumeStateAttached (174-174), MachineCondition (181-192), NetworkInterfaceStateAttached (150-150)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: test
- GitHub Check: lint
🔇 Additional comments (5)
poollet/machinepoollet/controllers/machine_controller.go (5)
508-509: LGTM! The workflow correctly computes conditions based on current state and merges them with existing conditions to maintain history.

517-541: LGTM! The function correctly aggregates machine conditions, and the empty Type checks provide defensive programming against invalid conditions.

564-607: LGTM! The per-item LastTransitionTime logic correctly:
- Uses the failing volume's transition time when not all volumes are ready
- Derives the latest transition time across all volumes when all are ready
- Falls back to the current time when no transition times are available

609-648: LGTM! The network interface condition logic correctly mirrors the volume logic, with appropriate per-item LastTransitionTime handling.

661-667: Condition history correctly tracks status transitions only. The logic appends conditions only when status changes, which is the correct behavior for maintaining a transition history. Note that this means Reason and Message values are preserved from the point of transition, not updated if they change while status remains the same.
Force-pushed from a5f1d86 to 79e2fa4
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @poollet/machinepoollet/controllers/machine_controller.go:
- Around line 650-696: mergeMachineConditions currently retains existing
condition types forever so stale types (e.g., VolumesReady) persist; fix by
computing a set of condition types present in the incoming current slice and
prune any existing conditions whose type is not in that set (or alternatively
append a new "no longer applicable" condition with
Status=Unknown/Reason="ConditionNoLongerApplicable") before grouping/sorting;
update the logic in mergeMachineConditions (use currentTypes map built from
current, filter the initial updated slice accordingly, and then continue with
the existing grouping/limit logic using maxConditionsPerType) so stale condition
types are removed or explicitly marked (see the sketch below).
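A minimal sketch of the pruning step, under the same assumed types (`pruneStaleConditions` is an illustrative name, not one from the PR):

```go
import (
	computev1alpha1 "github.com/ironcore-dev/ironcore/api/compute/v1alpha1"
)

// pruneStaleConditions drops existing conditions whose type no longer
// appears in the freshly computed set, so obsolete types (e.g. VolumesReady
// after all volumes are removed) do not linger in the status.
func pruneStaleConditions(
	existing, current []computev1alpha1.MachineCondition,
) []computev1alpha1.MachineCondition {
	currentTypes := make(map[computev1alpha1.MachineConditionType]struct{}, len(current))
	for _, c := range current {
		currentTypes[c.Type] = struct{}{}
	}
	var kept []computev1alpha1.MachineCondition
	for _, c := range existing {
		if _, ok := currentTypes[c.Type]; ok {
			kept = append(kept, c)
		}
	}
	return kept
}
```

Running this before the grouping and trimming keeps the rest of mergeMachineConditions unchanged.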
🧹 Nitpick comments (1)
poollet/machinepoollet/controllers/machine_controller.go (1)
556-556: Consider defining constants for condition types. The condition type strings "MachineReady", "VolumesReady", and "NetworkInterfacesReady" are used as literals. Defining them as constants would prevent typos and improve maintainability.

Example constant definitions:

```go
const (
	MachineConditionTypeMachineReady           = computev1alpha1.MachineConditionType("MachineReady")
	MachineConditionTypeVolumesReady           = computev1alpha1.MachineConditionType("VolumesReady")
	MachineConditionTypeNetworkInterfacesReady = computev1alpha1.MachineConditionType("NetworkInterfacesReady")
)
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
poollet/machinepoollet/controllers/machine_controller.go
🧰 Additional context used
🧬 Code graph analysis (1)
poollet/machinepoollet/controllers/machine_controller.go (4)
iri/apis/machine/v1alpha1/api.pb.go (3)
Conditions (991-1000), Conditions (1013-1013), Conditions (1028-1030)
api/compute/v1alpha1/machine_types.go (4)
MachineConditionType (190-190), VolumeStateAttached (186-186), MachineCondition (193-204), NetworkInterfaceStateAttached (162-162)
internal/apis/compute/machine_types.go (4)
MachineConditionType (178-178), VolumeStateAttached (174-174), MachineCondition (181-192), NetworkInterfaceStateAttached (150-150)
client-go/applyconfigurations/compute/v1alpha1/machinecondition.go (1)
MachineCondition (26-28)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: lint
- GitHub Check: test
🔇 Additional comments (4)
poollet/machinepoollet/controllers/machine_controller.go (4)
508-509: LGTM! The refactoring to compute conditions separately and then merge with existing conditions is a clean approach for persisting condition history while deduplicating by status changes.

564-607: LGTM! The per-item LastTransitionTime handling correctly derives the condition's transition time from individual volume states. The fallback to vs.Name for local-disk volumes (where no actual volume resource exists) is good defensive handling.

669-693: LGTM on the pruning logic. The approach of sorting by LastTransitionTime ascending and keeping the last 10 entries per type correctly preserves the most recent condition history. The deterministic ordering by type name ensures consistent results across reconciliations. Consider adding a brief comment explaining why maxConditionsPerType = 10 was chosen if there's a specific rationale (e.g., memory constraints, observability tooling requirements).

617-623: NetworkInterfaceRef.Name is always populated — the comparison to volume handling is not applicable. Unlike volumes, which can have an empty VolumeRef.Name for local-disk volumes (requiring the fallback on lines 575-577), NetworkInterfaceRef.Name is guaranteed to be populated by the controller. The MachineNetworkInterfaceName() function (in api/compute/v1alpha1/util.go) returns either:
- The explicit NetworkInterfaceRef.Name if a reference exists
- A generated ephemeral name (via MachineEphemeralNetworkInterfaceName()) if an ephemeral interface is used
- Empty only if neither is set (an invalid state filtered earlier)
There is no "local-disk equivalent" for NICs that would result in empty names. The code at lines 617-623 is correct as-is.
Likely an incorrect or invalid review comment.
Force-pushed from 79e2fa4 to f296da2
Actionable comments posted: 0
🧹 Nitpick comments (2)
poollet/machinepoollet/controllers/machine_controller.go (2)
556-556: Define constants for MachineCondition types to align with codebase patterns. The strings "MachineReady" (line 556), "VolumesReady" (line 601), and "NetworkInterfacesReady" (line 642) are hardcoded. Consider defining these as constants in api/compute/v1alpha1/machine_types.go, following the established pattern used for other string types like Power, VolumeState, NetworkInterfaceState, and MachineState.

617-626: The NetworkInterfaceRef.Name is always populated during status creation (both convertIRINetworkInterfaceStatus() and the fallback path in getNetworkInterfaceStatusesForMachine() set it to nicName, which is derived from machineNic.Name). Unlike the volumes case, where VolumeRef.Name can genuinely be empty for local-disk volumes, there is no equivalent scenario for network interfaces where NetworkInterfaceRef.Name would be empty. The current code is correct and does not require defensive fallback logic. If consistency with the volumes pattern is desired for future-proofing, the suggested refactor could be applied, but it addresses no actual defect.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
poollet/machinepoollet/controllers/machine_controller.go
poollet/machinepoollet/controllers/machine_controller_test.go
🧰 Additional context used
🧬 Code graph analysis (1)
poollet/machinepoollet/controllers/machine_controller_test.go (2)
api/compute/v1alpha1/machine_types.go (1)
MachineConditionType (190-190)
internal/apis/compute/machine_types.go (1)
MachineConditionType (178-178)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: lint
- GitHub Check: test
🔇 Additional comments (6)
poollet/machinepoollet/controllers/machine_controller.go (3)
508-509: LGTM! The refactoring to compute conditions separately and then merge them with existing conditions is clean and aligns with the PR objective to persist condition history.

564-607: LGTM! The per-item LastTransitionTime logic correctly handles:
- Local-disk volumes (fallback to vs.Name when VolumeRef.Name is empty)
- The not-ready state (uses the failing volume's transition time)
- The all-ready state (uses the latest transition time among all volumes)
The fallback to now when no transition time is available is appropriate.

650-696: LGTM! The mergeMachineConditions function correctly:
- Preserves existing conditions
- Appends new conditions only on status transitions (avoiding duplicates)
- Caps history at 10 entries per type to prevent unbounded growth
- Uses deterministic sorting (by type, then by LastTransitionTime) for consistent output
The logic aligns well with the PR objective to persist condition history.

poollet/machinepoollet/controllers/machine_controller_test.go (3)

199-220: LGTM! The test assertions correctly use dynamic resource names (volume.Name, nic.Name) to match the updated controller behavior, and the condition type is updated to "MachineReady" as expected.

508-516: LGTM! The test correctly validates that a running machine has the MachineReady condition set to True.

528-536: LGTM! The test correctly validates that a terminating machine has the MachineReady condition set to False.
Force-pushed from f296da2 to 3a06b52
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@poollet/machinepoollet/controllers/machine_controller.go`:
- Around line 650-696: The mergeMachineConditions function currently keeps all
existing conditions even when a condition type no longer appears in current
(causing stale types to persist); update mergeMachineConditions to detect
condition types present in current (build a set from the current slice) and drop
any existing conditions whose type is absent from that set so obsolete types are
removed from the merged result; keep the rest of the logic (appending new status
changes, grouping by type, enforcing maxConditionsPerType, sorting) intact and
reference the symbols mergeMachineConditions, existing, current,
conditionsByType and maxConditionsPerType when making the change.
🧹 Nitpick comments (1)
poollet/machinepoollet/controllers/machine_controller.go (1)
686-692: Consider edge case: condition comparison uses last occurrence in existing slice. The lastStatus map is built by iterating over existing conditions, which may contain multiple entries per type. The comparison at line 663 uses the status of the last occurrence in existing (due to map overwrites during iteration). Since conditions are sorted ascending by time after previous merges, this correctly reflects the most recent status.

This is correct behavior, but adding a brief comment explaining that lastStatus captures the most recent status per type would improve maintainability.

📝 Suggested documentation improvement:

```diff
 func (r *MachineReconciler) mergeMachineConditions(
 	existing []computev1alpha1.MachineCondition,
 	current []computev1alpha1.MachineCondition,
 ) []computev1alpha1.MachineCondition {
 	updated := append([]computev1alpha1.MachineCondition{}, existing...)
+	// Build map of most recent status per condition type.
+	// Since existing conditions are sorted by time (oldest first),
+	// the last occurrence per type reflects the most recent status.
 	lastStatus := make(map[string]corev1.ConditionStatus, len(existing))
 	for _, c := range existing {
 		lastStatus[string(c.Type)] = c.Status
 	}
```
```go
func (r *MachineReconciler) mergeMachineConditions(
	existing []computev1alpha1.MachineCondition,
	current []computev1alpha1.MachineCondition,
) []computev1alpha1.MachineCondition {
	updated := append([]computev1alpha1.MachineCondition{}, existing...)

	lastStatus := make(map[string]corev1.ConditionStatus, len(existing))
	for _, c := range existing {
		lastStatus[string(c.Type)] = c.Status
	}

	for _, newCond := range current {
		conditionType := string(newCond.Type)
		if prev, ok := lastStatus[conditionType]; !ok || prev != newCond.Status {
			updated = append(updated, newCond)
			lastStatus[conditionType] = newCond.Status
		}
	}

	// Group conditions by type and keep only the latest 10 per type (by LastTransitionTime)
	const maxConditionsPerType = 10
	conditionsByType := make(map[string][]computev1alpha1.MachineCondition)
	for _, cond := range updated {
		condType := string(cond.Type)
		conditionsByType[condType] = append(conditionsByType[condType], cond)
	}

	var result []computev1alpha1.MachineCondition
	// Sort condition types for deterministic ordering
	types := make([]string, 0, len(conditionsByType))
	for t := range conditionsByType {
		types = append(types, t)
	}
	sort.Strings(types)
	for _, t := range types {
		conds := conditionsByType[t]
		sort.Slice(conds, func(i, j int) bool {
			return conds[i].LastTransitionTime.Before(&conds[j].LastTransitionTime)
		})
		if len(conds) > maxConditionsPerType {
			conds = conds[len(conds)-maxConditionsPerType:]
		}
		result = append(result, conds...)
	}

	return result
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check if there's documentation or comments about condition history retention behavior
rg -n -i "stale|history|retain|persist" poollet/machinepoollet/controllers/machine_controller.go
# Check if there are tests covering the stale condition scenario
rg -n -A 5 "mergeMachineConditions" poollet/machinepoollet/controllers/machine_controller_test.go
```

Repository: ironcore-dev/ironcore
Length of output: 47

🏁 Script executed:

```bash
#!/bin/bash
# List files to understand directory structure
find . -name "machine_controller.go" -type f
find . -name "machine_controller_test.go" -type f
# Check if the files exist and look for any comments around condition handling
find . -name "machine_controller.go" -type f -exec head -700 {} \; | tail -100
```

Repository: ironcore-dev/ironcore
Length of output: 3575

🏁 Script executed:

```bash
#!/bin/bash
# Check for mergeMachineConditions tests
grep -n -A 20 "mergeMachineConditions" ./poollet/machinepoollet/controllers/machine_controller_test.go
# Check how current conditions are generated - look for function calls to condition functions
grep -n "computeVolumesReadyCondition\|computeNetworkInterfacesReadyCondition" ./poollet/machinepoollet/controllers/machine_controller.go
# Find where mergeMachineConditions is called to understand the flow
grep -n "mergeMachineConditions" ./poollet/machinepoollet/controllers/machine_controller.go
```

Repository: ironcore-dev/ironcore
Length of output: 728

🏁 Script executed:

```bash
#!/bin/bash
# Read the computeVolumesReadyCondition function to see what it returns when empty
sed -n '564,608p' ./poollet/machinepoollet/controllers/machine_controller.go
# Check where computedConds is built to understand the full flow
sed -n '520,545p' ./poollet/machinepoollet/controllers/machine_controller.go
```

Repository: ironcore-dev/ironcore
Length of output: 2410

🏁 Script executed:

```bash
#!/bin/bash
# Search for mergeMachineConditions tests more thoroughly
grep -n "mergeMachineConditions\|TestMergeMachine" ./poollet/machinepoollet/controllers/machine_controller_test.go
# Also check if there are any tests about condition removal or stale conditions
grep -n -i "stale\|remove.*condition\|condition.*remov" ./poollet/machinepoollet/controllers/machine_controller_test.go
# Look at the actual test file around mergeMachineConditions if it exists
grep -n -B 5 -A 30 "func.*mergeMachine" ./poollet/machinepoollet/controllers/machine_controller_test.go
```

Repository: ironcore-dev/ironcore
Length of output: 47

🏁 Script executed:

```bash
#!/bin/bash
# List test functions in the test file
grep -n "^func Test" ./poollet/machinepoollet/controllers/machine_controller_test.go | head -20
# Get overview of test file
wc -l ./poollet/machinepoollet/controllers/machine_controller_test.go
# Search for any mention of conditions in tests
grep -n -i "condition" ./poollet/machinepoollet/controllers/machine_controller_test.go | head -20
```

Repository: ironcore-dev/ironcore
Length of output: 1454

🏁 Script executed:

```bash
#!/bin/bash
# Search for any documentation files mentioning conditions
find . -name "*.md" -type f -exec grep -l "condition" {} \; | head -10
# Search for any design docs in common locations
find . -path "*/docs/*" -name "*.md" -type f 2>/dev/null | head -10
# Check if there are any comments in the reconciliation logic about condition handling
sed -n '490,550p' ./poollet/machinepoollet/controllers/machine_controller.go | grep -n -i "condition\|TODO\|FIXME"
# Look at the entire reconciliation function to understand condition flow
grep -n "func (r \*MachineReconciler) Reconcile" ./poollet/machinepoollet/controllers/machine_controller.go
```

Repository: ironcore-dev/ironcore
Length of output: 1857

🏁 Script executed:

```bash
#!/bin/bash
# Get the full mergeMachineConditions function and surrounding context
sed -n '650,700p' ./poollet/machinepoollet/controllers/machine_controller.go
# Check if there's any cleanup or removal logic elsewhere
grep -n "Status.Conditions\s*=" ./poollet/machinepoollet/controllers/machine_controller.go | head -10
```

Repository: ironcore-dev/ironcore
Length of output: 1851

🏁 Script executed:

```bash
#!/bin/bash
# Search for condition-related documentation
grep -n -i "condition\|history" ./docs/architecture/*.md ./docs/concepts/*.md 2>/dev/null | grep -i "condition"
# Check if there are any proposals or design decisions about conditions
grep -r -i "condition.*history\|audit\|retain" ./docs/proposals/ 2>/dev/null
# Look for any comments in related files about condition lifecycle
grep -n -i "stale\|obsolete\|expired\|cleanup" ./poollet/machinepoollet/controllers/*.go
```

Repository: ironcore-dev/ironcore
Length of output: 3404
Remove stale conditions when their corresponding resources are deleted.
The current merge logic preserves all existing conditions and only appends new ones when status changes. This causes conditions from removed resources (e.g., VolumesReady after all volumes are deleted) to persist indefinitely. For example, a machine that previously had volumes with VolumesReady=True will continue showing that condition even after all volumes are removed.
While the maxConditionsPerType cap at 10 prevents unbounded growth, the fundamental issue is that obsolete condition types remain in the status, creating misleading information about the machine's current state.
Consider either: (1) explicitly removing condition types that are no longer applicable (when a resource type goes from having items to zero), or (2) adding a mechanism to mark/filter stale conditions so they're preserved for audit/history but clearly distinguished from current state.
Proposed Changes
- Update the machineCondition for volume and nic with the respective lastStateTransitionTime
- Update the machineCondition implementation to persist the Conditions history
- /ref Integrate conditions API for all relevant API resources in IronCore #1343

Summary by CodeRabbit
Bug Fixes
Tests