CNV-67916: hypershift, kubevirt, add enhancement for IP management #1855
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
@qinqon: This pull request references CNV-67916 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.21.0" version, but no target version was set.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed from b8d3daf to b6cb5a1
/cc @maiqueb
/cc @orenc1
```go
// Addresses specify IP ranges from which addresses will be allocated for this interface.
// Supports CIDR notation, hyphenated ranges, and single IPs.
// +required
Addresses []string `json:"addresses"`
```
This should probably use hostedcluster_types things like CIDRBlock or ipNet, or maybe define a new one that can be CEL validated if it is really necessary to have ranges and single IPs
> This should probably use hostedcluster_types things like CIDRBlock or ipNet, or maybe define a new one that can be CEL validated if it is really necessary to have ranges and single IPs
Since this can be a CIDR, an IP range, or a single IP, I cannot use those types, so this has to be a string slice; I will add a CEL validation.
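For illustration, a minimal sketch of what that validation could look like as kubebuilder markers on the field, assuming the Kubernetes CEL IP/CIDR library (isIP/isCIDR, available in recent Kubernetes releases) and the CEL strings extension are usable here; the rule and message text are hypothetical, not the final API:

```go
// Addresses specify IP ranges from which addresses will be allocated for this
// interface. Supports CIDR notation (e.g. "192.168.1.0/24"), hyphenated ranges
// (e.g. "192.168.1.10-192.168.1.20"), and single IPs (e.g. "192.168.1.5").
// +kubebuilder:validation:MinItems=1
// +kubebuilder:validation:XValidation:rule="self.all(a, isCIDR(a) || isIP(a) || (a.split('-').size() == 2 && isIP(a.split('-')[0]) && isIP(a.split('-')[1])))",message="each entry must be a CIDR, a hyphenated IP range, or a single IP"
// +required
Addresses []string `json:"addresses"`
```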
> **Management Cluster Components**:
> - Cluster API Provider KubeVirt controller (KubevirtMachineTemplate controller): Enhanced with IP allocation logic
> - HyperShift operator (NodePool controller): Updated to handle network configuration in NodePool spec
Won't it also be necessary to update the ignition generation for the netconfig?
> Won't it also be necessary to update the ignition generation for the netconfig?
The fact that we implement the "IP allocation logic" using virtual machine cloud-init network data is an implementation detail, so it's part of the Cluster API Provider KubeVirt controller changes.
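To make that concrete, here is a minimal sketch (the function and volume name are hypothetical) of how the controller could hand the rendered network configuration to the guest through KubeVirt's cloud-init NoCloud volume, leaving ignition generation untouched:

```go
import kubevirtv1 "kubevirt.io/api/core/v1"

// buildNetworkDataVolume sketches how the Cluster API Provider KubeVirt
// controller could deliver statically rendered network configuration to a
// VM: cloud-init's NoCloud datasource applies NetworkData at first boot,
// so no DHCP is needed on the secondary network.
func buildNetworkDataVolume(networkData string) kubevirtv1.Volume {
	return kubevirtv1.Volume{
		Name: "cloudinitdisk", // illustrative volume name
		VolumeSource: kubevirtv1.VolumeSource{
			CloudInitNoCloud: &kubevirtv1.CloudInitNoCloudSource{
				NetworkData: networkData,
			},
		},
	}
}
```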
> - Adds complexity to the hypershift kubevirt controllers:
>   - implementing and ip pool mechanism
>   - Needs to parse and understand the openshift network config format
s/openshift/openstack/ ?
> s/openshift/openstack/ ?
Right. I will also add that it should understand either the openstack or the netplan format, depending on the kind of networkData we want to use: configdrive or nocloud.
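A rough sketch of that dispatch, under stated assumptions: the Allocation type and both rendered payloads are illustrative placeholders, abbreviated far below the real netplan and OpenStack network_data schemas.

```go
package main

import "fmt"

// Allocation stands in for the controller's per-interface allocation
// result; the real type is not defined by this snippet.
type Allocation struct {
	Address string // e.g. "192.168.1.10/24"
	Gateway string // e.g. "192.168.1.1"
}

// renderNetworkData picks the network-data dialect matching the
// cloud-init datasource in use.
func renderNetworkData(datasource string, a Allocation) (string, error) {
	switch datasource {
	case "nocloud":
		// NoCloud consumes netplan ("network config v2") YAML.
		return fmt.Sprintf("version: 2\nethernets:\n  eth0:\n    addresses: [%s]\n", a.Address), nil
	case "configdrive":
		// ConfigDrive consumes the OpenStack network_data.json format.
		return fmt.Sprintf(`{"networks":[{"type":"ipv4","ip_address":%q,"gateway":%q}]}`, a.Address, a.Gateway), nil
	default:
		return "", fmt.Errorf("unsupported cloud-init datasource %q", datasource)
	}
}
```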
> - Symptom: New KubevirtMachine resources remain in Pending state
> - Detection: Check KubevirtMachine events and status conditions
> - Log output: "No available IPs in pool for interface X"
> - Metric: `kubevirt_ippool_exhausted{cluster="X"}`
Is this going to be a status condition on the NodePool?
> Is this going to be a status condition on the NodePool?
It should; I will add that.
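As a sketch of what that could look like (the condition type and reason are placeholder names, and HyperShift's NodePool may use its own condition struct rather than metav1.Condition):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setIPPoolExhausted surfaces pool exhaustion on the NodePool status so
// operators see it without digging through KubevirtMachine events.
func setIPPoolExhausted(conditions *[]metav1.Condition, iface string) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:    "ValidIPPool",
		Status:  metav1.ConditionFalse,
		Reason:  "IPPoolExhausted",
		Message: fmt.Sprintf("no available IPs in pool for interface %q", iface),
	})
}
```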
maiqueb left a comment
Thank you.
> - Enable static IP address assignment for HyperShift guest cluster nodes running on KubeVirt
> - Provide a flexible API for defining IP address pools with support for various IP range formats (CIDR, hyphenated ranges, single IPs)
> - Support both IPv4 and IPv6 addressing
so the kubevirt VMs which are actually the hosted cluster nodes can be ipv6 nodes ?
do we support single stack ipv6 ? or only dual-stack ?
Just trying to figure out the test matrix we need to have later on.
> so the kubevirt VMs which are actually the hosted cluster nodes can be ipv6 nodes ?
> do we support single stack ipv6 ? or only dual-stack ?
> Just trying to figure out the test matrix we need to have later on.
For hypershift kubevirt we don't use the KubeVirt DHCP mechanism; we use OVN DHCPOptions instead, which allows us to support single stack ipv4/ipv6 and dual stack.
but isn't this about localnet secondary interfaces ?
AFAICT we do not have OVN DHCPOpts for localnet secondaries - only on the pod network (masquerade or primary UDN attachments).
> but isn't this about localnet secondary interfaces ?
> AFAICT we do not have OVN DHCPOpts for localnet secondaries - only on the pod network (masquerade or primary UDN attachments).
Right, I mixed things up. Since this is static IP assignment and we don't depend on KubeVirt DHCP capabilities, we support single stack (both ipv4 and ipv6) and dual stack.
OK, that's aligned with my expectations.
OK, so the test matrix will be complex. Do we need to have anything in origin ?...
> OK, that's aligned with my expectations.
> OK, so the test matrix will be complex. Do we need to have anything in origin ?...
In origin we test dual stack.
> Migrating nodepools nodes from dynamic to static IPs is not expected, those nodes
> will be reconstructed and new ip assigned.
I'm sorry, but I don't follow this.
> I'm sorry, but I don't follow this.
I will reword this
> - BootstrapNetworkConfig in KubevirtMachineSpec (cluster-api-provider-kubevirt)
> - BootstrapNetworkConfig in nodepool's KubevirtPlatformSpec (hypershift)
I'm forced to say that one of the most confusing things to me (I'm not HCP savvy) was differentiating between these two CRDs.
When would I - as a hosted cluster "owner" - use one of these versus the other?
Which of these should I use, and why? Or do I need to use both?
> When would I - as a hosted cluster "owner" - use one of these versus the other?
> Which of these should I use, and why? Or do I need to use both?
In hypershift you use the NodePool CRD; KubevirtMachine is internal to hypershift.
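A compact sketch of that split (abbreviated redefinitions for illustration only; the field placement follows the enhancement, everything else is elided):

```go
// BootstrapNetworkConfig is the new type proposed by this enhancement
// (its fields are elided here).
type BootstrapNetworkConfig struct{}

// User-facing: hosted cluster owners set the field on the NodePool's
// kubevirt platform spec.
type KubevirtPlatformSpec struct {
	// ... existing fields such as additionalNetworks ...

	// +optional
	BootstrapNetworkConfig *BootstrapNetworkConfig `json:"bootstrapNetworkConfig,omitempty"`
}

// Internal: HyperShift copies the NodePool value into the KubevirtMachine
// resources it generates; users never edit these directly.
type KubevirtMachineSpec struct {
	// ... existing fields ...

	// +optional
	BootstrapNetworkConfig *BootstrapNetworkConfig `json:"bootstrapNetworkConfig,omitempty"`
}
```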
> 1. **IP Pool Exhaustion**: Machine creation fails with clear error. Operator must expand pool or delete unused machines.
> 2. **Invalid Network Configuration**: HostedCluster creation fails validation. Operator must correct configuration.
> 3. **Network Config Application Failure**: Node fails to boot or is unreachable. Visible in VM console logs. Operator must verify network configuration correctness.
> 4. **IP Allocation Conflict**: Different nodepools using same network but with overlapping IPs should fail with clear error. Operator should fix the subnets.
why should this matter? The way I see it, this would only become an issue if the node pools were being used at the same time on the same node.
> why should this matter? The way I see it, this would only become an issue if the node pools were being used at the same time on the same node.
Nodepools cannot be used for the same node, since they are the ones defining those nodes; the issue is that one nodepool can contribute to the IP exhaustion of the other if they have overlapping subnets.
What I don't follow is how one network can have multiple node pools.
Can you point me towards documentation about the node pool? What does it reflect, and how does it relate to the network?
The nodepool's kubevirt-specific API has the additionalNetworks field to specify the NAD name, and this enhancement adds the IPAM configuration at the same API level. A hosted cluster can have multiple nodepools, which means multiple kubevirt nodepools can have conflicting IPAM configuration, and that's what we want to detect.
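A sketch of the cross-nodepool check being described, assuming for brevity that all pool entries are CIDRs (hyphenated ranges and single IPs would need their own comparison); the function name is illustrative:

```go
package main

import (
	"fmt"
	"net/netip"
)

// poolsOverlap reports whether two nodepools' address pools on the same
// network draw from overlapping ranges, which would let one pool exhaust
// the other's addresses.
func poolsOverlap(a, b []string) (bool, error) {
	for _, ca := range a {
		pa, err := netip.ParsePrefix(ca)
		if err != nil {
			return false, fmt.Errorf("parsing %q: %w", ca, err)
		}
		for _, cb := range b {
			pb, err := netip.ParsePrefix(cb)
			if err != nil {
				return false, fmt.Errorf("parsing %q: %w", cb, err)
			}
			if pa.Overlaps(pb) {
				return true, nil
			}
		}
	}
	return false, nil
}
```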
> - Resolution: Verify OpenStack network config format correctness, check gateway/DNS reachability
>
> **IP Allocation Errors**:
> - Symptom: Machine creation fails or succeeds with incorrect IP
is it realistic to include incorrect IP allocation in this metric ?
Asking because (in my own limited mental model) we could prevent this, instead of capturing it in a metric.
Couldn't we ?
> is it realistic to include incorrect IP allocation in this metric ?
> Asking because (in my own limited mental model) we could prevent this, instead of capturing it in a metric.
> Couldn't we ?

Agreed; since we are already using CEL to ensure proper IPs are configured in the addresses, I will remove it.
> - New nodes will use DHCP
> - No impact on running workloads
not so fast - I'm not so sure this is that easy / that there are no impacts.
Let's say you're using static IPs, then you stop using them: you remove the NetworkConfig from the template spec.
- Your existing nodes with static IPs will continue to function.
- a new node will get its IP via DHCP
- what will prevent this new node from getting an IP which is already in use by its static-IP counterpart?
AFAIU, you don't have anything to map the static IPs from the node pool to the DHCP pool availability.
> not so fast - I'm not so sure this is that easy / that there are no impacts.
> Let's say you're using static IPs, then you stop using them: you remove the NetworkConfig from the template spec.
> - Your existing nodes with static IPs will continue to function.
> - a new node will get its IP via DHCP
> - what will prevent this new node from getting an IP which is already in use by its static-IP counterpart?
> AFAIU, you don't have anything to map the static IPs from the node pool to the DHCP pool availability.
Even though I think we should not support this, it may work if the NodePool is changed so the bootstrap network config is removed and the virtual machines are restarted.
Force-pushed from 908013f to 5db1dd5
Force-pushed from a60fca8 to c7dcbdb

Signed-off-by: Enrique Llorente <ellorent@redhat.com>

Force-pushed from c7dcbdb to 8287292
@qinqon: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
This enhancement proposes adding static IP address management (IPAM) capabilities to HyperShift when using the KubeVirt provider, in the case where the KubeVirt nodepool is not attached to the default pod network but instead uses a multus layer2 network to replace it.
The implementation enables operators to define IP pools and network configurations that are automatically allocated to virtual machines during cluster provisioning. This functionality addresses the need for predictable, static network configuration in environments where DHCP is not available or desirable, particularly in on-premises and edge deployments.
https://issues.redhat.com/browse/CNV-67916