update node info processors to include unschedulable nodes #8520
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: elmiko

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
i'm working on adding more unit tests for this behavior, but i wanted to share this solution so we could start talking about it.
force-pushed from a0ebb28 to 3270172
i've rewritten this patch to use all nodes as the secondary value instead of using a new list of ready unschedulable nodes.

i need to do a little more testing on this locally, but i think this is fine for review.
```diff
-	// Last resort - unready/unschedulable nodes.
-	for _, node := range nodes {
+	// we want to check not only the ready nodes, but also ready unschedulable nodes.
+	for _, node := range append(nodes, allNodes...) {
```
i'm not sure that it is appropriate to append these. theoretically the `allNodes` should already contain `nodes`. i'm going to test this out using just `allNodes`.
due to filtering that happens in `obtainNodeLists`, we need to combine both lists of nodes here.
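Since `allNodes` can overlap with `nodes`, combining the two lists with a plain `append` would visit some nodes twice. A minimal sketch of a duplicate-free merge, using a local stand-in `Node` type and a hypothetical `mergeUnique` helper (neither is part of the actual autoscaler code):

```go
package main

import "fmt"

// Node is a local stand-in for apiv1.Node, used only for this sketch.
type Node struct {
	Name          string
	Ready         bool
	Unschedulable bool
}

// mergeUnique combines the filtered ready list with allNodes, skipping
// duplicates by name, so ready unschedulable nodes are still considered
// without iterating any node twice.
func mergeUnique(nodes, allNodes []Node) []Node {
	seen := make(map[string]bool, len(nodes))
	merged := make([]Node, 0, len(allNodes))
	for _, n := range nodes {
		seen[n.Name] = true
		merged = append(merged, n)
	}
	for _, n := range allNodes {
		if !seen[n.Name] {
			seen[n.Name] = true
			merged = append(merged, n)
		}
	}
	return merged
}

func main() {
	ready := []Node{{Name: "a", Ready: true}}
	all := []Node{{Name: "a", Ready: true}, {Name: "b", Ready: true, Unschedulable: true}}
	// "a" appears in both lists but is only emitted once.
	fmt.Println(len(mergeUnique(ready, all))) // prints 2
}
```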
force-pushed from 3270172 to cb2649a
i updated the argument names in the

it seems like the update to the mixed node processor needs a little more investigation.
force-pushed from cb2649a to fd53c0b
it looks like we need both the
force-pushed from fd53c0b to 906a939
rebased

@jackfrancis @towca any chance at a review here?
cluster-autoscaler/processors/nodeinfosprovider/mixed_nodeinfos_processor.go
i can put together a patch like this and give it some tests.
This change ensures that a sanitized node has its .spec.unschedulable field set to false.
This change passes all the nodes to the mixed node info provider processor that is called from `RunOnce`, allowing unschedulable and unready nodes to be processed as bad candidates during node info template generation. The `Process` function has been updated to separate nodes into good and bad candidates so that the filtering matches the original intent.
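The good/bad split described in the commit message could be sketched roughly as follows; `splitCandidates` and the local `Node` type are illustrative stand-ins, not the actual `Process` implementation:

```go
package main

import "fmt"

// Node is a local stand-in for apiv1.Node, used only for this sketch.
type Node struct {
	Name          string
	Ready         bool
	Unschedulable bool
}

// splitCandidates illustrates the intent above: ready schedulable nodes
// are "good" template candidates, while unready or unschedulable nodes
// are kept separately as "bad" last-resort candidates.
func splitCandidates(nodes []Node) (good, bad []Node) {
	for _, n := range nodes {
		if n.Ready && !n.Unschedulable {
			good = append(good, n)
		} else {
			bad = append(bad, n)
		}
	}
	return good, bad
}

func main() {
	nodes := []Node{
		{Name: "a", Ready: true},
		{Name: "b", Ready: true, Unschedulable: true},
		{Name: "c", Ready: false},
	}
	good, bad := splitCandidates(nodes)
	fmt.Println(len(good), len(bad)) // prints 1 2
}
```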
force-pushed from 906a939 to 5244a8f
rebased and updated with the requested changes.
```diff
 	}
 	newNode.Labels[apiv1.LabelHostname] = newName
+	newNode.Spec.Unschedulable = false
```
This is a bit scary, we're essentially making the `Unschedulable` field work like a status taint without explicitly configuring it like we do for taints. Are we sure this is a good idea in the general case? Maybe we should extend the `TaintConfig` so that the behavior can be configured explicitly?
At minimum I think we should add a comment explaining that:

- Normally when we pick the base Node to sanitize we pick a healthy, schedulable Node and assume that a new one will be similarly healthy.
- If there are no healthy, schedulable Nodes to pick and `NodeGroup.TemplateNodeInfo()` returns `cloudprovider.ErrNotImplemented`, we'll try sanitizing an unhealthy one that might have the `Unschedulable` field set to `true`. In this case we clear the field during sanitization on the assumption that the unschedulability is transient/specific to a given Node, and a new Node will not have the field set.
Is this something we should be assuming? Does anyone know how likely it is to happen in practice that we have a NodeGroup in which all new Nodes appear with the `Unschedulable` field set for an extended period of time?

I'm mostly worried about this breaking backwards compatibility for cloud providers that don't implement `NodeGroup.TemplateNodeInfo()`:

- Assume such a cloud provider can have a NodeGroup where all Nodes are expected to have the `Unschedulable` field set for extended periods of time. If CA scales the group up, a new Node will also have the field set and the Pod won't be able to schedule.
- Before this change, CA would see all Nodes from the NodeGroup as "bad" candidates, so it wouldn't be able to create a nodeInfo for the NodeGroup, so it wouldn't attempt to scale it up. If there is another NodeGroup with healthy Nodes that can work for the pending Pods, CA would scale that one instead.
- After this change, CA would sanitize one of the bad candidates, and possibly attempt to scale up a NodeGroup that doesn't actually work.
@x13n @jackfrancis WDYT?
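The sanitization step under discussion, with the kind of explanatory comment requested above, could look roughly like this. This is a sketch with a local stand-in `Node` type and a hypothetical `sanitizeNode` helper, not the actual cluster-autoscaler code:

```go
package main

import "fmt"

// Node is a local stand-in for apiv1.Node, used only for this sketch.
type Node struct {
	Name          string
	Labels        map[string]string
	Unschedulable bool
}

// sanitizeNode copies a base Node into a fresh template node.
func sanitizeNode(base Node, newName string) Node {
	newNode := base
	newNode.Name = newName
	newNode.Labels = map[string]string{"kubernetes.io/hostname": newName}
	// Normally the base Node is healthy and schedulable. If only unhealthy
	// candidates were available (e.g. TemplateNodeInfo() returned
	// ErrNotImplemented), the base may have Unschedulable set; we clear it
	// on the assumption that the unschedulability is transient and specific
	// to the copied Node, so a freshly created Node would not have it.
	newNode.Unschedulable = false
	return newNode
}

func main() {
	base := Node{Name: "old-node", Unschedulable: true}
	fmt.Println(sanitizeNode(base, "template-0").Unschedulable) // prints false
}
```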
> Does anyone know how likely it is to happen in practice that we have a NodeGroup in which all new Nodes appear with the Unschedulable field set for an extended period of time?
For the "all new Nodes" scenario it would be an external cloudprovider scenario where we rely upon the cloudprovider to set `Unschedulable` to `false` after certain conditions are met. That sounds plausible and not merely theoretical to me?
I think to address your concern @towca we have to have some confidence that we can ignore the `Unschedulable` field value rather than simply overwrite it every time. A couple of paths forward:

- as you point out, determine with some confidence the distinction between (1) a node that is Unschedulable for a reason that has something to do with its group, i.e., if we replicate out more nodes in that group they will inherit that Unschedulable outcome, and (2) a node that is Unschedulable for reasons that are unique and non-replicable
- just add a new feature flag "IncludeUnschedulableNodeCandidates" (or something like that) that is opt-in -- perhaps the conditions that would induce a user to want to include Unschedulable nodes in node template candidacy are environmental and not easily discerned by standard k8s API signals, in which case we leave it to the user to determine that they want to use this strategy
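The opt-in flag floated above could gate the behavior roughly as follows. Both the `Options` struct and the `IncludeUnschedulableNodeCandidates` field name are hypothetical here, taken only from the suggestion in this thread, and `Node` is a local stand-in type:

```go
package main

import "fmt"

// Node is a local stand-in for apiv1.Node, used only for this sketch.
type Node struct {
	Name          string
	Ready         bool
	Unschedulable bool
}

// Options sketches the opt-in flag proposed above; it is not an
// existing autoscaler option.
type Options struct {
	IncludeUnschedulableNodeCandidates bool
}

// templateCandidates returns the nodes eligible as template candidates.
// Ready-but-unschedulable nodes are included only when the user opts in.
func templateCandidates(opts Options, nodes []Node) []Node {
	var out []Node
	for _, n := range nodes {
		if !n.Ready {
			continue
		}
		if n.Unschedulable && !opts.IncludeUnschedulableNodeCandidates {
			continue
		}
		out = append(out, n)
	}
	return out
}

func main() {
	nodes := []Node{{Name: "a", Ready: true, Unschedulable: true}}
	// Default behavior excludes unschedulable nodes; the flag opts in.
	fmt.Println(len(templateCandidates(Options{}, nodes)))                                         // prints 0
	fmt.Println(len(templateCandidates(Options{IncludeUnschedulableNodeCandidates: true}, nodes))) // prints 1
}
```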
the `Unschedulable` field was key to the bug we are seeing, as the user is essentially setting `Unschedulable: true` for all the nodes in a node group as part of their upgrade process. new nodes would not have entered with `Unschedulable: true`, so this created a situation in the autoscaler where it could not process those nodes.
So in the concrete situation you're describing there is a known (temporary) `Unschedulable: true` node state during regular upgrades, that can occasionally intersect w/ operational capacity needs, and when those intersect, new infra is failing to be provisioned in a timely way?
right, for example, i want to use the autoscaler to force an expansion of a node group. i manually set all the nodes to `.spec.unschedulable = true`, i taint the nodes, then i start evicting workloads.
in theory, this will cause the autoscaler to see the new pending pods and make more nodes.
but, if all the nodes in the node group are marked as unschedulable, then the autoscaler will not be able to produce a valid template from the observable nodes.
The more we discuss this the more I think this belongs behind a specific feature flag and is not something we'd want to do by default. @towca wdyt?
i'll add this to the agenda for tomorrow's sig meeting.
What type of PR is this?
/kind bug
What this PR does / why we need it:
This PR adds a new lister for ready unschedulable nodes; it also connects that lister to a new parameter in the node info processors' `Process` function. This change enables the autoscaler to use unschedulable, but otherwise ready, nodes as a last resort when creating node templates for scheduling simulation.

Which issue(s) this PR fixes:
Fixes #8380
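The "ready but unschedulable" filter the new lister would apply could be sketched as the predicate below; `isReadyUnschedulable` and the local `Node` type are illustrative names, not the PR's actual lister implementation:

```go
package main

import "fmt"

// Node is a local stand-in for apiv1.Node, used only for this sketch.
type Node struct {
	Name          string
	Ready         bool
	Unschedulable bool
}

// isReadyUnschedulable reports whether a node is a last-resort template
// candidate: it reports Ready but has been cordoned (Unschedulable set).
func isReadyUnschedulable(n Node) bool {
	return n.Ready && n.Unschedulable
}

func main() {
	fmt.Println(isReadyUnschedulable(Node{Name: "a", Ready: true, Unschedulable: true})) // prints true
	fmt.Println(isReadyUnschedulable(Node{Name: "b", Ready: true}))                      // prints false
}
```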
Special notes for your reviewer:
I'm not sure if this is the best way to solve this problem, but I am proposing this for further discussion and design.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: