During a node scale-down, Kubernetes first deletes all regular pods and then finally the critical pods (e.g. kube-proxy, ebs-csi-node). We have infrequently experienced that Robusta, via the event forwarder, reported these critical pods as killed with an OOMKilled event.
We have another Kubernetes event tracking pipeline through Alloy that has left no trace of this, so I suspect it could be something within kubewatch. I haven't seen any other issue about this, so I created this one.
Screenshot of the message:

And an example screenshot:

Just to confirm: I have double-checked that the node had available memory at the time, so it's definitely not an actual OOMKill.
Using kubewatch v2.12.0 on K8s 1.32.10. Standard setup, except we're using Kops, which could also contribute to the node shutdown mechanism.
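For reference, this is roughly how I verified that no real OOMKill happened (pod name and namespace below are placeholders; adjust to your setup):

```shell
# Ask the kubelet-reported status: a genuine OOMKill shows up as
# lastState.terminated.reason == "OOMKilled" on the container status
kubectl get pod <pod-name> -n kube-system \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.lastState.terminated.reason}{"\n"}{end}'

# Cross-check on the node itself: a real OOM kill leaves a kernel log entry
journalctl -k | grep -i 'out of memory'
```

In our case neither source showed any OOM activity, which is why I suspect the event is synthesized somewhere in the kubewatch/forwarder path rather than coming from the kubelet.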