We are observing OOMs on the scheduler and scheduler-extension pods on clusters with a large number of nodes (1,500+), even when only 1 or 2 of them are Inferentia nodes. Our current hypothesis is that the scheduler is loading all nodes into memory.
We can keep increasing the memory limits, but is there any provision in the scheduler config to avoid scanning and caching non-Inferentia nodes?
If not, a note in the docs about sizing for large clusters would help.
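For context, the only related knob we found is `percentageOfNodesToScore` in KubeSchedulerConfiguration, but as we understand it this only caps how many feasible nodes are scored; the filter phase still walks the full node list, so it likely would not reduce memory. A sketch, assuming the v1 config API:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# Limits only how many feasible nodes are *scored* per scheduling cycle;
# the filter phase still iterates every node in the cluster cache.
percentageOfNodesToScore: 10
profiles:
  - schedulerName: default-scheduler
```

What we are looking for is an equivalent knob that restricts which nodes the scheduler (or the extension) watches and caches in the first place.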
--
Also, out of general curiosity: is the source code for the scheduler and the extension available anywhere?