
Conversation

@jacekn jacekn commented Dec 4, 2025

This PR bumps the parallel catchup worker pod RAM requests from 2G to 8G.

I noticed that the pods burst quite a lot, and having requests this low compared to the 16G limit means k8s may sometimes need to kill pods due to memory contention.

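For reference, the change amounts to something along these lines in the worker pod spec (a minimal sketch based only on the description above; the surrounding manifest structure and exact unit notation are assumptions, not the actual file in this repo):

```yaml
# Sketch of the intended resources block for a parallel catchup worker
# container. Values reflect the PR description: requests raised from 2G
# to 8G, against the existing 16G memory limit.
resources:
  requests:
    memory: 8Gi    # bumped from 2Gi so requests better match real burst usage
  limits:
    memory: 16Gi   # existing limit mentioned in the description
```

Raising requests closer to the limit makes the scheduler reserve that memory up front, so bursty pods are less likely to be killed when a node comes under memory pressure.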

@jayz22 jayz22 left a comment

It's okay to bump this, especially if we are already using more than that anyway. We need to monitor it in case the cluster doesn't have enough memory to guarantee the requests for all pods.

jayz22 commented Dec 4, 2025

Approved; we can merge this after cutting core v25-rc2.

@jacekn jacekn merged commit ce0b1e7 into stellar:main Dec 9, 2025
3 checks passed
@jacekn jacekn deleted the resources branch December 9, 2025 09:16