Conversation

@EmileSonneveld
Member

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0832adafa8


      - entryname: "arguments.json"
        entry: $(inputs)
  ResourceRequirement:
    ramMin: 4096

ramMin and ramMax are preferably set to the same value, because ramMax is not taken into account when scheduling the workload: if memory usage grows toward ramMax, you can hit an OOM kill on the underlying hardware and the job fails. Using different values for the memory min and max therefore reduces reliability.

The same does not hold for CPU sharing. For CPU, the min is important to guarantee a certain baseline performance, and the max is useful to avoid taking too many resources; but if you set a high max and fewer CPUs are actually available, your process won't die, it will just run slower. The default is 1 core, so you should at least increase coresMin, otherwise you will get poor performance, and it is probably best to set coresMax to some high value like 7.
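A ResourceRequirement following this advice might look like the sketch below (the specific values are illustrative, not taken from this PR):

```yaml
requirements:
  ResourceRequirement:
    # Pin ramMin == ramMax so the scheduler reserves exactly what the job
    # may actually use; CWL expresses RAM in mebibytes, so 4096 = 4 GiB.
    ramMin: 4096
    ramMax: 4096
    # Guarantee a CPU baseline; a generous coresMax only slows the process
    # down under contention rather than killing it.
    coresMin: 2
    coresMax: 7
```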

Member Author

Is it OK to omit coresMax instead of giving it a high value?

@EmileSonneveld
Member Author

Running sar_coherence with 3 dates took 6.5 GB of RAM at peak:

cat /sys/fs/cgroup/system.slice/docker-e37599bf5e4d8bf941816c2fa29b3db349e2ca5499baa40bf04cf8fd9c8dd9eb.scope/memory.peak
6534795264
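For reference, the cgroup v2 memory.peak file reports a plain byte count; a quick shell sketch (using the value above) converts it to decimal gigabytes:

```shell
# memory.peak from a cgroup v2 scope is a raw byte count.
peak=6534795264
# Convert bytes to decimal GB for readability.
awk -v b="$peak" 'BEGIN { printf "%.2f GB\n", b / 1000000000 }'
```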

@EmileSonneveld
Member Author

EmileSonneveld commented Jan 19, 2026

Ran 10 CWL jobs at the same time with these settings, and none crashed.
Ran 10 CWL jobs without the settings, and all failed.

@EmileSonneveld EmileSonneveld merged commit 75136b1 into main Jan 19, 2026
3 of 4 checks passed
@EmileSonneveld EmileSonneveld deleted the ram_limit branch January 19, 2026 13:47