Conversation

@GeoSinoLMU
Contributor

Because there is a limit on the maximum number of submitted jobs per user in the partition, I added a function "jobs_on_remote_site" to avoid the script being killed in the middle of the night when that limit is hit.
@solvithrastar
Owner

This limit is related to your specific cluster, right?

@GeoSinoLMU
Contributor Author

I am not sure whether there is such a limit on Piz Daint, but on SuperMUC a user may have at most 30 compute jobs in total across all queues at a time, which often causes the script to be killed when it manages many jobs. I think this situation is probably common on other clusters as well.
