Hi @rtpro! I have pushed a commit that removes stalled jobs before the Procrastinate worker starts. The error is raised by a constraint that Procrastinate itself created; Procrastinate knows about it and handles it correctly in Python code, so you can treat it as an INFO-level message.
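The stalled-job cleanup can be sketched as a startup step that resets jobs left in `doing` back to `todo` so the lock no longer blocks them. This is a self-contained toy model, not Procrastinate's actual API: the `Job` class and `reset_stalled_jobs` helper are hypothetical names for illustration.

```python
from dataclasses import dataclass


@dataclass
class Job:
    """Toy stand-in for a row in the jobs table (hypothetical, not Procrastinate's model)."""
    id: int
    status: str  # "todo", "doing", or "succeeded"


def reset_stalled_jobs(jobs):
    """Before the worker starts, move jobs stuck in "doing" back to "todo".

    A job left in "doing" after a crash would otherwise hold the lock
    forever, so no new execution of that task could ever start.
    Returns the number of jobs that were reset.
    """
    reset = 0
    for job in jobs:
        if job.status == "doing":
            job.status = "todo"
            reset += 1
    return reset


jobs = [Job(1, "doing"), Job(2, "todo"), Job(3, "succeeded")]
reset_stalled_jobs(jobs)  # the crashed job 1 becomes runnable again
```

The same effect could be achieved with a single SQL `UPDATE ... SET status = 'todo' WHERE status = 'doing'` run once at worker startup.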
We have two settings for Procrastinate tasks:

- `queueing_lock`: blocks spamming the queue with duplicate tasks. It allows only one task with status `todo`, so at any time the DB can contain one task (`doing`) or two (`doing` + `todo`).
- `lock`: allows only one `doing` task at any time.

If we use:

- Only `queueing_lock`: `todo` task spam is stopped, but two instances of the same task can execute at once.
- Only `lock`: only one instance of a given task executes, but we get `todo` task spam.
- `queueing_lock` + `lock`: the optimal result; no spam in the task queue and always only one executing task.

Now about the problem: the logic of `lock` is pretty simple: if a task with status `doing` exists, it skips starting a new one. This leads to a "dead" task after a server restart: the worker is killed, the status stays `doing`, and a new execution will never start. This is a significant issue that needs to be discussed, which is why I created this MR as a draft.