
Fix #433 Prevent per-page hangs & avoid killing job on maxbackoff#438

Open
akshan-main wants to merge 1 commit into allenai:main from akshan-main:request_timeout_and_backoff_fix

Conversation

@akshan-main

Closes #433

Changes proposed in this pull request:

  • apost() now takes a timeout_s param and wraps the entire network path in asyncio.timeout(), so a stalled server can't block forever
  • When max backoff is exhausted, we return None instead of calling sys.exit(1); the existing fallback path (make_fallback_result) handles it from there, so the rest of the PDF still gets processed
  • New --request_timeout_s CLI flag (default 120 s) to control the per-request timeout

Before submitting

  • I've read and followed all steps in the Making a pull request
    section of the CONTRIBUTING docs.
  • I've updated or added any relevant docstrings following the syntax described in the
    Writing docstrings section of the CONTRIBUTING docs.
  • If this PR fixes a bug, I've added a test that will fail without my fix.
  • If this PR adds a new feature, I've added tests that sufficiently cover my new functionality.

@akshan-main
Author

@jakep-allenai

@jakep-allenai
Collaborator

Thanks for this suggestion, let me think on it for a day or two. The reason the job exits now is that in the giant runs we do with hundreds of millions of documents, I found it easier to have the job die and show up as an obvious error right away, rather than generating half-complete or empty files when some consistent backend issue occurred. We've hit weird cluster issues where jobs worked fine, then produced empty or nearly incomplete jsonl result files, then went back to working, and that wasn't fun.

Can you explain more about the cases you ran into?

@akshan-main
Author

akshan-main commented Feb 16, 2026

Hey, I get why you'd rather crash early in giant runs. In my case it wasn't bad output; there was no output at all because of a hang. apost() waits on socket reads without a timeout, so if the server stalls mid-response, the coroutine blocks forever (there is no per-request deadline). With concurrency effectively at 1, it looks like the job is stuck on the last page, but it's really just whichever page hit the wedged request first. That's why I think the timeout is important.

As for max backoff, I changed sys.exit(1) because there is already fallback handling, and I wanted a one-off failure not to kill the entire PDF. But let me know if it's better to make that behavior opt-in (behind a flag), or to add a threshold so repeated failures still stop the job loudly. I can align my solution with that and open a PR for it as well.
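The threshold compromise mentioned above could look something like this. A purely hypothetical sketch: the class name and default are illustrative and not part of olmocr. Individual pages still fall back to None, but a run of consecutive failures aborts the job visibly, preserving the fail-loud behavior for consistent backend issues.

```python
import sys

class FailureThreshold:
    """Track consecutive request failures; abort loudly past a limit."""

    def __init__(self, max_consecutive: int = 50):
        self.max_consecutive = max_consecutive
        self.consecutive = 0

    def record(self, succeeded: bool) -> None:
        if succeeded:
            # Any success resets the streak: the backend is alive.
            self.consecutive = 0
            return
        self.consecutive += 1
        if self.consecutive >= self.max_consecutive:
            # Looks like a consistent backend issue, not a one-off page:
            # die immediately, as the current sys.exit(1) behavior does.
            sys.exit(1)
```

Each page's result would call record() once; isolated failures flow to the fallback path, while a wedged backend still surfaces as an obvious error.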



Successfully merging this pull request may close these issues.

[bug] Tends to get stuck at the last few pages
