Conversation

@eKathleenCarter

Made small changes to the llm_submit.py script:

  1. Chunking now takes into consideration the maximum token size of the selected model.
  2. Output is saved at the end of each successful chunk (in case the run fails or runs out of credits mid-process).
  3. If a failure happens, the function can now pick up from where it left off.
  4. All results are combined at the end of a successful run, and the per-chunk output is cleaned up (a rough sketch of this flow follows the list).
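A minimal sketch of the chunk/checkpoint/resume flow described above, assuming a whitespace-based token approximation and a hypothetical `submit_chunk` callable; names like `OUTPUT_DIR` and `chunk_text` are illustrative and not the actual llm_submit.py API:

```python
# Illustrative sketch only; function and file names are assumptions,
# not the real llm_submit.py interface.
import json
from pathlib import Path

OUTPUT_DIR = Path("chunk_outputs")   # per-chunk checkpoints live here
MAX_TOKENS = 8192                    # assumed context limit for the chosen model


def chunk_text(text: str, max_tokens: int) -> list[str]:
    """Split text into pieces that fit the model's context window.
    Tokens are approximated here as whitespace-separated words."""
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]


def submit_chunks(text: str, submit_chunk) -> list[str]:
    """Run every chunk through `submit_chunk`, saving each result so an
    interrupted run can resume from the last completed chunk."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    chunks = chunk_text(text, MAX_TOKENS)
    results = []
    for i, chunk in enumerate(chunks):
        out_file = OUTPUT_DIR / f"chunk_{i:04d}.json"
        if out_file.exists():                      # finished on a prior run: skip
            results.append(json.loads(out_file.read_text())["result"])
            continue
        result = submit_chunk(chunk)               # may raise on credit/API failure
        out_file.write_text(json.dumps({"result": result}))
        results.append(result)
    # All chunks succeeded: combine results and clean up the per-chunk files.
    for f in OUTPUT_DIR.glob("chunk_*.json"):
        f.unlink()
    return results
```

Keeping one JSON file per completed chunk means a rerun simply skips anything already on disk, so no credits are spent re-submitting work that already succeeded.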

@waTeim
Collaborator

waTeim commented Nov 13, 2025

Cool, but can you merge it to develop instead of to main?
