Make threadpool queue size configurable #210


Open
erip opened this issue Sep 20, 2023 · 2 comments


erip commented Sep 20, 2023

Analysis here suggests that the queue size is too small to be effective for parallel processing. It's a bit hard to follow the code path, but the queue appears to be capped at 8*num_threads in the LDA case. While it isn't obvious what a good default is, hard-coding this value seems to be causing issues; it should be exposed as a kwarg to infer so users can tune inference.
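To illustrate the behavior being described, here is a minimal, self-contained sketch of a worker pool fed through a bounded queue whose capacity is tied to the thread count. The function name, the queue_factor kwarg, and the per-document "work" are all hypothetical stand-ins, not tomotopy's actual internals; the point is only that the producer blocks once queue_factor * num_threads items are pending.

```python
import queue
import threading

def process_documents(docs, num_threads=4, queue_factor=8):
    """Distribute docs to worker threads through a bounded queue.

    queue_factor mirrors the proposed kwarg: the queue holds at most
    queue_factor * num_threads items, so the producer blocks whenever
    the workers fall behind by more than that many documents.
    """
    work = queue.Queue(maxsize=queue_factor * num_threads)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            doc = work.get()
            if doc is None:  # sentinel: shut this worker down
                work.task_done()
                return
            with lock:
                results.append(len(doc))  # stand-in for real inference
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for doc in docs:
        work.put(doc)  # blocks once the bounded queue is full
    for _ in threads:
        work.put(None)
    for t in threads:
        t.join()
    return results
```

With a small queue_factor and slow workers, the put call throttles the producer; raising the factor lets more work sit pending, which is the tuning knob the issue asks to expose.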


erip commented Sep 20, 2023

For what it's worth: I wanted to submit a PR for this change, but I can't quite get the code on main to build locally on my WSL rig...


jgb-hda commented Oct 19, 2023

For your information: I did manage to compile tomotopy with your proposed changes; however, this did not improve the slow inference.

I don't have precise timings or any nice graphs, but it felt as if it still took as long as before. I changed the 8 to 128. I have no idea whether that is actually a good value, but it didn't seem to change anything anyway: the CPU was under high load yet not doing much work, since it didn't get warm, so I guess it still spent most of its time waiting …
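Since precise timings are mentioned as missing, a minimal wall-clock harness could be used to compare settings such as different queue-size factors. This assumes nothing about tomotopy's API; time_call is a hypothetical helper that takes best-of-N timings of any callable.

```python
import time

def time_call(fn, *args, repeat=3, **kwargs):
    """Return the best-of-`repeat` wall-clock time for fn(*args, **kwargs)."""
    best = float("inf")
    for _ in range(repeat):
        start = time.perf_counter()
        fn(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best

# Usage sketch: time the same workload under two hypothetical settings
# and compare the results, rather than judging by feel or CPU warmth.
baseline = time_call(lambda: sum(range(100_000)))
```

Best-of-N is used because the minimum over repeated runs is less noisy than a single measurement for short workloads.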
