As an alternative to #10, we could switch from `requests` directly to `h2` (the HTTP/2 protocol driver that httpx uses). We might not only get rid of threading, but of all asynchronous I/O altogether. Somewhat perversely, the model with httpx would be to create a single TCP/TLS session, create N threads (one per Bot instance), and have them fight to become the designated reader thread (the one holding `HTTP2Connection._read_lock` and blocking in `HTTP2Connection._read_incoming_data`). That thread would read all data, but would then have to transfer each Bot's data to that Bot's thread somehow, which would in turn put it into the effectively global `MultiBot.loop.queue`, to finally be processed by the main thread.
An alternative would be to use not just one thread, but one worker altogether: explicitly create a single TCP/TLS session to api.telegram.org, send the polls for each Bot instance through it, then have the main thread block waiting for data on that single connection (dispatching all received updates before going back to blocking).
The only complication would be things that aren't triggered by data coming back from TBA (like m.m.reminders.modinit.periodic). It might still be worth it to just have two or three threads:
- Dedicated network I/O thread, just receiving HTTP2 events and putting them into a queue.
- Dedicated periodic thread, just putting timed events into the same queue (to be processed immediately).
- Dedicated queue runner (the current main thread).
The first and third of these can be combined if we decide it's okay for periodic events to wait in the queue until the next network event (like a long poll timing out, which currently happens at least once every 10 seconds):
- Dedicated network I/O thread, receiving HTTP2 events and putting them into a queue, then running the queue (everything already present plus everything it just enqueued).
- Dedicated periodic thread, just putting timed events into the same queue (to be processed after the next network event).
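The queue-based layouts above can be sketched with stdlib threading; the event tuples, thread bodies, and `run_queue` helper are hypothetical stand-ins for the real h2 events and dispatch code:

```python
import queue
import threading

# Minimal sketch of the thread layout described above. Real threads
# would block on the HTTP/2 socket and on a timer respectively;
# here they enqueue fixed data so the example is self-contained.

events = queue.Queue()
STOP = object()  # sentinel so the runner knows when to exit the demo

def network_reader(n_events: int) -> None:
    """Stand-in for the dedicated network I/O thread: each received
    h2 event would be wrapped and put into the shared queue."""
    for i in range(n_events):
        events.put(("network", i))
    events.put(STOP)  # in real code this loop runs until shutdown

def periodic_ticker(n_ticks: int) -> None:
    """Stand-in for the dedicated periodic thread; it shares the
    same queue, so its events are processed whenever the runner
    next wakes up (at worst, after the next long-poll timeout)."""
    for i in range(n_ticks):
        events.put(("periodic", i))

def run_queue() -> list:
    """The current main thread: drain and dispatch until STOP."""
    processed = []
    while (item := events.get()) is not STOP:
        processed.append(item)
    return processed

# Run the periodic thread to completion first so the demo is
# deterministic; in production all threads run concurrently.
ticker = threading.Thread(target=periodic_ticker, args=(2,))
ticker.start(); ticker.join()
reader = threading.Thread(target=network_reader, args=(3,))
reader.start()
processed = run_queue()
reader.join()
```

Collapsing the network thread and the queue runner into one (the two-thread variant) just means `run_queue` and the `recv` loop become the same loop body; the periodic thread is unchanged either way.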