- during `setup`, instead of loading models in the processor
directly, instantiate and spawn a singleton predictor subprocess
with the given parameters (after resolving the model path name),
communicating via shared (task and result) queues to synchronize
the processor and predictor processes (see the first sketch after
this list); the predictor will then load models in its own address space
- at runtime, the processor merely calls the predictor with the
respective arguments for that page, which translates into
- putting the arguments on the task queue
- getting the results from the result queue, blocking
- at runtime, the predictor runs a continuous loop:
- receiving inputs from the task queue, blocking
- calling `predict` on them
- putting outputs on the result queue
- in the predictor, tasks and results are identified via page id,
so each result gets retrieved for its respective task only;
this is implemented via a shared dict, which synchronizes the
forked processor workers
- during `shutdown`, tell the predictor to shut down as well
(terminating the subprocess);
the predictor will then exit its loop and close the queues
- abstract away the differences between kraken.pageseg, kraken.blla,
and kraken.rpred in the initialization and inference phases via a
shared `common.KrakenPredictor` class (see the second sketch after
this list), overriding specifics in
- `recognize.KrakenRecognizePredictor`:
- during `setup`, after loading the model, submit a special "task"
to query the model's `one_channel_mode` attribute
- at runtime, wrap the model in a `defaultdict` as expected by
`mm_rpred`, but one that remains picklable so it can pass through
`mp.Queue`; for the same reason, exhaust the result generator
immediately
- `segment.KrakenSegmentPredictor`: during `setup`, map the given
parameters and inputs to kwargs as accepted by either `pageseg.segment`
or `blla.segment`
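
To make the queue protocol concrete, here is a minimal, self-contained sketch of the processor/predictor handshake described above. All names (`Predictor`, `_loop`, the `None` page-id sentinel, the example parameters) are illustrative stand-ins, not the actual `common.KrakenPredictor` API; only the `multiprocessing` primitives are real:

```python
import multiprocessing as mp
import queue

def _loop(taskq, resultq, params):
    # runs in the predictor subprocess: load the model once in this
    # address space, then serve tasks until the shutdown sentinel arrives
    model = f'model({params})'  # stand-in for the actual model loading
    while True:
        page_id, inputs = taskq.get()  # blocking
        if page_id is None:            # shutdown sentinel
            break
        resultq.put((page_id, f'{model} applied to {inputs}'))

class Predictor:
    def setup(self, **params):
        self.taskq = mp.Queue()
        self.resultq = mp.Queue()
        # shared dict so each forked processor worker only ever consumes
        # the result belonging to its own page
        self.manager = mp.Manager()
        self.results = self.manager.dict()
        self.proc = mp.Process(target=_loop,
                               args=(self.taskq, self.resultq, params))
        self.proc.start()

    def __call__(self, page_id, inputs):
        self.taskq.put((page_id, inputs))
        while page_id not in self.results:
            try:
                done_id, result = self.resultq.get(timeout=0.1)
                self.results[done_id] = result
            except queue.Empty:
                # a sibling worker may have drained our result into the
                # shared dict in the meantime, so re-check the condition
                continue
        return self.results.pop(page_id)

    def shutdown(self):
        self.taskq.put((None, None))  # make the subprocess exit its loop
        self.proc.join()

if __name__ == '__main__':
    predictor = Predictor()
    predictor.setup(model='en_best.mlmodel')
    print(predictor('PHYS_0001', 'page image'))
    predictor.shutdown()
```

The `timeout`/retry in `__call__` is one way (among others) to keep a worker from blocking forever on the result queue after a sibling has already moved its result into the shared dict.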
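
Likewise, a sketch of how the two subclass hooks might map onto the kraken API. The helper names (`SingleModelDict`, `recognize_predict`, `segment_predict`) and the parameter keys are hypothetical; the `rpred.mm_rpred`, `pageseg.segment`, and `blla.segment` calls follow kraken's documented interfaces:

```python
from collections import defaultdict
from kraken import blla, pageseg, rpred

class SingleModelDict(defaultdict):
    """Maps every script tag to the same model, as `mm_rpred` expects,
    while staying picklable (a defaultdict with a lambda default_factory
    could not be put on an mp.Queue)."""
    def __init__(self, model):
        super().__init__()
        self.model = model
    def __missing__(self, key):
        return self.model
    def __reduce__(self):
        return (SingleModelDict, (self.model,))

def recognize_predict(model, image, bounds):
    # exhaust the generator right away: generators cannot be pickled
    # onto the result queue either
    return list(rpred.mm_rpred(SingleModelDict(model), image, bounds))

def segment_predict(image, parameters, model=None):
    # dispatch on whether a (neural) segmentation model was configured
    if model is not None:
        return blla.segment(image,
                            text_direction=parameters['text_direction'],
                            model=model)
    return pageseg.segment(image,
                           text_direction=parameters['text_direction'],
                           maxcolseps=parameters['maxcolseps'],
                           black_colseps=parameters['black_colseps'])
```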