OwOCR is a text recognition tool that continuously scans for images and performs OCR (Optical Character Recognition) on them. Its main focus is Japanese, but it works for many other languages.
*(Demo videos: output.webm)*
Easy-to-install Windows and macOS packages can be downloaded here.
- On Windows, just extract the zip anywhere and double click on "owocr". It might take a while to start up the first time.
- On macOS, just double click on the dmg and drag the owocr app to the Applications folder, like most macOS apps. You will be asked to grant two permissions to owocr the first time it starts (Accessibility and Screen Capture); just follow the prompts to do so.
- A "Log Viewer" window will show up, displaying information messages. After loading finishes, a tray icon will appear in the macOS menu bar/Windows taskbar. You can close the log viewer if you want.
- By default owocr monitors the clipboard for images and outputs recognized text back to the clipboard. You can change this from the configuration, accessible from the tray icon.
- On Windows, a left click on the tray icon pauses/unpauses. From the right-click menu (left click on macOS) you can change the engine, pause/unpause, change the screen capture area selection, take a screenshot of the selected screen/window, launch the configuration, and reopen the log viewer if you closed it. The icon is dimmed while owocr is paused.
- In these versions all the OCR engines and features are already available; you don't need to install anything else. The tray icon is always enabled and can't be turned off.
OwOCR has been tested on Python 3.11, 3.12 and 3.13. It can be installed with `pip install owocr` after you install Python. You also need one or more OCR engines; check the list below for instructions. I recommend installing at least Google Lens on any operating system, and OneOCR if you are on Windows. Bing works out of the box, and Apple Vision and Live Text come pre-installed on macOS.
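For example, a typical install could look like this (the extras names come from the engine list further below; adjust to the engines you actually want):

```
# Install owocr itself
pip install owocr

# Add the recommended engines: Google Lens on any OS, plus OneOCR on Windows
# (extras can also be combined, e.g. "owocr[lens,oneocr]")
pip install "owocr[lens]"
pip install "owocr[oneocr]"
```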
- `owocr`: the default behavior monitors the clipboard for images and outputs recognized text back to the clipboard.
- `owocr_config`: opens the interface where you can change all the options.
From the terminal window you can pause/unpause with `p`, terminate with `t`/`q`, and switch between engines with `s` or the engine-specific keys (see the engine list below).
The tray icon can also be used as explained above.
All command-line options and their descriptions can be viewed with `owocr -h`.
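In short, the three entry points mentioned above:

```
owocr          # start OCR with the current configuration (clipboard in/out by default)
owocr_config   # open the configuration interface
owocr -h       # list all command-line options and their descriptions
```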
- Multiple input sources: clipboard, folders, websockets, unix domain socket, and screen capture
- Multiple output destinations: clipboard, text files, and websockets
- Integrates well with Windows, macOS and Linux, supporting operating system features like notifications and a tray icon
- Capture from specific screen areas, windows, or areas within windows (window capture is only supported on Windows/macOS/Wayland; see the command-line sketch after this list). This also tries to capture entire sentences and filter out repetitions. If you use an online engine like Lens, I recommend setting a secondary local engine (OneOCR on Windows, Apple Live Text on macOS and meikiocr on Linux): with this "two pass" system only the changed areas are sent to the online service, allowing for both speed and accuracy
- Control from the tray icon or the terminal window
- Control from anywhere through keyboard shortcuts: you can set hotkeys for pausing, switching engines, taking a screenshot of the selected screen/window and changing the screen capture area selection
- Read from a unix domain socket (`/tmp/owocr.sock` on macOS/Linux)
- Furigana filter that works by default with Japanese text (both vertical and horizontal)
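As a rough sketch of how sources and destinations combine on the command line — only `-r` and `-sa` appear elsewhere in this readme, so treat the other flag spellings as assumptions and confirm everything with `owocr -h`:

```
# Watch a folder for new images and send the recognized text to the clipboard
# (the -w/write flag name is an assumption; check `owocr -h` for the real spelling)
owocr -r /path/to/screenshot/folder -w clipboard

# OCR a selected area of screen 1 (same flags as used in the Wayland notes below)
owocr -r screencapture -sa screen_1
```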
The configuration file is stored in `~/.config/owocr_config.ini` on Linux/macOS, or `C:\Users\yourusername\.config\owocr_config.ini` on Windows.
A sample config file is available at: owocr_config.ini
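If you prefer editing the file directly instead of using `owocr_config`, open it at the paths above, for example:

```
nano ~/.config/owocr_config.ini                     # Linux/macOS
notepad %USERPROFILE%\.config\owocr_config.ini      # Windows
```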
While I've done all I could to support Linux (specifically Wayland), not everything may work with every setup. Specifically:
- There are two ways of reading images from and writing text to the clipboard on Wayland. One requires a compositor which supports the "ext-data-control" extension; owocr uses this by default and it should work out of the box (see the ext_data_control compatibility chart; notably GNOME/Mutter doesn't support it, but e.g. KDE/KWin does).
The alternative is `wl-clipboard` (preinstalled in most distributions), but it will constantly try to steal your focus (due to Wayland's security design), limiting usability.
To switch to `wl-clipboard`, enable `wayland_use_wlclipboard` in `owocr_config` -> Advanced.
- Reading from screen capture works on Wayland. It's designed so that your monitor/monitor selection/window selection in the operating system popup counts as a "virtual screen" to owocr.
By default the automatic coordinate selector will be launched to select one or more areas, as explained above.
Using "whole screen" 1 in the configuration, or `owocr -r=screencapture -sa=screen_1`, will use the whole selection.
Using manual window names is not supported and will be ignored.
- Keyboard combos/keyboard inputs in the coordinate selector might not work on Wayland. In my own testing they work on KDE (if you enable keyboard access in "Legacy X11 App Support" under "Application Permissions") but not on GNOME. A workaround involves running pynput with the uinput backend, but this requires exposing your input devices (they will be accessible without root):
sudo chmod u+s $(which dumpkeys)
sudo usermod -a -G $(stat -c %G /dev/input/event0) $(whoami)
Then launch owocr with `PYNPUT_BACKEND_KEYBOARD=uinput owocr -r screencapture`, or add `PYNPUT_BACKEND_KEYBOARD=uinput` to your environment variables (see the sketch after this list).
- The tray icon requires installing this extension on GNOME (it works out of the box on KDE)
- X11 partially works but uses more resources for scanning the clipboard and doesn't support window capturing at all (only screens/screen selections).
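A small sketch of the uinput keyboard workaround from the list above; whether a file like `~/.profile` is actually sourced by your graphical session depends on the distribution/display manager, so treat the persistent variant as an assumption:

```
# One-off launch with pynput's uinput backend (after the chmod/usermod steps above)
PYNPUT_BACKEND_KEYBOARD=uinput owocr -r screencapture

# Or export it persistently for your user (adjust to whatever your session sources:
# ~/.profile, ~/.bash_profile, systemd environment.d, ...)
echo 'export PYNPUT_BACKEND_KEYBOARD=uinput' >> ~/.profile
```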
- Manga OCR (with optional comic-text-detector as segmenter) → Terminal: install with `pip install "owocr[mangaocr]"`, keys: `m` (regular, ideal for small text areas), `n` (segmented, ideal for manga panels/larger images with multiple text areas)
- EasyOCR → Terminal: install with `pip install "owocr[easyocr]"`, key: `e`
- RapidOCR → Terminal: install with `pip install "owocr[rapidocr]"`, key: `r`
- Apple Vision framework - macOS only - Older version of Live Text. → Terminal key: `a`
- Apple Live Text (VisionKit framework) - macOS only - Recommended - Probably the best local engine to date. It should be the same as Vision, except that in Sonoma Apple added vertical text reading. → Terminal key: `d`
- WinRT OCR - Windows 10/11 only - It can also be used by installing winocr on a Windows virtual machine, running the server there (`winocr_serve`) and specifying the IP address of the Windows VM/machine in the config file. → Terminal: install with `pip install "owocr[winocr]"`, key: `w`
- OneOCR - Windows 10/11 only - Recommended - The close second-best local engine after the Apple one. On Windows 10 you need to copy 3 system files from Windows 11 to use it; refer to the readme here. It can also be used by installing oneocr on a Windows virtual machine, running the server there (`oneocr_serve`) and specifying the IP address of the Windows VM/machine in the config file. → Terminal: install with `pip install "owocr[oneocr]"`, key: `z`
- meikiocr - Recommended - Comparable to OneOCR in accuracy and CPU latency, and the best local option for Linux users. It can't process vertical text and is limited to 64 text lines and 48 characters per line. → Terminal: install with `pip install "owocr[meikiocr]"`; if you have an Nvidia GPU you can do `pip uninstall onnxruntime && pip install onnxruntime-gpu`, which makes it the fastest OCR available. Key: `k`
- Google Lens - Recommended - Arguably the best OCR engine to date. → Terminal: install with `pip install "owocr[lens]"`, key: `l`
- Bing - Recommended - A close second best. → Terminal key: `b`
- Google Vision - You need a service account .json file named google_vision.json in `user directory/.config/` (see the example after this list). → Terminal: install with `pip install "owocr[gvision]"`, key: `g`
- Azure Image Analysis - You need to specify an api key and an endpoint in the config file. → Terminal: install with `pip install "owocr[azure]"`, key: `v`
- OCRSpace - You need to specify an api key in the config file. → Terminal key: `o`
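For example, the Google Vision entry above needs its service account key in the user config directory; a minimal sketch (the source path of the .json file is of course a placeholder):

```
# Put the Google Cloud service account key where owocr looks for it
# (~/.config/google_vision.json; on Windows: C:\Users\yourusername\.config\google_vision.json)
mkdir -p ~/.config
cp /path/to/your-service-account.json ~/.config/google_vision.json

# Then install the engine and switch to it with the `g` key
pip install "owocr[gvision]"
```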
owocr uses code from, or references, these people and projects:
- Viola for working on the Google Lens implementation (twice!) and helping with the pyobjc VisionKit code!
- @rtr46 for contributing a big overhaul allowing for coordinate support and JSON output
- @bpwhelan for contributing code for other language support and for his ideas (like two pass processing) originally implemented in the Game Sentence Miner fork of owocr
- @bropines for the Bing code (Github issue)
- @ronaldoussoren for helping with the pyobjc VisionKit code
- Manga OCR for the inspiration and for being the project owocr was originally derived from
- Mokuro for the comic text detector integration code
- ocrmac for the Apple Vision framework API
- ccylin2000_clipboard_monitor for the Windows clipboard polling code
- vicky for the demo videos in this readme!
- nao for the awesome icon!
- Steffo for all his help in automating packaging/distribution with Github Actions!