This project uses Apple's ARKit and Vision frameworks to compare different display and interaction paradigms for real-time text enhancement in real-world applications.
This code uses Apple's Vision API, specifically the neural-accelerated VNRecognizeTextRequest, to recognize text in real time with high accuracy.
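A minimal sketch of how per-frame recognition with VNRecognizeTextRequest typically looks, loosely following the WWDC sample this project adapts. The function name `recognizeText` and the completion-handler shape are illustrative, not the project's actual API:

```swift
import Vision

// Sketch: run text recognition on one camera frame (CVPixelBuffer).
// Assumed helper, not part of this repository.
func recognizeText(in pixelBuffer: CVPixelBuffer,
                   completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            completion([])
            return
        }
        // Keep the top candidate string for each detected text region.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    // .accurate selects the slower, higher-quality recognition path;
    // .fast trades accuracy for frame rate in live use.
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

In a live ARKit pipeline this would be called with the `capturedImage` of each `ARFrame`, usually on a background queue so recognition does not block rendering.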
This prototype is designed to run on a 12.9" iPad running iOS 14 or later. To build, import the bundle into a local organization and build targeting the latest iOS SDK.
Portions of the code for this project were adapted from the following open source example: WWDC 2019 Session 234: Text Recognition in Vision Framework.