Description
The current back-end design relies heavily on a T9-style prediction system. This creates a chicken-and-egg problem: calibrating an unreliable sensor requires a finger mapping, and building the finger mapping requires a reliable sensor.
What if we sidestepped our current sensor data -> finger -> letter design and attempted to go directly from sensor data to letter? Who cares which finger the user moved if we know what letter they typed?
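As a rough illustration of the sensor data -> letter idea, here is a minimal nearest-centroid classifier sketch. The feature vectors, letters, and function names are all hypothetical placeholders, not part of our codebase:

```python
# Sketch: classify a raw sensor feature vector directly as a letter,
# with no intermediate finger stage. Purely illustrative.
import math
from collections import defaultdict

def train_centroids(samples):
    """samples: list of (feature_vector, letter) pairs.
    Returns a dict mapping each letter to the mean of its vectors."""
    sums = {}
    counts = defaultdict(int)
    for vec, letter in samples:
        if letter not in sums:
            sums[letter] = list(vec)
        else:
            sums[letter] = [a + b for a, b in zip(sums[letter], vec)]
        counts[letter] += 1
    return {l: [x / counts[l] for x in s] for l, s in sums.items()}

def classify(centroids, vec):
    """Return the letter whose centroid is nearest to vec (Euclidean)."""
    def dist(letter):
        c = centroids[letter]
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, vec)))
    return min(centroids, key=dist)
```

The point is that the label space is letters, so nothing in the pipeline ever needs to know which finger produced the signal.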
The calibration process would then be a simple set of sentences. For the initial seed data, we could work from a large dataset of sensor readings collected from 3 or 4 people. From there, each individual would do a much smaller data-gathering calibration pass (which would work the same way finger mapping was planned to work) that would hopefully take no more than 5 minutes (ideally about as long as it takes an iPhone 5S to learn your fingerprint).
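One way the short per-user pass could work is to blend the user's calibration samples into the seed model rather than training from scratch. This is a sketch under that assumption; `user_weight` and all names here are hypothetical:

```python
# Sketch: fold a small per-user calibration set into seed centroids
# (letter -> mean feature vector) built from the 3-4 person dataset.
from collections import defaultdict

def calibrate(seed_centroids, user_samples, user_weight=0.5):
    """user_samples: (feature_vector, letter) pairs from the
    calibration sentences. Shifts each letter's centroid toward
    the user's own data by user_weight."""
    grouped = defaultdict(list)
    for vec, letter in user_samples:
        grouped[letter].append(vec)
    blended = dict(seed_centroids)
    for letter, vecs in grouped.items():
        user_mean = [sum(col) / len(vecs) for col in zip(*vecs)]
        seed = seed_centroids.get(letter)
        if seed is None:
            blended[letter] = user_mean  # letter unseen in seed data
        else:
            blended[letter] = [
                (1 - user_weight) * s + user_weight * u
                for s, u in zip(seed, user_mean)
            ]
    return blended
```

Because the seed model does most of the work, the user only has to type a few sentences, which is what keeps the pass down in the 5-minute range.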
On the data structure side, the trie would no longer be keyed on fingers. It would be keyed on the number of distinct "buckets" we can sort sensor data into.
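Concretely, the trie's edges would be bucket IDs, and each node would hold the words whose bucket sequence ends there. A minimal sketch (the bucket IDs and words are made up for illustration):

```python
# Sketch: a trie keyed on sensor "bucket" IDs instead of fingers.
class BucketTrie:
    def __init__(self):
        self.children = {}  # bucket id -> BucketTrie
        self.words = []     # words whose bucket sequence ends here

    def insert(self, buckets, word):
        node = self
        for b in buckets:
            node = node.children.setdefault(b, BucketTrie())
        node.words.append(word)

    def lookup(self, buckets):
        """Return candidate words for a sequence of bucket IDs."""
        node = self
        for b in buckets:
            node = node.children.get(b)
            if node is None:
                return []
        return node.words
```

As with T9, several words can share one bucket sequence, so `lookup` returns a candidate list to be ranked by the prediction layer.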
This might be a little more dangerous, so I suggest we build the sensor -> finger -> letter system first; then, once the machine learning system is relatively robust, we can try this out in a separate branch.