This project detects hand-to-mouth eating gestures from accelerometer and gyroscope data, following these general steps:
- Collect Data: Record accelerometer and gyroscope data while performing hand-to-mouth eating actions as well as other, non-eating activities; this recorded data is used for training.
- Preprocess Data: Annotate the data, segmenting it into eating and non-eating windows. The program normalizes the data automatically.
- Train Model: An LSTM network, a variant of the Recurrent Neural Network (RNN), is trained on the labeled data.
- Evaluate Model: The model's performance is measured on a held-out test set.
- Deploy Model: Use the trained model to detect hand-to-mouth eating in real time.
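The preprocessing and training steps above could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the window length, channel count, layer sizes, and the synthetic stand-in data are all assumptions.

```python
import numpy as np
import tensorflow as tf

# Assumed sample format: rows of [ax, ay, az, gx, gy, gz] at a fixed rate.
WINDOW = 50      # samples per window (assumption)
CHANNELS = 6     # 3 accelerometer + 3 gyroscope axes

def make_windows(signal, labels, window=WINDOW):
    """Segment a (T, CHANNELS) signal into fixed-length windows.
    A window is labeled 1 (eating) if most of its samples are eating."""
    n = len(signal) // window
    X = signal[: n * window].reshape(n, window, CHANNELS)
    y = labels[: n * window].reshape(n, window).mean(axis=1) > 0.5
    return X, y.astype(np.float32)

def normalize(X):
    """Z-score each channel so accelerometer and gyroscope scales match."""
    mean = X.mean(axis=(0, 1), keepdims=True)
    std = X.std(axis=(0, 1), keepdims=True) + 1e-8
    return (X - mean) / std

# Synthetic stand-in data: 1000 samples, first half labeled "eating".
signal = np.random.randn(1000, CHANNELS).astype(np.float32)
labels = np.concatenate([np.ones(500), np.zeros(500)])
X, y = make_windows(signal, labels)
X = normalize(X)

# A small LSTM classifier over the windows (sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```

In a real deployment, `model.predict` would run on a sliding window over the live sensor stream rather than on pre-segmented batches.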
To see how it is implemented, run motion_sensor_data_lstm.ipynb in a GitHub Codespace or in Google Colab.
To set up a GitHub Codespace, install the Python extension first, then run the following command in the terminal:

```shell
python3 -m pip install tensorflow[and-cuda]
```
Video demonstrating training of the model

Video demonstrating the trained model detecting the hand-to-mouth motion in new data