
Expresso AI: A Framework for Explainable Expression AI models

Introduction

This is a transfer-learning framework for PyTorch based on a re-implementation of the 2016 CVPR paper "Eye Tracking for Everyone", implemented by Petr Kellnhofer ( https://people.csail.mit.edu/pkellnho/ ). See https://github.com/CSAILVision/GazeCapture for more information.

Dependencies

  • Requires CUDA and Python 3.6+
  • Requires ffmpeg
  • Requires libavdevice-dev to use torchvision video models
  • If you are using pip, make sure it is up to date by running pip install --upgrade pip
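The dependency list above can be sanity-checked before installing anything. This is a hedged sketch, not part of the repo: it only verifies the interpreter version and whether ffmpeg is on the PATH (CUDA and libavdevice availability need separate checks).

```python
# Preflight sketch (not part of the repo): verify the interpreter and
# command-line tools the dependency list names.
import shutil
import sys

def check_dependencies():
    """Return a dict mapping each dependency to whether it was found."""
    return {
        # The README requires Python 3.6+
        "python>=3.6": sys.version_info >= (3, 6),
        # ffmpeg must be on the PATH; libavdevice-dev is a system package
        # that cannot be probed this way.
        "ffmpeg": shutil.which("ffmpeg") is not None,
    }

if __name__ == "__main__":
    for dep, ok in check_dependencies().items():
        print(f"{dep}: {'OK' if ok else 'MISSING'}")
```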

To install OpenCV libraries for python

Install python3-opencv by running sudo apt-get install python3-opencv (for Ubuntu-based distributions)

To install the Python package requirements, run (exact versions may not be necessary):

pip install -r requirements.txt

To install the Python requirements for the 3D CNN models, run:

pip install -r requirements_latest.txt

How to use

Dataset preparation

  1. Download the dataset and extract it to an "original_dataset" directory.
  2. Prepare the dataset to standardize the file structure. The output directory must contain a labels.csv file which maps file paths to unique "id"s and labels. See classifier_dataset/labels.csv or regression_dataset/labels.csv as examples. Prepare the dataset by running:
python scripts/prepare_dataset.py --input_path=[A = original_dataset directory] --output_path=[B = prepared_dataset directory]
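Step 2 above expects a labels.csv in the output directory. A minimal sketch of producing one follows; the column names used here ("id", "path", "label") and the label values are assumptions for illustration — consult classifier_dataset/labels.csv or regression_dataset/labels.csv for the authoritative schema.

```python
# Hypothetical sketch of a labels.csv mapping ids to file paths and labels.
# Column names and label values are assumptions; see the repo's example
# labels.csv files for the real format.
import csv
import io

rows = [
    {"id": "id_0", "path": "data/id_0/original.avi", "label": "happy"},
    {"id": "id_1", "path": "data/id_1/original.avi", "label": "neutral"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "path", "label"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```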

The resulting file structure looks as follows:

data
\---id_0
    \---original.avi
\---id_1
    \---original.avi
...
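A quick way to verify that the preparation step produced the layout shown above is to walk the output directory and flag any id subdirectory missing its original.avi. This checker is a sketch, not part of the repo:

```python
# Sketch (not part of the repo): check that every id_* directory in the
# prepared dataset contains an original.avi, as in the tree above.
import os
import tempfile

def find_missing_videos(data_root):
    """Return the ids whose directory lacks an original.avi file."""
    missing = []
    for entry in sorted(os.listdir(data_root)):
        subdir = os.path.join(data_root, entry)
        if os.path.isdir(subdir) and not os.path.isfile(
            os.path.join(subdir, "original.avi")
        ):
            missing.append(entry)
    return missing

# Demo on a throwaway tree mirroring the README's structure.
root = tempfile.mkdtemp()
for vid_id in ("id_0", "id_1"):
    os.makedirs(os.path.join(root, vid_id))
open(os.path.join(root, "id_0", "original.avi"), "w").close()
print(find_missing_videos(root))  # id_1 has no video file yet
```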

Processing the dataset

Landmark Extraction

This routine extracts crops of the ROIs (face, left_eye, right_eye) proportional to the input dimensions and then scales them to match the desired dimensions:

python scripts/process_dataset.py --data_path=./prepared_dataset --input_width=112 --output_width=112
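The crop-then-scale step can be sketched with NumPy. The fixed bounding box and nearest-neighbour resize below are illustrative stand-ins; the actual script presumably derives the ROIs from detected facial landmarks and would typically use cv2.resize.

```python
# Illustrative sketch of "crop an ROI, then scale to a target width".
# The hard-coded box stands in for a detected face/eye region; the real
# script computes ROIs from landmarks.
import numpy as np

def crop_and_scale(frame, box, out_size):
    """Crop (x, y, w, h) from frame, then nearest-neighbour resize to out_size."""
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    # Nearest-neighbour resize via index sampling
    # (cv2.resize would normally handle this).
    ys = np.arange(out_size) * h // out_size
    xs = np.arange(out_size) * w // out_size
    return crop[np.ix_(ys, xs)]

frame = np.arange(224 * 224, dtype=np.uint8).reshape(224, 224)
roi = crop_and_scale(frame, box=(50, 60, 80, 80), out_size=112)
print(roi.shape)
```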

Action Unit Extraction
  • Install Docker. Installation instructions
  • Download and run OpenFace Docker image by running:
    docker run -it --rm algebr/openface:latest
    This should spawn a root@docker_container_id bash shell.
  • From another terminal (outside the container), upload the data to be processed and script to process action unit by running:
    docker cp ./prepared_dataset_directory docker_container_id:/home
    docker cp ./scripts docker_container_id:/home
  • Process action units for the dataset. Run from within the container:
    scripts/extract_action_units.py --data_path=./data_path
  • Extract the computed action units from the container. From outside the container, run:
    docker cp docker_container_id:/home/AUs ./
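OpenFace writes one CSV of action units per processed video; by its documented convention, intensity columns end in `_r` (e.g. AU01_r) and presence columns in `_c`. A hedged sketch of pulling out the intensity columns — the sample header and values below are made up for illustration:

```python
# Sketch of reading action-unit intensity columns from an OpenFace CSV.
# Column names like "AU01_r" follow OpenFace's convention; the sample
# data here is fabricated for illustration only.
import csv
import io

sample = (
    "frame,timestamp,AU01_r,AU04_r,AU01_c\n"
    "1,0.033,0.52,1.10,1\n"
    "2,0.066,0.48,0.95,1\n"
)

def read_au_intensities(fileobj):
    """Return a list of {AU name: float intensity} dicts, one per frame."""
    frames = []
    for row in csv.DictReader(fileobj):
        # Some OpenFace versions pad header names with spaces, hence strip().
        frames.append(
            {k.strip(): float(v) for k, v in row.items() if k.strip().endswith("_r")}
        )
    return frames

aus = read_au_intensities(io.StringIO(sample))
print(aus[0])
```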

Environment configuration

Set the environment variables that depend on your machine's specifications (GPU, memory, and cores) in environment_config.py. First run:

cp environment_config_example.py environment_config.py

Then modify environment_config.py with your hardware parameters.
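environment_config.py presumably holds plain Python constants. The variable names below are assumptions for illustration — the authoritative keys are whatever environment_config_example.py actually defines.

```python
# Hypothetical environment_config.py contents — these variable names are
# assumptions; copy and edit environment_config_example.py for the real keys.
NUM_WORKERS = 8    # DataLoader worker processes (roughly your CPU core count)
BATCH_SIZE = 32    # lower this if the GPU runs out of memory
USE_CUDA = True    # set False to train on CPU
```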

Training/Testing/Interpreting

For usage, run

python main.py [-h or --help]

Training

python main.py --mode=[RNN_TANH, RNN_RELU, LSTM, GRU] --train --data_path=[path/to/data/ defaults to ./data]

Testing

python main.py --mode=[RNN_TANH, RNN_RELU, LSTM, GRU] --test --data_path=[path/to/data/ defaults to ./data]

Interpreting

python main.py --interpret

Note that routines can be chained, e.g. --train --test --interpret.
