This is a transfer learning model for PyTorch based on Petr Kellnhofer's re-implementation (https://people.csail.mit.edu/pkellnho/) of the 2016 CVPR paper "Eye Tracking for Everyone". See https://github.com/CSAILVision/GazeCapture for more information.
Requires CUDA and Python 3.6+
Requires ffmpeg
Requires libavdevice-dev to use torchvision video models
If you are using pip, make sure that you are using the latest version by running pip install --upgrade pip
Install python3-opencv by running sudo apt-get install python3-opencv (for Ubuntu-based distributions)
pip install -r requirements.txt
pip install -r requirements_latest.txt
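After installing the requirements, a quick sanity check of the prerequisites above can be run in Python (a minimal sketch, assuming requirements.txt installs torch):

import sys
import torch

# Verify the prerequisites listed above: Python 3.6+ and a CUDA-capable GPU.
assert sys.version_info >= (3, 6), "Python 3.6+ is required"
print("CUDA available:", torch.cuda.is_available())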
- Download the dataset and extract it to an "original_dataset" directory.
- Prepare the dataset to standardize the file structure. The output directory must contain a labels.csv file which maps file paths to unique "id"s and labels. See classifier_dataset/labels.csv or regression_dataset/labels.csv as examples (a minimal reader sketch follows the file tree below). Prepare the dataset by running:
python scripts/prepare_dataset.py --input_path=[A = original_dataset directory] --output_path=[B = prepared_dataset directory]
The resulting file structure looks as follows:
data
\---id_0
    \---original.avi
\---id_1
    \---original.avi
...
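For reference, the prepared labels.csv can be inspected with pandas. This is a minimal sketch, assuming pandas is installed; the path and the "id" column name are assumptions, so check classifier_dataset/labels.csv for the actual schema:

import pandas as pd

# Load the mapping of file paths to ids and labels produced by
# prepare_dataset.py. Column names here are assumptions; see the
# bundled classifier_dataset/labels.csv for the real ones.
labels = pd.read_csv("prepared_dataset/labels.csv")
print(labels.head())
print(labels["id"].nunique(), "unique ids")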
This routine extracts crops of the ROIs (face, left_eye, right_eye) proportional to the input dimensions and then scales them to match the desired output dimensions, as sketched after the command below.
python scripts/process_dataset.py --data_path=./prepared_dataset --input_width=112 --output_width=112
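Conceptually, the crop-and-scale step works as in the sketch below. The function name and the ROI format are illustrative assumptions, not the script's actual API:

import cv2

# Given a frame and an ROI bounding box (x, y, w, h) proportional to the
# input dimensions, extract the crop and scale it to a square of the
# desired output width.
def crop_and_scale(frame, roi, output_width=112):
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]
    return cv2.resize(crop, (output_width, output_width))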
- Install Docker (see Docker's official installation instructions).
- Download and run OpenFace Docker image by running:
docker run -it --rm algebr/openface:latest
This should spawn a root@docker_container_id bash shell.
- From another terminal (outside the container), upload the data to be processed and the script that processes action units by running:
docker cp ./prepared_dataset_directory docker_container_id:/home
docker cp ./scripts docker_container_id:/home
- Process action units for the dataset. Run from within the container:
scripts/extract_action_units.py --data_path=./data_path
- Extract the computed action units from the container. From outside the container, run:
docker cp docker_container_id:/home/AUs ./
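Once the AUs directory has been copied out, the per-video CSVs can be loaded for inspection. A minimal sketch, assuming pandas is installed; the file name under ./AUs is an assumption. OpenFace writes per-frame AU intensity (*_r) and presence (*_c) columns:

import pandas as pd

# Load one action-unit CSV produced by OpenFace. OpenFace pads its column
# names with leading spaces, so strip them before selecting columns.
aus = pd.read_csv("AUs/id_0.csv")
aus.columns = aus.columns.str.strip()
intensities = aus[[c for c in aus.columns if c.endswith("_r")]]
print(intensities.describe())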
Set the environment variables that depend on the machine's specifications (GPU, memory, and cores) in environment_config.py. Run:
cp environment_config_example.py environment_config.py
Then modify environment_config.py with your hardware parameters.
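As an illustration only, the edited environment_config.py might look like the sketch below; the variable names are hypothetical assumptions, and the real ones come from environment_config_example.py:

# Illustrative hardware parameters; variable names are hypothetical,
# copy environment_config_example.py for the real ones.
USE_CUDA = True      # a CUDA-capable GPU is available
NUM_WORKERS = 4      # data-loading processes, bounded by CPU cores
BATCH_SIZE = 32      # bounded by GPU memory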
For usage information, run:
python main.py [-h or --help]
python main.py --mode=[RNN_TANH, RNN_RELU, LSTM, GRU] --train --data_path=[path/to/data; defaults to ./data]
python main.py --mode=[RNN_TANH, RNN_RELU, LSTM, GRU] --test --data_path=[path/to/data; defaults to ./data]
python main.py --interpret
Note that routines can be chained, e.g. --train --test --interpret.
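For example, to train, evaluate, and interpret an LSTM model in a single run (an illustrative invocation using the flags documented above):
python main.py --mode=LSTM --train --test --interpret --data_path=./data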