GazeSCRNN: Event-based Near-eye Gaze Tracking using a Spiking Neural Network

GazeSCRNN is a spiking convolutional recurrent neural network designed for event-based near-eye gaze tracking. This repository contains the official implementation of the model, together with the training scripts and evaluation metrics.
If you find our paper or this repository helpful, please consider citing:
```bibtex
@misc{groenen2025gazescrnn,
  title={GazeSCRNN: Event-based Near-eye Gaze Tracking using a Spiking Neural Network},
  author={Stijn Groenen and Marzieh Hassanshahi Varposhti and Mahyar Shahsavari},
  year={2025},
  eprint={2503.16012},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.16012},
}
```
To install the required dependencies, run:

```
pip install -r requirements.txt
```

Besides the Python dependencies, training and testing the GazeSCRNN models require the EV-Eye dataset to be present in the EV_Eye_dataset directory. The EV-Eye dataset can be obtained by following the steps here.
To train the GazeSCRNN model, run the train.py script with the desired parameters. For example:

```
python train.py Experiment1 --data_preload --gpus 0 --fptt
```

Alternatively, you can train the GazeSCRNN model with one of the predefined configurations. For example:
```
xargs python train.py --gpus 0 < configs/GazeSCRNN-Events300-FPTT-Backprop8.txt
```

To test a checkpoint of the GazeSCRNN model, run the test.py script with the desired parameters:
```
python test.py <experiment_name> <path_to_checkpoint_file> --gpus <gpu_id>
```

This will output evaluation metrics such as Mean Angle Error (MAE), Mean Pupil Error (MPE), and Mean Firing Rate (MFR).
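For reference, the angle and pupil metrics can be sketched roughly as follows. This is an illustrative implementation, not the repository's own code: the helper names `mean_angle_error` and `mean_pupil_error` are hypothetical, and it assumes gaze directions are 3D vectors and pupil positions are 2D coordinates.

```python
import numpy as np

def mean_angle_error(pred, target):
    # Hypothetical sketch: mean angle in degrees between predicted and
    # ground-truth 3D gaze direction vectors, each of shape (N, 3).
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    cos = np.clip((pred * target).sum(axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

def mean_pupil_error(pred, target):
    # Hypothetical sketch: mean Euclidean distance between predicted and
    # ground-truth 2D pupil positions, each of shape (N, 2).
    return float(np.linalg.norm(pred - target, axis=1).mean())
```

The Mean Firing Rate (MFR) is a property of the spiking network's internal activity rather than of its predictions, so it is not sketched here.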
This project is licensed under the MIT License. See the LICENSE file for more details.