This repository contains the introduction and the code for the IMWUT 2024 paper PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services by Lingyu Du, Jinyuan Jia, Xucong Zhang, and Guohao Lan. If you have any questions, please send an email to Lingyu.Du AT tudelft.nl.
Eye gaze contains rich information about human attention and cognitive processes. This capability makes the underlying technology, known as gaze tracking, a critical enabler for many ubiquitous applications and has triggered the development of easy-to-use gaze estimation services. Indeed, by utilizing the ubiquitous cameras on tablets and smartphones, users can readily access many gaze estimation services. In using these services, users must provide their full-face images to the gaze estimator, which is often a black box. This poses significant privacy threats to the users, especially when a malicious service provider gathers a large collection of face images to classify sensitive user attributes. In this work, we present PrivateGaze, the first approach that can effectively preserve users' privacy in black-box gaze tracking services without compromising gaze estimation performance. Specifically, we propose a novel framework to train a privacy preserver that converts full-face images into obfuscated counterparts, which are effective for gaze estimation while containing no privacy information. Evaluation on four datasets shows that the obfuscated images can protect users' private information, such as identity and gender, against unauthorized attribute classification. Meanwhile, when used directly as inputs to the black-box gaze estimator, the obfuscated images lead to tracking performance comparable to that of the conventional, unprotected full-face images.
Raw images are captured by cameras from subjects; obfuscated images are converted from raw images by the trained privacy preserver; estimated gazes visualize the gaze direction estimated from obfuscated images by the black-box gaze estimator.
The illustration of PrivateGaze is shown in the following figure. The core of PrivateGaze is the privacy preserver, which transforms the original privacy-sensitive full-face image into an obfuscated version as input for the untrusted gaze estimation services. During the training stage, we train the privacy preserver with the assistance of a pre-trained surrogate gaze estimator. After training, the privacy preserver is deployed on the user's device to generate obfuscated images that can be used by the black-box gaze estimation services. This ensures accurate gaze estimation while preventing the user's private attributes, such as gender and identity, from being inferred by the service provider.
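The deployment-time flow described above can be sketched as follows. This is a minimal illustration, not the repository's actual API: `privacy_preserver` and `black_box_gaze_estimator` are hypothetical stand-ins for the trained U-Net preserver and the remote service, and the shapes are illustrative assumptions.

```python
import numpy as np

def privacy_preserver(face: np.ndarray) -> np.ndarray:
    """Stand-in for the trained preserver: maps a full-face image to an
    obfuscated image of the same shape (a fixed noisy mapping here,
    purely to show the data flow, not the learned transformation)."""
    rng = np.random.default_rng(0)          # fixed seed: deterministic mapping
    mask = rng.standard_normal(face.shape)
    return np.clip(0.1 * face + mask, 0.0, 1.0)

def black_box_gaze_estimator(image: np.ndarray) -> np.ndarray:
    """Stand-in for the untrusted service: returns a (pitch, yaw) pair."""
    return np.array([image.mean() - 0.5, image.std() - 0.5])

face = np.random.default_rng(1).random((224, 224, 3))  # raw camera frame
obfuscated = privacy_preserver(face)         # runs locally on the user's device
gaze = black_box_gaze_estimator(obfuscated)  # only the obfuscated image leaves the device

assert obfuscated.shape == face.shape
assert gaze.shape == (2,)
```

The key point of the wiring is that the raw face image never leaves the device; the service only ever observes the obfuscated counterpart.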
We consider a more practical case where the gaze estimator
We assume the malicious service provider can stealthily collect a dataset
In this work, we envision a trustworthy party that provides a privacy preserver
- The obfuscated image $x'$ cannot be used to correctly classify private attributes of the user, such as identity and gender, even if the malicious service provider trains deep learning-based classifiers on $\mathcal{D}_p$, i.e., a set of $x'$ with accurate labels for these confidential user attributes.
- The obfuscated image $x'$ can be directly used by $\mathcal{G}_b(\cdot)$ without any adaptation needed from the service provider's side. The gaze estimation performance of $\mathcal{G}_b(\cdot)$ with $x'$ should be similar to that with the original full-face images.
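The privacy goal above can be made concrete with a toy attack experiment: an adversary fits a classifier on labeled images and we check that it succeeds on raw images but drops to near chance on obfuscated ones. Everything below is a hedged, synthetic sketch, assuming toy two-identity data and a simple nearest-centroid attacker; it is not the paper's evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two "identities" whose raw images differ by a mean shift (assumption).
n, d = 200, 64
labels = rng.integers(0, 2, size=n)
raw = rng.normal(0.0, 0.1, size=(n, d)) + labels[:, None]  # identity shifts the mean

def obfuscate(x):
    """Stand-in obfuscation: replace identity-bearing content with noise
    around a shared anchor (illustrative, not the trained preserver)."""
    return np.full_like(x, 0.5) + rng.normal(0.0, 0.1, size=x.shape)

obf = obfuscate(raw)

def nearest_centroid_acc(x, y):
    """Train/test split plus a nearest-centroid attack classifier."""
    tr, te = slice(0, 100), slice(100, None)
    c0 = x[tr][y[tr] == 0].mean(axis=0)
    c1 = x[tr][y[tr] == 1].mean(axis=0)
    pred = (np.linalg.norm(x[te] - c1, axis=1)
            < np.linalg.norm(x[te] - c0, axis=1)).astype(int)
    return (pred == y[te]).mean()

acc_raw = nearest_centroid_acc(raw, labels)  # attack succeeds on raw images
acc_obf = nearest_centroid_acc(obf, labels)  # near chance on obfuscated images
assert acc_raw > 0.9
assert 0.3 < acc_obf < 0.7
```

In this toy setup the attacker's accuracy collapses to roughly coin-flipping once the identity-bearing signal is removed, which is the behavior the first design goal demands of $x'$.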
To achieve the design goals, we propose a novel framework PrivateGaze consisting of a privacy preserver, an anchor image generation module, and the surrogate gaze estimator as shown in the following figure. The privacy preserver
To achieve the utility goal, the privacy preserver
We propose a novel method for generating the anchor image
A major challenge in achieving this utility goal is training
The anchor images generated on GazeCapture for EfficientNet, MobileNet, ResNet, ShuffleNet, and VGG respectively are shown as follows.
The structure of privacy preserver
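Since unet_models_v2.py defines the preserver architectures, a U-Net-style encoder-decoder with skip connections is presumably involved. The sketch below only illustrates the characteristic U-Net wiring at the shape level (downsampling path, upsampling path, skip concatenations); convolutions and learned weights are omitted, so this is an assumption-laden walkthrough, not the repository's model.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling over an (H, W, C) array."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x spatial upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_forward(x, depth=2):
    """Shape-level U-Net pass: encode, then decode with skip connections.
    The channel slice stands in for the 1x1 conv that would merge the
    concatenated features in a real network."""
    skips = []
    for _ in range(depth):
        skips.append(x)          # stash encoder feature map for the skip
        x = downsample(x)        # halve spatial resolution
    for skip in reversed(skips):
        x = upsample(x)                          # restore resolution
        x = np.concatenate([x, skip], axis=-1)   # skip connection
        x = x[..., : skip.shape[-1]]             # stand-in for a conv merge
    return x

face = np.zeros((64, 64, 3))
out = unet_forward(face)
assert out.shape == face.shape  # preserver output matches the input image shape
```

The shape check at the end reflects the property the preserver needs in this pipeline: its output must be a drop-in replacement for the full-face image expected by the black-box estimator.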
- Black-box gaze estimators trained on ETH-XGaze and the surrogate gaze estimator trained on GazeCapture are available at black-box gaze estimators.
- Anchor images are available in figures/.
- TrainPrivacyPreserver is the main file for training a privacy preserver given a black-box gaze estimator.
- unet_models_v2.py defines the architectures of the privacy preserver.
Please cite the following paper in your publications if the code helps your research.
@article{du2024privategaze,
title={PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services},
author={Du, Lingyu and Jia, Jinyuan and Zhang, Xucong and Lan, Guohao},
journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
volume={8},
number={3},
year={2024},
publisher={ACM New York, NY, USA}
}