Official implementation of the paper "The Adversarial Implications of Variable-Time Inference".
This work presents novel findings demonstrating that a decision-based attack can be enhanced with a timing side channel: the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We focus our investigation on leakage in the non-maximum suppression (NMS) algorithm, which is ubiquitous in object detectors, and demonstrate attacks against the YOLOv3 detector that use timing to evade object detection with adversarial examples.
Our adversary aims to evade detection by applying adversarial perturbations to an image.
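As a rough illustration of the side channel (not part of the released code), the sketch below times a simple greedy NMS implementation and shows that its running time grows with the number of candidate boxes it has to process; this data-dependent behavior is what the attacker can observe without seeing the detector's output. The NMS routine, thresholds, and box generation here are illustrative assumptions, not the exact YOLOv3 post-processing used in main.py.

```python
# Illustrative only: greedy NMS whose runtime depends on the number of
# candidate boxes -- the data-dependent behavior a timing attacker exploits.
import time
import numpy as np

def iou(box, boxes):
    # box and boxes use [x1, y1, x2, y2] coordinates.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_nms(boxes, scores, iou_thresh=0.5):
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps <= iou_thresh]
    return keep

def timed_nms(n_boxes, trials=50):
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 400, size=(n_boxes, 2))
    wh = rng.uniform(10, 60, size=(n_boxes, 2))
    boxes = np.hstack([xy, xy + wh])
    scores = rng.uniform(0.5, 1.0, size=n_boxes)
    start = time.perf_counter()
    for _ in range(trials):
        greedy_nms(boxes, scores)
    return (time.perf_counter() - start) / trials

# More candidate boxes -> longer NMS time, which leaks information about
# the detector's predictions through execution time alone.
for n in (10, 100, 1000):
    print(f"{n:5d} boxes: {timed_nms(n) * 1e3:.3f} ms per NMS call")
```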
We recommend using conda to install the required libraries:
- Setup conda:
conda deactivate
conda create --name timing_attack python=3.6
conda activate timing_attack
- Other dependencies (an optional import check follows these commands):
pip install tensorflow==2.0.0
pip install keras==2.3.0
pip install matplotlib==3.2.2
pip install pillow==7.2.0
pip install scipy==1.1.0
pip install h5py==2.10.0
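As an optional sanity check (not part of the repository), you can confirm that the pinned versions were installed correctly before continuing:

```python
# Optional sanity check that the pinned dependency versions are importable.
import tensorflow as tf
import keras
import scipy
import h5py

print("tensorflow:", tf.__version__)  # expected 2.0.0
print("keras:", keras.__version__)    # expected 2.3.0
print("scipy:", scipy.__version__)    # expected 1.1.0
print("h5py:", h5py.__version__)      # expected 2.10.0
```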
- Create the folders for the pre-trained model:
cd Adversarial-Implications-Variable-Time-Inference
mkdir YOLO
mkdir YOLO/model
1. Download the pre-trained model from the following link and place it in YOLO/model/:
- YOLOv3 model trained on the COCO dataset.
- Create the folder for the images you want to attack:
mkdir COCO
2. Place the images you want to attack in COCO/. Any image with a .png or .jpg extension can be used (supporting additional formats requires changes to the main file). We recommend working with images from the MS-COCO dataset.
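Before running the demo, you can optionally confirm the expected layout with a snippet like the one below. The weights filename shown in the comment is only an example; use whatever name the downloaded model file has.

```python
# Optional check of the expected directory layout before running main.py.
from pathlib import Path

model_dir = Path("YOLO/model")
image_dir = Path("COCO")

weights = list(model_dir.glob("*.h5"))  # e.g. yolov3.h5 (the actual name may differ)
images = [p for p in image_dir.glob("*")
          if p.suffix.lower() in (".jpg", ".png")]

print("model files found:", [p.name for p in weights])
print("images to attack:", [p.name for p in images])
```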
- Run demo:
python main.py
After running the program, you can find the outputs under the time_attack_samples/ directory.
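To quickly inspect the generated samples, a viewer like the one below works with the matplotlib dependency already installed. It assumes the outputs are written as .png or .jpg image files, which may differ from the actual output format of main.py.

```python
# Display the attack outputs written to time_attack_samples/.
from pathlib import Path
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

for path in sorted(Path("time_attack_samples").glob("*")):
    if path.suffix.lower() not in (".jpg", ".png"):
        continue
    plt.figure()
    plt.imshow(mpimg.imread(path))
    plt.title(path.name)
    plt.axis("off")
plt.show()
```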
If you find our work useful in your research, please consider citing:
@inproceedings{biton2023adversarial,
  title={The Adversarial Implications of Variable-Time Inference},
  author={Biton, Dudi and Misra, Aditi and Levy, Efrat and Kotak, Jaidip and Bitton, Ron and Schuster, Roei and Papernot, Nicolas and Elovici, Yuval and Nassi, Ben},
  booktitle={Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security},
  pages={103--114},
  year={2023}
}
Distributed under the MIT License. See the LICENSE file for more information.
