This repository contains the official code and dataset for the paper:
"A Physics-based Simulation Framework and Dataset for LED Flicker in Automotive Imaging," accepted at the Electronic Imaging (EI) 2026 conference.
Paper Link: EI 2026 Proceedings
Dataset Link: Stanford Digital Repository
LED flicker is a persistent artifact in automotive imaging where lights modulated via Pulse Width Modulation (PWM) produce temporal intensity variations in captured video. Our framework provides an open-source, physics-based simulation pipeline that integrates an analytical PWM flicker model with active camera transforms to simulate realistic motion blur and flicker artifacts.
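The core idea of an analytical PWM flicker model can be sketched in a few lines: the intensity a camera captures from a PWM-driven LED is the overlap between the exposure window and the LED's on-phase. The sketch below uses an idealized square wave and is not the paper's exact model; the function name and parameters are illustrative.

```python
import numpy as np

def pwm_captured_fraction(t0, t_exp, freq, duty):
    """Fraction of the maximum possible LED signal captured by an
    exposure window starting at t0 (s) with duration t_exp (s),
    for an ideal PWM source of frequency freq (Hz) and duty cycle
    duty (0..1). Computed by integrating the square wave over the
    exposure window."""
    period = 1.0 / freq
    on_time = duty * period

    def on_integral(t):
        # Accumulated LED on-time in [0, t] for the ideal square wave
        # (LED is on during the first duty*period of each cycle).
        n_full = np.floor(t / period)
        remainder = t - n_full * period
        return n_full * on_time + np.minimum(remainder, on_time)

    return (on_integral(t0 + t_exp) - on_integral(t0)) / t_exp
```

This already exhibits the motion-flicker trade-off described above: when t_exp is an integer multiple of the PWM period, the captured fraction equals the duty cycle regardless of phase (no flicker), while a short exposure can land entirely in the off-phase and capture nothing.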

Figure: Sample scenes from the dataset illustrating the motion-flicker trade-off. Top row (LPD) shows short-exposure flickering; bottom row (SPD) shows long-exposure signal recovery with motion blur at various ego-velocities (v).
We provide a camera-agnostic dataset of 200 automotive scenes selected for the prominence of LED sources. The dataset uses a post-render light control (Option B) approach, described in the paper, which lets researchers modulate flicker parameters after rendering.
The dataset is organized by SceneID. Each folder contains decomposed light group radiance maps (EXR) to facilitate flexible sensor simulation (by combining these light groups with different weights):
dataset/
└── <SceneID>/
├── <SceneID>_{lightgroup}/ # {headlights, streetlights, otherlights, skymap}
│ ├── lpd/ # Short-exposure (3–5 ms) radiance; 3 frames
│ │ ├── 01.exr ... 03.exr
│ └── spd/ # Long-exposure (11.11 ms) radiance; 3 frames
│ ├── 01.exr ... 03.exr
├── <SceneID>_lf.mat # Metadata: depth map, LPD exposure (3–5 ms), and velocity (20–50 m/s)
└── <SceneID>_skymap.png # Thumbnail: Quick-view using the skymap image
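Because radiance is linear, a composite scene is just a weighted sum of the decomposed light-group maps, with each group's weight set by its flicker state for the frame's exposure window. The sketch below uses small synthetic arrays in place of the EXR files; in practice each map would be loaded from <SceneID>_{lightgroup}/lpd/01.exr, and the specific weights here are illustrative.

```python
import numpy as np

# Hypothetical light-group radiance maps for one frame; in the real
# dataset each would be an HDR image loaded from the EXR files.
h, w, c = 4, 4, 3
groups = {
    "headlights": np.full((h, w, c), 0.8),
    "streetlights": np.full((h, w, c), 0.5),
    "otherlights": np.full((h, w, c), 0.2),
    "skymap": np.full((h, w, c), 0.1),
}

# Per-group weights, e.g. flicker gains for this frame's exposure
# window (1.0 = fully on, 0.0 = fully off). The skymap and any
# non-flickering groups keep weight 1.0.
weights = {"headlights": 0.25, "streetlights": 1.0,
           "otherlights": 1.0, "skymap": 1.0}

# Radiance is linear, so the composite scene is a weighted sum.
composite = sum(weights[g] * groups[g] for g in groups)
```

Varying only the weights across frames reproduces temporal flicker without re-running the ray tracer, which is the point of the post-render (Option B) design.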
To utilize the post-render light control and sensor simulation, the following toolboxes are required:
- ISETCam: Core image system simulation and sensor modeling.
- isethdrsensor: Specialized support for light group control, split-pixel and multi-exposure HDR sensor architectures.
- (Optional) iset3d-tiny: Required only if you intend to run pre-render (Option A) light control. If you do, and you have access to the ISETAuto assets, use the fix-geometry branch on my fork or refer to my PR to ISET3D; that branch fixes critical bugs in ISET3D that otherwise break repeated scene rendering (e.g., over a frame sequence).
The framework is designed for use in MATLAB (tested on R2024b). Note that the MATLAB UI is required for ISETCam visualization components (ipWindow). Run ieInit from MATLAB to initialize the ISET toolboxes before running any scripts.
We utilize the post-render light control approach to enable researchers to modulate flicker parameters (such as duty cycle and frequency) after the ray-tracing step.
- Download the Dataset: Place the <SceneID> directories in your iset-lfm/local/data folder.
- Set Paths: Add the scripts/ folder and the required dependencies mentioned above to your MATLAB path.
- Run the Simulation: Execute the main processing script for a specific scene:
% sceneID: Unique ID of the scene (e.g., '1112154733')
% Nframes: Number of frames to process (maximum 3 available in dataset)
lf_RunCam_LightCtrl(sceneID, Nframes)
Please refer to flicker_analysis.ipynb for examples of how to use the flicker model, simulate flicker over frame sequences, and reproduce the rolling shutter banding effect. The notebook also validates our flicker model against previously published results.
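The rolling shutter banding effect mentioned above has a simple mechanism: each sensor row starts its exposure one line time after the row above it, so successive rows sample the PWM waveform at different phases and the per-row gain oscillates, producing horizontal bands. The following is an illustrative sketch under an ideal square-wave assumption, not the notebook's code; all parameter values are made up for the example.

```python
import numpy as np

freq, duty = 90.0, 0.3        # PWM frequency (Hz) and duty cycle
t_exp, t_line = 0.001, 30e-6  # per-row exposure and line time (s)
n_rows = 480
period = 1.0 / freq

def on_integral(t):
    # Accumulated LED on-time in [0, t] for an ideal square wave.
    n_full = np.floor(t / period)
    rem = t - n_full * period
    return n_full * duty * period + np.minimum(rem, duty * period)

# Each row's exposure starts one line time after the previous row's,
# so rows see the PWM waveform at staggered phases.
t0 = np.arange(n_rows) * t_line
row_gain = (on_integral(t0 + t_exp) - on_integral(t0)) / t_exp

# row_gain swings between 0 (row exposed entirely in the off-phase)
# and 1 (entirely in the on-phase); the band period in rows is
# period / t_line.
```

Multiplying a flickering light group's radiance by row_gain, row by row, gives the banded appearance; increasing t_exp toward a full PWM period flattens row_gain toward the duty cycle and the bands disappear.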
If you use this framework or dataset in your research, please cite our EI 2026 paper and dataset:
Ayush M. Jamdar, "ISET-LFM: A Physics-based Simulation Framework and Dataset for LED Flicker in Automotive Imaging," in Electronic Imaging, 2026, pp. 105-1–105-7, https://doi.org/10.2352/EI.2026.38.16.AVM-105
@misc{jamdar2026iset,
author = {Jamdar, Ayush M.},
title = {{ISET-LFM: A Physics-based Synthetic Radiance Dataset for LED Flicker Mitigation in Automotive Imaging}},
howpublished = {Stanford Digital Repository},
year = {2026},
url = {https://purl.stanford.edu/wd776hn7919},
note = {Available at \url{https://purl.stanford.edu/wd776hn7919}}
}

This work was supported by Omnivision Technologies through the 2025 Summer Internship Program as an open-source collaboration with the Stanford VISTA Lab. I am deeply grateful to Dr. Ramakrishna Kakarala from OVT and Prof. Brian Wandell from Stanford for their technical guidance and mentorship. This research was built upon the robust foundation of the ISET open-source ecosystem.