Detection Limits of Artificial Intelligence (AI) based Object Detection from Scanning Electron Microscopy (SEM) Images
This project designs a methodology for quantifying detection limits of AI model-based measurements from SEM images and relating them to digital image quality metrics and to human observers.
This software supports the methodology for quantifying detection limits for AI-based measurements from SEM images. The detection limits establish the relationship between the quality of SEM images and human or AI model detection performance. The software is actively developed.
The repository contains the software for
- extracting image quality metrics from simulated SEM image collections
- merging image quality metrics with AI model accuracy metrics, where the AI model was trained on the same simulated SEM image collection
- plotting image quality metrics and AI model accuracy metrics as a function of SEM Image simulation noise and contrast parameters
- plotting relationships between detection limits of human (eye model) and numerical (AI model) observers
- plotting relationships between AI model single-valued metrics (Dice, FNR, FPR) and multiple SNR definitions
- interactively interrogating all plots via web interface
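As an illustration of the single-valued accuracy metrics listed above, here is a minimal sketch using the standard definitions of Dice, FNR, and FPR over binary segmentation masks; the repository's scripts define the authoritative computation, which may differ in conventions (e.g., handling of empty masks).

```python
import numpy as np

def mask_metrics(pred, truth):
    """Dice coefficient, false-negative rate, and false-positive rate
    for two binary masks (standard definitions; illustrative only)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return dice, fnr, fpr

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
dice, fnr, fpr = mask_metrics(pred, truth)
```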
The repository contains two folders:
- src folder: contains all Python scripts
- web folder: contains all HTML, JavaScript, and CSS files together with the plots
To set up the environment and run the Python scripts, follow these steps:
- Create a virtual environment (recommended) using `venv`:

  ```shell
  foo@bar:~$ python -m venv detection_limits
  ```

- Activate the virtual environment:

  ```shell
  # On Linux/macOS:
  foo@bar:~$ source detection_limits/bin/activate
  # On Windows:
  foo@bar:~$ detection_limits\Scripts\activate
  ```

- Or, using conda, create the environment:

  ```shell
  foo@bar:~$ conda create -n detection_limits python=3.8
  ```

- Activate the conda environment:

  ```shell
  foo@bar:~$ conda activate detection_limits
  ```

- Install dependencies using the provided `requirements.txt`. Ensure you are in the root directory of the repository and run:

  ```shell
  foo@bar:~$ pip install -r requirements.txt
  ```

- Run the Python scripts as described in the workflow section.
To interactively view and explore the plots, you can either use the hosted pages.nist.gov instance or download the web folder and open index.html in your web browser.
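If you download the web folder, serving it through a local web server can be more reliable than opening index.html directly, since some browsers restrict loading plot data over `file://` URLs. A minimal sketch using Python's built-in server (port 8000 is an arbitrary choice):

```shell
# Serve the downloaded web folder locally, then browse to
# http://localhost:8000 to view index.html and the plots.
foo@bar:~$ cd web
foo@bar:~$ python -m http.server 8000
```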
- Step 1: compute data quality metrics using generate_metrics.py
- Step 2: plot data quality metrics as a function of contrast and noise using plot_image_quality.py
- Step 3: train a UNet model on sets 1 through 5 (Web Image Processing Workflow)
- Step 4: infer image masks for set 6 using the trained UNet model and evaluate its accuracy (Web Image Processing Workflow)
- Step 5: merge the data quality metrics and AI model accuracy metrics using match_ai_data.py
- Step 6: plot relationships between data quality metrics and AI model accuracy metrics using plot_ai_model_predictions.py
- Step 7: support decisions for obtaining a trusted AI-based measurement: apply an AI model with a user-defined minimum accuracy requirement to an input SEM image whose minimum SNR characteristics are defined by the graph generated using dice_to_SNR.py
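Step 5 above joins the image-quality table and the AI-accuracy table on the simulation parameters they share. A minimal sketch of that merge is shown below; the column names are assumptions for illustration, and match_ai_data.py defines the actual schema and join keys.

```python
import pandas as pd

# Hypothetical column names; match_ai_data.py defines the real schema.
quality = pd.DataFrame({
    "contrast": [0.2, 0.4],
    "noise":    [1.0, 1.0],
    "snr":      [3.1, 5.7],
})
accuracy = pd.DataFrame({
    "contrast": [0.2, 0.4],
    "noise":    [1.0, 1.0],
    "dice":     [0.62, 0.88],
})

# Inner join on the simulation parameters shared by both tables, so each
# row pairs an image's quality metrics with the model's accuracy on it.
merged = quality.merge(accuracy, on=["contrast", "noise"], how="inner")
```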
- Peter Bajcsy, ITL NIST, Software and Systems Division, Information Systems Group
- Contact email address at NIST: peter dot bajcsy at nist dot gov
- Peter Bajcsy, Brycie Wiseman, Michael Majurski, and Andras E. Vladar, "Detection Limits of AI-based SEM Dimensional Metrology", Proceedings of the SPIE Advanced Lithography + Patterning conference, 23-27 February 2025, San Jose, California, US, URL
- Peter Bajcsy, Pushkar Sathe, and Andras E. Vladar, "Relating human and AI-based detection limits in SEM dimensional metrology", Under review.
- The version of LICENSE.md included in this repository is approved for use.
- Updated language on the [Licensing Statement][nist-open] page supersedes the copy in this repository. You may transcribe the language from the appropriate "blue box" on that page into your README.
We used the ARTIMAGEN SEM simulation software to generate images with varying contrast and noise levels.
- Cizmar P., Vladár A., Postek M. “Optimization of accurate SEM imaging by use of artificial images”, Proc. SPIE 7378, Scanning Microscopy, 737815, 2009, URL
- Project URL and GitHub Repo URL
- License: As this software was developed as part of work done by the United States Government, it is not subject to copyright and is in the public domain. Note that, according to GNU.org, public-domain software is compatible with the GPL.
We used the UNet Convolutional Neural Network (CNN) AI model implementation by Michael Majurski (NIST) for training and inference of image segmentation.
- Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.
- Name: WIPP UNet CNN Training Plugin and WIPP UNet CNN Inference Plugin
- Title: WIPP UNet CNN Training Plugin, Version: 1.0.0, Repository, Container image: wipp/wipp-unet-cnn-train-plugin:1.0.0
- Title: WIPP UNet CNN Inference Plugin, Version: 1.0.0, Repository, Container image: wipp/wipp-unet-cnn-inference-plugin:1.0.0
The execution and full computational provenance were obtained by using the WIPP scientific workflow software:
- GitHub: WIPP code
- Bajcsy, P., Chalfoun, J., and Simon, M. (2018), Web Microanalysis of Big Image Data, Springer International Publishing.
- This work was performed with funding from the CHIPS Metrology Program, part of CHIPS for America, National Institute of Standards and Technology, U.S. Department of Commerce.
The file named CODEOWNERS identifies which GitHub users are "in charge" of the repository. More crucially, GitHub uses it to assign reviewers on pull requests. GitHub documents the file (and how to write one) [here][gh-cdo].
Project metadata is captured in CODEMETA.yaml, used by the NIST
Software Portal to sort the GitHub work under the appropriate thematic
homepage.