The most advanced transistors in current high-end integrated circuits are fin field-effect transistors (FinFETs). Gate-all-around field-effect transistors (GAAFETs) fabricated on vertical nanopillars are expected to be the next architecture that improves the performance and efficiency of future electronic devices. However, fabricating a patterned array of tall nanopillars is challenging because the pillars can collapse during fabrication. Here, we implement a fast neural-network architecture that identifies and counts collapsed nanopillars in top-down transmission electron microscope (TEM) images with high accuracy.
The location of each nanopillar in the TEM image is identified, a small region around it is cropped, and the crop is resized to 64x64 pixels (./dataset/allImages.tar.gz). Each cropped image is then manually labelled as upright or collapsed. We labelled a total of 4537 nanopillars (./dataset/labelledDataset.dat), of which 2155 were collapsed and 2382 were upright.
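The cropping step can be sketched as follows. The patch half-width and the use of Pillow for resizing are illustrative assumptions, and the pillar coordinates `cx, cy` would come from the detection step:

```python
import numpy as np
from PIL import Image

def crop_pillar(tem_image, cx, cy, half=40, out_size=64):
    """Crop a square patch centred on a detected nanopillar and resize it.

    `half` (patch half-width in pixels) is an illustrative choice, not a
    value stated in this repository.
    """
    h, w = tem_image.shape[:2]
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    patch = tem_image[y0:y1, x0:x1]
    # Resize the (possibly clipped) patch to a fixed out_size x out_size image.
    return np.asarray(Image.fromarray(patch).resize((out_size, out_size)))
```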
Data augmentation was performed by:
- Rotating each labelled image by 90, 180, and 270 degrees.
- Flipping each image around the vertical axis.

After data augmentation, the labelled dataset contained 36296 images (17240 collapsed, 19056 upright). This dataset was then partitioned into two sets: 90% of the labelled images were used for training and the remaining 10% for validation of the model.
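The augmentation above (4537 labelled images x 8 variants = 36296) and the 90/10 split can be sketched with NumPy. The function names and the fixed seed are illustrative:

```python
import numpy as np

def augment(image):
    """Return the 8 variants of one labelled image: the original plus its
    90/180/270-degree rotations, and the vertical-axis flip of each."""
    rots = [np.rot90(image, k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]

def train_val_split(images, labels, val_frac=0.1, seed=0):
    """Shuffle and split arrays into 90% training / 10% validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_val = int(round(len(images) * val_frac))
    val, train = idx[:n_val], idx[n_val:]
    return (images[train], labels[train]), (images[val], labels[val])
```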
For training, the labelled images were resized to 32x32 pixels. We trained three different model architectures to classify nanopillars as collapsed or upright.
- A fully connected neural network (DNN) with four dense layers of sizes 512, 512, 256, and 2. The first three layers used ReLU activation; the last layer used softmax activation.
- A convolutional neural network (CNN) with two 5x5 convolution layers, each followed by max-pooling that halves the feature map. The resulting feature map was flattened and fed into three dense layers of sizes 256, 128, and 2. The convolution layers and the first two dense layers used ReLU activation; the last layer used softmax activation.
- Transfer learning using the VGG16 model with ImageNet weights. Two dense layers of sizes 100 and 2, with ReLU and softmax activation respectively, were appended after the convolution and max-pooling layers of the VGG16 model. Only these last two layers were trained.
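The three architectures can be sketched in Keras. The framework, the convolution filter counts, and the input shapes are assumptions not stated above; the layer sizes and activations follow the descriptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dnn(input_shape=(32, 32, 1)):
    # Four dense layers: 512, 512, 256 (ReLU) and 2 (softmax).
    return models.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])

def build_cnn(input_shape=(32, 32, 1)):
    # Two 5x5 convolution layers, each followed by 2x2 max-pooling, then
    # dense layers of 256, 128 and 2. Filter counts (32, 64) are
    # illustrative; the description above does not state them.
    return models.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Conv2D(32, 5, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 5, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])

def build_vgg16_transfer(input_shape=(32, 32, 3)):
    # Frozen VGG16 convolutional base with ImageNet weights (expects a
    # 3-channel input, so grayscale TEM crops would be replicated across
    # channels); only the appended 100-unit ReLU and 2-unit softmax
    # layers are trainable.
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False
    return models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])
```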
- DNN model - classification accuracy = 99.41%
| DNN (N = 36296) | Predicted: Collapsed | Predicted: Upright |
|---|---|---|
| Actual: Collapsed | 17110 | 130 |
| Actual: Upright | 83 | 18973 |
- CNN model - classification accuracy = 99.84%
| CNN (N = 36296) | Predicted: Collapsed | Predicted: Upright |
|---|---|---|
| Actual: Collapsed | 17195 | 45 |
| Actual: Upright | 11 | 19045 |
- VGG16 model - classification accuracy = 99.94%
| VGG16 (N = 36296) | Predicted: Collapsed | Predicted: Upright |
|---|---|---|
| Actual: Collapsed | 17225 | 15 |
| Actual: Upright | 8 | 19048 |
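The reported accuracies follow directly from these confusion matrices, e.g.:

```python
def accuracy(tp, fn, fp, tn):
    # Overall classification accuracy = correct predictions / all predictions,
    # for a 2x2 confusion matrix laid out as in the tables above.
    return (tp + tn) / (tp + fn + fp + tn)

dnn_acc = accuracy(17110, 130, 83, 18973)  # DNN table -> ~99.41%
vgg_acc = accuracy(17225, 15, 8, 19048)    # VGG16 table -> ~99.94%
```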
The image-processing algorithm marks pixels belonging to nanopillars as 1 and the background as 0 to produce a binary image. The aspect ratio of each nanopillar and its distance to the nearest nanopillar are then calculated. If the aspect ratio is greater than 1.5 and the distance is greater than 16 pixels, the nanopillar is marked as collapsed. The classification accuracy of this method is 98.39%.
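A sketch of this rule-based baseline, assuming connected-component labelling, bounding-box aspect ratios, and centroid-to-centroid distances (the text above does not specify how the aspect ratio and distance are computed):

```python
import numpy as np
from scipy import ndimage

def classify_pillars(binary, ar_thresh=1.5, dist_thresh=16):
    """Flag each nanopillar in a binary image as collapsed (True) or
    upright (False) using the aspect-ratio and nearest-neighbour rule."""
    labels, n = ndimage.label(binary)
    slices = ndimage.find_objects(labels)
    centroids = np.array(
        ndimage.center_of_mass(binary, labels, range(1, n + 1)))
    collapsed = []
    for i, sl in enumerate(slices):
        # Bounding-box aspect ratio (long side / short side).
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        aspect = max(h, w) / min(h, w)
        # Centroid distance to the nearest other pillar.
        others = np.delete(centroids, i, axis=0)
        dist = (np.min(np.linalg.norm(others - centroids[i], axis=1))
                if len(others) else np.inf)
        collapsed.append(aspect > ar_thresh and dist > dist_thresh)
    return collapsed
```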
Utkarsh Anand, Tanmay Ghosh, Zainul Aabdin, Nandi Vrancken, Hongwei Yan, XiuMei Xu, Frank Holsteyns, and Utkur Mirsaidov, "Deep Learning-Based High Throughput Inspection in 3D Nanofabrication and Defect Reversal in Nanopillar Arrays: Implications for Next Generation Transistors". ACS Appl. Nano Mater. 2021, 4, 3, 2664–2672. DOI: https://doi.org/10.1021/acsanm.0c03283.