[NeurIPS2025] ADPretrain: Advancing Industrial Anomaly Detection via Anomaly Representation Pretraining
The current mainstream and state-of-the-art anomaly detection (AD) methods are substantially built on pretrained feature networks obtained by ImageNet pretraining. However, whether the pretraining is supervised or self-supervised, pretraining on ImageNet does not match the goal of anomaly detection (i.e., pretraining on natural images doesn't aim to distinguish between normal and abnormal). Moreover, natural images and industrial image data in AD scenarios typically exhibit a distribution shift. These two issues can cause ImageNet-pretrained features to be suboptimal for AD tasks. To further promote the development of the AD field, pretrained representations tailored to AD tasks are highly desirable and valuable. To this end, we propose a novel AD representation learning framework specifically designed for learning robust and discriminative pretrained representations for industrial anomaly detection. Specifically, closely following the goal of anomaly detection (i.e., focusing on discrepancies between normal and abnormal samples), we propose angle- and norm-oriented contrastive losses to simultaneously maximize the angle and the norm difference between normal and abnormal features. To avoid the distribution shift from natural images to AD images, our pretraining is performed on a large-scale AD dataset, RealIAD. To further alleviate the potential shift between the pretraining data and downstream AD datasets, we learn the pretrained AD representations on top of a class-generalizable representation: residual features. For evaluation, based on five embedding-based AD methods, we simply replace their original features with our pretrained representations. Extensive experiments on five AD datasets and five backbones consistently show the superiority of our pretrained features.
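The core ideas above (residual features relative to few-shot normal references, and angle/norm statistics against a normal-feature center) can be sketched in a toy NumPy form. This is only an illustration of the concept under our own assumptions, not the paper's actual losses or implementation:

```python
import numpy as np

def residual_feature(patch_feat, ref_feats):
    """Residual feature: a patch feature minus its nearest reference (normal)
    feature. patch_feat: (D,), ref_feats: (K, D). Toy version, not the
    official implementation."""
    dists = np.linalg.norm(ref_feats - patch_feat, axis=1)
    return patch_feat - ref_feats[np.argmin(dists)]

def angle_and_norm(feat, center, eps=1e-8):
    """Angle (radians) between a residual feature and the normal-feature
    center, plus the feature's norm. The pretraining objectives push anomalies
    toward larger angles and larger norm gaps than normals."""
    cos = feat @ center / (np.linalg.norm(feat) * np.linalg.norm(center) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0)), np.linalg.norm(feat)

# Toy data: 8 normal reference features, one near-normal and one anomalous query
rng = np.random.default_rng(0)
ref_feats = rng.normal(size=(8, 4))                 # few-shot normal references
normal = ref_feats[0] + 0.01 * rng.normal(size=4)   # close to a reference
abnormal = ref_feats[0] + 2.0 * rng.normal(size=4)  # far from all references
r_n = residual_feature(normal, ref_feats)
r_a = residual_feature(abnormal, ref_feats)
center = ref_feats.mean(axis=0)                     # stand-in for a learned center
print(angle_and_norm(r_n, center), angle_and_norm(r_a, center))
```

The anomalous query's residual has a clearly larger norm than the normal one, which is the separation the norm-oriented loss amplifies during pretraining.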
You can refer to the requirements.txt to install the required environment.
```
pip install -r requirements.txt
```
Experiments are conducted on an NVIDIA GeForce RTX 4090 (24GB).
Please download the MVTecAD dataset from MVTecAD dataset, the VisA dataset from VisA dataset, the BTAD dataset from BTAD dataset, the MVTec3D dataset from MVTec3D dataset, and the MPDD dataset from MPDD dataset. You can put these datasets in the data directory.
You can also place them elsewhere as needed; just remember to modify the corresponding data_path in the code.
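If you keep the default ./data layout, a small sanity check can confirm the datasets are where the code expects them. The folder names below are illustrative (only mvtec_anomaly_detection appears in the commands later in this README); match them to the data_path values you actually use:

```python
import os

# Expected dataset folders under ./data (illustrative names; adjust to your layout)
EXPECTED = [
    "mvtec_anomaly_detection",  # MVTecAD
    "VisA",
    "BTAD",
    "mvtec3d",
    "MPDD",
]

def check_data_dir(root="./data"):
    """Return (found, missing) dataset folder names under `root`."""
    found = [d for d in EXPECTED if os.path.isdir(os.path.join(root, d))]
    missing = [d for d in EXPECTED if d not in found]
    return found, missing

found, missing = check_data_dir()
print("found:", found)
print("missing:", missing)
```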
Contact the authors of Real-IAD to get the download link.
Download and unzip realiad_512 and realiad_jsons in ./data/Real-IAD.
./data/Real-IAD will look like:
```
|-- Real-IAD
    |-- realiad_512
        |-- audiojack
        |-- bottle_cap
        |-- ....
    |-- realiad_jsons
        |-- realiad_jsons
        |-- realiad_jsons_sv
        |-- realiad_jsons_fuiad_0.0
        |-- ....
```
Then, download the few-shot normal reference samples from ref-data and put them in the ./data directory.
We provide our pretrained weights. Download them and place them in the checkpoints folder.
| Backbone | Input Size | Weights |
|---|---|---|
| DINOv2-base | R224²-C224² | model |
| DINOv2-large | R224²-C224² | model |
| CLIP-base | R224²-C224² | model |
| CLIP-large | R224²-C224² | model |
| ImageBind | R224²-C224² | model |
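The downloaded checkpoints can be loaded in the usual PyTorch way. A hedged sketch, assuming a standard `torch.save` checkpoint; the dict layout is our guess, so inspect the file and the repo's own loading code:

```python
import torch

def load_pretrained(path, map_location="cpu"):
    """Load a pretrained checkpoint; the 'state_dict' key is an assumption,
    inspect the actual file to see its layout."""
    ckpt = torch.load(path, map_location=map_location)
    return ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# Example (path from the commands below):
# state = load_pretrained("./checkpoints/dinov2-large/checkpoints_pro_angle.pth")
```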
Please run the following commands to extract the reference features used for generating residual features.

MVTecAD
```
python extract_ref_features.py --backbone dinov2-large --dataset mvtec
```
VisA
```
python extract_ref_features.py --backbone dinov2-large --dataset visa
```
BTAD
```
python extract_ref_features.py --backbone dinov2-large --dataset btad
```
MVTec3D
```
python extract_ref_features.py --backbone dinov2-large --dataset mvtec3d
```
MPDD
```
python extract_ref_features.py --backbone dinov2-large --dataset mpdd
```
The backbone here and in the following can be dinov2-base, dinov2-large, clip-base, clip-large and imagebind.
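The five extraction runs above can be scripted in one loop. This is just a convenience sketch; the flags mirror the commands above:

```python
import subprocess

DATASETS = ["mvtec", "visa", "btad", "mvtec3d", "mpdd"]
BACKBONE = "dinov2-large"  # or dinov2-base, clip-base, clip-large, imagebind

# Build one extract_ref_features.py command per dataset
commands = [
    ["python", "extract_ref_features.py", "--backbone", BACKBONE, "--dataset", d]
    for d in DATASETS
]
for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # enable once the datasets are in place
```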
Please run the following command to obtain the center features used in the angle-oriented contrastive loss.
```
python get_center_features.py --backbone dinov2-large --data_path ./data/Real-IAD --save_path ./centers/dino_large_rfeature_centers.npy
```
Please run the following command for pretraining on the Real-IAD dataset.
```
python main.py --train_dataset_dir ./data/Real-IAD --test_dataset_dir ./data/mvtec_anomaly_detection --backbone dinov2-large --checkpoint_path ./checkpoints/dinov2-large --test_ref_feature_dir ./ref_features/dinov2-large/mvtec_8shot --feature_centers ./centers/dino_large_rfeature_centers.npy
```
PaDiM
```
python train_val_padim.py --backbone dinov2-large --with_pretrained --pretrained_weights ./checkpoints/dinov2-large/checkpoints_pro_angle.pth --dataset mvtec --dataset_dir ./data/mvtec_anomaly_detection --ref_feature_dir ./ref_features/dinov2-large/mvtec_8shot
```
PatchCore
```
python train_val_patchcore.py --backbone dinov2-large --with_pretrained --pretrained_weights ./checkpoints/dinov2-large/checkpoints_pro_angle.pth --dataset mvtec --dataset_dir ./data/mvtec_anomaly_detection --ref_feature_dir ./ref_features/dinov2-large/mvtec_8shot
```
CFLOW
```
python train_val_cflow.py --backbone dinov2-large --with_pretrained --pretrained_weights ./checkpoints/dinov2-large/checkpoints_pro_norm.pth --dataset mvtec --dataset_dir ./data/mvtec_anomaly_detection --ref_feature_dir ./ref_features/dinov2-large/mvtec_8shot
```
GLASS
For GLASS, you should download the foreground masks from fg mask and put them in the ./ad_models/glass directory.
DTD is an auxiliary texture dataset used for data augmentation in GLASS. Download the DTD dataset from DTD and put it in the ./data directory.
```
python train_val_glass.py --backbone dinov2-large --with_pretrained --pretrained_weights ./checkpoints/dinov2-large/checkpoints_pro_norm.pth --dataset mvtec --dataset_dir ./data/mvtec_anomaly_detection --ref_feature_dir ./ref_features/dinov2-large/mvtec_8shot
```
UniAD
```
python train_val_uniad.py --backbone dinov2-large --with_pretrained --pretrained_weights ./checkpoints/dinov2-large/checkpoints_pro_norm.pth --dataset mvtec --dataset_dir ./data/mvtec_anomaly_detection --ref_feature_dir ./ref_features/dinov2-large/mvtec_8shot
```
FeatureNorm
```
python val_norm.py --backbone dinov2-large --with_pretrained --pretrained_weights ./checkpoints/dinov2-large/checkpoints_pro_norm.pth --dataset mvtec --dataset_dir ./data/mvtec_anomaly_detection --ref_feature_dir ./ref_features/dinov2-large/mvtec_8shot
```
If our work is helpful for your research, please consider citing:
```
@InProceedings{yao2025ADPretrain,
  title={ADPretrain: Advancing Industrial Anomaly Detection via Anomaly Representation Pretraining},
  author={Xincheng Yao and Yan Luo and Zefeng Qian and Chongyang Zhang},
  booktitle={Thirty-Ninth Annual Conference on Neural Information Processing Systems, NeurIPS 2025},
  year={2025},
  url={https://arxiv.org/abs/2511.05245},
}
```
If you are interested in our work, you may also want to check out our previous works: BGAD (CVPR 2023), PMAD (AAAI 2023), FOD (ICCV 2023), HGAD (ECCV 2024), ResAD (NeurIPS 2024).

