Abstract: Images captured under different weather conditions exhibit distinct degradation characteristics and patterns. However, existing all-in-one adverse weather removal methods mainly focus on learning generic knowledge shared across multiple weather conditions via fixed network parameters, and thus cannot adjust to individual instances to capture the exclusive feature characteristics of a specific weather condition. To tackle this issue, we propose a novel dynamic weights generation network (DwGN) that adaptively mines and extracts instance-exclusive degradation features for different weather conditions via dynamically generated convolutional weights. Specifically, we first propose two fundamental dynamic weights convolutions, which automatically generate optimal convolutional weights for the distinct input features via a lightweight yet efficient mapping layer. The predicted convolutional weights are then incorporated into the convolution operation to mine and extract instance-exclusive features for different weather conditions. Building upon the dynamic weights convolutions, we further devise two core modules for network construction: a half-dynamic multi-head cross-attention (HDMC) that performs exclusive-generic feature interaction, and a half-dynamic feed-forward network (HDFN) that performs selective exclusive-generic feature transformation and aggregation. Considering the communal features shared between different weather conditions (e.g., background representation), both HDMC and HDFN replace only half of their convolutions with dynamic weights convolutions for instance-exclusive feature mining, while retaining static convolutions for the other half to mine generic features. Through adaptive weight generation, our DwGN can adapt to different weather scenarios and effectively mine instance-exclusive degradation features, thus enjoying better flexibility and adaptability in all-in-one adverse weather removal. Extensive experimental results demonstrate that our DwGN performs favorably against state-of-the-art algorithms.
Overall Framework of Our Proposed Method
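The sketch below illustrates the general idea of a dynamic weights convolution described in the abstract: a lightweight mapping layer predicts per-instance convolutional weights from the incoming feature, and the predicted weights are then used in the convolution itself. This is a minimal conceptual sketch, not the released implementation; the class name `DynamicWeightsConv` and the concrete design choices (global-pooled descriptor, depthwise 3x3 kernels, grouped convolution over the batch) are assumptions for illustration only.

```python
# Conceptual sketch of a dynamic weights convolution (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicWeightsConv(nn.Module):
    """Predicts per-instance depthwise kernels from the input feature map and
    applies them, so the filtering adapts to each degraded image."""
    def __init__(self, channels, kernel_size=3):
        super(DynamicWeightsConv, self).__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Lightweight mapping layer: pooled descriptor -> convolutional weights
        self.mapping = nn.Linear(channels, channels * kernel_size * kernel_size)

    def forward(self, x):
        b, c, h, w = x.size()
        # Global descriptor summarizing the instance-specific degradation
        desc = F.adaptive_avg_pool2d(x, 1).view(b, c)
        # One set of predicted depthwise kernels per instance in the batch
        weight = self.mapping(desc).view(b * c, 1, self.kernel_size, self.kernel_size)
        # Apply the per-instance kernels via a grouped convolution over the batch
        out = F.conv2d(x.reshape(1, b * c, h, w), weight,
                       padding=self.kernel_size // 2, groups=b * c)
        return out.view(b, c, h, w)

# Example usage: output keeps the input shape.
x = torch.randn(2, 64, 32, 32)
y = DynamicWeightsConv(64)(x)
```

In the half-dynamic spirit of HDMC and HDFN, one could split the channels and route only half of them through such a dynamic convolution, while the other half passes through an ordinary static convolution that captures generic, weather-shared features.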
The model is built with PyTorch 1.1.0 and tested in an Ubuntu 16.04 environment (Python 3.7, CUDA 9.0, cuDNN 7.5).
For installation, follow these instructions:
conda create -n pytorch1 python=3.7
conda activate pytorch1
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=9.0 -c pytorch
pip install matplotlib scikit-image opencv-python timm einops ptflops Pillow
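As an optional sanity check, the snippet below verifies that the expected PyTorch/torchvision versions are installed and that the GPU is visible; the version numbers are simply the ones targeted by the instructions above.

```python
# Optional sanity check: run inside the "pytorch1" environment.
import torch
import torchvision

print(torch.__version__, torchvision.__version__)    # expected: 1.1.0 0.3.0
print('CUDA available:', torch.cuda.is_available())  # expected: True (CUDA 9.0)
```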
- Download the dataset and run
cd dataset
python prepare.py
- Train the model with default arguments by running
python train.py
- Download the pre-trained model and place it in ./checkpoints/
- Download the dataset and place it in ./datasets/
- Run
python test.py
- Visual results will be saved in ./results/
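If quantitative numbers are also desired, a minimal evaluation sketch is given below. It compares the restored images in ./results/ against ground-truth images using scikit-image's PSNR/SSIM. The ground-truth directory, matching file names, and one-to-one correspondence are assumptions about the dataset layout, not part of the released scripts; adjust the paths to your setup.

```python
# Hedged evaluation sketch: average PSNR/SSIM over the saved visual results.
import os
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

results_dir = './results/'   # restored outputs from test.py
gt_dir = './datasets/gt/'    # ground-truth images (assumed path)

psnr_list, ssim_list = [], []
for name in sorted(os.listdir(results_dir)):
    pred = cv2.imread(os.path.join(results_dir, name))
    gt = cv2.imread(os.path.join(gt_dir, name))
    if pred is None or gt is None:
        continue  # skip files without a matching ground-truth image
    psnr_list.append(peak_signal_noise_ratio(gt, pred, data_range=255))
    ssim_list.append(structural_similarity(gt, pred, multichannel=True, data_range=255))

if psnr_list:
    print('PSNR: %.2f  SSIM: %.4f' % (sum(psnr_list) / len(psnr_list),
                                      sum(ssim_list) / len(ssim_list)))
```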

