
# Learning Pixel-Adaptive Weights for Portrait Photo Retouching

- This repository is the official implementation of the paper "Learning Pixel-Adaptive Weights for Portrait Photo Retouching".

## Data Preparation

Download the raw portrait photos and retouching targets from QuarkNetdisk (code: 68JM). If you want a more complete dataset, you can obtain it from the original PPR10K authors' GitHub repository.

Please ensure the data is structured as below:

```
PPR10K_dataset
├── train
│   ├── source_aug (or source)
│   │   ├── 0_0.tif
│   │   ├── 0_0_1.tif
│   │   └── ...
│   ├── target
│   │   ├── 0_0.tif
│   │   ├── 0_1.tif
│   │   └── ...
│   └── mask_360P
│       ├── 0_0.png
│       ├── 0_1.png
│       └── ...
└── val
    ├── source
    │   ├── 1356_0.tif
    │   ├── 1356_1.tif
    │   └── ...
    ├── target
    │   ├── 1356_0.tif
    │   ├── 1356_1.tif
    │   └── ...
    └── mask
        ├── 1356_0.png
        ├── 1356_1.png
        └── ...
```
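Before training, it can help to sanity-check that the folders are in place. The snippet below is a minimal sketch (not part of this repository) that assumes the exact directory names shown above; if you train on `source` rather than `source_aug`, adjust `EXPECTED` accordingly:

```python
import os

# Sanity-check the PPR10K_dataset layout described above.
# ROOT is a placeholder; point it at your unpacked dataset.
ROOT = "PPR10K_dataset"
EXPECTED = {
    "train": ["source_aug", "target", "mask_360P"],
    "val": ["source", "target", "mask"],
}

for split, subdirs in EXPECTED.items():
    for sub in subdirs:
        path = os.path.join(ROOT, split, sub)
        if not os.path.isdir(path):
            print(f"missing: {path}")
        else:
            print(f"{path}: {len(os.listdir(path))} files")
```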

## Environment Preparation

Requirements: Python 3.7; the dependencies are listed in env.yaml. Create the conda environment with:

```bash
conda env create -n ppr -f env.yaml
```

Build the trilinear interpolation extension. Modify the CUDA path in trilinear_cpp/setup.sh to match your local installation, then run:

```bash
cd trilinear_cpp
sh setup.sh
```
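After the build, a quick import check confirms the extension is usable. This is a minimal sketch, assuming the extension registers a module named `trilinear` (check setup.sh for the actual name):

```python
# Post-build smoke test. The module name "trilinear" is an assumption
# based on the extension directory; verify it in trilinear_cpp/setup.sh.
import torch
import trilinear  # compiled CUDA extension

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("trilinear extension loaded from:", trilinear.__file__)
```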

## Training

To train our method on the PPR10K dataset, run one of the following commands:

Training with LAM and without GAM, saving models:

```bash
python train_LAM.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir [path_to_save_models]
```

Training with both LAM and GAM, saving models:

```bash
python train_GAM.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir [path_to_save_models]
```
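To run the two stages back to back, a small driver script can wrap the commands above. This is a hypothetical convenience sketch; the placeholder paths mirror the bracketed arguments and must be replaced with your own:

```python
import subprocess

# Hypothetical driver for the two training commands above.
# DATA, OUT, and GPU are placeholders; substitute your own values.
DATA = "/path/to/PPR10K_dataset"
OUT = "/path/to/save/models"
GPU = "0"

common = ["--data_path", DATA, "--gpu_id", GPU, "--use_mask", "True", "--output_dir", OUT]
for script in ("train_LAM.py", "train_GAM.py"):
    subprocess.run(["python", script, *common], check=True)
```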

## Evaluation

To evaluate our model on the PPR10K dataset, first generate the retouched results:

```bash
python validation.py --data_path [path_to_dataset] --gpu_id [gpu_id] --model_dir [path_to_models]
```

Then use MATLAB to calculate the measures reported in our paper. Set the directory of the photos generated by the model, the directory of the expert-retouched target photos, and the directory of the portrait-area masks, respectively:

```matlab
source_dir='';
target_dir='';
mask_dir='';
```
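If MATLAB is unavailable, the plain PSNR and mean ΔE_ab can be approximated in Python with scikit-image. This is a sketch, not the official evaluation script: it assumes 8-bit images with matching filenames in both directories, and omits the human-centered (HC) variants, which additionally weight the portrait mask region:

```python
import os
import numpy as np
from skimage import io, color
from skimage.metrics import peak_signal_noise_ratio

# Sketch of PSNR / mean deltaE_ab (not the official MATLAB script).
# Assumes 8-bit images and identical filenames in both directories;
# adjust the normalization for 16-bit TIFFs.
source_dir = "path/to/generated"   # placeholder: photos generated by the model
target_dir = "path/to/targets"     # placeholder: expert-retouched targets

psnrs, des = [], []
for name in sorted(os.listdir(source_dir)):
    src = io.imread(os.path.join(source_dir, name)).astype(np.float64) / 255.0
    tgt = io.imread(os.path.join(target_dir, name)).astype(np.float64) / 255.0
    psnrs.append(peak_signal_noise_ratio(tgt, src, data_range=1.0))
    des.append(color.deltaE_cie76(color.rgb2lab(tgt), color.rgb2lab(src)).mean())

print(f"PSNR: {np.mean(psnrs):.2f}  deltaE_ab: {np.mean(des):.2f}")
```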

## Pre-trained Models

You can download the pretrained models here (code: 68JM).

## Results

Our model achieves the following performance on the PPR10K dataset:

| Model (expert) | PSNR | ΔE_ab | PSNR^HC | ΔE_ab^HC | MGLC |
| --- | --- | --- | --- | --- | --- |
| LAM - a | 26.23 | 6.62 | 29.53 | 4.29 | 10.77 |
| LAM + GAM - a | 25.66 | 7.42 | 28.96 | 4.82 | 6.67 |
| LAM - b | 25.35 | 7.31 | 28.63 | 4.73 | 9.29 |
| LAM + GAM - b | 24.97 | 8.10 | 28.27 | 5.24 | 6.56 |
| LAM - c | 25.65 | 7.39 | 28.95 | 4.80 | 14.62 |
| LAM + GAM - c | 25.31 | 8.00 | 28.61 | 5.19 | 8.87 |
